Google’s AI model will potentially listen in on all your phone calls — or at least ones it suspects are coming from a fraudster.

To protect the user’s privacy, the company says Gemini Nano operates locally, without connecting to the internet. “This protection all happens on-device, so your conversation stays private to you. We’ll share more about this opt-in feature later this year,” the company says.
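To make the privacy claim concrete: on-device inference means the model weights live on the phone and the audio never crosses the network. Google has not published how Gemini Nano is invoked for this feature, so the sketch below is a hypothetical stand-in using TensorFlow Lite; the model file name, feature shape, and output are all invented for illustration.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Hypothetical sketch of on-device classification: a local model file is
// loaded and run entirely on the phone. There is no network client anywhere
// in this path -- audio features go in, a score comes out.
fun scamScore(audioFeatures: FloatArray): Float {
    // "scam_classifier.tflite" is an invented name, not Google's actual model.
    val interpreter = Interpreter(File("/data/local/models/scam_classifier.tflite"))
    val input = arrayOf(audioFeatures)        // shape [1, featureCount]
    val output = Array(1) { FloatArray(1) }   // shape [1, 1]: probability of "scam"
    interpreter.run(input, output)
    interpreter.close()
    return output[0][0]
}
```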

“This is incredibly dangerous,” says Meredith Whittaker, the president of the Signal Foundation, which operates the end-to-end encrypted messaging app Signal.

Whittaker, a former Google employee, argues that the entire premise of the anti-scam call feature poses a potential threat, because Google could program the same technology to scan for other keywords, like requests for access to abortion services.

“It lays the path for centralized, device-level client-side scanning,” she said in a post on Twitter/X. “From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w/ seeking reproductive care’ or ‘commonly associated w/ providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing.’”

  • fubarx@lemmy.ml · 6 months ago

    One of the things they skirted around is whether a lot of this on-device stuff needs a special processor chip with dedicated AI and security hardware to work.

    Google’s own Pixel phones (especially the newer ones) have such a chip, but the vast majority of Android phones don’t.

    So either these features only work on the latest Google phones (which will piss off licensees and partners), or they’re using plain old CPUs/GPUs to do this sort of detection, in which case the processing will be sniffable by malicious third parties.
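    For the “AI+security” half of that question: an app can at least ask Android whether hardware-backed key storage (a TEE or StrongBox) exists on the device. A rough sketch using the standard Keystore APIs is below; note it only probes key storage and says nothing about where ML inference actually runs.

    ```kotlin
    import android.os.Build
    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyInfo
    import android.security.keystore.KeyProperties
    import javax.crypto.KeyGenerator
    import javax.crypto.SecretKeyFactory

    // Generate a throwaway key in the Android Keystore, then ask where it lives.
    fun hasHardwareBackedKeys(): Boolean {
        val keyGen = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
        keyGen.init(
            KeyGenParameterSpec.Builder(
                "probe_key", KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
            )
                .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
                .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
                .build()
        )
        val key = keyGen.generateKey()
        val info = SecretKeyFactory.getInstance(key.algorithm, "AndroidKeyStore")
            .getKeySpec(key, KeyInfo::class.java) as KeyInfo
        return if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
            info.securityLevel == KeyProperties.SECURITY_LEVEL_TRUSTED_ENVIRONMENT ||
                info.securityLevel == KeyProperties.SECURITY_LEVEL_STRONGBOX
        } else {
            @Suppress("DEPRECATION")
            info.isInsideSecureHardware
        }
    }
    ```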

    And let’s not forget that if the phone can listen to your conversation to detect malicious intent, any country can legally compel Google to hand over the data by claiming it’s part of a law-enforcement investigation.

    Things are going to get spicy in Android-land.

    • areyouevenreal@lemm.ee · 6 months ago

      It’s not just Google that has AI hardware built into its phones. All the recent SoCs I’ve seen have had NPUs for the last couple of generations. A lot of older or cheaper phones won’t have one, but new devices will.

      I don’t see the problem with using the phone’s normal GPU. This shouldn’t be any less secure than making a call already is. I’m pretty sure Android phones don’t have a secure enclave just for making calls, since you can give different apps access to calling features, and most calls I make go through third-party apps anyway, not POTS. That said, Android is pretty secure as long as you don’t grant permissions to the wrong app: each app runs as its own user and can only access things it has been explicitly permitted to access, which arguably makes it more secure than your average Linux system. Secure enclaves aren’t all that, in my opinion.
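      A quick sketch of the sandboxing being described, using standard Android APIs: every app runs under its own Linux UID, and access to the microphone is gated behind an explicit runtime permission. The function name is mine, for illustration.

      ```kotlin
      import android.Manifest
      import android.content.Context
      import android.content.pm.PackageManager
      import android.os.Process
      import androidx.core.content.ContextCompat

      // Each app gets its own Linux user, and sensitive features need explicit grants.
      fun describeSandbox(context: Context): String {
          val uid = Process.myUid()  // per-app UID, e.g. 10123 (shown as u0_a123 in ps)
          val canRecord = ContextCompat.checkSelfPermission(
              context, Manifest.permission.RECORD_AUDIO
          ) == PackageManager.PERMISSION_GRANTED
          return "uid=$uid, RECORD_AUDIO granted=$canRecord"
      }
      ```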

      > And let’s not forget that if the phone can listen to your conversation to detect malicious intent, any country can legally compel Google to hand over the data by claiming it’s part of a law-enforcement investigation.

      The point of doing it locally is that the audio never gets sent to Google at all. That said, they could definitely do some dodgy things, like training the ML model to flag words such as abortion, drugs, or transgender, depending on the laws of the country the phone is being used in.
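      To make that risk concrete, a deliberately toy sketch: a local matcher like this gets retargeted just by shipping a different watchlist per country. Everything here (phrases, country codes) is invented for illustration.

      ```kotlin
      // Toy example only -- the point is that nothing about the scanning
      // machinery changes, just the list it is fed.
      val watchlists = mapOf(
          "default" to listOf("wire the money", "gift card codes"),
          "XX" to listOf("abortion clinic", "hormone therapy"),  // hypothetical regime's list
      )

      fun flag(transcript: String, countryCode: String): Boolean {
          val patterns = watchlists[countryCode] ?: watchlists.getValue("default")
          return patterns.any { transcript.contains(it, ignoreCase = true) }
      }
      ```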