
Apple and Google's AI wizardry promises privacy — at a price


Ever since the first iPhone, much of the intelligence in smartphones has come from elsewhere: the corporate computers known as the cloud. Mobile apps sent user data to the cloud for useful tasks such as transcribing speech or suggesting message responses. Now Apple and Google say smartphones are smart enough to perform some crucial and sensitive machine learning tasks like those on their own.

At Apple’s WWDC event this month, the company said its virtual assistant Siri will transcribe speech without tapping the cloud in some languages on recent and future iPhones and iPads. At its own I/O developer event last month, Google said the latest version of its Android operating system has a feature, the Private Compute Core, intended for secure on-device processing of sensitive data. Its first use is powering the company’s version of the Smart Reply feature built into the mobile keyboard, which can suggest responses to incoming messages.

Apple and Google both say on-device machine learning offers more privacy and snappier apps. Not transmitting personal data reduces the risk of exposure and saves time spent waiting for data to traverse the internet. At the same time, keeping data on devices aligns with the tech giants’ long-term interest in tying consumers into their ecosystems. People who hear that their data may be processed more privately might be more willing to agree to share more data.

The companies’ recent promotion of on-device machine learning comes after years of work on technology to limit the data their clouds can “see”.

In 2014, Google started collecting data about Chrome browser usage through a technique called differential privacy. This technique adds noise to collected data in a way that limits what those samples reveal about individuals. Apple has used the technique on data collected from phones to inform emoji and typing predictions and for web browsing data.
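To make the idea concrete, here is a minimal Python sketch of randomized response, a classic mechanism behind local differential privacy (Google’s Chrome system, RAPPOR, builds on the same principle). The flip probability and the simulated 30 percent usage rate are illustrative assumptions, not parameters either company has published:

```python
import random

def randomized_response(truth: bool, p_flip: float = 0.25) -> bool:
    # Flip the true answer with probability p_flip, so no single
    # report reveals an individual's real value with certainty.
    return (not truth) if random.random() < p_flip else truth

# Simulate 10,000 users, 30 percent of whom actually use some feature.
raw = [random.random() < 0.30 for _ in range(10_000)]
reports = [randomized_response(bit) for bit in raw]

# The collector never sees `raw`, yet can debias the aggregate:
# observed = t * (1 - p) + (1 - t) * p, so t = (observed - p) / (1 - 2p)
p = 0.25
observed = sum(reports) / len(reports)
estimate = (observed - p) / (1 - 2 * p)
print(f"observed={observed:.3f}  estimated true rate={estimate:.3f}")
```

Any single report is plausibly deniable, but across many reports the noise averages out, which is why the technique suits population-level statistics rather than per-user features.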

More recently, both companies have adopted a technology called federated learning. It allows a cloud-based machine learning model to be updated without pulling in raw data; instead, individual devices process data locally and share only digested updates. As with differential privacy, the companies have discussed the use of federated learning only in limited cases. Google has used the technique to keep its mobile typing predictions up to date with language trends; Apple has published research on using it to update speech recognition models.
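To illustrate the division of labor, here is a toy Python sketch of federated averaging, the algorithm at the heart of Google’s published federated learning research. The linear model, learning rate, and five simulated “phones” are illustrative assumptions, not either company’s production system:

```python
import numpy as np

def local_update(w, device_data, lr=0.1):
    # One gradient step on this device's own examples (linear model,
    # squared error). The raw X and y never leave the device.
    X, y = device_data
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w, devices):
    # The server only averages the digested updates (new weights);
    # it never receives any device's raw data.
    return np.mean([local_update(w, d) for d in devices], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):  # five simulated phones, each with private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, devices)
print(w)  # converges toward true_w without pooling raw data
```

Real deployments add safeguards such as secure aggregation and update clipping, but the division of labor is the same: computation stays on the device, and only summaries reach the server.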

Rachel Cummings, an assistant professor at Columbia who has previously consulted on privacy for Apple, says the rapid shift to machine learning on phones is striking. “It’s incredibly rare to see anything go from first conception to large-scale deployment in so few years,” she says.

That progress required not only advances in computer science but also for companies to take on the practical challenges of processing data on consumer-owned devices. Google has said its federated learning system taps users’ devices only when they are plugged in, idle, and on an unmetered internet connection. The technology was made possible in part by improvements in the power of mobile processors.

Beefier mobile hardware also contributed to Google’s 2019 announcement that speech recognition for its virtual assistant on Pixel devices would run completely on-device, free from the crutch of the cloud. Apple’s new on-device speech recognition for Siri, announced this month at WWDC, will use the “neural engine” the company added to its mobile processors to accelerate machine learning algorithms.

The technical feats are impressive. The question is to what extent they will meaningfully change users’ relationships with tech giants.

Presenters at Apple’s WWDC said Siri’s new design was a “major privacy update” that addresses the risk of accidentally sending audio to the cloud, which they called users’ biggest privacy concern about voice assistants. Some Siri commands, such as setting timers, can be recognized entirely locally, making for a quick response. Still, in many cases transcribed commands to Siri, presumably including some from accidental recordings, will be sent to Apple servers for software to decode and respond to. Siri voice transcription will still be cloud-based for HomePod smart speakers, often installed in bedrooms and kitchens, where accidental recording could be more of a concern.

Google is also promoting on-device data processing as a privacy gain and has indicated that it will expand the practice. The company expects partners such as Samsung that use its Android operating system to adopt the new Private Compute Core and use it for features that rely on sensitive data.

Google has also made local analysis of browsing data part of its proposal to reinvent online ad targeting, known as FLoC, which it claims is more private. Academics and some rival tech companies have said the design is likely to help Google consolidate its dominance of online advertising by making targeting harder for other companies.

Michael Veale, a lecturer in digital rights at University College London, says on-device data processing can be a good thing but adds that the way tech companies promote it shows they are primarily motivated by a desire to keep people tied into lucrative digital ecosystems.

“Privacy is being conflated with keeping data confidential, but it’s also about limiting power,” Veale says. “If you’re a big technology company and you manage to redefine privacy as only the confidentiality of data, then you can carry on business as usual and give yourself license to operate.”

A Google spokesperson said the company is “building for privacy wherever computing takes place” and that data sent to the Private Compute Core for processing “should be tied to user value.” Apple did not respond to a request for comment.

Cummings of Columbia says new privacy techniques, and the way companies market them, are making the trade-offs of digital life more complex. In recent years, as machine learning has become more widely deployed, technology companies have steadily expanded the amount of data they collect and analyze. There is evidence that some consumers misunderstand the privacy protections touted by tech giants.

A forthcoming survey study by Cummings and collaborators at Boston University and the Max Planck Institute showed descriptions of differential privacy drawn from tech companies, media, and academics to 675 Americans. Hearing about the technique made people roughly twice as likely to report that they would be willing to share data. But there was also evidence that descriptions of differential privacy’s benefits raised unrealistic expectations: one-fifth of respondents expected their data to be protected from police searches, which differential privacy does not do. The latest statements from Apple and Google about on-device data processing may open new opportunities for such misunderstandings.

This story originally appeared on wired.com
