Apple is promising personalized AI in a private cloud. Here’s how that will work.
Apple’s first big salvo in the AI wars makes a bet that
people will care about data privacy when automating tasks.
At its Worldwide Developer Conference on Monday, Apple for
the first time unveiled its vision for supercharging its product lineup with
artificial intelligence. The key feature, which will run across virtually all
of its product line, is Apple Intelligence, a suite of AI-based capabilities
that promises to deliver personalized AI services while keeping sensitive data
secure.
It represents Apple’s largest leap forward in using our
private data to help AI do tasks for us. To make the case it can do this
without sacrificing privacy, the company says it has built a new way to handle
sensitive data in the cloud.
Apple says its privacy-focused system will first attempt to fulfill
AI tasks locally on the device itself. If any data is exchanged with cloud
services, it will be encrypted and then deleted afterward. The company also
says the process, which it calls Private Cloud Compute, will be subject to
verification by independent security researchers.
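In rough pseudocode, that decision flow looks something like the sketch below. This is a conceptual illustration only: every class and function name is a hypothetical stand-in rather than a real Apple API, and the encryption step is a placeholder.

```python
# Conceptual sketch of the local-first routing Apple describes.
# All names here are hypothetical stand-ins, not real Apple APIs.
import pickle
from dataclasses import dataclass


@dataclass
class AIRequest:
    prompt: str
    personal_data: str  # e.g., excerpts from messages or calendar entries


def seal(req: AIRequest) -> bytes:
    # Placeholder: the real system would encrypt the request so that
    # only the target server-side model could read it.
    return pickle.dumps(req)


def unseal(blob: bytes) -> AIRequest:
    return pickle.loads(blob)


class OnDeviceModel:
    """Stand-in for a small model running locally on the phone."""

    def can_handle(self, req: AIRequest) -> bool:
        # Pretend the local model only copes with short prompts.
        return len(req.prompt) < 80

    def run(self, req: AIRequest) -> str:
        return f"[on-device] handled: {req.prompt!r}"


class PrivateCloud:
    """Stand-in for a server-side model: answer, then discard the data."""

    def run(self, sealed_request: bytes) -> str:
        req = unseal(sealed_request)  # only this model holds the key
        answer = f"[cloud] handled: {req.prompt!r}"
        del req                       # data is not retained after the task
        return answer


def handle(req: AIRequest, device: OnDeviceModel, cloud: PrivateCloud) -> str:
    # First preference: stay on the device whenever possible.
    if device.can_handle(req):
        return device.run(req)
    # Otherwise, seal the request and hand it to the cloud model.
    return cloud.run(seal(req))


print(handle(AIRequest("play that podcast", "chat log"),
             OnDeviceModel(), PrivateCloud()))
```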
The pitch offers an implicit contrast with the likes of
Alphabet, Amazon, or Meta, which collect and store enormous amounts of personal
data. Apple says any personal data passed on to the cloud will be used only for
the AI task at hand and will not be retained or accessible to the company, even
for debugging or quality control, after the model completes the request.
Simply put, Apple is saying people can trust it to analyze
incredibly sensitive data—photos, messages, and emails that contain intimate
details of our lives—and deliver automated services based on what it finds
there, without actually storing the data online or making any of it
vulnerable.
Apple showed a few examples of how this will work in upcoming
versions of iOS. Instead of scrolling through your messages for that podcast
your friend sent you, for example, you could simply ask Siri to find and play
it for you. Craig Federighi, Apple’s senior vice president of software
engineering, walked through another scenario: an email comes in pushing back a
work meeting, but his daughter is appearing in a play that night. His phone can
now find the PDF with information about the performance, predict the local
traffic, and let him know if he’ll make it on time. These capabilities will
extend beyond apps made by Apple, allowing developers to tap into Apple’s AI
too.
Because the company profits more from hardware and services
than from ads, Apple has less incentive than some other companies to collect
personal online data, allowing it to position the iPhone as the most private
device. Even so, Apple has previously found itself in the crosshairs of privacy
advocates. Security flaws led to leaks of explicit photos from iCloud in 2014.
In 2019, contractors were found to be listening to intimate Siri recordings for
quality control. Disputes about how Apple handles data requests from law
enforcement are ongoing.
The first line of defense against privacy breaches,
according to Apple, is to avoid cloud computing for AI tasks whenever possible.
“The cornerstone of the personal intelligence system is on-device processing,”
Federighi says, meaning that many of the AI models will run on iPhones and Macs
rather than in the cloud. “It’s aware of your personal data without collecting
your personal data.”
That presents some technical obstacles. Two years into the
AI boom, pinging models for even simple tasks still requires enormous amounts
of computing power. Accomplishing that with the chips used in phones and
laptops is difficult, which is why only the smallest of Google’s AI models can
be run on the company’s phones, and everything else is done via the cloud.
Apple says its ability to handle AI computations on-device is due to years of
research into chip design, leading to the M1 chips it began rolling out in
2020.
Yet even Apple’s most advanced chips can’t handle the full
spectrum of tasks the company promises to carry out with AI. If you ask Siri to
do something complicated, it may need to pass that request, along with your
data, to models that are available only on Apple’s servers. This step, security
experts say, introduces a host of vulnerabilities that may expose your
information to outside bad actors, or at least to Apple itself.
“I always warn people that as soon as your
data goes off your device, it becomes much more vulnerable,” says
Albert Fox Cahn, executive director of the Surveillance Technology Oversight
Project and practitioner in residence at NYU Law School’s Information Law
Institute.
Apple claims to have mitigated this risk with its new
Private Cloud Compute system. “For the first time ever, Private Cloud Compute
extends the industry-leading security and privacy of Apple devices into the
cloud,” Apple security experts wrote in their announcement, stating that
personal data “isn’t accessible to anyone other than the user—not even to
Apple.” How does it work?
Historically, Apple has encouraged people to opt in to
end-to-end encryption (the same type of technology used in messaging apps like
Signal) to secure sensitive iCloud data. But that doesn’t work for AI. Unlike
messaging apps, where a company like WhatsApp does not need to see the contents
of your messages in order to deliver them to your friends, Apple’s AI models
need unencrypted access to the underlying data to generate responses. This is
where Apple’s privacy process kicks in. First, Apple says, data will be used
only for the task at hand. Second, this process will be verified by independent
researchers.
Needless to say, the architecture of this system is
complicated, but you can imagine it as an encryption protocol. If your phone
determines it needs the help of a larger AI model, it will package a request
containing the prompt it’s using and the specific model, and then put a lock on
that request. Only the specific AI model to be used will have the proper key.
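One way to picture that lock is public-key “sealed box” encryption: the phone seals the request to one recipient’s public key, and only the holder of the matching private key can open it. The snippet below uses the PyNaCl library purely as an analogy; Apple has not published the protocol’s details, so this is not necessarily how Private Cloud Compute is implemented.

```python
# Analogy only: sealing a request so one specific model can read it.
# Requires the PyNaCl library (pip install pynacl).
from nacl.public import PrivateKey, SealedBox

# The server-side AI model publishes a public key; it alone holds
# the private half.
model_key = PrivateKey.generate()

# The phone packages the prompt plus the target model and seals it.
request = b'{"model": "server-llm", "prompt": "find that podcast"}'
sealed = SealedBox(model_key.public_key).encrypt(request)

# In transit, the request is opaque to anyone without the private key.
assert request not in sealed

# Only the intended model can unseal the request and act on it.
opened = SealedBox(model_key).decrypt(sealed)
assert opened == request
```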
When asked by MIT Technology Review whether
users will be notified when a certain request is sent to cloud-based AI models
instead of being handled on-device, an Apple spokesperson said there will be
transparency to users but that further details aren't available.
Dawn Song, codirector of the UC Berkeley Center for Responsible,
Decentralized Intelligence and an expert in private computing, says Apple’s new
developments are encouraging. “The list of goals that they announced is well
thought out,” she says. “Of course there will be some challenges in meeting
those goals.”
Cahn says that to judge from what Apple has disclosed so
far, the system seems much more privacy-protective than other AI products out
there today. That said, the common refrain in this space is “Trust
but verify.” In other words, we won’t know how secure these systems
keep our data until independent researchers can verify Apple’s claims, as it
promises they will, and the company responds to their findings.
“Opening yourself up to independent review by researchers is
a great step,” he says. “But that doesn’t determine how you’re going to respond
when researchers tell you things you don’t want to hear.” Apple did not respond
to questions from MIT Technology Review about how the company
will evaluate feedback from researchers.
The privacy-AI bargain
Apple is not the only company betting that many of us will
grant AI models mostly unfettered access to our private data if it means they
could automate tedious tasks. OpenAI’s Sam Altman described his
dream AI tool to MIT Technology Review as one “that knows
absolutely everything about my whole life, every email, every conversation I’ve
ever had.” At its own developer conference in May, Google announced Project Astra,
an ambitious project to build a “universal AI agent that is helpful in everyday
life.”
It’s a bargain that will force many of us to consider for
the first time what role, if any, we want AI models to play in how we interact
with our data and devices. When ChatGPT first came on the scene, that wasn’t a
question we needed to ask. It was simply a text generator that could write us a
birthday card or a poem, and the questions it raised—like where its training
data came from or what biases it perpetuated—didn’t feel quite as
personal.
Now, less than two years later, Big Tech is making
billion-dollar bets that we trust the safety of these systems enough to fork
over our private information. It’s not yet clear if we know enough to make that
call, or how able we are to opt out even if we’d like to. “I do worry that
we’re going to see this AI arms race pushing ever more of our data into other
people’s hands,” Cahn says.
Apple will soon release beta versions of its Apple Intelligence features, starting this fall with the iPhone 15 Pro and with Macs and iPads that have M1 chips or newer, running the new macOS Sequoia and iPadOS 18. Says Apple CEO Tim Cook: “We think Apple Intelligence is going to be indispensable.”
Source: MIT Technology Review