Meta has confirmed that it will pause plans to start training its AI systems using data from its users in the European Union and U.K.
The move follows pushback from the Irish Data Protection Commission (DPC), Meta’s lead regulator in the EU, which is acting on behalf of several data protection authorities across the bloc. The U.K.’s Information Commissioner’s Office (ICO) also requested that Meta pause its plans until it could satisfy concerns it had raised.
“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said in a statement Friday. “This decision followed intensive engagement between the DPC and Meta. The DPC, in cooperation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”
While Meta is already tapping user-generated content to train its AI in markets such as the U.S., Europe’s stringent GDPR regulations have created obstacles for Meta and other companies looking to use user-generated material to improve their AI systems, including large language models.
However, Meta last month began notifying users of an upcoming change to its privacy policy, one that it said would give it the right to use public content on Facebook and Instagram to train its AI, including content from comments, interactions with companies, status updates, photos and their associated captions. The company argued that it needed to do this to reflect “the diverse languages, geography and cultural references of the people in Europe.”
These changes were due to come into effect on June 26, 12 days from now. But the plans spurred the not-for-profit privacy activist organization NOYB (“none of your business”) to file 11 complaints in EU countries, arguing that Meta is contravening various facets of GDPR. One of those complaints concerns opt-in versus opt-out: where personal data processing takes place, users should be asked for their permission first, rather than being required to take action to refuse.
Meta, for its part, was relying on a GDPR provision called “legitimate interests” to contend that its actions complied with the regulations. This isn’t the first time Meta has used this legal basis in defense, having previously done so to justify processing European users’ data for targeted advertising.
It always seemed likely that regulators would at least put a stay of execution on Meta’s planned changes, particularly given how difficult the company had made it for users to “opt out” of having their data used. The company said that it sent out more than 2 billion notifications informing users of the upcoming changes, but unlike other important public messaging that gets plastered to the top of users’ feeds, such as prompts to go out and vote, these notifications appeared alongside users’ standard notifications: friends’ birthdays, photo tag alerts, group announcements and more. So if someone didn’t regularly check their notifications, it was all too easy to miss this one.
And those who did see the notification wouldn’t automatically know that there was a way to object or opt out, as it simply invited users to click through to find out how Meta would use their information. There was nothing to suggest that there was a choice here.
Moreover, users technically weren’t able to “opt out” of having their data used. Instead, they had to complete an objection form where they put forward their arguments for why they didn’t want their data to be processed — it was entirely at Meta’s discretion as to whether this request was honored, though the company said it would honor each request.
Although the objection form was linked from the notification itself, anyone proactively looking for the objection form in their account settings had their work cut out.
On Facebook’s website, they had to first click their profile photo at the top-right; hit settings &amp; privacy; tap privacy center; scroll down and click on the Generative AI at Meta section; then scroll down again, past a bunch of links, to a section titled more resources. The first link under this section is called “How Meta uses information for Generative AI models,” and they needed to read through some 1,100 words before getting to a discreet link to the company’s “right to object” form. It was a similar story in the Facebook mobile app.
Earlier this week, when asked why this process required the user to file an objection rather than opt in, Meta’s policy communications manager Matt Pollard pointed us to its existing blog post, which says: “We believe this legal basis [“legitimate interests”] is the most appropriate balance for processing public data at the scale necessary to train AI models, while respecting people’s rights.”
To translate: making this process opt-in likely wouldn’t generate enough “scale” in terms of people willing to offer their data. So the best way around that was to issue a solitary notification in amongst users’ other notifications; hide the objection form behind half a dozen clicks for those seeking the “opt-out” independently; and then make them justify their objection, rather than give them a straight opt-out.
In an updated blog post Friday, Meta’s global engagement director for privacy policy, Stefano Fratta, said that the company was “disappointed” by the request it had received from the DPC.
“This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe,” Fratta wrote. “We remain highly confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we’re more transparent than many of our industry counterparts.”
AI arms race
None of this is new, and Meta is in an AI arms race that has shone a giant spotlight on the vast arsenal of data Big Tech holds on all of us.
Earlier this year, Reddit revealed that it’s contracted to make north of $200 million in the coming years for licensing its data to companies such as ChatGPT-maker OpenAI and Google. And the latter of those companies is already facing huge fines for leaning on copyrighted news content to train its generative AI models.
But these efforts also highlight the lengths to which companies will go to ensure that they can leverage this data within the constraints of existing legislation; “opting in” is rarely on the agenda, and the process of opting out is often needlessly arduous. Just last month, someone spotted some dubious wording in an existing Slack privacy policy that suggested it would be able to leverage user data for training its AI systems, with users able to opt out only by emailing the company.
And last year, Google finally gave online publishers a way to opt their websites out of training its models by enabling them to add a piece of code to their sites. OpenAI, for its part, is building a dedicated tool to allow content creators to opt out of training its generative AI smarts; this should be ready by 2025.
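For context, the Google mechanism described above is, as far as we can tell, the “Google-Extended” token the company added to its crawler controls in 2023: publishers add a directive to their site’s robots.txt file to keep content out of AI model training while remaining indexed for Search. A minimal sketch of what that looks like:

```txt
# robots.txt: keep this site's content out of Google's AI training.
# Google-Extended governs use of content for Gemini/Vertex AI training;
# Googlebot (Search indexing) is unaffected by this rule.
User-agent: Google-Extended
Disallow: /
```

A single `Disallow: /` under the Google-Extended user agent opts the whole site out; publishers can instead list narrower paths to exclude only specific sections.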
While Meta’s attempt to train its AI on users’ public content in Europe is on ice for now, it will likely rear its head again in another form after consultation with the DPC and ICO, hopefully with a different user-permission process in tow.
“In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset,” Stephen Almond, the ICO’s executive director for regulatory risk, said in a statement Friday. “We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and ensure the information rights of U.K. users are protected.”