Key Takeaways
After Meta informed users of its platforms that it intended to use their personal data to train AI, European privacy advocates jumped into action, sending off a barrage of legal complaints in the EU and the UK to prevent the practice.
In a sign that the strategy is working, Meta has stalled plans to train its AI models on user data while European data protection authorities investigate.
At the beginning of June, Meta informed users of impending changes to its privacy policy that would grant the company permission to use their personal data to train its AI models.
The move immediately caught the attention of data rights campaigners in Europe, who expressed concern that it violated the General Data Protection Regulation (GDPR) that governs how companies process people’s data in the EU and the UK.
In the EU, the case was taken up by renowned GDPR lawyer and persistent thorn in Meta’s side Max Schrems.
Through his campaign group noyb, Schrems filed complaints with 11 EU data protection authorities, arguing that the policy change violates privacy rights already established by the Court of Justice of the EU (CJEU).
The CJEU “has already made it clear that Meta has no ‘legitimate interest’ to override users’ right to data protection when it comes to advertising. Yet the company is trying to use the same arguments for the training of undefined ‘AI technology’,” he said. “It seems that Meta is once again blatantly ignoring the judgments of the CJEU.”
Following Schrems’ actions, Meta announced in June: “We’re delaying our change to the use of your information to develop and improve AI at Meta.”
Like other Big Tech players in the region, Meta is stalling the rollout of new AI features for European users while regulators sift through the implications of its new data policies.
Commenting on the decision, Meta Global Engagement Director Stefano Fratta claimed the company isn’t the only one tapping user data to train AI models.
“We are following the example set by others, including Google and OpenAI, both of which have already used data from Europeans to train AI,” he said. “Our approach is more transparent and offers easier controls than many of our industry counterparts already training their models on similar publicly available information.”
While it took campaigners in the UK a little longer to organize, on Tuesday, July 16, the Open Rights Group (ORG) sent a formal complaint to the Information Commissioner’s Office (ICO) asking it to investigate Meta’s policy change.
Like its EU peers, ORG centers its concerns on the fact that users must opt out of Meta’s AI data training regime rather than opt in. This, it argues, violates the GDPR’s high standard for consent, which must be unambiguous and involve a clear affirmative action.
In a statement last month, the ICO confirmed that it had been in contact with Meta over users’ privacy concerns.
“We are pleased that Meta has reflected on the concerns we shared from users of their service in the UK, and responded to our request to pause and review plans to use Facebook and Instagram user data to train generative AI,” commented ICO Executive Director for Regulatory Risk Stephen Almond.
“We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and ensure the information rights of UK users are protected,” he added.
While the ORG complaint acknowledged Meta’s move to pause the proposed AI training, the organization said it will continue to push for legally binding commitments that would prevent Meta from resuming its plans.