When Meta announced that it would not release an upcoming multimodal version of Llama in the EU, it blamed the “unpredictable nature of the European regulatory environment.” Though the company did not say so explicitly, the move appeared to be a response to a wave of legal challenges that derailed its plans to train AI models on EU customer data.
Now, Meta has confirmed that its concerns relate to EU privacy law and data access. In a recent open letter, CEO Mark Zuckerberg and a group of industry allies lamented the state of data protection regulation in the EU, which they argued is holding back European AI developers.
In the EU and the U.K., where data usage is regulated by the General Data Protection Regulation (GDPR), Meta has been forced to suspend plans to train AI models on Facebook and Instagram users’ data while privacy watchdogs review the practice.
Data protection authorities took up the issue following a string of legal challenges by privacy advocates, who argued that the wholesale harvesting of data for AI training violates multiple provisions of the GDPR.
At the center of the dispute is a policy change that meant Facebook and Instagram users would have to opt out of having their personal data used to train AI. Critics argued that this flies in the face of the legally established GDPR norm that requires positive consent (i.e., opt-in) from data subjects.
In the latest letter, Meta and the other signatories said, “Interventions by the European Data Protection Authorities have created huge uncertainty about what kinds of data can be used to train AI models.”
They claim this risks causing the EU to fall behind other regions with more permissive regulatory climates. Without access to data, the next generation of AI models “won’t understand or reflect European knowledge, culture or languages,” they argued. The EU will also miss out on other innovations, such as Meta’s AI assistant, the letter added.
According to the letter’s signatories, including Spotify CEO Daniel Ek, the regulatory impasse that has stalled Meta’s European AI training is especially damaging given the opportunity open-source AI could create in the EU.
As Zuckerberg and Ek outlined in an opinion piece last month, “With more open-source developers than America has, Europe is particularly well placed to make the most of this open-source AI wave. Yet its fragmented regulatory structure, riddled with inconsistent implementation, is hampering innovation and holding back developers.”
Meta’s Llama models are published under a permissive license and freely available for developers to use and modify. To help EU-based developers make the most of the technology, Zuckerberg called for a “simplified regulatory structure” to “accelerate the growth of open-source AI.”
“A simplified regulatory structure would accelerate the growth of open-source AI and provide crucial support to European developers and the broader creator ecosystem that contributes to and thrives on these innovations,” he wrote.
While the Meta CEO’s argument is compelling, his claim that the firm hasn’t violated any laws and is simply the victim of a bureaucratic holdup is questionable.
EU legislation does not offer an obvious answer to the question at the heart of the matter: Does using personal data to train AI require explicit, opt-in consent from platform users, or can a company rely on the GDPR’s alternative “legitimate interests” basis, as Meta maintains?
In the end, regulators will likely refer this question to the courts, which will have to determine the legal status of Meta’s AI data harvesting.
The social media giant lost a similar battle over its use of personal data for targeted ads, a case that, incidentally, was won by the very same lawyers now challenging its AI training policy.
Whatever the result, the outcome of any new legal proceedings would have profound consequences, both for Meta and for any AI developer seeking to use EU data.