Key Takeaways
The next version of Meta’s LLaMA-3 will have multimodal capabilities, bringing the AI model in line with the most advanced offerings from Google and OpenAI. But the Big Tech company said it won’t be released in the EU.
Meta’s EU snub may stem from GDPR litigation that has already derailed its plans to train AI on the data of European platform users. Other potential concerns include LLaMA’s classification under the EU’s AI Act and an ongoing European Commission investigation into social media disinformation.
Citing the “unpredictable nature of the European regulatory environment” in an exclusive statement to The Verge on Thursday, July 18, Meta confirmed that LLaMA-3 would not be released in the EU.
After Meta informed Facebook and Instagram users of plans to use their personal data for AI training, digital rights lawyers filed a string of complaints in the UK and the EU.
Arguing that the new policy violates European privacy law (the GDPR), noyb in the EU and the Open Rights Group (ORG) in the UK took the case to local data protection authorities, prompting Meta to suspend its plans.
The decision means the LLaMA-powered Meta AI won’t be available to European users, dealing a major blow to the firm’s social media chatbot integration.
Meta has made LLaMA models the central pillar of its AI strategy, betting that making them freely available to builders and researchers will help it catch up in a race where it has fallen behind its Big Tech peers.
The company itself has described the LLaMA family as open source. But AI researchers have disputed this characterization, pointing out that the models fall short of the standards expected by the open-source community.
While the distinction may seem academic to end users of AI products, it has important consequences for LLaMA’s classification under EU regulation.
Article 3(63) of the AI Act defines the concept of a “general-purpose AI model,” a classification that entails specific legal obligations for providers. In general parlance, the term refers to foundation models; many of the Big Tech LLMs on the market today, including the largest of the LLaMA models, fall within its scope.
However, Article 53(2) provides an exemption for AI models distributed under a free and open-source license, creating a potential loophole that could allow Meta to avoid the heavier compliance burden applied to large foundation models.
In this light, the question of whether LLaMA models really are open source becomes an important factor in determining the kind of documentation and evaluation Meta is obliged to produce in the EU. But by taking the technology off the market, the company dodges this question entirely.
If it seems like Meta is tiptoeing around EU regulation, that’s because it is. Regulatory scrutiny of the firm has ramped up significantly since 2022, when the EU introduced new legislation aimed squarely at American Big Tech companies.
One half of a two-pronged regulatory package that also includes the Digital Markets Act (DMA), the Digital Services Act (DSA) has formed the legal basis for a number of probes into Meta’s social media business in the EU.
Most recently, the European Commission opened an investigation into fake news on Facebook and Instagram. Flagging potential violations of DSA provisions that require platform operators to tackle disinformation, the Commission will review Meta’s policies and practices in the run-up to the EU’s parliamentary elections in June.
The ongoing investigation could further complicate Meta’s plans for LLaMA.
The status of different AI systems under the DSA isn’t entirely clear. While the underlying models don’t fall within the regulation’s scope, specific applications of the technology like Meta AI potentially might. If that turns out to be the case, Meta would be required to implement “reasonable, proportionate and effective mitigation measures” against the platform being used to spread disinformation.