OpenAI faces allegations of violating data protection laws in Italy with its widely used AI model, ChatGPT.
These developments follow a brief ban imposed by Italy’s data protection watchdog last year, highlighting ongoing concerns about user privacy and data security in the AI industry. The new accusations come from Italy’s DPA, a regulator that works alongside the European Union’s European Data Protection Board, which set up a special task force to monitor ChatGPT.
Last year, Italy’s data protection authority temporarily banned ChatGPT. The decision by the Italian watchdog Garante to ban the platform was founded on concerns surrounding the mass collection and storage of personal data, which saw the regulator launch an investigation into whether ChatGPT complied with GDPR. At the time, Sam Altman, CEO of OpenAI, shared his view on OpenAI’s compliance via Twitter.
Now, OpenAI has 30 days to once again present its defense against accusations that its AI model ChatGPT breaches data protection law.
“Following the temporary ban on processing imposed on OpenAI by the Garante on 30 March of last year, and based on the outcome of its fact-finding activity, the Italian DPA concluded that the available evidence pointed to the existence of breaches of the provisions contained in the EU GDPR,” the regulator’s official communication revealed on Monday.
In response to the allegations, OpenAI is required to present a comprehensive defense to demonstrate its adherence to EU privacy standards. The company’s defense strategy is not only crucial for its operations in Italy but could also set a precedent for how AI companies address privacy concerns on a global scale. The outcome of this situation could have far-reaching implications for the future of AI regulation.
Despite the allegations, OpenAI maintains that its practices align with EU privacy rules. The company has highlighted its commitment to user privacy and data security, pointing to specific measures and policies implemented to ensure compliance with the EU’s General Data Protection Regulation (GDPR).
OpenAI’s website includes a section on security and compliance, which explicitly states that OpenAI complies with both GDPR and CCPA standards.
The EU’s landmark AI Act, first proposed in April 2021 and agreed in December 2023, provides the world’s first comprehensive AI regulation. The text still has to be formally adopted by both Parliament and the Council to become EU law. Following the leak of an unofficial version of the AI Act, European Parliament Senior Advisor Laura Caroli shared a consolidated 258-page document online.
One of the key takeaways was that obligations for so-called ‘high-risk AI systems’ would not apply until 36 months after the Act enters into force. This may give AI companies that fall into this category a broader window to meet regulatory compliance.
Sam Altman said in May last year that OpenAI might consider leaving Europe if it was unable to comply with the European Union’s new AI regulations. After several run-ins with European regulators, the company is facing considerable scrutiny that may force it to make significant regulatory adjustments, or withdraw from the region.
As the deadline for OpenAI’s legal defense gets closer, the outcome could set a precedent for other tech companies in Europe, potentially leading to a shift in how AI technologies are developed and deployed globally. On the other hand, if OpenAI is unable to align with EU standards, it could prove damaging to the development of the industry in Europe.