
EU Watchdog Requires Investment Companies to Document AI Data Sources to Safeguard Retail Investors

Published May 30, 2024 3:02 PM
James Morales

Key Takeaways

  • The European Securities and Markets Authority (ESMA) has issued new guidance on investment service providers’ use of AI.
  • The guidance stresses transparency and risk management.
  • An emphasis on data sourcing and quality is central to the EU’s emerging AI regulation.

Like most sectors, retail investment services are being transformed by artificial intelligence (AI), with firms increasingly adopting the technology to help communicate with clients, analyze and manage risk, detect fraud, enhance operational efficiency, and comply with regulations.

Responding to the trend, the European Securities and Markets Authority (ESMA) has issued new guidance on service providers’ use of AI, explaining how such tools fit within the framework of the 2014 Markets in Financial Instruments Directive (MiFID II).

Safeguarding Investors

To protect investors from losses, MiFID II outlines firms’ asset custody and reporting obligations. 

Setting out basic principles for service providers’ use of AI, ESMA’s latest intervention is designed to mitigate the risk that AI could compromise the EU’s investor protections. 

The guidance stresses that firms must ensure AI systems operate transparently and fairly, stating that “AI tools should not undermine the client’s best interest.” 

Specific risks highlighted include algorithmic biases, data quality issues, and a potential lack of transparency.

“AI systems often function as ‘black boxes,’ making it difficult for staff to understand their decision-making processes,” the document notes. Consequently, it stresses that ensuring AI decisions can be explained and understood is essential for maintaining investor trust and regulatory compliance.

Data Sourcing and Quality

One crucial factor covered by ESMA’s latest guidance is the question of data sourcing.

For firms using AI tools in investment decision-making processes, the regulator stressed that input data must be “relevant, sufficient, and representative, ensuring that the algorithms are trained and validated on accurate and comprehensive and sufficiently broad datasets.”

Rather than expanding on what counts as relevant, sufficient, and representative data (a definition that might have led firms to rule out certain sources out of an aversion to compliance risk), the document emphasizes transparency.

Whether data is sourced internally or acquired from third parties, ESMA called for firms to apply “rigorous oversight” to their data gathering and filtering practices.

Where third-party AI tools are used, ESMA noted that the relevant MiFID II outsourcing requirements apply.

Implications For AI Firms

Under Article 31 of MiFID II, investment firms must retain the necessary resources and expertise to effectively supervise third-party service providers.

The directive also stipulates specific contract requirements for outsourcing agreements and enhanced rules that govern outsourcing to companies not based in the EU.

In other words, the ESMA guidance explicitly brings a host of AI companies within the scope of MiFID II, setting out contractual rights and obligations that could affect how they deal with EU financial service providers.

On its own, the new guidance won’t have any profound immediate consequences for the likes of OpenAI, Google, and Microsoft. However, it forms part of a wider pattern in EU regulation.

Much of the language around transparency and risk management maps directly onto the AI Act, which OpenAI and its peers actively lobbied the EU to water down.

Although the emerging regulatory paradigm stops short of mandating full transparency into model weights or disclosure of trade secrets, it demands a higher degree of transparency than AI developers have been used to so far.

Going forward, the question of what counts as sufficient visibility into how AI systems work and what data they were trained on promises to be a fiercely contested regulatory frontier.
