The financial industry, long characterized by its reliance on human expertise and intuition, is now embracing the power of artificial intelligence (AI) to revolutionize trading strategies. AI-powered trading systems, fueled by vast troves of data and sophisticated algorithms, are poised to reshape the market landscape, promising enhanced efficiency, reduced risk, and potentially explosive returns.
However, this technological leap also brings inherent challenges, including data biases, algorithmic instability, and the potential for unintended consequences that could destabilize the financial system.
As AI-driven trading systems gain prominence, it is essential to carefully evaluate the risks and opportunities they present. Regulators, firms, and developers must ensure these powerful tools are deployed responsibly and ethically to foster a more resilient and equitable financial future.
AI trading systems, also known as algorithmic or automated trading systems, are computerized systems that use artificial intelligence (AI) to analyze data and execute trades. These systems can trade a variety of financial instruments, including stocks, bonds, currencies, and commodities.
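To make the idea concrete, the sketch below shows the kind of rule-based logic that underlies the simplest algorithmic trading systems: a toy moving-average crossover signal. Everything here is an illustrative assumption — the function names, window sizes, and prices are invented for the example and do not come from any real trading system.

```python
# Toy sketch of a rule-based trading signal (moving-average crossover).
# All names, windows, and prices are illustrative, not real system code.

def moving_average(prices, window):
    """Simple average over the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy', 'sell', or 'hold' from a moving-average crossover.

    A short-window average above the long-window average suggests
    upward momentum (buy); below it suggests downward momentum (sell).
    """
    if len(prices) < long:
        return "hold"  # not enough history to compute both averages
    if moving_average(prices, short) > moving_average(prices, long):
        return "buy"
    if moving_average(prices, short) < moving_average(prices, long):
        return "sell"
    return "hold"

# Example: a steadily rising price series produces a buy signal.
print(crossover_signal([100.0, 101.0, 102.0, 104.0, 107.0, 111.0]))
```

Real AI-driven systems replace such hand-written rules with models learned from data, which is precisely what makes their behavior harder to predict and audit — the concern the rest of this article explores.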
Even in Europe, despite the approval of the first-ever comprehensive law regulating AI, many worry about the use of AI in finance.
Consob, the authority overseeing the Italian Stock Exchange, has released a legal dossier addressing concerns related to the use of AI systems. It focuses on "strong AI systems": systems with "self-learning capabilities" that enable them to generate autonomous outcomes that cannot be predicted from their inputs.
The document highlights that these tools can manipulate the market both through the rapid execution and cancellation of orders in intervals as brief as thousandths of a second, and through more gradual yet intricately nuanced dynamics, given the inscrutability of the algorithmic black box.
The question of accountability for any market-related crimes committed by AI prompts three potential scenarios, according to Consob. The first involves assigning legal personality to the most advanced AI systems, an approach fraught with obstacles in the criminal and administrative realms. The second places responsibility on those who create the risk, namely the developer or producer of the AI. The third contemplates "transcending the very concept of responsibility" and advocates for the "socialization of damage": a burden borne not primarily by individuals but by the community as a whole.
Consob has actively evaluated digital matters, with its chairman, Paolo Savona, emphasizing the necessity of regulating cryptocurrencies. This latest development represents a logical extension of a similar approach to issues impacting finance and monetary stability.
As regulators in Italy and the USA express concerns about a potential regulatory void, many experts note that the new EU AI Act – the European regulation unveiled on Friday, December 8, and scheduled to take effect between 2024 and 2026 – does not include trading among its high-risk AI uses. Notably, the Act largely overlooks applications of AI in the realm of finance.
Brando Benifei, co-rapporteur of the AI Act in Brussels from the Italian party PD, sheds light on the matter: “The text delves into the issue of the solvency of mortgage applicants. Systems dealing with loans fall into the category of high-risk applications, subject to more rigorous certification and controls. However, the broader applications of Artificial Intelligence in the financial sector are left to sector-specific regulations.” This is where the findings of Consob come into play.