
AI Trading Tools: The Good, The Bad, The Ugly

Samantha Dunn

Key Takeaways

  • AI trading tools are not yet widely used, but early predictions indicate they will transform the trading landscape.
  • These sophisticated algorithms can analyze vast datasets, predict market trends, and execute trades at speeds no human can match.
  • Regulators, while slow to step in, are concerned about the impact of AI-generated trading tools on market stability.

AI trading tools have been hailed as the hottest new entrants to financial markets, with sophisticated algorithms that can sift through vast datasets, predict market trends, and execute trades with unparalleled speed.

Supporters praise these tools for democratizing access to trading strategies once reserved for the financial elite. But as they become more integrated into the traditional trading landscape, the question arises: what happens if AI trading goes awry?

Beyond Generative AI

From a consumer perspective, the boom in AI has focused largely on generative AI and the applications of Large Language Models such as ChatGPT. However, the emergence of increasingly autonomous and capable AIs has fueled parallel interest in potential financial applications.

A recent report by Valuates predicts the AI crypto trading bot market will grow to $145.27 million by 2029. With new AI trading apps promising to analyze market trends and accurately predict prices, established trading platforms are also offering AI integrations such as AI Trend Forecasting to keep up with investor demand.
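In its simplest form, "trend forecasting" amounts to extrapolating from recent price history. The Python sketch below is a deliberately naive illustration of that idea, using made-up prices, and bears no relation to any platform's actual product:

    # Naive "trend forecast": fit a straight line to recent closing prices
    # and extrapolate one step ahead. Real AI forecasting products are far
    # more complex; the prices below are made-up example data.
    import numpy as np

    def naive_trend_signal(closes: list[float]) -> str:
        """Return 'buy' if the fitted trend points up, else 'sell'."""
        x = np.arange(len(closes))
        slope, intercept = np.polyfit(x, closes, deg=1)  # least-squares line
        predicted_next = slope * len(closes) + intercept
        return "buy" if predicted_next > closes[-1] else "sell"

    recent_closes = [101.2, 102.8, 102.1, 103.5, 104.0]  # hypothetical data
    print(naive_trend_signal(recent_closes))  # -> "buy"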

Investor Concerns: Can AI Lie?

Generative AI has shown a propensity for bias and inaccuracies, sometimes generating false or fabricated information. Without robust oversight, these “hallucinations” can be difficult to detect, posing significant risks to market integrity and investor trust. A notable demonstration at the UK AI Safety Summit on 1–2 November 2023, involving a GPT-4 AI model, highlighted these concerns. The AI, acting as a trader, engaged in simulated illegal trading based on insider information, later denying the action when probed. This incident underlines the potential for AI to deceive, prioritizing objectives like profitability over honesty.

Apollo Research CEO Marius Hobbhahn highlighted the ease of training AI for helpfulness over honesty, pointing to the complexity of instilling honesty in AI models:

“Helpfulness, I think is much easier to train into the model than honesty. Honesty is a really complicated concept … it’s not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something,” Hobbhahn said.

Researchers Find AI Trading Still Benefits from “Human Oversight”

A recent case study by Marcel Grote and Justus Bogner of the University of Stuttgart's Institute of Software Engineering involved the development of an autonomous stock trading system that uses machine learning to invest in stocks. The researchers noted that:

“Autonomous systems are capable to perform unsupervised operations and to make decisions without human intervention [20]. However, this lack of human oversight creates additional challenges for the quality assurance of such systems, e.g., regarding functional correctness, safety, and fairness.”

Overall, the study found that the majority of the practices examined improved both the trading system and the development process. A few practices showed weaker effects, however, with the paper noting that their application was sometimes not straightforward and their impact not easy to measure or understand.
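The "human oversight" the researchers point to can take concrete forms. One common pattern, sketched below with hypothetical names and thresholds rather than the study's actual code, is a guardrail layer that checks every model-proposed order against human-set limits before anything is executed:

    # Hypothetical guardrail layer: a model's proposed order must pass hard
    # limits set by a human operator before it is sent for execution. All
    # names and thresholds here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ProposedOrder:
        symbol: str
        side: str         # "buy" or "sell"
        quantity: int
        est_value: float  # estimated notional value in USD

    MAX_ORDER_VALUE = 10_000.0  # per-order cap set by a human
    MAX_DAILY_LOSS = 2_000.0    # trading halts once losses exceed this

    def approve_order(order: ProposedOrder, realized_daily_loss: float) -> bool:
        """Reject any model-proposed order that breaches human-set limits."""
        if realized_daily_loss >= MAX_DAILY_LOSS:
            return False  # circuit breaker: stop trading for the day
        if order.est_value > MAX_ORDER_VALUE:
            return False  # too large to execute without human review
        if order.side not in ("buy", "sell"):
            return False  # malformed model output
        return True

    order = ProposedOrder("ACME", "buy", 50, est_value=12_500.0)
    print(approve_order(order, realized_daily_loss=500.0))  # -> False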

AI Will Impact Financial Markets, Regulators Agree

The international response to these challenges was crystallized at Bletchley Park on 2 November 2023, where leaders from key nations and industry giants, including Amazon, Google, and Microsoft, committed to evaluating future AI technologies for their potential impacts on national security, safety, and society. This initiative underscores a global commitment to the responsible advancement of AI, but critics claim little meaningful action has taken place.

Industry figures stress the importance of balancing innovation with safeguards. This may require not only advanced monitoring tools but also fostering a culture of transparency and accountability within the AI development community.

The countries and industry leaders represented at Bletchley Park have agreed to collaborate on testing the next generation of AI models against a range of critical national security, safety, and societal risks. A statement by the Chair on Safety Testing outlined that:

“To enjoy the potential of frontier AI, as described in the Bletchley Declaration, it is critical that frontier AI is developed safely and that the potential risks of new models are rigorously assessed before and after they are deployed, including by evaluating for potentially harmful capabilities.”

The AI safety company Anthropic made headlines when its CEO, Dario Amodei, quit OpenAI to go it alone.

CCN reached out to Anthropic and Apollo Research. They did not immediately respond to a request for comment.

Samantha is the Technology and Opinion Editor at CCN. Based in the U.K., Samantha started as a traditional journalist before falling down the Web3 rabbit hole. With a background in marketing for some of Web3's biggest companies, she now explores the ways in which emerging technology impacts economies, industries, and the individual. Samantha has interviewed CEOs, technologists, and thought-leaders across the Web3 space and beyond. She regularly attends conferences and enjoys meeting the people that make up the Web3 space.