
Global AI Summit: China Moves Faster, but It’s Full of Problems, Warns Ex-UK Envoy Fu Ying

By James Morales

Key Takeaways

  • Former Chinese ambassador to the U.K. Fu Ying said open-source AI has helped Chinese developers “move faster” than their Western peers.
  • However, she acknowledged that Chinese AI is still “full of problems.”
  • Although Chinese AI models have closed the performance gap with American ones, they have fallen behind in terms of safety.

Ahead of the Paris AI Summit on Monday, the former Chinese ambassador to the U.K., Fu Ying, described the country’s explosive AI innovation in recent years, arguing that Chinese developers “move faster” than their Western peers.

However, Fu acknowledged that this speed may come at a cost, as Chinese AI remains “full of problems.”

China’s AI Explosion

Although last month’s release of DeepSeek R1 turned the world’s attention to Chinese AI, Fu said the country has enjoyed an “explosive period” of innovation since 2017, when the government published its AI development plan.

That plan laid out national goals and the state’s development strategy through 2030, underscored by four “basic principles” that have shaped Beijing’s AI policy in the years since:

  1. Technology-Led
  2. Systems Layout
  3. Market-Dominant
  4. Open-Source and Open

Open-Source Driving Chinese AI Innovation

After eight years, it is possible to see how Beijing’s four principles manifest in today’s leading Chinese AI models.

Most notably, the country’s open-source AI scene has made significant progress, with DeepSeek R1 and Alibaba’s Qwen series achieving performance results that rival even the best proprietary American models.

During the Paris panel, Fu argued that open-source AI offers “better opportunities to detect and solve problems,” adding that “the lack of transparency among the giants makes people nervous.”

Speed, Safety, and Transparency

The flip side of the rapid performance gains achieved by Chinese AI models is that they have started to fall behind their American and European peers on safety.

In comments shared with CCN, Enkrypt AI CEO Sahil Agarwal pointed to research his company carried out, which found that DeepSeek R1 was less safe and secure than models created by OpenAI and Anthropic.

For instance, it was 4.5 times more likely than OpenAI’s o1 to generate functional hacking tools.

Agarwal warned that the “AI arms race between the U.S. and China” was leading to potentially dangerous security vulnerabilities as both nations compete for technological supremacy.

In Paris, Agarwal’s concerns were echoed by Professor Yoshua Bengio, who countered Fu’s view by arguing that China’s open-source AI models could be more prone to misuse.

That being said, Bengio acknowledged that the innate transparency of open models means it is easier to identify issues with DeepSeek’s R1 than it is with proprietary models like OpenAI’s.


James Morales

Although his background is in crypto and FinTech news, these days, James likes to roam across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.