Key Takeaways
Ahead of the Paris AI Summit on Monday, the former Chinese ambassador to the U.K., Fu Ying, described the country’s explosive AI innovation in recent years, arguing that Chinese developers “move faster” than their Western peers.
However, Fu acknowledged that this speed may come at a cost, describing Chinese AI as “full of problems.”
Although last month’s release of DeepSeek R1 turned the world’s attention to Chinese AI, Fu said the country has enjoyed an “explosive period” of innovation since 2017, when the government published its AI development plan.
That plan laid out national goals and the state’s development strategy through 2030, underscored by four “basic principles” that have shaped Beijing’s AI policy in the years since.
Eight years on, it is possible to see how those principles manifest in today’s leading Chinese AI models.
Most notably, the country’s open-source AI scene has made significant progress, with DeepSeek R1 and Alibaba’s Qwen series achieving performance results that rival even the best proprietary American models.
During the Paris panel, Fu argued that open-source AI offers “better opportunities to detect and solve problems,” adding that “the lack of transparency among the giants makes people nervous.”
The flip side of the rapid performance gains achieved by Chinese AI models is that they have started to fall behind their American and European peers in terms of safety.
In comments shared with CCN, Enkrypt AI CEO Sahil Agarwal pointed to research his company conducted, which found that DeepSeek R1 was less safe and secure than models from OpenAI and Anthropic.
For instance, it was 4.5 times more likely than OpenAI’s o1 to generate functional hacking tools.
Agarwal warned that the “AI arms race between the U.S. and China” was leading to potentially dangerous security vulnerabilities as both nations compete for technological supremacy.
In Paris, Agarwal’s concerns were echoed by Professor Yoshua Bengio, who countered Fu’s view by arguing that China’s open-source AI models could be more prone to misuse.
That being said, Bengio acknowledged that the innate transparency of open models means it is easier to identify issues with DeepSeek’s R1 than it is with proprietary models like OpenAI’s.