As the AI sector continues to grow, new regulations keep appearing. All over the world, jurisdictions are trying to figure out how to approach the technology. Some believe regulation will help AI integrate more safely into our lives.
Others argue it will smother the technology before it reaches its full potential.
To my mind, the real challenge is not whether we regulate AI; that much is a given. The question is how we do it.
When President Trump entered office for the second time, AI innovation became one of the main focuses of his administration. In early 2025, he signed Executive Order 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence.”
The aim of the order was to revoke the previous administration’s AI policies, thus eliminating what it considered burdensome regulations that hindered innovation.
This approach emphasizes promoting AI development to maintain U.S. global dominance in the field, and revising or rescinding policies that conflict with that goal.
The U.S. government is actively recruiting talent and organizing working groups to advance AI research without setting any particularly restrictive measures. At least, for now.
In contrast to this "open doors" approach, Europe decided to tighten the reins. With its AI Act, whose first provisions took effect early this year, the EU aims to be a leader in AI ethics and the responsible use of this technology.
An admirable goal, to be sure, but leading on ethics does not always mean leading on innovation.
In fact, a troubling trend is emerging: more and more European tech companies are relocating to the U.S. or other jurisdictions with a more favourable regulatory outlook on AI.
Previous research from Hoxton Ventures found that nearly all European tech startups with more than $500 million in revenue succeeded only after expanding into the U.S. market.
As such, new firms with their sights set on scaling globally will naturally follow suit.
We’ve seen this happen before. When crypto regulations tightened in Europe, crypto startups simply moved to friendlier countries. Now the same trend is repeating itself in the AI sector.
Unlike crypto, AI is not a niche technology. It is a revolutionary development that will power everything in the future, from the military to healthcare, finance, and education.
Some argue that AI is too powerful to go unregulated, but choking its progress will accomplish nothing except leaving us behind on the global stage.
You don't need classified labs or a national budget to build powerful AI models. DeepSeek proved that not long ago, when a little-known Chinese team built a highly capable model using common, off-the-shelf GPUs.
No secret technology, no billion-dollar facilities, no vast teams; just time and skill.
That’s the world we live in now. AI development is decentralized, fast-moving, and can be accomplished even by small groups.
Regulation can't stop it; it can only shape how the technology is used. But the stricter the rules become, the more talent and capital they push away.
In the EU, users can't access some of the latest ChatGPT features, such as advanced voice chat or memory. This is not a win for safety. It's a missed opportunity for local innovation and user benefit.
If Europe’s regulatory complexity makes it nearly impossible for young AI companies to experiment and advance, it risks becoming an “innovation desert” before too long.
Don't get me wrong: despite my earlier criticism, I am not saying that regulation is the enemy here. It isn't. Overly rigid regulation, however, can be.
Europe thrives on structure and bureaucracy, and that won't change overnight. What the EU can decide is how flexible it is willing to be in response.
Instead of blanket bans or rigid rules, I believe Europe should consider a more phased approach to regulation. Allow regulatory sandboxes for AI startups, so that companies can test and iterate in safe, controlled environments.
Regulate AI use, not its development; yes, that distinction matters. You can't stop someone from learning how to write code, but you can regulate what they do with it.
If transparency and ethical practices matter to the EU, then companies that invest in them deserve to be rewarded. That would encourage wider adoption of those practices.
Being on the lookout for AI misuse is not the same as outright blocking open research or experimentation.
In the end, it’s not about choosing between innovation and safety. It’s about designing rules that support both.