Key Takeaways
Upon his return to the White House last month, Donald Trump set about dismantling Biden-era regulations, including a landmark Executive Order on artificial intelligence.
By revoking Biden’s order, Trump set the tone for his administration’s approach to AI, which views most government rulemaking as unnecessary red tape that stifles American innovation.
But Washington’s new tendency toward deregulation runs in the opposite direction to a lot of state-level legislation. In blue states especially, signs of friction are already starting to show.
So far in his presidency, Trump has issued two executive orders that relate to AI.
The first simply revoked Biden’s previous order, ending a mandate that required federal agencies to engage in AI rulemaking and risk assessments.
The second called for the development of AI “free from ideological bias or engineered social agendas.”
It also orders staffers to develop an action plan “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness and national security.”
Commenting on the end of Biden-era policies, Lily Li, an attorney specializing in AI law, told CCN that most government agencies responded to the Biden order by reviewing AI security, privacy and potential algorithmic bias.
However, she said the latter category of risk assessment “rubs the Trump administration the wrong way.”
Reading between the lines, this aversion to agencies weighing bias risks in their procurement decisions can also be seen in Trump’s flagship AI order, whose reference to “social agendas” appears to nod to diversity and anti-discrimination mandates.
Now, as administration officials go about reversing Biden’s bias mitigation efforts, the course of government AI use is set to alter dramatically.
One area Li said she is keeping a close eye on is law enforcement agencies’ use of biometric surveillance tools.
Specifically, she suggested that under Trump, Immigration and Customs Enforcement (ICE) could move to ramp up the use of facial recognition at the border. And unlike during the previous administration, it wouldn’t have the same mandate to address algorithmic bias risk.
Although ICE is among the most salient examples, many federal agencies would likely expand their use of facial recognition if freed from rules designed to protect marginalized populations from discrimination or over-surveillance.
However, the technology is highly regulated by state statutes relating to privacy concerns and discrimination risks. And if anything, these regulations are increasing, not decreasing.
“We’re already seeing legislation at the state level, with the Colorado AI Act going into effect next year, [and] the California Privacy Protection Agency engaging in rulemaking right now regarding AI technologies,” Li observed.
This trend—more rules at the state level and fewer at the federal level—will inevitably create friction, and the likelihood of opposing sides clashing in the courtroom is high.
Meanwhile, although deregulatory winds in Washington may streamline government AI procurement and deployment, they won’t be able to prevent state lawmakers from imposing new risk assessment and reporting obligations on AI developers.
Among the most comprehensive state frameworks proposed thus far is Colorado’s Act Concerning Consumer Protections in Interactions With Artificial Intelligence Systems.
Emerging from a state legislature with one of the strongest Democratic majorities in the country, the bill devotes significant attention to the issue of algorithmic discrimination and grants the Colorado Attorney General sweeping powers to prosecute non-compliant developers and deployers of AI systems.
Notably, the legislation stands in direct opposition to the Trump administration’s anti-DEI (diversity, equity and inclusion) agenda. For instance, it includes exemptions for AI applications that are intended to “increase diversity or redress historical discrimination.”
Nor are blue states the only ones moving ahead with new AI regulations.
In March 2024, Tennessee enacted the Ensuring Likeness Image and Voice Security (ELVIS) Act. As perhaps the strictest deepfake law in the country, the ELVIS Act prohibits any unauthorized use of a person’s voice or likeness.
Even some of Trump’s own social media posts appear to run afoul of the law.