
Joe Biden Proposes Ban on AI Voice Impersonations: Aren’t They Already Illegal?

James Morales
Last Updated March 8, 2024 4:20 PM

Key Takeaways

  • In his State of the Union address on Thursday, Joe Biden said he would ban AI voice impersonation.
  • However, the President didn’t specify what would be covered by the proposed ban.
  • In certain contexts, existing laws already criminalize the use of AI to impersonate people.

Throughout American history, the State of the Union address has served as a register of the nation’s political mood, highlighting each administration’s priorities and reflecting the dominant concerns of its time. In that tradition, Joe Biden’s speech on Thursday, March 7, briefly addressed a pressing concern for many Americans: the rapid acceleration of Artificial Intelligence (AI).

Specifically, the US President called for a ban on “AI voice impersonation.” But what exactly did Biden mean? Without further details, his comments raise more questions than they answer.

What Did Joe Biden Say in his State of the Union Speech?

In a speech dominated by the ongoing conflicts in Ukraine, Israel and Palestine, Biden only turned toward his domestic agenda at the end of Thursday’s address.

On the topic of AI, he said he would work to harness its promise and “protect us from its peril.” And how does he intend to do that? “Ban AI voice impersonation and more!” the president exclaimed. 

It’s unclear just whose voices would be covered by the proposed ban, which could spell the end of viral AI hits like Johnny Cash singing Barbie Girl.

A more likely scenario, however, is that Biden had more malicious applications of the technology in mind.

Recent Controversies Surrounding AI-Generated Voice Impersonations

As it happens, the President himself was recently the subject of a nefarious AI impersonation that used his voice to discourage voting in New Hampshire’s presidential primary.

The deepfake robocall campaign was just one instance of an alarming rise in the abuse of generative AI tools to manipulate voters, which has sparked concerns in the US and abroad.

In the wake of the controversy, the Federal Communications Commission (FCC) issued a statement declaring that deceptive AI-generated calls are illegal under the Telephone Consumer Protection Act (TCPA).

“We confirm that the TCPA’s restrictions on the use of ‘artificial or prerecorded voice’ encompass current AI technologies that generate human voices,” the FCC said at the time.

But if the TCPA already prohibits instances of AI impersonation like the New Hampshire robocalls, what areas might additional legislation target?

What New Rules Could the Government Introduce?

Johnny Cash aside, many living entertainers have expressed alarm that their voices could be impersonated by AI without permission.

The issue took center stage when the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) went on strike last year.

After weeks of negotiations with studios, SAG-AFTRA secured an agreement that established consent and compensation requirements for productions that use AI to replicate actors’ voices and appearances. But from a legal perspective, AI-generated impersonations in the entertainment sector remain a grey area.

Across the US, an emerging body of case law is slowly establishing how new AI techniques interact with intellectual property rights.

Meanwhile, some states have already introduced new rules governing the use of deepfakes in elections.

Federal intervention could ultimately change the course of both unfolding narratives, determining the legality of various applications of AI impersonation.
