Key Takeaways
In a US election cycle mired in an unprecedented wave of AI-powered disinformation, legal questions over the use of AI in elections have been pushed to the fore as authorities wrestle with the technology’s implications.
Answers to those questions are beginning to emerge. Seven months after a deepfake robocall impersonated President Joe Biden’s voice ahead of the New Hampshire Democratic primary, Lingo Telecom, the company behind the campaign, has been slapped with a $1 million fine.
The Federal Communications Commission (FCC) announced the penalty on Wednesday, Aug. 21. The fine settles an enforcement action brought by the regulator, which accused Lingo of using “AI voice cloning technology to spread disinformation.”
The Atlanta-based firm has also agreed to implement a “historic compliance plan” requiring strict adherence to the FCC’s caller ID authentication rules, the agency said.
“Every one of us deserves to know that the voice on the line is exactly who they claim to be,” FCC Chairwoman Jessica Rosenworcel said. “If AI is being used, that should be made clear to any consumer, citizen, and voter who encounters it.”
Enforcement Bureau Chief Loyaan A. Egal added that whether orchestrated by domestic political actors or foreign election meddlers, the combination of AI voice-cloning and caller ID spoofing presents a “significant threat” to the American public.
Although the settlement ends the FCC’s enforcement action against Lingo Telecom, Steve Kramer, the mastermind of the deepfake robocall, still faces charges.
Kramer, who claims he orchestrated the campaign to draw attention to AI ethics rather than to manipulate the election result, is currently free on bail while facing 26 criminal counts of voter intimidation and impersonating officials in New Hampshire. He also faces a separate $6 million fine from the FCC.
In separate civil litigation, the League of Women Voters filed a lawsuit against Lingo, Kramer, and the Texas-based robocall broadcaster Life Corporation.
“Defendants sent the New Hampshire Robocalls to New Hampshire voters for the purpose of intimidating, threatening, or coercing, or attempting to intimidate, threaten, or coerce them, into not voting in the New Hampshire Primary,” the plaintiffs alleged.
The robocall featured an AI-generated impersonation of President Biden’s voice telling recipients, “It’s important that you save your vote for the November election,” and, “Your vote makes a difference in November, not this Tuesday.”
The FCC’s latest action charged Lingo with violating the Telephone Consumer Protection Act (TCPA). Meanwhile, in the wake of the New Hampshire deepfake controversy, Biden proposed a general ban on AI voice impersonation in March, and the Senate Judiciary Committee held a hearing on the issue in April.
In the absence of a federal ban, several states have moved to prohibit the misuse of AI in political campaigns. California, Texas, Michigan, Washington, and Minnesota have all banned campaigns from using deepfake representations of political figures. Similar legislative efforts are underway in Georgia, Kansas, New York, Florida, and Wisconsin.