A bipartisan bill targeting the misuse of artificial intelligence in political campaigns has been co-sponsored by Democratic Minority Leader Vic Miller and Republican Rep. Pat Proctor.
The proposed bill aims to curb the creation and dissemination of AI-generated “deepfakes,” or false representations of candidates and public officials, in political advertisements.
With advertising central to modern political campaigns, the legislation’s introduction underscores growing concern over the impact of AI-generated media on the integrity of elections.
The bill specifically prohibits the use of AI to craft images, audio, and video that falsely depict a candidate or public official, acknowledging the rapid advancement and accessibility of AI technologies such as ChatGPT, Midjourney, and advanced photo-editing tools.
While the bill does not prohibit basic photo editing and allows AI-generated media accompanied by clear disclosures, concerns remain about the adequacy of the proposed penalties. Calls for more severe consequences reflect the urgency of preventing misinformation, especially when it is timed to critically affect election outcomes.
The state of Kansas previously passed a related generative artificial intelligence policy, which took effect in July 2023.
State lawmakers are trying to get ahead of potentially damaging AI manipulation in the upcoming US elections by ensuring there are legal consequences for anyone attempting to deceive voters.
Other US states have introduced similar bills. Representative Gail Chasey spoke to local media about a comparable AI bill, HB 182, sharing her view that it is not an “anti-AI bill” but rather an effort by lawmakers to “put some guidelines” around the technology’s use.
HB 182 is awaiting a hearing before the House Judiciary Committee. Should the bill pass, it would take effect immediately.
Recent examples of AI misuse in the political sphere, as well as in other public arenas, have heightened concerns around deepfakes. Manipulated videos of public figures appearing to say things they never did, and altered images designed to mislead viewers, have drawn criticism of the social media platforms through which these deepfakes are often shared.
Taylor Swift was the victim of deepfake images last month, which led X to block searches for the American singer.
Public figures have also acknowledged the potential of deepfakes to sway public opinion and influence election outcomes. In a recent interview with Fox Business host Maria Bartiromo, former President Trump called AI the “most dangerous thing out there.”
As AI capabilities become more accessible and their applications more widespread, the urgency of developing effective countermeasures grows. US lawmakers in various states are proposing AI regulation, signaling a growing consensus on the need for regulatory frameworks that keep pace with technological advancements.