
Microsoft’s Brad Smith Acknowledges AI Deepfake Risk in EU Elections

Published February 13, 2024 3:54 PM
James Morales

Key Takeaways

  • The prevalence of deepfakes aimed at swaying elections has exploded.
  • Around the world, AI-generated video and audio have been used to influence voters.
  • Microsoft President Brad Smith said the recent EU elections successfully navigated the threat.

When deepfakes first arrived on the scene, they were easy to identify by telltale signs of AI generation such as out-of-sync lip movements or eerie unblinking eyes. These days, however, increasingly sophisticated AI voice and video engines can generate deepfakes that often pass as real.

With US elections taking place in November, experts suggest we can expect a barrage of fake videos to hit American screens in the coming months. For a glimpse of what’s to come, one need only look at countries where the election cycle is ahead of the US. From Indonesia to the EU, voters have been targeted with a series of AI-generated deepfakes that range from legitimate political advertising to outright misinformation.

Few AI Deepfakes Detected in EU Elections, Says Microsoft President

In a recent address, Microsoft President Brad Smith revealed that although the company detected only a limited number of AI-generated deepfakes circulating around the 2024 EU elections, the threat they pose to democracy remained his “biggest concern about AI.”

Smith emphasized that, while the problem is currently manageable, vigilance is necessary. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish,” he remarked, highlighting the imperative for robust safeguards and international cooperation to combat AI-driven misinformation.

Overall, however, Smith was optimistic about the steps being taken to minimize the threat. He said Microsoft, along with other tech companies, has committed to measures aimed at curbing the misuse of AI. These include improving detection technologies and collaborating with governments to establish regulatory frameworks that ensure AI tools are not weaponized against democratic institutions.

“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” he stated.

AI-Generated Videos Bring Politicians Back From the Dead

“I am Suharto, the second president of Indonesia,” the former general says in a three-minute video originally posted by the deputy chairman of the Golkar party. 

From an outsider’s perspective, there’s nothing unusual about the video. Political parties often mobilize former leaders to promote their cause, even those denounced as dictators by their adversaries. 

But General Suharto died in 2008. The video is a deepfake, “made using AI technology to remind us how important our votes are in general elections,” Golkar’s Erwin Aksa explained.

Across the Bay of Bengal, a similar strategy has been deployed by the DMK. In the run-up to India’s parliamentary elections on February 27, the Tamil Nadu-based party has resurrected its former leader M Karunanidhi to endorse today’s DMK candidates.

The creators of deepfake Suharto and AI Karunanidhi claim their campaigns are innocuous. They’re not trying to fool anyone, they argue, just appealing to voters’ familiarity with the late politicians. 

But there is a darker side to the use of deepfake technology in elections.

Did Fake Audio Recordings Sway Slovakian Elections?

Days before Slovakia’s parliamentary elections in September, a deepfake audio recording in which Progressive party leader Michal Šimečka appeared to admit to rigging the vote circulated on social media. To make matters worse, in another audio clip, his AI-generated likeness discussed raising the price of beer.

The recordings went viral and were widely shared by supporters of the SMER party, which ultimately clinched the election with 10 more seats than Šimečka’s Progressives.

Discussing the audio clips after the election, the politician said the deepfake “probably had some effect” on the results, but acknowledged that there is no way of knowing just how much.

Generative AI and Voter Suppression

In Slovakia, AI deepfakes were used to spread false information about political candidates. But elsewhere, the technology has been used as a tool for voter suppression.

For example, ahead of Pakistan’s election on February 8, a coordinated misinformation campaign spread artificially generated depictions of former president Imran Khan and other politicians in which they appeared to call for a boycott of the election.

The fake news operation was one of the most well-organized attempts at AI-powered election meddling documented so far. It also shares similarities with deepfake campaigns that have been observed in the US.

Deepfakes Threaten to Undermine American Democracy

With the US election season well underway, the use of deepfakes to spread misinformation is already rising.

Last month, thousands of New Hampshire residents received fake robocalls that used an AI-generated simulation of President Biden’s voice to discourage voting in the state’s primaries. “It’s important that you save your vote for the November election … voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again,” the call said.

During a press conference last week, New Hampshire’s Attorney General John Formella said that a cease-and-desist letter had been sent to the Texas-based company Life Corp concerning the deepfake calls.

“We are committed to keeping our elections free and fair,” Formella told reporters, adding that he had ordered a criminal investigation into Life Corp’s alleged election interference.

Curbing the Misuse of AI Tools in Elections

In the wake of the New Hampshire incident, the Federal Communications Commission (FCC) has declared that AI-generated calls are illegal under the Telephone Consumer Protection Act (TCPA).

“We confirm that the TCPA’s restrictions on the use of ‘artificial or prerecorded voice’ encompass current AI technologies that generate human voices,” the FCC said in a statement on Thursday, February 8.

Meanwhile, Kansas legislators have proposed a bill that would prohibit political campaigns from using AI-generated representations of electoral candidates.

Arizona Governor Katie Hobbs, for her part, recently vetoed a bill that would have established criminal penalties for the creation and distribution of deepfakes, instead urging the state Senate to send her another bill she said “significantly overlaps with” the vetoed bill’s intent.

Alongside the efforts of lawmakers, businesses have also moved to prevent the malicious use of AI tools in elections.

For example, OpenAI recently emphasized that the use of its tools to impersonate real people, including politicians, is prohibited. Moreover, the company said that it would not allow its AI to be used by “applications that deter people from participation in democratic processes.”
