
Microsoft Joins Battle Against AI Deepfakes, Claims “Support is Bipartisan” in 2024 Elections

Last Updated July 8, 2024 2:29 PM
Giuseppe Ciccomascolo

Key Takeaways

  • The issue of deepfakes in election interference has grown increasingly alarming in recent years.
  • Big tech companies like Microsoft have played a pivotal role in combating deepfakes globally.
  • While Microsoft’s support is “bipartisan,” other big tech executives support particular candidates.

Microsoft has recently intensified its efforts to combat deepfakes ahead of the 2024 elections. Deepfakes, sophisticated digital manipulations of videos and audio, pose a significant threat to electoral integrity by spreading misinformation and undermining trust in democratic processes.

In response, Microsoft, alongside industry leaders such as Adobe, Google, Meta, and OpenAI, has spearheaded the “Tech Accord.” This alliance aims to develop advanced tools like watermarks and detection techniques to identify and debunk deepfakes.

Microsoft’s Role In Tackling Deepfakes

To combat deepfakes, big tech companies have been helping political parties and campaigns worldwide. For example, Microsoft has helped governments tackle deepfakes in Taiwan, India, the EU, the UK, and France. This year, the Satya Nadella-led company will attend both US national political conventions to provide information and training, focusing on policy discussions about deepfakes and education, without endorsing any party.

Microsoft will launch a US public awareness campaign encouraging vigilance against deepfake manipulation and urging voters to verify sources and use authoritative election resources. The campaign aims to increase public awareness and action.

“We will not endorse a candidate or political party. Our presence at the conventions instead will be grounded in policy discussions on the importance of combating election deepfakes and promoting education and learning,” Microsoft said in a blog post.

In February, Microsoft, through the Tech Accord commitment, pledged to prevent deceptive AI content from interfering with elections. Steps include increasing transparency, authenticating content, and safeguarding democratic processes.

“We will launch a US public awareness campaign to encourage people to be vigilant to the risk of deepfake manipulation in this year’s election, and to Check, Re-Check and Vote. Though threats to democracy have always existed, the tactics of adversaries are constantly evolving,” Microsoft added.

“This campaign encourages voters to verify sources and points to key authoritative election resources, to ultimately help voters protect themselves from deepfake deception. It’s simple, action-oriented, and increases public awareness.”

“At Microsoft, we understand that technology is not just about hardware and software; it’s about the people it serves and the processes it enhances. As we look toward the conventions, we are reminded of the importance of coming together to address the challenges and opportunities that lie ahead,” Microsoft’s Ginny Badanes said.

Nadella’s Call For Action

Microsoft CEO Satya Nadella had already expressed deep concern over the proliferation of AI deepfakes and called for immediate action to prevent their continued production. During an interview on NBC Nightly News with Lester Holt, he said: “First and foremost, this is extremely alarming and reprehensible. Therefore, we must take action. Regardless of our stance on specific issues, I believe we all benefit from a safe online environment.”

He emphasized the importance of ensuring online safety for both content creators and consumers, stating, “No one desires an online world that is unsafe. It is in our best interest to act swiftly on this.”

Microsoft has faced accusations of censorship in the past, particularly concerning restrictions on Bing search results related to the 1989 Tiananmen Square protests. This led to criticism from organizations like Reporters Without Borders, which highlighted the irony of tech companies restricting freedom of information.

Nadella advocated for implementing “guardrails” to mitigate the risks associated with unchecked AI technologies. He stressed the need for robust measures to promote the creation of safe content, acknowledging ongoing efforts in this area. Nadella also suggested that collaboration between law enforcement and tech platforms could effectively manage these challenges.

Nadella’s words arrived after a deepfake issue affecting Taylor Swift, but this was not an isolated case. In fact, ahead of New Hampshire’s Presidential Primary Election on January 23, 2024, misleading robocalls targeted residents on January 21, featuring an AI-generated voice clone of President Joe Biden. These calls falsely instructed voters to abstain from participating.

An investigation promptly traced the deceptive communications back to Life Corporation, a Texas-based entity, and identified Walter Monk as an involved individual.

AI Tech Alliance

Twenty companies, including Adobe, Microsoft, Google, Meta, and OpenAI, launched a “Tech Accord” to develop tools like watermarks and detection techniques to identify, label, and debunk deepfakes.

AI models like ChatGPT and DALL-E have raised concerns about fake media disrupting major 2024 elections in the US, UK, EU, India, and beyond. “This is a pivotal year for democracy. With over 4 billion people voting globally, security and trust are essential,” said Max Peterson, VP at Amazon Web Services.

Cybersecurity experts have noted that hackers from China, Russia, North Korea, and Iran use AI to enhance their attacks with generated media. The Tech Accord companies pledged to develop detection technology and open standards-based identifiers for deepfakes and watermarks, ensuring platforms and generators use the same tools to combat harmful fake content in elections.

The pledge emphasized that technology alone can’t fully mitigate AI risks, requiring support from governments and organizations to raise public awareness about deepfakes. Companies involved in the pledge include Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.

Bipartisan Support

Microsoft’s participation in the conventions is based on “bipartisan support.” The company says it provides technology for the public good without endorsing any political party. With many elections occurring this year, Microsoft is working to protect voters and election authorities globally. It “empowers them to recognize and critically evaluate deepfakes.”

However, examining the company’s actions and affiliations can reveal implicit biases. Historically, Microsoft employees’ contributions have favored Democratic candidates, and its policy advocacy often aligns with Democratic positions on data privacy and net neutrality, although its stance on cybersecurity and AI regulation remains neutral.

Different US candidates hold varying views on AI and technology. Pro-AI candidates like Joe Biden support AI development with ethical guidelines and regulations, while Andrew Yang has emphasized the need for a national AI strategy. Conversely, Elizabeth Warren has been critical of big tech, advocating for breaking up major firms to prevent monopolistic practices, which could affect AI development.

Donald Trump, who is running for a second term as US President, has garnered support from some segments of the crypto community due to his deregulatory stance, which some enthusiasts view as more favorable for innovation than that of his Democratic counterparts. This dynamic underscores the intersection of tech industry interests and political ideologies, shaping perceptions of policy implications for AI, cybersecurity, and digital innovation.

Tech companies, for their part, typically avoid direct political endorsements, though executives and employees may individually express preferences. Figures from Facebook/Meta, for instance, have faced scrutiny from both political parties over their handling of misinformation. Google maintains a neutral stance, although its executives have historically supported Democratic candidates.

Deepfake And Elections

The problem of deepfakes in election interference has become increasingly concerning in recent years. Deepfakes, which are highly realistic, digitally altered videos or audio recordings, can severely disrupt political processes by spreading misinformation, manipulating public opinion, and undermining trust in democratic institutions.

One recent instance involved a deepfake audio recording of President Joe Biden used in robocalls during election campaigns, spreading false narratives to sway voter decisions. Additionally, deepfake videos of political figures have been circulated online, damaging reputations and influencing voter perception. These deepfakes often spread rapidly on social media and through targeted ads, exploiting personal data to tailor misinformation effectively. Automated calls and messages further disseminate manipulated content.

The prevalence of deepfakes erodes public trust in media, political figures, and the electoral process, undermining informed decision-making. They can deepen political divides by provoking strong emotional reactions, leading to increased polarization. Addressing deepfake interference involves complex legal and ethical questions about free speech, privacy, and digital content regulation.

Combating deepfakes requires advances in artificial intelligence (AI) and machine learning to develop detection tools, as well as government regulations to criminalize malicious deepfakes and public education to foster skepticism about online content. Collaboration between tech companies, governments, and civil society is essential to mitigate the threat.
