Echoing a strategy pursued by Donald Trump, Elon Musk has increasingly used AI-generated deepfakes to promote his political worldview and attack Kamala Harris.
Since Harris entered the presidential race in July, Musk has shared doctored images, videos and audio of her in X posts that he claims are obvious satire, but which have raised concerns about misinformation.
The trend began in July, when Musk reposted a fake campaign video whose AI voiceover depicted Harris calling herself “the ultimate diversity hire” and a “deep state puppet”.
The post drew condemnation from Democrats, including California Governor Gavin Newsom, who responded by vowing to sign legislation banning this kind of AI voice manipulation in political ads.
But less than two months later, Musk is back at it.
In his latest provocation, the X owner shared an AI-generated image depicting Harris in communist red, complete with a hammer and sickle emblazoned on her cap, captioned, “Can you believe she wears that outfit!?”. The post was a response to a Harris campaign tweet that read, “Donald Trump vows to be a dictator on day one.”
In both cases, critics called Musk out for failing to label the posts as AI-generated, and some argued they may even violate X’s own synthetic and manipulated media policy. Musk, however, has maintained that the posts are satire and not intended to deceive anyone.
Some users responded with AI mockups of their own, including one showing Musk in a Nazi uniform. Riffing on his original post, the image was captioned, “Can you believe Elon wears that outfit??”
While many social media users can recognize Musk’s posts as parody, other political deepfakes are clearly intended to deceive, and the 2024 campaign season has seen a surge in AI-generated content designed to mislead voters.
Soon after President Joe Biden stepped aside, a deepfake audio recording of Harris incoherently fumbling her words spread rapidly on TikTok, with many users apparently unaware that it had been doctored.
An investigation by Media Matters for America found that videos using the audio had been viewed millions of times without any label identifying it as fake.
In another example, deepfake audio falsely attributed to Senator Elizabeth Warren portrayed her as endorsing policies she has long opposed. The manipulated clip was targeted at specific voter demographics in swing states in an apparent attempt to damage her credibility and sway voters.
Some of the most egregious disinformation cases have resulted in legal action against the perpetrators. Lingo Telecom, for example, was recently fined $1 million by the Federal Communications Commission for its role in a deepfake robocall campaign that impersonated Biden.
But while regulators may crack down on such industrial-scale election interference, platforms like X must also do their part to ensure AI-generated content isn’t presented as real.