In 2016 and 2020, Russian troll farms were central to the state’s efforts to influence US elections. Now, as Americans prepare to vote once more in November, Russian interference campaigns are embracing AI-generated content to fuel their social media disinformation efforts.
According to Ukraine’s national security advisor Oleksiy Danilov, generative AI has allowed Russian actors to push their election meddling to a new level, granting them the ability to create tens of thousands of fake social media accounts without the need for an army of trolls standing behind them.
In comments made to the Times, Danilov claimed that the Kremlin had invested heavily in AI-powered espionage. “Artificial intelligence is a huge step forward for Russia,” he said, adding that it makes the impact of the state’s election interference “exponentially greater.”
With just two or three agents, he said, Russian actors could use AI tools to create tens of thousands of convincing fake accounts on Telegram, Facebook, Twitter and Instagram. In Ukraine alone, Danilov said, AI-powered disinformation operations routinely disseminate up to 166 million posts every week.
Such large-scale deployment of artificially generated social media content threatens to warp Americans’ perception of each other, ramping up divisive rhetoric and driving a further wedge between groups that are already at loggerheads.
Facing an unprecedented volume of fake accounts, disingenuous content and misleading deepfakes, social media platforms have implemented measures to prevent generative AI from being abused.
Currently, Facebook, Instagram and Threads identify images that were created using Meta’s AI models. But earlier this month, Meta announced plans to expand the “Imagined with AI” label to a greater range of AI-generated images.
“We’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI,” the company’s President of Global Affairs Nick Clegg stated. “Being able to detect these signals will make it possible for us to label AI-generated images that users post.”
Meta’s latest announcement follows similar moves by X and TikTok, which have also incorporated labels for AI-generated content.
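The common signal the platforms look for is provenance metadata written into the file when an image is generated. As a rough illustration of the idea (not Meta's actual pipeline), the Python sketch below scans a file for the IPTC "digital source type" value that generators embed to flag AI-made media; the brute-force byte scan is a deliberate simplification.

```python
# Rough sketch: check an uploaded image for the IPTC "digital source type"
# marker that signals AI-generated ("trained algorithmic") media. Real
# platforms parse C2PA manifests and XMP packets properly; scanning the raw
# bytes is a deliberate simplification for illustration.

# IPTC controlled-vocabulary URI for AI-generated media
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's embedded metadata mentions the AI marker."""
    with open(path, "rb") as f:
        return AI_SOURCE_TYPE in f.read()

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        verdict = "labeled: Imagined with AI" if looks_ai_generated(path) else "no AI marker found"
        print(f"{path}: {verdict}")
```

Because the marker lives in metadata rather than in the image itself, a screenshot or re-encode erases it, a limitation the watermarking techniques described below are meant to address.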
To help social media platforms distinguish between real and artificial images and videos, AI firms have adopted techniques for marking AI-generated material at the point of creation.
Most image generators already embed markers in file metadata, but those markers are easily stripped when an image is screenshotted, re-encoded or otherwise reformatted. To address this, developers have created “invisible watermarks” that hide information about an image’s origin in the pixel data itself.
Examples of invisible watermarking technologies include Google’s SynthID and Meta’s Stable Signature.
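Neither SynthID nor Stable Signature is public in full detail; both rely on learned watermarks designed to survive cropping and compression. The toy sketch below uses the far simpler and more fragile least-significant-bit method purely to illustrate the underlying idea of hiding provenance information in pixel values; the message format is invented for the example.

```python
# Toy "invisible watermark": hide a short provenance string in the least
# significant bit of each pixel's red channel. Production systems such as
# SynthID and Stable Signature use learned, compression-resistant watermarks;
# this only demonstrates the core idea of writing into pixel data.
from PIL import Image

def embed(img: Image.Image, message: str) -> Image.Image:
    # Message bytes as a bit string, with a NUL byte as terminator.
    bits = "".join(f"{byte:08b}" for byte in message.encode()) + "0" * 8
    out = img.convert("RGB")
    px = out.load()
    w, h = out.size
    assert len(bits) <= w * h, "image too small for message"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red channel's LSB
    return out

def extract(img: Image.Image) -> str:
    px = img.convert("RGB").load()
    w, h = img.size
    data = bytearray()
    for i in range(0, w * h - 7, 8):
        byte = 0
        for j in range(8):  # rebuild one byte, most significant bit first
            x, y = (i + j) % w, (i + j) // w
            byte = (byte << 1) | (px[x, y][0] & 1)
        if byte == 0:  # hit the NUL terminator
            break
        data.append(byte)
    return data.decode(errors="replace")

# Example: embed(Image.open("gen.png"), "source=imagegen").save("marked.png")
```

Saving the watermarked image as a lossless PNG preserves the hidden message, but a single JPEG re-save destroys it, which is exactly why production watermarks are trained to survive such transformations.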
Meanwhile, OpenAI is reportedly working on a technique that would subtly bias word choice in text generated by its GPT language models, leaving a statistical pattern that detection tools could identify later.
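OpenAI has not published how such a text watermark would work, but published academic schemes give a sense of the general idea. The sketch below is loosely modeled on the "green list" approach from academic research, not on OpenAI's method: the previous word pseudorandomly splits the vocabulary in half, generation is biased toward the "green" half, and a detector flags text in which green words appear far more often than chance.

```python
# Toy statistical text watermark in the spirit of published "green list"
# schemes; OpenAI's actual technique has not been disclosed. The previous
# word seeds a pseudorandom vocabulary split, the generator favors "green"
# words, and the detector measures the resulting statistical surplus.
import hashlib
import math
import random

def is_green(prev_word: str, word: str) -> bool:
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # pseudorandom 50/50 split of the vocabulary

def pick_next(prev_word: str, candidates: list[str], bias: float = 0.9) -> str:
    """Generation step: prefer a green candidate with probability `bias`."""
    green = [w for w in candidates if is_green(prev_word, w)]
    if green and random.random() < bias:
        return random.choice(green)
    return random.choice(candidates)

def watermark_score(words: list[str]) -> float:
    """Detection: z-score of the green-word count against the 50% baseline."""
    n = len(words) - 1
    if n <= 0:
        return 0.0
    greens = sum(is_green(words[i], words[i + 1]) for i in range(n))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)
```

Over a few hundred words, watermarked text scores far above zero on this test while ordinary human writing hovers near it, and no individual word gives the pattern away, which is what keeps the watermark invisible to readers.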