
Russian AI Tool Can Impact Election: Ukraine Advisor Warns of “Tens of Thousands” Fake Social Media Posts

Last Updated February 28, 2024 4:16 PM
James Morales

Key Takeaways

  • Russia’s ability to meddle in other countries’ elections has been turbocharged by generative AI.
  • According to Ukraine’s national security advisor Oleksiy Danilov, just a few Russian agents can now create tens of thousands of fake social media accounts.
  • To prevent the spread of misinformation, social media platforms and developers are working on tools that can identify AI-generated content.

In 2016 and 2020, Russian troll farms were central to the state’s efforts to influence US elections. Now, as Americans prepare to vote once more in November, Russian interference campaigns are embracing AI-generated content to fuel their social media disinformation efforts.

According to Ukraine’s national security advisor Oleksiy Danilov, generative AI has allowed Russian actors to push their election meddling to a new level, granting them the ability to create tens of thousands of fake social media accounts without the need for an army of trolls standing behind them.

AI a “Huge Step Forward” For Russian Disinformation 

In comments made to the Times, Danilov claimed that the Kremlin had invested heavily in AI-powered espionage. “Artificial intelligence is a huge step forward for Russia,” he said, adding that it makes the impact of the state’s election interference “exponentially greater.”

With just two or three agents, he said, Russian actors could use AI tools to create tens of thousands of convincing fake accounts on Telegram, Facebook, Twitter and Instagram. In Ukraine, the national security advisor said, AI-powered disinformation operations were routinely disseminating up to 166 million posts every week.

Such large-scale deployment of artificially generated social media content threatens to warp Americans’ perception of each other, ramping up divisive rhetoric and driving a further wedge between groups that are already at loggerheads.

Social Media Platforms Crack Down on AI Abuse

Facing an unprecedented volume of fake accounts, disingenuous content and misleading deepfakes, social media platforms have implemented measures to prevent generative AI from being abused.

Currently, Facebook, Instagram and Threads identify images that were created using Meta’s AI models. But earlier this month, Meta announced plans to expand the “Imagined with AI” label to a greater range of AI-generated images.

“We’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI,” the company’s President of Global Affairs, Nick Clegg, stated. “Being able to detect these signals will make it possible for us to label AI-generated images that users post.”

Meta’s latest announcement follows similar moves by X and TikTok, which have also incorporated labels for AI-generated content.

Invisible Watermarks Help Identify AI-Generated Content

To help social media platforms distinguish between real and artificial images and videos, AI firms have embraced practices that identify AI-generated materials.

Of course, most image generators already embed markers in file metadata, but this technique cannot identify synthetic content that has been copied or reformatted. Instead, developers have devised “invisible watermarks” that hide information about how an image was created within the pixel data itself.

Examples of invisible watermarking technologies include Google’s SynthID and Meta’s Stable Signature. 
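To give a flavor of the pixel-level approach, the toy sketch below (plain Python, hypothetical helper names) hides a message’s bits in the least-significant bit of each 8-bit pixel value. Unlike a metadata tag, the mark travels with the pixel data when an image is copied, though unlike SynthID or Stable Signature, this naive scheme would not survive compression or editing.

```python
def embed_watermark(pixels, message):
    """Hide a message in the least-significant bit of each 8-bit pixel value."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear low bit, then set it to the message bit
    return out

def extract_watermark(pixels, length):
    """Recover a `length`-byte message from the low bits of the pixel values."""
    bits = [p & 1 for p in pixels[:length * 8]]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for j, bit in enumerate(bits[i:i + 8]):
            byte |= bit << j
        data.append(byte)
    return data.decode()
```

Because each pixel changes by at most one brightness level, the watermark is imperceptible to viewers but trivially machine-readable. Production systems use learned watermarks that are spread across the image and survive cropping and re-encoding.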

Meanwhile, OpenAI is reportedly working on a new technique that would insert special words into text generated by GPT language models to help identify it later.
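OpenAI has not published how its technique works, but the “green-list” idea from academic research on text watermarking gives a sense of the general approach. In the toy sketch below (hypothetical helper names, not OpenAI’s actual method), each preceding word deterministically selects a “green” half of the vocabulary; a generator that favors green words produces text whose green-word rate sits near 100%, while ordinary text hovers near 50%, letting a detector flag watermarked output statistically.

```python
import hashlib
import random

def green_list(prev_word, vocab):
    """Deterministically pick half the vocabulary as 'green', seeded by the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest()[:8], 16)
    rng = random.Random(seed)
    return set(rng.sample(sorted(vocab), len(vocab) // 2))

def watermark_score(words, vocab):
    """Fraction of in-vocabulary words that fall in their predecessor's green list."""
    hits = total = 0
    for prev, word in zip(words, words[1:]):
        if word in vocab:
            total += 1
            hits += word in green_list(prev, vocab)
    return hits / total if total else 0.0
```

A watermarking generator would simply bias its word choices toward each step’s green list; because the lists are derived from a secret seeding scheme rather than stored in the text, the mark survives copy-and-paste.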
