
It Was Hackers, Now It’s AI: What Scares US Candidates

Last Updated January 29, 2024 11:38 AM
Giuseppe Ciccomascolo
Key Takeaways
  • Artificial intelligence-generated content can be used to sow confusion and misinformation among voters.
  • The United States took a proactive approach to AI legislation, but comprehensive national regulation like China’s is unlikely.
  • Budgetary constraints hinder the ability of election officials to counter AI misuse.

The main risk of more widespread artificial intelligence (AI) is cybersecurity, a significant problem in a year packed with major elections around the world. AI will not bring new risks to the 2024 elections, but it will worsen existing ones, Moody’s analysts said.

They explained: “Adversaries could use it to confuse voters, for example, by spreading false messages about candidates, impersonating election officials, or spreading incorrect information about locations or hours of operation. Artificial intelligence can also facilitate cyberattacks.”

Interference Is A Risk

The main AI-related challenge facing countries in 2024 revolves around its impact on elections, particularly as GenAI’s rapid progress in image and audio generation blurs the distinction between authentic and manipulated content. The issue gains prominence as around four billion people in more than 50 countries take part in national or supranational elections this year.

A notable instance occurred in September 2023, just days before Slovakia’s elections, when a fake audio recording surfaced on Facebook in which party leaders appeared to be conspiring to manipulate the election. The episode exemplifies the potential for AI-generated content to influence electoral processes.

While AI itself won’t introduce new risks to the 2024 elections, it exacerbates existing ones. Adversaries may use AI to sow confusion among voters by disseminating false messages about rival candidates. They could impersonate election officials or spread misinformation about voting center details and operating hours. Additionally, AI can facilitate cyberattacks by gathering information about targets and creating programs to overload election servers or communication systems.

Nations with robust institutions capable of establishing safeguards and responding effectively to emerging threats stand a better chance of mitigating cyber interference in the electoral process.

Legal Frameworks To Become Clearer

In the United States, advances in AI legislation are anticipated, although the likelihood of a comprehensive national regulation akin to China‘s remains low. Instead of federal regulation, President Joe Biden has taken a proactive approach, initiating voluntary agreements with major AI service providers. Furthermore, through a White House Executive Order, he invoked the Defense Production Act to establish federal agency oversight over the AI industry and critical sectors utilizing AI.

The executive order lets multiple federal agencies develop reporting standards covering the safety, security, and trustworthiness of AI systems. It also directs the formulation of frameworks for the ethical and responsible use of AI. The Blueprint for an AI Bill of Rights, released in October 2022 and designed to shield the American public from potential AI-related harm, will play a pivotal role in shaping policies and safeguarding civil liberties.

Some US States have independently legislated on anti-discrimination, data privacy, and testing standards. These laws potentially outpace national legislation and federal agency rules. California, for example, took action in 2019 by banning deepfakes, subsequently influencing many other states.

Ongoing intellectual property and copyright infringement lawsuits, such as The New York Times’ case against Microsoft and OpenAI, may contribute to legal clarity in these areas, setting precedents. Enhanced legal clarity resulting from federal policies, state laws, and court outcomes will be seen as positives.

Budget Isn’t Adequate To Tackle AI Misuse

Adam Marré, Chief Information Security Officer at cybersecurity firm Arctic Wolf, told Fortune: “Sowing discord, confusion, and chaos through artificial intelligence and social media has become notably easier than hacking into government systems.”

Generative AI makes phishing emails more sophisticated and harder for consumers to detect, as it can correct traditional red flags like spelling errors and formatting issues. The potential for misinformation extends beyond political lies to include false details about voting dates, closed polling places, and misleading ballot confirmations, necessitating heightened vigilance from election officials.

Despite awareness of the AI threat, a significant gap exists in officials’ ability to counter it. More than a third of state and local government leaders say their budgets are inadequate to address cybersecurity concerns for upcoming elections, an Arctic Wolf poll revealed.

A Problem For Big Institutions Too

The funding challenge isn’t exclusive to small municipalities. Even well-funded jurisdictions may feel ill-prepared given the scale of the systems they safeguard and the magnitude of the potential problem.

Election offices, traditionally understaffed and overworked, often share IT resources with other departments, and some lack dedicated staff altogether, notes Lawrence Norden, Senior Director of the Brennan Center for Justice. Budgets must cover a wide array of needs, including poll workers, voting equipment, paper ballots, mail-in ballot trackers, and physical security.

Increased threats to poll workers add another layer of complexity. When it comes to robust cybersecurity measures, some offices simply lack the necessary financial resources, highlighting the urgent need for enhanced funding and support.
