
OpenAI Bans AI Tools For Political Campaigns: Voting Rights For Humans Only

Last Updated January 17, 2024 1:12 PM
Samantha Dunn

    Key Takeaways

  • OpenAI has laid out its policies around using its tools in campaigning.
  • Transparency is a key focus for the creators of ChatGPT.
  • AI can negatively impact democratic principles if guardrails are not in place.

In a blog post titled “How OpenAI is approaching 2024 worldwide elections”, the AI research organization outlines its commitment to preventing abuse, providing transparency on AI-generated content, and improving access to accurate voting information.

OpenAI, the company behind ChatGPT, says it wants to ensure that its tools are not used to “undermine the democratic process”.

The Politics of AI

In the run-up to the global 2024 elections, OpenAI has said it will not allow people to use its tech to build applications for political campaigns and lobbying, and will not allow applications that deter people from participation in democratic processes. OpenAI is also working with the National Association of Secretaries of State (NASS), and will direct users to non-partisan website CanIVote.org when asked specific voting-related questions.

Concerns around AI as a political tool center on its access to large amounts of data, its machine-learning capabilities, and its potential to spread disinformation and misinformation. Another key concern is AI’s potential to undermine democratic values by perpetuating and amplifying social inequalities. A 2023 briefing document by the European Parliament outlines how AI poses multiple risks to democracies by equipping malicious entities with a “wide variety of techniques to influence public opinion”.

The briefing cited a UNESCO study that underlined how AI could improve the democratic process through educational tools that empower citizens to participate in politics, as well as by supporting policymaking and generating value across its various stages. The paper concluded that “despite its benefits, AI has the potential to affect the democratic process in a negative way”.

Calls For Transparency

As artificial intelligence becomes part of everyday life, its potential to reach into all aspects of our political, social, and economic realities has made regulation a key focus.

OpenAI underlined its commitment to improving transparency across its technology, stating that “better transparency around image provenance—including the ability to detect which tools were used to produce an image—can empower voters to assess an image with trust and confidence in how it was made.” The company also flagged risks such as “misleading ‘deepfakes’, scaled influence operations, or chatbots impersonating candidates”.

Elon Musk chimed in on Twitter, responding to OpenAI’s snapshot of how it is preparing for 2024’s worldwide elections by tweeting: “please make sure that you minimize political bias”.

Tech Leaders Warn of AI Risks

Elon Musk has been an early commentator on the risks of AI, famously comparing its dangers to nuclear weapons and sharing his strong views in several tweets over the past few years.

Speaking at the AI Safety Summit at Bletchley Park, the tech mogul said that “AI is one of the biggest threats to humanity” in a conversation with UK Prime Minister Rishi Sunak.

Other tech leaders working directly in the field of artificial intelligence have also expressed their commitment to taking AI risks seriously, notably OpenAI’s Sam Altman, who signed a statement organized by the Center for AI Safety (CAIS) that reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

AI Trending at Davos 2024

With AI a key focus at the World Economic Forum, regulators and industry leaders have gathered to discuss AI’s impact on global democracy as well as its risks and benefits. Microsoft President Brad Smith joined Davos to discuss the future of AI governance, while Google’s Ruth Porat spoke about the need for robust cybersecurity.

There is consensus among many regulators and industry leaders that AI has the potential to threaten the global processes humans have put in place to protect their rights. Although OpenAI has outlined its proactive stance on AI moderation and its plans to put safeguards in place to protect democratic principles, the full scope of AI’s risks is still not clear.

Discussions surrounding these risks, as well as the benefits of AI, will continue to take place at Davos 2024.
