
Biden-Harris Administration Commits to Using Artificial Intelligence in a “Responsible Way”

Last Updated March 28, 2024 5:16 PM
Samantha Dunn

Key Takeaways

  • The Biden-Harris administration has introduced a set of requirements aimed at regulating the use of artificial intelligence (AI) across federal agencies.
  • This includes a focus on hiring talent across the federal workforce.
  • The administration aims to set a domestic precedent for AI governance and inspire global standards.

The Biden-Harris administration has introduced a comprehensive set of requirements aimed at regulating the use of artificial intelligence across federal agencies.

The administration simultaneously revealed its intention to recruit 100 AI professionals into the federal workforce by mid-2024.

Harris Outlines Measures for Responsible AI Usage

Vice President Kamala Harris shared new standards that require federal entities to implement stringent safeguards for AI applications that might impact Americans’ rights or safety. These new protocols, announced in a White House press call on March 28, dictate that agencies appoint a chief AI officer, disclose their AI engagements, and establish protective frameworks within 60 days.

This directive follows President Joe Biden’s executive order on AI, issued in October 2023, which emphasized the government’s dual objective: to leverage AI’s potential while mitigating its inherent risks.

US Government Mandates AI Risk Assessment

Harris highlighted the administration’s vision for a future where AI is used to advance the public interest, stressing the importance of transparency and accountability: “When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people.”

As part of this vision, US government agencies will be required each year to publish a list of their AI systems, assess the associated risks, and detail how those risks are managed.

This move is part of a broader government effort to foster a culture of responsibility and oversight in the deployment of AI technologies, ensuring they align with national values and public welfare.

The AI initiative includes plans to bolster the federal workforce with 100 AI professionals by summer 2024.

How the EU’s AI Act Differs From The US

The EU’s recently approved AI Act adopts a risk-based framework, focusing narrowly on what it defines as high-risk AI, requiring rigorous assessments and approvals. In contrast, the U.S. casts a wider net, covering all AI applications but leaning on voluntary disclosures rather than mandatory compliance.

While the EU act expresses caution against applications like social scoring and intrusive facial recognition, the U.S. strategy is more about preventing harm without outright bans.

The 2023 AI Safety Summit at Bletchley Park brought together international governments, leading AI companies, civil society groups, and experts to assess the risks of AI and discuss coordinated action.

So far, the EU is the only global power to establish comprehensive legislation on AI.
