
UK Government Commits to More Transparent AI Following Accusations of Bias

By James Morales
Edited by Insha Zia

Key Takeaways

  • UK government departments must disclose their use of automated decision-making systems.
  • AI tools used by the Home Office and the Department for Work and Pensions have sparked multiple controversies.
  • The Public Law Project (PLP) has identified 55 automated decision-making systems used by government departments.

With government departments in the UK increasingly embracing AI tools, critics are concerned about the risk of algorithmic bias and a lack of transparency in automated decision-making systems.

However, following pressure from campaigners, the government has committed to publishing a public register of AI tools, the Guardian reported on Monday, Aug. 26.

Government AI Use Lacks Transparency

At the heart of the government’s latest commitment is the idea that people should be entitled to know when AI has been used to make decisions that affect them. 

In the EU, this concept has already been enshrined in law with the AI Act, which guarantees citizens certain rights to launch complaints and request explanations about AI systems.

In contrast, the UK’s approach relies on voluntary reporting standards that have failed to create the intended transparency. 

Although the Responsible Technology Adoption Unit created a repository in 2021 where public bodies deploying AI can record details, just nine entries have been made to date. Meanwhile, the Public Law Project (PLP) has identified 55 automated decision-making systems used by the central government.

Home Office Controversies

Campaigners in the UK calling for greater AI transparency cite numerous examples in which automated systems have been found to perpetuate bias.

One of the most high-profile cases occurred in 2020, when the Home Office agreed to suspend an automated visa application processing program that, according to the Joint Council for the Welfare of Immigrants, “took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software.”

The department was challenged again last year over an algorithmic tool designed to identify fake marriages used to bypass immigration controls. In its complaint, the PLP alleged that the tool “fails certain nationalities at disproportionate rates,” unfairly discriminating against Greeks, Bulgarians, Romanians, and Albanians.

Persistent AI Bias 

Aside from the Home Office, the Department for Work and Pensions has also come under fire for its use of AI.

In January, following parliamentary scrutiny, the DWP stopped routinely suspending benefit claims flagged by an AI-powered fraud detector.

However, despite the department's commitments to enhance transparency and publish a review of bias risks, the PLP concluded last week: “There is a real risk that […] technologies of this nature may lead to discriminatory outcomes” and “there is scant detail on the safeguards that DWP is purportedly adopting to mitigate harm.”

Amid ongoing controversies and persistent public distrust in AI, the Department for Science, Innovation and Technology (DSIT) confirmed this weekend that government departments would disclose their use of such tools “shortly,” in accordance with a mandatory algorithmic transparency recording standard.

James Morales is CCN’s blockchain and crypto policy reporter. He has been working in the news media since 2020, writing about topics such as payments, banking and financial technology. These days, he likes to explore the latest blockchain innovations and the evolving landscape of global crypto regulation. With an educational background in social anthropology and media studies, James uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.