On Friday, December 8, lawmakers reached a provisional deal on new rules to govern the use of artificial intelligence (AI) in the EU, finally reaching a compromise on the contentious issue of AI-powered surveillance tools.
After heated negotiations, the EU Council successfully pushed to water down the European Parliament’s proposed restrictions on the use of facial recognition software in public spaces. But while the concessions were deemed necessary to prevent the legislation from collapsing, they have far-reaching implications for privacy and human rights that critics argue could disproportionately impact migrants.
At heart, the AI Act intends to protect the rights and freedoms of EU citizens in the context of powerful AI systems and their increasingly consequential role in society.
However, during negotiations, parliament’s mandate to protect civil rights clashed with the position of Council members who sought exemptions for law enforcement agencies, which they argued should be able to use a class of AI dubbed remote biometric identification (RBI).
In the end, the European Parliament backed down from its hardline stance on RBI. Under the provisional deal, the technology would be banned except “in cases of victims of certain crimes, prevention of genuine, present, or foreseeable threats, such as terrorist attacks, and searches for people suspected of the most serious crimes.”
For critics of state surveillance, such a broad definition offers little guarantee against the abuse of RBI by government agencies and leaves the question of checks and balances for another day.
Exemplifying widespread criticism of the technology, a joint statement authored by over 100 organizations including Amnesty International, Liberty and Privacy Network argued that RBI systems “reinforce the over-policing, disproportionate surveillance, detention and imprisonment” of migrants and other groups that are already subject to structural discrimination.
The coalition called for EU lawmakers to enforce strict safeguards to protect fundamental rights including freedom of assembly and expression, the right to a fair trial, the presumption of innocence, non-discrimination, and the right to claim asylum.
Alongside RBI, campaigners have also fiercely rejected the use of predictive AI systems in the areas of policing, criminal justice and border control. They argue these technologies compound existing prejudices and run counter to the presumption of innocence underpinning European legal systems.
The provisional agreement on the AI Act acknowledged that the use of AI systems in the fields of migration, asylum, and border control warranted the additional safeguards reserved for high-risk applications of the technology. However, it fell short of the strong restrictions on RBI and predictive policing that many campaigners had called for.
With the EU holding its breath as Parliament and the Council work toward a final text for the AI Act, the issue of migrant rights may not be at the top of everyone’s agenda. However, the potentially authoritarian deployment of AI will concern a broad contingent of Europeans if lawmakers fail to find the right balance of interests.