Examining Loopholes in the AI Act’s Predictive Policing Ban

~ This post has been authored by the Editorial Team of The Writ Review.

The EU Council approved the final text of the Artificial Intelligence Act on 21 May 2024, a significant step after years of anticipation. This pioneering regulation positions the EU as a global leader in establishing a comprehensive legal framework for AI. The Act aims to protect fundamental rights and foster safe and trustworthy AI through a risk-based approach, applying stricter scrutiny to higher-risk applications. At the highest risk level, Article 5 lists “prohibited uses” of AI, citing potential threats to fundamental rights such as human dignity and freedom (see Recital 28). Questions remain, however, about whether bans on specific AI applications, such as predictive policing, will have practical impact or serve primarily symbolic purposes. This raises broader concerns about the Act’s commitment to human-centric AI and the inclusiveness of its protective scope.

Predictive policing, although not explicitly defined in the Act, is commonly understood, following Perry et al., as the use of analytical techniques to predict criminal behavior and identify potential targets. It encompasses predictive mapping, which pinpoints likely crime hotspots, and predictive identification, which estimates the likelihood that particular individuals will become victims or perpetrators of crime. While predictive identification holds promise for crime prevention, it has drawn sharp criticism, particularly on human rights grounds.

Predictive identification, originally classified only as a high-risk AI application, is now explicitly banned under Article 5(1)(d) of the Act. This post critiques the ban’s effectiveness, tracing the lobbying efforts by human rights groups that secured its inclusion after earlier drafts overlooked the issue. It then examines two potential loopholes, the “human in the loop” exception and the Act’s national security exemption, which could weaken the ban’s capacity to curtail predictive identification.

Prohibitions under the Act

Before the final adoption of the AI Act, predictive identification had already faced scrutiny, notably following experiments such as the Netherlands’ “living labs”. Amnesty International’s report highlighted the “Sensing Project”, in which data on passing cars (such as license plates and brands) was used to predict petty crimes like pickpocketing and shoplifting. The system disproportionately targeted cars with Eastern European plates, exposing the bias that predictive identification can encode. In 2020, a Dutch court ruled that SyRI, an automated fraud detection tool, violated the right to private life under the ECHR; the related child benefits scandal, in which risk profiling relied on criteria such as “foreign names” and “dual nationality”, laid bare the same dangers.

Initially absent from the Commission’s proposal, a ban on predictive policing gained traction after human rights groups, led by Fair Trials, lobbied for its inclusion in the Act. The IMCO-LIBE report subsequently recommended banning predictive identification under Article 5, citing concerns over the presumption of innocence, human dignity, and non-discrimination. Intensive lobbying continued throughout the negotiations, supported by over 100 human rights organizations, and the prohibition was ultimately retained in the final text.

The issue of oversight

The prohibition in the Act targets predictive identification based solely on profiling or on an assessment of an individual’s personality traits. However, ambiguity surrounds the distinction between profiling and assessment, particularly as regards the level of human involvement each requires. The Act references GDPR definitions but offers little clarity on terms such as “automated processing” and “meaningful human intervention,” which are crucial for interpreting the ban’s scope. This uncertainty could enable law enforcement to justify predictive identification with only minimal human oversight, undermining the ban’s effectiveness.

Furthermore, the Act excludes from the prohibition AI systems that merely support a human assessment of a person’s involvement in criminal activity, where that assessment is already based on objective and verifiable facts directly linked to that activity. For this exception to be workable, clear criteria are needed for what counts as meaningful human involvement, since the current definitions sit uneasily with how such systems operate in practice. Additionally, predictive identification that falls outside the prohibition may still be classified as high-risk under the Act, subjecting it to stringent safety and transparency requirements.

The Act’s requirement of human oversight for high-risk AI applications likewise lacks a clear definition, which complicates efforts to ensure the responsible and ethical use of predictive identification. Moreover, studies indicate that human decision-makers interacting with AI outputs can introduce biases of their own, distinct from those inherent in the AI systems themselves. Even with human oversight, therefore, concerns persist about the equitable use of predictive identification systems.

The Act also contains a broad exemption for AI systems used for national security purposes, reflecting the fact that national security remains the sole responsibility of the Member States (Article 4(2) TEU). This exemption has sparked concern about its impact on the ban on predictive identification. While activities such as counter-espionage and counter-terrorism are generally recognized as falling within national security, Member States retain considerable discretion over which predictive identification practices they treat as exempt.

Digital rights NGOs criticize this exemption as overly broad and potentially at odds with European law. Groups such as Article 19 and Access Now argue that it could open a digital rights gap under the guise of national security. They fear that Member States might invoke the exemption to justify predictive identification, hollowing out the ban and endangering fundamental rights. Predictive policing in counter-terrorism, for example, could disproportionately target minority communities and non-Western individuals, exacerbating existing biases.

These groups instead advocate a case-by-case approach to national security exemptions, consistent with ECJ case law such as La Quadrature du Net, so that the deployment of predictive identification technologies balances national security against the protection of fundamental rights.

Conclusion

The ban on predictive identification, initially hailed as a victory for fundamental rights, is significantly weakened by two main loopholes: the “human in the loop” exception and the national security exemption. The former allows law enforcement to claim that a human was involved in the system’s assessment, a claim that, given the vague definition of “meaningful human intervention,” could defeat the ban’s purpose. The latter, broad and ambiguous as it is, gives Member States ample room to bypass the ban altogether. Together, these gaps could render the prohibition on predictive policing merely symbolic, offering little real protection for fundamental rights. That outcome sits uneasily with the AI Act’s stated ambition of fostering human-centric, trustworthy AI. In its current form, the ban may fail to protect marginalized groups, such as minorities and non-Western individuals, who risk being disproportionately targeted by law enforcement, particularly in counter-terrorism efforts. For the ban to serve its intended purpose, the Act needs precise definitions, strict guidelines on human involvement, and a balanced, case-by-case approach to national security exceptions. Without such reforms, the prohibition risks remaining ineffective, unable to address the genuine risks and potential dangers of AI in policing.