Civic voices heard: European Parliament calls for facial recognition ban

15-10-2021
ECNL joined 40+ CSOs calling for stronger fundamental rights protection in the resolution on AI.

In an inspiring victory for human rights advocacy and civil society, the European Parliament (EP) adopted a resolution on “Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters” on October 5, 2021, outlining clear red lines.

These rightfully include banning:

  1. Facial recognition and automated analysis and/or recognition of other human features, such as gait, fingerprints, DNA, voice, and other biometric and behavioural signals, in publicly accessible spaces;
  2. Private facial recognition databases in law enforcement;
  3. Predictive policing, i.e. algorithm-driven behavioural predictions that identify people likely to commit a crime on the basis of historical data and past behaviour, group membership, location, or any other such characteristics;
  4. AI-enabled mass-scale scoring of individuals.

Immediate response from a united civil society

Beyond the specifics of the report, we celebrate the extraordinary advocacy and coalition-building efforts displayed by civil society during this process, which ECNL joined and which was shepherded by EDRi. As part of the Reclaim Your Face campaign, of which ECNL is a proud member, civil society in Europe and beyond made its voice heard: we refuse to accept that law enforcement and the judiciary should have access to oppressive technology that can also target activists, human rights defenders, and civil society at large, and we want effective legislation that protects our fundamental rights and civic space.

When the European People’s Party tried to authorise the use of predictive policing and biometric surveillance by law enforcement through amendments to the report of the EP’s Committee on Civil Liberties, Justice and Home Affairs (LIBE), civil society immediately pushed back. Within hours, EDRi put together an open letter, which ECNL signed alongside 40+ other CSOs, urging the EP to vote against the proposed amendments.

At ECNL, we applaud and echo the EP’s view that the EU AI Act should first and foremost aim to prevent harm. This means applying the precautionary principle in all applications of AI in the context of law enforcement.

We especially welcome the following elements of the resolution:

  • The need to push back against the narrative that biometric surveillance is necessary for security and crime prevention (spoiler alert: it’s not);
  • Understanding “trustworthy AI” as systems designed to protect and benefit all members of society, including vulnerable and marginalised groups, with an emphasis on racialised persons, especially since AI systems can perpetuate and exacerbate existing systemic discrimination;
  • Recognising the power asymmetry between those who develop AI systems and those who are subject to them;
  • The need to consider the full scope of adverse fundamental rights impacts of AI systems in the context of law enforcement, which should always be considered high-risk, if not inherently incompatible with fundamental rights;
  • Guaranteeing fundamental rights throughout the entire lifecycle of an AI system.

We are also thrilled that the EP incorporated many civil society recommendations in the resolution, including a call for:     

  • Robust oversight, including algorithmic explainability, transparency, traceability and verification;
  • Meaningful transparency, through the creation of public registers, with information on the types of tools in use, the purposes for which they are used, the types of crime they are applied to, and the names of the companies or organisations that developed those tools (including a specific obligation to disclose the use of Clearview AI or equivalent technologies);
  • Mandatory fundamental rights impact assessments conducted prior to the development of AI systems for law enforcement or the judiciary, with adequate transparency regarding the process, meaningful stakeholder engagement including civil society, and clear safeguards for addressing the identified risks;
  • A moratorium on the deployment of facial recognition systems for law enforcement purposes that have the function of identification;
  • Ending all funding of biometric research, deployment, or programmes that are likely to result in indiscriminate mass surveillance in public spaces;
  • The creation of guidelines by the EU Fundamental Rights Agency, the European Data Protection Board, and the European Data Protection Supervisor, outlining the conditions for the deployment of AI systems in law enforcement and the judiciary.

Overall, the EP has signaled the importance of enabling strong protection of fundamental rights in the AI space. It also acknowledged that civil society has clear standing to raise its voice in this debate. We look forward to continuing to participate in all aspects of this process, from non-binding guidelines to the regulatory framework.

We will continue to push – collectively – for even stronger safeguards, and strive to include advocates from all corners of civil society, especially those who do not traditionally focus on digital rights and those who represent marginalised groups. ECNL has more to say about the future of AI in Europe and beyond – read our position statement on the 2021 EU AI Act here.