EU reaches agreement on the Artificial Intelligence Act

13-12-2023
Civil society organisations (CSOs) fear the final EU AI Act will fall short of effectively protecting people from the harmful effects of AI.

On 8 December, following over 36 hours of closed-door negotiations, the EU institutions reached a political agreement on the Artificial Intelligence Act (AI Act). However, the final text and crucial details will be hammered out in technical meetings scheduled throughout December and January. Based on media reports and press releases, we are concerned that while several fundamental rights safeguards have been adopted, the AI Act falls short of effectively protecting people from harmful uses of AI.  

Limits to some uses of AI 

According to the press release by the European Parliament, the compromise includes several prohibitions. However, many of them include exceptions or are limited to specific circumstances, which raises the question of how meaningful they will be in practice. So far, we know that the AI Act will include, among others:

  • A partial ban on live facial recognition in public spaces, with significant exceptions for the use of these systems by law enforcement to search for certain victims, to locate or identify suspects of certain crimes, and to prevent terrorist threats. These exceptions are likely to pave the way for biometric mass surveillance, which is incompatible with fundamental rights, as warned by the Reclaim Your Face campaign, of which ECNL is part, as well as by independent bodies such as the UN High Commissioner for Human Rights and the European Data Protection Supervisor;
  • A limitation on the use of retrospective facial recognition (e.g. analysis of CCTV recordings with biometric software). Such systems, according to the Parliament press release, should only be allowed in the search for suspects of serious crimes. However, the scope of these crimes and the applicable safeguards remain unclear; 
  • A ban on biometric categorisation systems that infer sensitive characteristics (e.g. race, political opinions, religious or philosophical beliefs, or sexual orientation), although it is unclear whether the wording will be strong enough to prevent abuse and whether it will also apply in the area of law enforcement; 
  • A prohibition of emotion recognition in the workplace and in educational institutions, but not in the areas where this technology, described by experts as pseudo-science, is likely to have the biggest impact on people, such as law enforcement and migration. 

Fundamental rights safeguards and redress 

Following two years of sustained advocacy by ECNL and our partners in the EDRi coalition, the institutions agreed to include an obligation for deployers of high-risk AI systems to assess a system's impact on fundamental rights before putting it into use. According to media reports, this obligation will apply to all public sector deployers and to some private sector deployers. However, fundamental rights impact assessments (FRIAs) will not be meaningful if the final text does not specify the criteria for the assessments or mandate the European Commission to develop guidelines in meaningful consultation with civil society. ECNL has made extensive proposals for such criteria. It is also unclear whether the AI Act will guarantee a sufficient level of transparency of FRIAs and of stakeholder engagement in the process, both of which are necessary to prevent "human rights washing". Finally, we do not yet have a full picture of whether individuals and civil society organisations will be able to challenge poorly performed or inaccurate impact assessments.

Broad loopholes for national security and law enforcement will undermine the regulation 

All of the abovementioned protections are likely to be compromised given that, according to media reports, EU Member States pushed for a blanket national security exemption. This means that EU governments, which fiercely opposed some of the prohibitions during the negotiations, will be able to exploit the vague definition of national security to bypass the fundamental rights safeguards included in the AI Act. This outcome comes despite analysis by ECNL and other experts showing that such a blanket exemption goes beyond the EU treaties and the jurisdiction of the European Court of Justice. It also goes against the expectations of EU citizens, as demonstrated by a poll commissioned by ECNL.

In addition, the press releases remain silent on the final agreement regarding exceptions and derogations for law enforcement authorities. Notably, the final text will likely exempt law enforcement authorities from transparency obligations, such as the requirement to register high-risk AI systems in the public EU database or to publish the results of FRIAs. Without these safeguards, public scrutiny and accountability of law enforcement uses of potentially harmful technology will be severely undermined.

Last but not least, the final text will include a loophole giving AI developers wide discretion to decide that their system is not "high-risk". In such cases, developers would be able to unilaterally exempt themselves from all AI Act obligations and safeguards. It remains to be seen whether the AI Act will contain meaningful safeguards against such abuse.

An opaque, closed-door negotiation  

Despite over two years of advocacy and civil society engagement in the legislative process, most of the compromises were agreed during the final stretch of the so-called trilogue process, which is notoriously opaque and inaccessible and lacks clear procedural rules. Closed-door negotiations lasted over 36 hours in total, with the first session continuing non-stop for 22 hours. During this time, after a night without sleep, the Council Presidency pressured the European Parliament to accept far-reaching compromises in the areas of national security and law enforcement, which had not previously been discussed during preparatory meetings. This prompted a wave of criticism of the trilogue process from civil society, journalists and academic experts. Opaque trilogues prevent meaningful participation by citizens and civil society in public decision-making and privilege well-connected lobbyists. We call on the EU institutions to undertake a deep revision of the legislative process to guarantee that the requirements of transparent and participatory lawmaking are met.