On 14 June, the European Parliament adopted its position on the EU AI Act, paving the way for the final negotiations on the world’s first binding regulation of Artificial Intelligence (AI). The European Parliament’s position significantly improves fundamental rights protections compared to the original text proposed by the European Commission in 2021. It bans inherently discriminatory uses of AI (e.g., facial recognition systems), improves transparency and accountability requirements, and obliges companies and public institutions to assess the impact of high-risk AI systems on fundamental rights.
Today’s vote was a success for civil society, which relentlessly advocated for these safeguards from the very beginning of the legislative process. The final Parliament text addresses a large majority of the issues that over 120 CSOs, including ECNL, raised in a statement published in November 2021.
While we welcome important advancements in fundamental rights protection, we regret that two important amendments were not adopted: a prohibition of harmful AI systems used in migration contexts, and provisions ensuring a stronger role for civil society in the enforcement of the new rules.
Key safeguards in the European Parliament’s position
Prohibitions of facial recognition and other AI systems incompatible with fundamental rights
The European Parliament supported a full prohibition of the development and use of remote biometric identification (RBI), e.g. facial recognition, in real time. This is despite last-minute attempts by the centre-right European People’s Party (EPP), the largest group in the Parliament, to introduce far-reaching exceptions to the ban. The European Parliament also supported a prohibition of the retrospective use of RBI systems, e.g. to analyse CCTV footage after the event, unless it is authorised by a judge and necessary for a targeted search connected to a specific serious criminal offence.
Other prohibitions include biometric categorisation based on characteristics such as gender, race or religion; predictive policing systems (like the infamous COMPAS tool used to assess the risk of reoffending in the US); the creation of biometric databases through scraping photos from the internet (think: Clearview AI); and emotion recognition systems in law enforcement, border management, education and the workplace.
Mandatory fundamental rights impact assessments
Following the advocacy efforts of ECNL and broader civil society, the European Parliament agreed that harms related to AI systems arise not only from how a system has been designed, but also from how and in what context it is deployed. The text therefore now includes an obligation for all deployers of high-risk systems, from both the private and the public sector, to assess the impact of the AI system on fundamental rights and to specify how they plan to mitigate risks before putting the system into use. When conducting the fundamental rights impact assessment, the deployer should also gather feedback from civil society and from representatives of persons likely to be affected by the system. Thankfully, despite our concerns, technical standardisation bodies will not be able to impose rules for how to conduct this assessment.
Improved transparency
The European Parliament’s text also imposes an obligation on all deployers of high-risk AI systems to register them in a publicly available EU database. In addition, public authorities will have to publish the results of their fundamental rights impact assessments. This is a major win, because it will allow people affected by AI, and civil society watchdogs, not only to access information on which AI systems are sold on the EU market, but also to verify which of these systems are actually put into use and by whom.
Redress for people affected by AI systems
The European Parliament text includes an explicit right for people affected by AI systems to file a complaint with a national supervisory authority if they believe that an AI system they were subjected to violates the AI Act. This is also a significant improvement, since under the original text individuals did not have access to any remedies for AI-related harms.
No blanket exemptions for national security or law enforcement
Last but not least, the European Parliament did not include any exemptions from fundamental rights safeguards for AI systems developed or used for national security, nor did it weaken transparency and accountability requirements in the area of law enforcement.
Concerns
Surveillance instead of protection for people on the move
Members of the European Parliament failed to support amendments aimed at protecting people crossing borders from discriminatory risk assessment systems and from AI systems that predict migration trends in order to facilitate illegal pushbacks. These amendments had been proposed as a result of civil society advocacy under the banner of the #ProtectNotSurveil campaign.
The role of civil society is not strong enough
The European Parliament position ensures a strong representation of civil society in the advisory group to the EU-level AI Office, in line with our recommendations issued in early 2022. However, MEPs did not support amendments that would empower CSOs to represent people affected by AI systems in their complaints to supervisory authorities, or to bring complaints themselves. We see this as a lost opportunity for systemic solutions that would enable civil society to assist and support competent authorities in the enforcement of the AI Act.
Loopholes in categorising an AI system as high-risk
MEPs did not close a loophole in Article 6 of the AI Act, which defines high-risk AI systems. This loophole will allow companies developing AI to argue that their systems, even when intended to be used in pre-identified high-risk contexts such as law enforcement, the workplace or migration, do not actually pose a “significant” risk to fundamental rights. We worry that this will open the door for companies to escape the requirements of the AI Act entirely.
Next steps
The adoption of the European Parliament’s text paves the way for the start of the trilogue – i.e., inter-institutional negotiations between the European Parliament, the Council (composed of relevant EU Member State ministries) and the European Commission. Unfortunately, the Council’s position contains many dangerous provisions, including a blanket exemption from the scope of the AI Act for systems designed, developed or used for national security purposes. We hope that the important fundamental rights protections voted for in the European Parliament will be maintained in the negotiations, and that any loopholes in these safeguards will be removed. Over the coming months, we at ECNL will continue our monitoring and advocacy in this final chapter of the legislative process.