Hope on the horizon for digital civic space, as EU Parliament advances protection of rights

11-05-2023
The wins, concerns and what's missing: ECNL's reaction as the AI Act is adopted in the European Parliament committees.

Today, the European Parliament committees for internal market and civil liberties (IMCO/LIBE) adopted their position on the EU Artificial Intelligence Act. ECNL welcomes important amendments that advance the protection of fundamental rights and civic freedoms, in particular the prohibition of remote biometric identification, both in real time and after the capture of images, and the obligation to conduct a fundamental rights impact assessment before putting an AI system into use. However, we note that stronger safeguards are necessary to protect migrants and refugees from harmful tech and to empower civil society to enforce the AI Act.

The wins 

The position of the IMCO-LIBE committees contains a number of amendments that put civic freedoms first: 

  • Prohibitions: A comprehensive list of prohibited AI practices now includes remote biometric identification, emotion recognition, predictive policing and discriminatory biometric categorisation. 
  • Impact assessment: In line with our recommendations, companies and public authorities deploying high-risk AI systems will also have to assess the impact of these systems on people’s fundamental rights and consult with relevant external stakeholders. Public bodies will have to publish the results of the impact assessment for public scrutiny.  
  • Increased civic engagement: Members of the European Parliament (MEPs) have also heard our arguments for stakeholder engagement in the governance of AI - civil society will have strong representation in the advisory forum to the EU-level “AI Office”, a new agency responsible for the implementation and enforcement of the AI Act. 
  • No loopholes for security / law enforcement use: Unlike the EU member states, the Parliament has fortunately listened to EU citizens and has not proposed loopholes and exemptions for AI systems used in the areas of national security or law enforcement. MEPs have also clarified that the regulation will apply to EU agencies like Europol and Frontex, which increasingly use AI to profile people and predict crime. 

The concerns 

  • Loopholes impede protection of rights: While the committee position improves safeguards for high-risk AI systems compared to the original proposal, it creates a potentially dangerous loophole in determining which systems will be considered “high-risk” in the first place. Under the new version of Article 6, an AI system will no longer be categorised as “high-risk” simply because it is intended to be used in one of the high-risk areas, e.g. law enforcement, migration, biometrics, welfare, employment or justice. The system will also have to pose a “significant” risk to fundamental rights. This is a highly vague concept that is prone to abuse by companies developing AI, who have an inherent interest in not being subject to the law’s requirements. If a company considers that its AI system does not pose a significant risk, it will be able to entirely escape the safeguards put in place by the AI Act. 
  • Fundamental rights protection left solely in the hands of technical experts: Another concerning development is that, at the last minute, MEPs removed changes to Article 40 which would have ensured that issues related to fundamental rights cannot be part of harmonised standards developed by industry-dominated standardisation bodies. This raises concerns that fundamental rights protections could be undermined: the standardisation bodies charged with shaping the interpretation and practical application of fundamental rights in the context of AI lack relevant fundamental rights expertise and, notoriously, do not ensure strong participation of civil society in their discussions and decision-making. 

Still missing 

  • Civil society can’t represent people complaining about AI abuse: The Parliament’s position gives people affected by AI the right to complain to a supervisory authority. However, to make this right easier to exercise, people should also be able to mandate a public interest organisation to represent them, as envisioned in the EU’s data protection law, the General Data Protection Regulation (GDPR). 
  • Civil society has limited voice to raise concerns: The new text of the AI Act also does not give civil society organisations any right to flag violations (e.g. those identified through the analysis of publicly available fundamental rights impact assessments) and complain to the authority directly, without the need to represent a specific affected person. This is especially relevant where the person who suffered harm is in a vulnerable situation and might fear the repercussions of coming forward with a complaint, e.g. if they are an activist, a refugee or an employee. 
  • Prohibitions on harmful technology in the area of migration: The adopted text falls short of ensuring the full protection of fundamental rights in the area of migration. Despite the #ProtectNotSurveil campaign, of which ECNL is a member, MEPs have not prohibited harmful migration tech: predictive analytics systems used to prevent migration, which will exacerbate violence at the borders and lead to push-backs, and automated risk assessment systems that entrench racism and bias and erode human dignity and freedom from discrimination. 

Our additional asks to Parliament  

The position of the committees will be put to a plenary vote in mid-June, when MEPs will still have a chance to improve the law. We ask all MEPs to ensure that discriminatory tech used in the area of migration is prohibited and that civil society has more opportunities to help hold the developers and deployers of AI systems to account.