Civil society calls on EU legislators to ensure the AI Act protects the rule of law

27-09-2023
Over 60 CSOs call on European lawmakers to ensure that the AI Act is fully coherent with rule of law standards.

In an open letter, drafted and coordinated by ECNL, the Civil Liberties Union for Europe (Liberties), and the European Civic Forum (ECF), over 60 organisations call for strong rule of law safeguards in the EU Artificial Intelligence (AI) Act. 

The negotiations on the AI Act are now in their final stages, and key transparency and accountability mechanisms that civil society advocated for and secured in the European Parliament's position are under threat. The AI Act is not just a market regulation; it is also closely interlinked with the rule of law: the misuse of AI systems, including their opaque and unaccountable deployment by public authorities, poses a serious threat to the rule of law, fundamental rights, and democracy.

To protect the rule of law from AI deployments, we call for the following:  
  • Fundamental rights impact assessments are a must  

Fundamental rights impact assessments (FRIAs) should be “an obligation for all deployers of high-risk AI technologies” to ensure that their use upholds the principles of justice, accountability, and fairness. We call for rule of law standards to be added to these impact assessments, with a structured framework to evaluate the potential impacts, biases, and unintended consequences of AI deployment. FRIAs must be treated not as a mere recommendation but as a necessary safeguard ensuring that AI systems are designed and deployed in full accordance with the values of the EU and the EU Charter of Fundamental Rights.

  • No general exemption for national security, and no dangerous loopholes allowing big tech to self-declare whether their systems are high risk

EU legislators must reject the Council’s proposed amendment to Article 2, which would exclude AI systems developed or used for national security purposes from the scope of the Act. Furthermore, lawmakers should return to the Commission’s original proposal for Article 6(2), thereby removing the newly added loopholes that would give AI developers the power to unilaterally exempt themselves from the safeguards set out in the AI Act.