150 human rights organisations call on EU institutions to protect people’s rights in the EU AI Act

12-07-2023
As final negotiations on the AI Act kick off, civil society asks the EU to focus on empowering people, limiting discriminatory surveillance and pushing back on big tech lobbying.

As European Union institutions begin negotiations on the final text of the Artificial Intelligence (AI) Act, 150 civil society organisations from around the world, including ECNL, call on them to ensure that the regulation puts people and fundamental rights first. AI systems are already used to monitor and control us in public spaces in ways that violate our civic freedoms; they facilitate violations of the right to claim asylum, predict our emotions and categorise us. These systems make crucial decisions that determine our access to public services, welfare, education and employment.

Without strong regulation, companies and governments will continue to use AI systems that exacerbate mass surveillance, structural discrimination, the centralised power of large technology companies, unaccountable public decision-making and environmental damage.

We call on EU institutions to:

1. Empower people affected by AI systems with a framework of accountability, transparency, accessibility and redress, including:

  • An obligation on developers and deployers of high-risk AI systems to conduct and publish fundamental rights impact assessments and to meaningfully engage civil society and affected communities in this process;
  • Ensuring transparency for people affected by AI as well as a right to complain if their rights have been violated;
  • Empowering civil society to represent people affected and lodge standalone complaints.

2. Draw limits on harmful and discriminatory surveillance, including:

  • Ensuring effective prohibitions, including a full ban on real-time and retrospective remote biometric identification in publicly accessible spaces;
  • Removing transparency loopholes for AI used by law enforcement and migration authorities;
  • Rejecting a blanket exemption from AI Act obligations for systems developed or used for national security purposes.

3. Push back on corporate lobbying and remove loopholes that undermine the regulation, such as the possibility for companies to arbitrarily claim their AI systems do not pose significant fundamental rights risks despite being developed or used in high-risk contexts.