ECNL position statement on the EU AI Act

26-07-2021
In its current state, the AI Act misses an opportunity to effectively protect the rights of persons and communities being subjected to AI systems.

ECNL strongly supports rights-based regulation of artificial intelligence (AI) systems and welcomes the European Commission’s initiative to draft a proposal for an EU-wide AI Act. However, we are deeply concerned about the draft’s current approach. In our submission to the European Commission’s consultation process (deadline August 6), we highlighted the following key points:

Missed opportunity. In its current state, the AI Act misses an opportunity to effectively protect the rights of persons and communities being subjected to AI systems, placing business and operational interests as well as harmonization of the internal market for AI products above people’s fundamental rights.

Power imbalance. The Act does not sufficiently address the severe power imbalance that exists between those who develop and deploy AI systems, and the communities that are subjected to them. This imbalance is especially acute for historically marginalized and under-represented groups.

Inadequate legal requirements. Only a few AI systems are subject to (inadequate) legal requirements, while the vast majority of AI systems are subject to no impact assessment or regulation at all. The AI Act in fact supports the accelerated rollout of AI systems by preventing Member States from regulating them further. ECNL encourages a sector-specific approach to further regulatory requirements, with an emphasis on potential users and developers.

Narrow list of prohibited AI systems. The scope and list of prohibited AI systems are too narrow and fail to include other AI systems that are incompatible with human rights. Emotion recognition technology; biometric categorisation for the purpose of predicting ethnicity, gender, or political or sexual orientation; and risk assessments for criminal justice and asylum should be prohibited entirely. ECNL recommends:

  1. expanding the list of prohibited AI practices in line with the European Data Protection Board and the European Data Protection Supervisor’s demands;
  2. removing the condition to prove “physical or psychological harm”; and
  3. narrowing down the scope of exceptions.

Human rights impact assessment. A thorough, inclusive and transparent human rights impact assessment (HRIA) must be the starting point for all subsequent regulatory action on any AI system. The Act’s obligations generally apply to AI providers only, failing to impose corresponding obligations on AI users. ECNL strongly recommends that AI users conduct human rights due diligence, including human rights impact assessments, before deploying AI systems and continuously thereafter.

Right to redress. An effective right to redress for affected groups should be added to the AI Act, with meaningful support (including adequate resources) for stakeholders so that they can fully exercise this right.

Evaluating the risks to human rights. Criteria for determining the risk level should include, at a minimum, those related to product design (including intent); severity of impact; due diligence mechanisms; causal link; potential for remedy; and context.

See more on this issue in our paper on evaluating the risks of AI systems to human rights using a tier-based approach.

Companies' responsibility. AI providers, the vast majority of which are private sector actors, are tasked with carrying out most of the requirements in the AI Act, yet there is no mention of companies’ responsibility to respect human rights in their activities and supply chains.

Meaningful stakeholder participation. The AI Act misses an important opportunity to enable civic participation and require meaningful stakeholder engagement, especially of at-risk and marginalized groups. Meaningful stakeholder participation, including external stakeholders such as CSOs, should be mandatory in the context of human rights due diligence by AI providers and users, with sufficient resources dedicated to supporting it. ECNL also recommends adding an explicit right for CSOs and external stakeholders to appeal these decisions, and recognising them as having a legitimate interest.

Disproportionate role of standardisation bodies. ECNL is alarmed by the disproportionate role that standardisation bodies such as the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) have, and by their power to adopt standards related to the AI Act. Given that AI providers will de facto follow these standards when conducting conformity assessments, external stakeholders, including civil society, academics and affected communities, should participate in the development of standards.

Disproportionate impact on already marginalized groups. AI systems disproportionately impact already marginalized and at-risk groups, further exacerbating existing inequality. Any analysis should therefore be intersectional at its heart, i.e. acknowledging that persons with intersecting forms of identity face elevated (often unique) harms.

ECNL encourages everyone to use this content in their own submissions. You can download the full statement below: