How can the EU include stakeholders to ensure effective implementation of the EU AI Act?

16-03-2022
The first set of ECNL's proposals on the EU AI Act amendments focuses on the governance structure and supervisory tasks.

As part of our ongoing advocacy for a regulatory framework for AI and human rights, ECNL has developed and submitted three sets of proposals with recommendations and amendments to EU policymakers on the draft Artificial Intelligence Act (AIA).

Our proposals are based on extensive in-house and external research, expert opinions, ECNL's engagement in the Council of Europe's CAHAI work, our engagement in UNESCO's work on AI recommendations and in the OECD's working group on trustworthy AI, as well as two years of discussions with a variety of stakeholders.

We argue that the overall goal of a regulatory framework for AI is to ensure truly trustworthy AI, providing flexibility for the development of AI while protecting the fundamental rights of all people affected. Specifically, our proposals aim to strengthen the AIA's legal framework, remove some of its inconsistencies, ensure meaningful stakeholder engagement in AI governance, and support innovation and a flexible, rights- and risk-based approach.

Participation, exemptions, impact assessment

ECNL's proposals for the AIA amendments include:  

  1. Stakeholder engagement in AI governance structure and supervisory tasks;
  2. Scope of the AIA; namely, Article 2 – exemptions and exclusions from the AIA; 
  3. Risk designation and impact assessment of AI systems – including how to integrate fundamental rights risk assessment with evidence of existing nascent research and models. 
[Image: circle on a network background with the text "ECNL proposals to the EU AI Act amendments: Stakeholder engagement in AI governance"]

Governance structure and supervisory tasks of the EU AI Act

In this article we present the first set of our proposals on the governance structure and supervisory tasks of the EU AI Act.

Our proposal introduces amendments to ensure an accessible and effective mechanism through which interested stakeholders can raise concerns regarding the implementation of the AIA. These amendments relate to Articles 57 and 59 of the AIA, which regulate the governance structure and supervisory tasks of the EU and the Member States.

For the purpose of supervising the implementation of the AIA, the currently proposed governance system includes the establishment of a European Artificial Intelligence Board (the Board) consisting of national supervisory authorities, the European Data Protection Supervisor (EDPS), and the European Commission as Chair of the Board. In other words: both the Commission and Member States will evaluate their own implementation by and among themselves – with the only notable exception of the EDPS. As such, the AIA fails to provide the opportunity for other relevant stakeholders – such as CSOs – to contribute to the evaluation and supervision of the implementation of the AIA.  

The problem with excluding external stakeholders from the governance of the AIA is that it will hinder effective implementation of the AIA and, ultimately, the trustworthiness of (high-risk) AI systems. Shutting relevant external stakeholders out of the governance of AI supervisory authorities means missing out on the knowledge, observations, expertise and lived experience needed to understand the actual impact of AI systems.

Who is going to address their blind spots? If someone becomes aware of consistent structural misapplications of the AIA that result, for example, in discrimination against a particular community, where and when can they raise this and call for a review? The Act does not say.

In addition, given that not everyone – particularly members of marginalised and vulnerable communities – has the resources to litigate misapplications that harm them, they need access to a supervisory body where they can still raise their concerns.

We therefore propose that, at a minimum, the AIA provide for an advisory group composed of relevant external stakeholders (including CSOs and representatives of affected groups) to assist the AI Board in its supervisory tasks.

Read more in our proposal:
Background - Expected timeline of the EU AIA (as of March 2022)

April 2021: The Commission announced the proposal for an EU AI Act. The draft text is now being discussed by the co-legislators: the European Parliament and the Council (EU Member States).

In the European Parliament, the leading committees responsible for the proposal are the Committee on Internal Market and Consumer Protection (IMCO; rapporteur: Brando Benifei, S&D, Italy) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE; rapporteur: Dragos Tudorache, Renew, Romania) under a joint committee procedure. The Legal Affairs Committee (JURI), the Committee on Industry, Research and Energy (ITRE) and the Committee on Culture and Education (CULT) are also tasked with issuing Opinions on specific areas of the Act within their competence.

11 April 2022: The IMCO and LIBE Committees are expected to table their Reports, with the possibility of tabling amendments until 18 May 2022.

26-27 October 2022: Expected final vote on the IMCO-LIBE Reports.

November 2022: Expected vote in Plenary. Following the vote in Plenary, the amended text will undergo further informal negotiations (the "trilogue") between the Commission, the Council and the Parliament in order to reach agreement on an identical text, which will then be put to a final vote in the European Parliament.

Second half of 2023: Realistically, a final vote on the AIA is not expected before the second half of 2023 (or even later).

In the Council, negotiations to find a common position among Member States have started. Both the Slovenian and French presidencies have put forward changes to the Commission's proposal. For the most recent analysis of the proposed amendments, see EDRi's analysis here.