EU AI Act must have a standardised methodology for impact assessments

04-04-2022
The third set of ECNL's proposals for amendments to the EU AI Act focuses on risk designation and impact assessments: flexibility mechanisms that support both innovation and fundamental rights.

Turning point for AI accountability

We are at a turning point for the future of Artificial Intelligence (AI) accountability within the EU. Impact assessments are crucial for understanding and determining the risk levels of AI systems: if we don't understand the impact of an AI system on fundamental rights, we don't have enough evidence to assess its risk level.

Our proposal for amendments calls on the EU to adopt meaningful fundamental rights impact assessment requirements for the development and deployment of AI systems. It is based on ECNL's previous research, as well as on the proposals of the Council of Europe's CAHAI committee for a potential future legally binding AI instrument.

More flexibility to respond to AI challenges

Currently, the provisions of the EU Artificial Intelligence Act (AIA) do not provide enough flexibility to respond promptly to emerging challenges posed by AI systems. The risk-based mechanism adopted by the AIA needs to be more agile and easier to review and update, so that it can adapt to new applications in a future-proof manner. Equally, the impact of AI systems on rights, people and society should be assessed on a rolling basis.

Standardised methodology to underpin impact assessments

Therefore, we recommend that both AI providers and users (i.e., deployers) be responsible for assessing the impact of AI, to ensure a proportionate checks-and-balances process. In the amendments, we propose a set of overall criteria that should always underpin an impact assessment, and we call on the Commission to develop and adopt a more detailed, standardised methodology for the whole impact assessment process.

Such a methodology should, as a minimum, include a basic assessment module for all providers and users other than those of high-risk AI, and a detailed module for providers and users of high-risk AI. It should also include space for engagement and consultation with external stakeholders and for the publication of key findings.

Innovation and trust-building hand in hand

Industry and the private sector are already preparing to standardise impact assessment, risk assessment and certification standards and processes in the design phase (for example ISO, IEEE, AI start-ups and SMEs; see examples here and here).

A basic impact assessment exercise should not be seen as an onerous cost for providers of AI systems, even when such systems do not fall within the "high-risk" areas. This approach supports innovation and trust-building at the same time, as it gives providers an early orientation about potential risks and impacts that could prove unacceptable in the further deployment process. It therefore fully reflects the goals of the EU AIA.

For users, conducting a fundamental rights impact assessment in advance of deployment assists with accountability and potential liability for the use of an AI system. Leading EU legal networks call for model rules on AI impact assessment before deployment. Member States have also demonstrated willingness to impose similar obligations on users of AI systems (see, for example, the Dutch model of human rights impact assessment, Impact Assessment Mensenrechten en Algoritmes, or the German study on overcoming AI discrimination in public use).

Read more details in our proposed amendments below: