ECNL’s call to the European Commission: Include rights-based impact assessments in the voluntary AI Code of Conduct

05-07-2023
Why AI impact assessment is a useful instrument to be included in the upcoming AI code for companies.

In June 2023, ECNL wrote to the European Commission, urging the inclusion of artificial intelligence (AI) impact assessments as a key instrument for designing and developing AI within the voluntary AI Code of Conduct for companies.

AI impact assessment is increasingly the instrument of choice in diverse policy, regulatory and standard-setting efforts on responsible AI. Based on ongoing research as well as academic and practical work, ECNL strongly believes it is a useful instrument to include in the upcoming AI Code of Conduct. AI impact assessment is complementary to other instruments, such as AI audits conducted after deployment and use, and it is also a crucial precautionary step in the design and development phase that should be grounded in the internationally recognised human rights framework. Moreover, AI impact assessments are already:

  • included as a standard within several standard setting documents;
  • proposed as a requirement by several regulatory efforts, within the EU and globally;
  • in use at major tech companies.

See the Resources list below for more information.

To be meaningful and effective, AI impact assessments need to fulfill the following basic governance and process criteria: 

  1. A normative framework that defines the scope and content of the assessment, the type(s) of impact being assessed, the benchmarks used for different impacts, and any enforcement or reward mechanisms that ensure the assessment actually takes place at the right time. 
  2. Process rules that define the stages and trigger points for conducting the assessment and its iterations, the key procedures and the roles of those involved, as well as the assessment team's requirements and responsibilities. 
  3. A methodology for the assessment that defines the indicators used, the scales for assessment, and guidance for balancing competing interests and conducting proportionality assessments (trade-offs). 
  4. Engagement rules that define how impacted stakeholders are identified, along with the methods and processes for their participation and input. 
  5. Oversight of the assessment process, defining its documentation, publication requirements, and monitoring and feedback mechanisms. 

Finally, we urge institutions and policymakers, in particular within the EU, to provide a clear regulatory framework and guidance criteria for conducting meaningful and effective AI impact assessments beyond voluntary codes of conduct.