In June 2023, ECNL wrote to the European Commission, urging the inclusion of artificial intelligence (AI) impact assessments as a key instrument for designing and developing AI within the voluntary AI Code of Conduct for companies.
AI impact assessment is increasingly becoming an instrument of choice for diverse policy, regulatory and standard-setting efforts in responsible AI. Based on ongoing research as well as academic and practical work, ECNL strongly believes it is a useful instrument to include in the upcoming AI Code of Conduct. AI impact assessment is both complementary to other instruments, such as AI audits conducted after deployment and use, and a crucial precautionary step in the design and development phase, one that should be grounded in the internationally recognised human rights framework. Moreover, AI impact assessments are already:
- included as a standard within several standard setting documents;
- proposed as a requirement by several regulatory efforts, within the EU and globally;
- in use in major tech companies.
See the Resources list below for more information.
To be meaningful and effective, AI impact assessments need to fulfil basic governance and process criteria:
- Normative framework that defines the scope and content of the assessment, the type(s) of impact being assessed, the benchmarks used for different impacts, and any enforcement or reward mechanisms that ensure the assessment will actually take place at the appropriate time.
- Process rules which define the stages and trigger points for implementing the assessment and its iterations, key procedures, the different roles of those involved, and the requirements and responsibilities of the assessment team.
- Methodology for the assessment which defines the indicators used, the scales for assessment, and guidance for balancing competing interests and conducting proportionality assessments (trade-offs).
- Engagement of different individuals and groups, which defines how impacted stakeholders are identified as well as the methods and processes for their participation and input.
- Oversight of the assessment process, which defines its documentation, publication requirements, and monitoring and feedback mechanisms.
Finally, we urge institutions and policy makers, in particular within the EU, to provide a clear regulatory framework and guidance criteria for conducting meaningful and effective AI impact assessments beyond voluntary codes of conduct.
Resources list:
- Recommendations for Assessing AI Impacts to Human Rights, Democracy, and the Rule of Law, ECNL and Data & Society (2021)
- Critical Criteria for AI Impact Assessment: An Aggregated View, Civic AI Lab, University of Amsterdam (2023)
- NIST AI 100-1 (2023)
- ISO/IEC DIS 42001 Information technology — Artificial intelligence — Management system (2023)
- IEEE 7010-2020
- EU Parliament adopted position on Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act)
- Ad hoc Committee on Artificial Intelligence (CAHAI), Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law (2022)
- U.S. Blueprint for an AI Bill of Rights
- Brazil draft AI Law
- Canada Algorithmic Impact Assessments Tool
- Costa Rica AI Law proposal (2023)
- UK Information Commissioner Guidance on AI auditing, supported by impact assessments (2022)
- Netherlands, Impact Assessment Mensenrechten en Algoritmes (IAMA)
- Alessandro Mantelero, Samantha Esposito (2021), An Evidence-Based Methodology for Human Rights Impact Assessment (HRIA) in the Development of AI Data-Intensive Systems
- Microsoft Responsible AI Impact Assessment Guide (2022)
- Cisco Responsible AI Framework
- Meta Human Rights Impact Assessment (2022)
- Open Loop (2021)
- Amazon HRDD framework