Recommendations for incorporating human rights into AI impact assessments

23-11-2021
New papers by ECNL and Data & Society on the importance of human rights impact assessments in mitigating algorithmic harm and ensuring accountability.

ECNL and Data & Society have published two papers on human rights impact assessments (HRIAs) of algorithmic systems, offering recommendations for two ongoing AI regulatory debates in Europe as well as support for broader research and advocacy.

Recommendations for the Council of Europe's Ad Hoc Committee on AI (CAHAI) - Algorithmic accountability

The first paper, Recommendations for Assessing AI Impacts to Human Rights, Democracy, and the Rule of Law, takes a critical approach to HRIAs, outlining their risks and limitations, from performativity to human rights washing. It then offers suggestions for avoiding these traps, focusing on community-centered and community-driven impact assessments.

In the paper, we emphasise that we are at a turning point for the future of algorithmic accountability. Numerous jurisdictions have already proposed legislation that would implement HRIAs as a tool for bringing accountability to the algorithmic systems increasingly used in everyday life. Despite this heightened focus on HRIAs as an algorithmic governance mechanism, there is still no standardised process for conducting such assessments that can be considered truly accountable.

This paper, written to provide recommendations to the CAHAI as it develops a Human Rights, Democracy, and Rule of Law Impact Assessment (HUDERIA), explores both the opportunities and limitations of HRIAs as a mechanism for holding AI systems accountable. Building on Data & Society's previous Assembling Accountability report, it also provides a framework for evaluating potential HUDERIA tools.

Recommendations for the EU AI Act - Mandating human rights impact assessments (HRIAs)

The second paper is a condensed version of the first, outlining high-level recommendations for including HRIAs in the upcoming EU AI Act, the first legally binding framework on AI grounded in the EU's standards on fundamental rights. Mandating HRIAs is an essential step toward achieving the EU's stated goals for the development and deployment of trustworthy AI. It is also central to understanding and determining the risk levels of AI systems: without understanding an AI system's impact on human rights, there is little evidence or knowledge on which to base its risk classification.

Our recommendations focus on supporting the EU in developing HRIA requirements and on deepening engagement across sectors on impact assessments as a mechanism for algorithmic governance and accountability.