The EU AI Act's introduction of Fundamental Rights Impact Assessments (FRIAs) represents a watershed moment in AI governance. For the first time, some organisations deploying high-risk AI systems must systematically examine how their systems might affect fundamental rights enshrined in the EU Charter. Yet the promise of FRIAs hinges entirely on how they are implemented.
The temptation to reduce FRIA compliance to a simple checklist is strong, and it is dangerous. A fit-for-purpose FRIA is a governance mechanism that enables structured discussion about adverse impacts and embeds rights protection in the development and use of AI systems. Most importantly, it must inform deployment decisions, not merely justify them after the fact.
This new guide, created by ECNL and the Danish Institute for Human Rights, offers a clear roadmap for how organisations can conduct meaningful FRIAs in line with the EU AI Act. Structured in five distinct phases that address different aspects of a FRIA, our guidance is grounded in international and regional human rights standards, responds to implementation realities, and is designed for genuine impact.
The main audience for this guide is deployers of high-risk AI systems under the AI Act, with the content specifically tailored for public authorities and bodies. In addition, the guide can be used by any organisation seeking to deploy AI responsibly and in accordance with fundamental rights.
ECNL and DIHR would like to acknowledge the role of AlgorithmWatch and Michele Loi in the development of this guide, in particular the extensive contribution to the questionnaire template. Moreover, we received invaluable input from individuals and organisations who contributed their expertise, reflections and time on a voluntary basis, for which we are deeply thankful. We wish to extend our sincere thanks to Gianclaudio Malgieri (eLaw, Leiden University), Isabelle Schipper (Netherlands Institute for Human Rights), Iris Muis and Julia Straatman (Utrecht Data School), Patricia Shaw (Beyond Reach Consulting Limited), and numerous civil society colleagues.
The publication was funded by the European Artificial Intelligence & Society Fund.