Netherlands sets precedent for human rights safeguards in use of AI

12-04-2022
The Dutch parliament agrees to make human rights impact assessments mandatory before public institutions use algorithms.

ECNL welcomes the adoption of the motion put forward by Dutch parliament members Bouchallikht (GroenLinks) and Dekker-Abdulaziz (D66) to make human rights impact assessments mandatory before algorithms are used.

We are glad to see that the Dutch parliament recognizes the importance of safeguarding human rights with the help of such an assessment, and we look forward to its implementation in practice by the Dutch government.

The motion was adopted on April 5, 2022, and calls on the government to: “(…) make it mandatory to conduct this impact assessment before using algorithms when algorithms are used to make evaluations or decisions about people; also it asks the government to oblige, where possible, to make impact assessments public (…)”. This mechanism should prevent algorithmic abuse, as in the case of the Dutch childcare benefits scandal.

We call on the Dutch government and other EU Member States to include mandatory human rights impact assessments for all AI systems across the EU within the ongoing development of the EU AI Act. See our earlier recommendations in this regard here.

The IAMA 

The human rights impact assessment (HRIA) developed on behalf of the Ministry, as mentioned in the motion, is the Impact Assessment Mensenrechten en Algoritmes (IAMA). IAMA is an overall AI impact assessment model that supports decision-making, primarily for the public sector, about the development and deployment of AI systems, including human rights considerations. IAMA sets out steps and questions, organised in four key phases, that should be addressed before an AI system is developed and deployed:

  1. The preparation phase: determines why an AI system will be used and what the expected effects and concrete goals are (question zero). 
  2. The input and throughput phase: addresses the technical development of the AI system, including what the system should look like, how it operates, how transparent it is, and what data it is fed, including the quality of that data. 
  3. The output phase: concerns the implementation and supervision of the AI system, how its use plays a role in policy or decision-making, and how the system can be monitored, including the opportunity to overrule decisions made by the AI. 
  4. A cross-cutting phase: analyses whether, and to what extent, the AI affects human rights, and then determines how adverse impacts can be prevented or mitigated. The methodology for this phase contains a separate, elaborate questionnaire for identifying risks of human rights infringements. 
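
To make the structure of the assessment more concrete, the sketch below shows one way the four IAMA phases could be tracked as a simple checklist. This is purely our illustration: the phase names follow the list above, but the example questions, field names, and the very idea of encoding the assessment in code are our assumptions, not part of the IAMA itself.

```python
from dataclasses import dataclass, field

@dataclass
class Phase:
    """One IAMA-style phase with questions to answer before deployment (illustrative only)."""
    name: str
    questions: list[str]
    answers: dict[str, str] = field(default_factory=dict)

    def is_complete(self) -> bool:
        # A phase counts as complete only when every question has a non-empty answer.
        return all(self.answers.get(q, "").strip() for q in self.questions)

# Illustrative checklist: phase names follow the article; the questions are assumed examples.
iama_phases = [
    Phase("Preparation", [
        "Why will the AI system be used?",
        "What are the expected effects and concrete goals?",
    ]),
    Phase("Input and throughput", [
        "What should the system look like and how does it operate?",
        "What data is used, and what is its quality?",
    ]),
    Phase("Output", [
        "How do outputs play a role in policy or decision-making?",
        "How is the system monitored, and can its decisions be overruled?",
    ]),
    Phase("Cross-cutting: human rights", [
        "Which human rights could be affected, and to what extent?",
        "How can adverse impacts be prevented or mitigated?",
    ]),
]

def ready_to_deploy(phases: list[Phase]) -> bool:
    """In this sketch, deployment is only considered once every phase is complete."""
    return all(p.is_complete() for p in phases)
```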

People-based approach and transparency 

For a meaningful implementation of this motion, we recommend the following: 

  1. Clearly define what exactly constitutes an algorithm, as well as a tangible framework for what constitutes ‘evaluations and decisions’ about people, capturing the broadest possible range of situations. This should include evaluations and decision-making that involve groups as well as individuals. 
  2. Any type of developed algorithm ought to be assessed before being used to evaluate or make decisions about people, regardless of whether the AI system itself is considered high or low risk. The risk of mistakes when making decisions about people, no matter how small, is too high a risk and needs to be addressed before an algorithm is deployed. 
  3. Findings on the impact should always be made public. This can go hand in hand with allowing space for privacy- and confidentiality-based (legal) requirements and focusing on what the impact is or could be, so as to allow scrutiny and public discussion where needed. 

Find out more about what to consider when implementing AI impact assessments here.