ECNL suggestions incorporated in new CoE Recommendation on the human rights impacts of algorithmic systems

13-05-2020
The Recommendation includes guidelines for States and for public and private sector actors on the design and use of AI that complies with human rights and fundamental freedoms.
On 8 April, the Committee of Ministers of the Council of Europe formally adopted Recommendation CM/Rec(2020)1 on the human rights impacts of algorithmic systems. This standard-setting document, drafted by the Committee of Experts on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence, includes guidelines directed at States as well as public and private sector actors on their obligations and responsibilities regarding the design, development and ongoing deployment of algorithm-driven systems, to ensure their compliance with human rights and fundamental freedoms. ECNL was invited to attend the meetings of the Committee of Experts as an Observer and to contribute to the drafting of the guidelines. We are very pleased that most of our suggestions were incorporated in the Recommendation, particularly with regard to the following parts:

  • Preamble: “Engage in regular, inclusive and transparent consultation, co-operation and dialogue with all relevant stakeholders (such as civil society, human rights defence organisations […] paying particular attention to the needs and voices of vulnerable groups, with a view to ensuring that human rights impacts stemming from the design, development and ongoing deployment of algorithmic systems are comprehensively monitored, debated and addressed” (para 5);
  • Testing: “Regular testing, evaluation, reporting and auditing against state-of-the-art standards related to completeness, relevance, privacy, data protection, other human rights, unjustified discriminatory impacts and security breaches before, during and after production and deployment should form an integral part of testing efforts, particularly where automated systems are tested in live environments and produce real-time effects.” (Section B, para 3.3.)
  • Levels of transparency: “States should establish appropriate levels of transparency with regard to the public procurement, use, design and basic processing criteria and methods of algorithmic systems implemented by and for them, or by private sector actors.” (Section B, para 4.1)
  • Standards: “States should co-operate with each other and with all relevant stakeholders, including civil society, to develop and implement appropriate guidance (for example, standards, frameworks, indicators, and methods) for state-of-the-art procedures regarding human rights impact assessment.” (Section B, para 5.1)
  • Consultation: “Private sector actors should actively engage in participatory processes with consumer associations, human rights advocates and other organisations representing the interests of individuals and affected parties, as well as with data protection and other independent administrative or regulatory authorities, on the design, development, ongoing deployment and evaluation of algorithmic systems, as well as on their complaint mechanisms.” (Section C, para 5.1)

On the other hand, we would have welcomed a stronger and more explicit reference in the Guidelines to States’ need to consult with civil society and human rights organisations, both in the process of “drafting, enacting and evaluating policies and legislation or regulation applicable to the design, development and ongoing deployment of algorithmic systems” (Section B, para 1.1) and as they conduct “human rights impact assessments prior to public procurement, during development, at regular milestones, and throughout their context-specific deployment in order to identify the risks of rights-adverse outcomes” (Section B, para 5.2).

Find the complete text of the Recommendation here.

ECNL will continue to advocate for civil society inclusion in these crucial parts of States’ obligations in its current role as a member of the Council of Europe Ad Hoc Committee on Artificial Intelligence (CAHAI), on behalf of the Conference of International NGOs.

To learn more about how ECNL contributes to Europe-wide regulatory standard-setting on Artificial Intelligence, see here.