The Council of Europe (CoE) Ad Hoc Committee on Artificial Intelligence (CAHAI) has conducted a multi-stakeholder consultation process to gather views on potential elements of a legally binding framework on artificial intelligence (AI) based on the CoE’s standards on human rights, democracy and the rule of law.
The overwhelming majority of submissions to the consultation (234 out of 260) came from organisations and institutions based in Europe, with several coming from Kenya, Mexico, Chile, Canada, the United States of America, Guatemala, Israel, Jordan, Nicaragua and Hong Kong. After significant awareness-raising efforts by its representatives, civil society was the most represented sector, accounting for 31% of submissions, followed by representatives of government and public administration (28%), the academic and scientific community (20%) and the private business sector (19%).
Priorities: regulation, human rights protection and inclusion
There are three main takeaways from the consultation responses:
1. Pro-regulation, anti-self-regulation
The overwhelming majority of stakeholders support banning AI systems that have been proven to violate human rights or to undermine the rule of law and democracy. In addition, there is overwhelming support for regulating all AI systems, irrespective of their risk level, along with widespread mistrust of self-regulatory mechanisms.
2. Strong preference for a human rights-based approach
Stakeholders want assurance that AI systems will be assessed against a framework of human rights and freedoms. That is why the human rights impact assessment (HRIA) is the governance mechanism preferred by the majority of stakeholders.
3. Participatory process and call for inclusion
Stakeholders called for more participatory inclusion, especially of groups underrepresented in public institutions and AI policymaking. Moreover, civil society was loud and clear: we want to participate (and we were the most active stakeholder in this process).
Main findings of the survey
Here are some of the most interesting details from the responses:
Impact on human rights, rule of law and democracy. When asked to indicate 3 areas (out of 15 options) in which the deployment of AI systems is believed to pose the highest risk of violating human rights, democracy and the rule of law, respondents most often selected “justice” (20%), followed by “law enforcement” (19%) and, at the same level, “national security and counter-terrorism” and “social networks/media, internet intermediaries” (19%). “Environment and climate” was the least selected option.
Banning the development, deployment and use of AI systems that have been proven to violate human rights or undermine democracy. Most of the respondents (55%) consider that the development, deployment and use of such AI systems should be fully banned.
Should the use of facial recognition in public spaces be prohibited? The majority of respondents (53%) agree, whereas a quarter (25%) disagree and the remainder (22%) are indifferent or have no opinion.
Specific types of AI systems deemed to represent the greatest risk to human rights, democracy and the rule of law. Respondents were asked to identify 5 out of 18 given examples of AI systems. An overwhelming majority of respondents selected AI systems for “Scoring of individuals by public and private entities” (92%), followed by “Facial recognition supporting law enforcement” (91%), “Emotional analysis in the workplace to measure employees’ level of engagement” (72%), “Deep fakes and cheap fakes” (50%) and “AI applications to prevent the commission of a criminal offence” (48%).
What combination of mechanisms should be preferred to efficiently protect human rights, democracy and the rule of law? Asked to select a combination of 3 out of 6 given options, a significant majority of respondents (81%) indicated “Human rights, rule of law and democracy impact assessments”, followed by “Audits and intersectional audits” (70%) and mechanisms of “Certification and quality labelling” (51%).
What to do in case of development, deployment and use of AI systems that pose “high risks with high probability” to human rights, democracy and the rule of law? A majority of respondents (52%) consider that such AI systems should be regulated by law, while 29% say they should be banned and 11% would prefer a moratorium. Very few respondents (5%) opted for self-regulation (ethics guidelines, voluntary certification).
What to do in case of development, deployment and use of AI systems that pose “low risks with low probability” to human rights, democracy and the rule of law? Notably, even in this scenario a clear majority (58%) responded that such systems should also be regulated by law, followed by 28% who called for self-regulation (ethics guidelines, voluntary certification).
Is self-regulation by companies more efficient than government regulation to prevent and mitigate the risk of violations of human rights, democracy and the rule of law? A strong majority of the respondents (75%) disagreed.
Should the use of AI systems in democratic processes (e.g. elections) be strictly regulated? An overwhelming majority of respondents (92%) agree.
Finally, large majorities of respondents consider that the CoE should implement the following follow-up activities:
- Monitoring of AI legislation and policies in member states (90%);
- Establishing an AI Observatory for sharing good practices and exchanging information on legal, policy and technological developments related to AI systems (89%);
- Establishing a centre of expertise on AI and human rights (87%);
- Capacity building on CoE instruments, including assistance to facilitate ratification and implementation of relevant CoE instruments (80%).
Many respondents also recommend the participatory inclusion of all stakeholders, including the public, citizens and especially groups underrepresented in public institutions and in AI policymaking. To this end, they suggest establishing a platform to facilitate the sharing of good practices, the identification of trends in AI development and the anticipation of ethical and legal issues.
Robust, clear, legally binding AI regulation at the CoE and EU level
The findings of CAHAI’s multi-stakeholder consultation clearly support the need for a legally binding instrument regulating the use of AI to ensure effective protection of human rights, the rule of law and democracy. The feedback from this consultation is now expected to inform the work of the Legal Framework Group at CAHAI, which is tasked with finalising model provisions of a binding regulatory framework on AI at the CoE level.
In addition, these findings strongly support our own calls for more robust and clearer AI regulation at the EU level. Such regulation should include: mandatory human rights impact assessments for AI systems (instead of self-assessment of conformity); rules and requirements for AI systems at all risk levels (instead of high-risk systems only); a full ban, without exceptions, on systems that violate human rights or undermine democracy; and clear provisions for the consultation and participation of stakeholders, especially impacted groups and civil society, in assessment and monitoring processes.