As members of the European Artificial Intelligence Alliance, we at ECNL provided input into the development of the EU Ethics Guidelines just released by the European Commission's High-Level Expert Group on Artificial Intelligence (AI). In particular, we asked for the Guidelines to:
1. require that the development and use of AI be compliant with the human rights standards of the European Convention on Human Rights and the case law of its Court;
2. include specific references to civic freedoms – namely, freedom of association, assembly and expression – as fundamental rights particularly relevant to the development of trustworthy AI systems;
3. include the need for a human rights impact assessment of AI tools and their compliance not just with ethical values but also with fundamental rights;
4. provide for broad stakeholder consultation to solicit regular feedback from the general public – not just immediate developers or users – on how AI systems affect their lives and fundamental rights.
ECNL welcomes the fact that the High-Level Expert Group took many of the above recommendations on board. The Guidelines explicitly recognise that trustworthy AI must be grounded in fundamental rights and that the protection of the freedoms of association and assembly contributes to forming a basis for trustworthy AI. For this reason, the Guidelines advise AI developers to consult all stakeholders who may be directly or indirectly affected by the deployment of AI systems throughout their life cycle, and recommend setting up long-term mechanisms to solicit their feedback.
Why are the EU Ethics Guidelines important?
The Guidelines are standards that set out a framework for achieving trustworthy AI. In addition to a list of ethical principles, they include a practical Assessment List to ensure that those principles are actually implemented and that fundamental rights impact assessments are carried out wherever an AI system could negatively affect those rights.
What is a Trustworthy AI?
According to the Guidelines, trustworthy AI has three components, which should be met throughout the AI system's entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, from both a technical and a social perspective, since even well-intentioned AI systems can cause unintentional harm.
What happens next?
The EU Commission encourages all stakeholders to pilot the Guidelines and their trustworthy AI Assessment List in practice and to provide feedback on whether the List can really be implemented and whether it is complete and relevant. Stakeholders include not only those who research, develop, design, deploy or use AI, but also those who are directly or indirectly affected by it – including (but not limited to) companies, organisations, researchers, public services, institutions, civil society organisations, governments, regulators, social partners, individuals and citizens. Meanwhile, the EU Commission will continue testing and improving the Guidelines: a revised version of the Assessment List will be presented in early 2020.
How is ECNL engaged in AI?
ECNL is working to address the impact of AI on civic space. We aim to understand how the development and use of AI-led technologies may restrict civic freedoms, but also how CSOs themselves can use AI to be more effective. We therefore engage in European-level standard-setting efforts as members of the European Artificial Intelligence Alliance and as observers at the Council of Europe (CoE) Committee of Experts on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence. We also consult with the CoE Commissioner for Human Rights, member states of the United Nations (UN) Human Rights Council and the UN Special Procedures to advance this field and ensure the protection of human rights, including civic freedoms, in AI-led technologies.
Register here for the piloting process.
For more background, read also ICNL's article on how AI can amplify civic freedoms.