
When law meets tech – a call for rights-based AI

Yesterday, the authorities banned another protest announced by the activist group Rouge One. This is the second time a protest announced by the group has been banned because of “a fairly accurate prediction of violence occurring during the protest” by AuRii – the government’s new algorithm for predictive analytics. AuRii has been analysing tweets about the announced protest over the last 24 hours, and its verdict seems clear: there is an 86% chance of violence during the protest. Activists are appalled by what they call a flagrant breach of their rights and are calling for an investigation into how AuRii makes such predictions. In addition, evidence is emerging that bots and fake accounts are responsible for the violence-related tweets, pointing to organised sabotage of Rouge One’s activities. Despite the public outcry, the authorities remain adamant that they cannot disclose AuRii’s inner analytical workings, while the court struggles to unpack the decision-making process in her wires.

If you haven’t heard of AuRii yet, don’t worry – this news is fictional. However, its premise is becoming increasingly likely as more algorithmic systems [*] enter the public sphere of decision-making. In a study on Moralization in social networks and the emergence of violence during protests, which used data from the 2015 Baltimore protests, researchers created an algorithm that can predict a link between tweets and street action – hours in advance of actual clashes with the police. Another study, funded by the U.S. Army Research Laboratory, concluded that the 2016 post-election protests could have been predicted by analysing millions of American citizens’ Twitter posts. Such predictive analytics affects our rights to express opinions and to assemble – it can be used by the police to plan for disruptive events and divert them, but also to sabotage legitimate public activism and expression, or to silence dissent. Assemblies inherently cause some disruption in order to highlight an issue, but authorities have to deal with that: international standards require that protecting public order must not be used to unnecessarily restrict the holding of peaceful assemblies. How, then, could such algorithmic systems be safely used in the public domain? To what extent can findings generated by algorithms be used as conclusive evidence to restrict freedoms and rights?
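To make the mechanics concrete, here is a deliberately simplified sketch of how a tweet-based “protest risk” score could be computed. It is not the method of either study cited above – every word list, weight and threshold below is invented for illustration – but even this toy version shows why scrutiny matters: a handful of hidden vocabularies and weights drive the headline percentage.

```python
# Hypothetical illustration only: a toy tweet-based "protest risk" score.
# This is NOT the method used in the studies cited above; the word lists,
# weights and aggregation rule are invented for demonstration.

MORALISED_TERMS = {"justice", "corrupt", "shame", "betrayal", "fight"}
VIOLENCE_TERMS = {"riot", "burn", "smash", "clash", "attack"}

def tweet_score(text: str) -> float:
    """Crude 0..1 score for a single tweet based on flagged vocabulary."""
    words = set(text.lower().split())
    moral_hits = len(words & MORALISED_TERMS)
    violence_hits = len(words & VIOLENCE_TERMS)
    # Explicit violence vocabulary is weighted more heavily than moralised language.
    return min(0.2 * moral_hits + 0.5 * violence_hits, 1.0)

def protest_risk(tweets: list[str]) -> float:
    """Average the per-tweet scores into one 'predicted risk' for the event."""
    return sum(tweet_score(t) for t in tweets) / len(tweets) if tweets else 0.0

if __name__ == "__main__":
    sample = [
        "Peaceful march starts at noon, bring water",
        "They are corrupt and we will fight for justice",
        "Time to riot and burn it all down",  # a single bot-injected tweet
    ]
    print(f"Predicted risk of violence: {protest_risk(sample):.0%}")
```

Even in this toy version, one injected tweet shifts the final percentage substantially – which is exactly why the fictional Rouge One activists would want to see inside AuRii.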

We learn daily about examples of existing or potential algorithmic systems being used in public decision-making, from facial recognition surveillance for the purpose of establishing “good credentials” to automated risk-calculation models for welfare and other benefits that deter activists. The Mozilla Foundation and Element AI state that a public right to an explanation already exists when an algorithmic system informs a decision that has a significant effect on a person’s rights, financial interests, personal health or well-being. The truth is, no one knows exactly how these systems can impact our freedoms and rights. The Council of Europe recommends appropriate legislative, regulatory and supervisory frameworks as a responsibility of states, as well as human rights impact assessments at every stage of the development, implementation and evaluation process. In addition, algorithmic systems designed by private companies for public use need to have transparency safeguards included in the terms of reference; for example, the authorities can require the source code to be made public. Researchers from the Alan Turing Institute in London and the University of Oxford call for scrutiny so that the public can see how an algorithm actually reached a decision, where such decisions affect people’s lives. We need an algorithmic “white box”.
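What a “white box” could mean in practice is easiest to see in code. The following is a minimal, hypothetical sketch – the factors, weights and threshold are invented – of a decision function that returns not only an outcome but also the contribution of each input: the kind of record a right to an explanation would let an affected person, an auditor or a court inspect.

```python
# Minimal, hypothetical sketch of a "white box" decision: the system returns
# the outcome together with the inputs and weights that produced it, so the
# reasoning can be inspected by the affected person, an auditor or a court.
# The factors, weights and threshold below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Explanation:
    decision: str
    score: float
    contributions: dict[str, float]  # factor name -> contribution to the score

def assess_claim(features: dict[str, float]) -> Explanation:
    weights = {"income_gap": 0.6, "dependants": 0.3, "prior_claims": -0.2}
    contributions = {name: weights.get(name, 0.0) * value for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= 0.5 else "refer to a human caseworker"
    return Explanation(decision, round(score, 2), contributions)

if __name__ == "__main__":
    result = assess_claim({"income_gap": 0.9, "dependants": 1.0, "prior_claims": 1.0})
    print(result.decision, result.score)
    for factor, contribution in result.contributions.items():
        print(f"  {factor}: {contribution:+.2f}")
```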

Looking into the future, how do we want to protect and support our ability to freely set up or join a civil society organisation, to protest, to express our opinions, to participate and be active? What are the “digital preconditions” for our basic rights? Along with the established standards on the freedoms of association, assembly and expression offline and online, the Digital Freedom Fund imagines a Universal Declaration of Digital Rights, offering, for example, a right to algorithmic transparency, a right to request a human override of algorithmic decision-making, a right to understand the implications of technology, freedom from profiling, and freedom from online manipulation. How do we achieve that?

At the 2019 Mozilla Festival, ECNL hosted a discussion on the impact of algorithms on our freedom to assemble and protest with a group of lawyers, activists, technologists and academics. We agreed that merging the knowledge, experience, ideas and peer connections of such a diverse group was the winning ticket – we need each other to complement our thinking. We propose several parallel actions to ensure the protection and promotion of our rights:

  1. First, we need to gain greater insight into how algorithmic systems are designed and used, in order to understand their existing and potential impact on the rights to freedom of assembly, association, expression and participation, and to explore possibilities to use them for good. This can be done, for example, by creating a crowdsourced platform on “AI systems for civic freedoms” to channel information on the ways technology and algorithmic systems are used by governments, companies or organisations. The learning can then be mapped to inform policy and advocacy actions and to create practical guidelines to protect and promote these rights.
  2. Second, we need to address a key challenge: how to practically translate human rights into algorithmic systems design, to achieve human rights centred design. According to the University of Birmingham research team, this means algorithmic systems will be human-rights compliant and reflect the core values that underpin the rule of law. They proposed translating human rights norms into software design processes and system requirements, admitting that some rights will be more readily translatable (such as the right to due process, the right to privacy and the right to non-discrimination), while others are likely to be difficult to ‘hard-wire’, such as the rights to freedom of expression, conscience and association. This is precisely why the challenge requires research and close collaboration between technical specialists and legal experts (a minimal sketch of what ‘hard-wiring’ one such safeguard could look like follows this list). We need to create connections – legal understanding of the use and impact of these systems can inform thinking in the development of products, while software developers can help lawyers become informed about the technical challenges.
  3. Third, we need to strengthen legal safeguards through policy and advocate for nuanced legal standards and guidance on technology and human rights in international treaties and regional frameworks – efforts towards this are already unfolding at the United Nations and European levels. The experience gained there can also help inform upcoming human rights impact assessments of AI systems and national AI strategies and plans, and clear criteria and indicators can help guide these processes. The report Closing the Human Rights Gap in AI Governance offers a practical toolkit for achieving these goals.
  4. Finally, there is an urgent need for consistent, inclusive, meaningful and transparent consultation with all relevant stakeholders, specifically including broader civil society, human rights organisations and movements, academics, the media and education institutions. The public needs to understand, learn about and discuss at least the basic consequences of applying algorithmic systems that impact our lives. In particular, vulnerable groups should be heard, ensuring that the human rights impact of algorithmic systems can be monitored, debated and addressed. This is also required by international standards on public participation. The public and civil society should also have the opportunity to conduct independent testing of systems being developed for public use.
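As a companion to the second action above, here is a minimal sketch, under invented assumptions, of what ‘hard-wiring’ a due-process safeguard could look like in practice: any automated outcome that restricts a right is never final, is routed to a human reviewer, and leaves an auditable record that can later be challenged.

```python
# A minimal sketch, under invented assumptions, of "hard-wiring" a due-process
# safeguard into a decision pipeline: automated outcomes that restrict a right
# are never final, are routed to a human reviewer, and leave an auditable record.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject: str
    automated_outcome: str
    rationale: dict[str, float]
    needs_human_review: bool
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    human_outcome: str | None = None

def decide(subject: str, risk_score: float, rationale: dict[str, float]) -> DecisionRecord:
    automated = "restrict" if risk_score > 0.8 else "allow"
    # Safeguard: a restrictive outcome always requires a human decision-maker.
    return DecisionRecord(subject, automated, rationale,
                          needs_human_review=(automated == "restrict"))

def human_override(record: DecisionRecord, outcome: str, reviewer: str) -> DecisionRecord:
    record.human_outcome = f"{outcome} (reviewed by {reviewer})"
    return record

if __name__ == "__main__":
    record = decide("assembly-0042", risk_score=0.86, rationale={"tweet_risk": 0.86})
    if record.needs_human_review:
        record = human_override(record, "allow with conditions", reviewer="case officer")
    print(record)
```

Other rights, such as freedom of expression or association, will not reduce to a few lines of logic like this, which is exactly why lawyers and developers need to work on these requirements together.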

In sum, we must develop broader knowledge-building networks and exchange learnings on practical implementation. As a community, we can apply a forward-looking approach and map, explore and unpack these issues. Such actions also require moving beyond traditional forms of cooperation – we must collaborate more outside our silos and across specialised fields. We are therefore committed to creating opportunities to explore and discuss these issues with activists, developers, tech companies, designers, academics, lawyers and governments, and to follow up on the recommendations above. Join us in this effort!

We thank: Mozilla Festival for having faith that lawyers can host a tech-related session.

MozFest 2019 session hosts: Vanja Škorić, Program Director, ECNL |  Katerina Hadži-Miceva Evans, Executive Director, ECNL

MozFest 2019 session discussants, in particular: Loraine Clark, University of Dundee |  Juliana Novaes, researcher in law and technology | José María Serralde Ruiz, Ensamble Cine Mudo |  CJ Bryan, Impact Focused Senior Product Engineer |  Drew Wilson, Public Interest Computer Scientist |  Extinction Rebellion activists


[*] We use the term algorithmic systems based on the latest Council of Europe definition: algorithmic systems are understood as applications that, often using mathematical optimisation techniques, perform one or more tasks such as gathering, combining, cleaning, sorting, classifying and inferring data, as well as selection, prioritisation, recommendation and decision-making.