Collective power for rights-based and just AI: going beyond the AI buzzword

29-11-2021
The impact of AI systems on marginalized groups is overlooked far too often, and this pattern deepens existing systemic problems and inequalities.
[Image: a grid showing people of various ethnicities, sexes, and disabilities; some profiles are marked red, while the profiles of white people have no such markers.]

AI as a buzzword contributing to AI hype and techno-solutionism

Defining artificial intelligence is incredibly tricky. As cautioned in the AI Myths project, “[t]he vagueness of this term has reached such a state of absurdity that we have people using the term AI to talk about everything from a robot that assembles pretty mediocre-looking pizzas to sci-fi fantasies about super intelligent AI overlords colonizing the universe in an epic battle against entropy.” Many venture capitalists and investors, as well as some AI developers and deployers, seek to capitalize on this ambiguity to accelerate the sale and use of AI systems.

One of the key issues today is that AI is being used as a buzzword to push for dangerous data-driven technologies under the guise of ‘innovation’ and ‘progress’, too often without a clear vision or understanding of whether such technology is even suited to solving real-life problems. This has real-world implications, especially for marginalized and vulnerable communities and groups: the potential benefits of AI systems or other emerging technologies are touted as (seemingly magical) solutions, while their actual individual or societal harms and adverse impacts are ignored or even concealed.

Potential benefits for a few (privileged) groups, severe harms for many at risk

Importantly, there is a strong imbalance of power between those who develop and deploy AI systems and the communities that are subjected to them, especially historically marginalized and underrepresented groups. When considering the potential opportunities that can arise from AI systems, it is therefore important to begin with a power analysis and to focus on the needs of the most at-risk communities. Any AI system should seek to increase economic and social equity for all, strengthen human rights, and support a democratic system and rule of law that benefit everyone, not only a few privileged individuals or groups. With this in mind, two questions matter when analyzing an AI system: First, who will benefit from the system (specifically, which demographic groups and/or sectors) and who will be harmed? Second, is the root cause of a (social, economic, political or other) issue effectively being addressed by deploying the AI system, or are we merely offering performative and superficial solutions? In reality, no system presents only opportunities or only risks; rather, each system offers different opportunities and risks depending on the targeted population, context, and situation in which it is deployed.

"As a Roma rights advocate I have to admit that I haven’t come across initiatives that are aimed to use AI or data science for the benefit of the Roma community. It is still challenging to advocate for digital rights of Roma and raise awareness of this issue among the general public because there is a lack of concern or interest about the digital inclusion and protection for this particular minority across the various organizations that shape the digital policies of Europe. Even Roma NGOs are not yet fully aware of the risks and harms that AI presents for Roma people. The risks of new emerging technologies are often not very visible, and therefore not on the radar of priorities among many actors in the Roma NGO network. This can also make it challenging (but not impossible) when it comes to building alliances." - Benjamin Ignac, Romani technologist, Research Fellow at the Roma Initiatives Office at OSF and Public Policy alumnus from the University of Oxford


"AI can bring advantages for persons with disabilities for example to enhance freedom of speech by improved automated language technology and by taking advantage of the possibilities for personalization of mainstream digital solutions. As for harms there are big risks if automatic solutions are used before they are mature enough. There is also general awareness of the dangers of algorithmic discrimination and inaccessibility, but often you are not even aware of the concrete harm done by AI. For example if a person with disability’s job application is rejected in 1st phase of selection, how will that person know that the rejection is based on an algorithm and related to disability?" - Mia Ahlgren, Human Rights Policy Officer at the Swedish Disability Rights Federation, Member of European Disability Forum ICT expert group


The false comfort of “debiasing AI” - the need for a rights-based approach to AI governance

Yes, AI systems are biased against marginalized groups, not least racialized persons, women (especially trans women), and gender non-binary persons. Yes, emerging methods to “debias” these technologies have shown some promising results and could potentially mitigate discriminatory outcomes. But most importantly: no, debiasing AI systems cannot account for AI accelerating and exacerbating existing structural discrimination and social and economic inequality. As Agathe Balayn and Dr Seda Gürses argue in a recent EDRi paper, “[d]ebiasing locates the problems and solutions in algorithmic inputs and outputs, shifting political problems into the domain of design, dominated by commercial actors.”

To avoid harm from AI systems, and to center marginalized and vulnerable groups in every analysis, we urgently need to apply a human rights-based approach to AI governance. This requires applying lessons learned from prior advocacy, as well as basic human rights principles related to transparency, engagement with affected groups, and non-discrimination. Going beyond the AI buzzword also requires going back to the basics of procedural rights, from due process and notification to effective public oversight and access to remedy. Measures such as human rights impact assessments, meaningful community engagement, and external oversight are necessary to deal with the unique aspects of AI systems, such as their scale and speed.

The critical role of civil society and affected communities

‘Nothing about us without us’ rings true in every situation, including in AI design, deployment and governance, given the significant human rights impacts and potential of algorithmic systems. "It's absolutely key that the whole range of stakeholders should be more empowered to participate in discussions about 'AI systems' because they are not purely technical. As they encroach upon different aspects of society, from welfare allocation to workplace well-being, we need to ensure that the existing expertise of civil society is not pushed aside in favour of technical expertise in machine learning." (Daniel Leufer, Access Now).

This begins with building confidence and empowering affected communities, especially members of marginalized and vulnerable groups, to participate in AI governance and to contribute their lived experiences. Certainly, many barriers to meaningful participation exist, from a lack of resources and capacity to the co-optation of affected groups. At ECNL, we therefore strongly believe in capacity building, nurturing local and global coalitions, and enabling strong collaboration between digital rights organizations and other types of civil society organizations, as well as grassroots activists and communities, especially those working to advance racial, gender, disability, sexual orientation, and economic justice, among others.

"We need to be actively involved in decision-making concerning development and implementation of AI legislation and policy. It is more than a plea; it is an obligation that all member states must comply with, as they have ratified the UN Convention on the Rights of Persons with disabilities. It has to be done in a meaningful and effective way, including resources and training. This is far from the tokenistic involvement that is often applied. The obligation also includes accessibility. Expertise from organisations of persons with disabilities is necessary for inclusive technologies and leaving no one behind." - Mia Ahlgren


While AI-related advocacy can seem daunting, many of the core human rights issues related to AI systems are not new: social and economic inequality; racism, sexism, ableism, trans- and homophobia, xenophobia, and other forms of discrimination; corporate power; and state surveillance. Advocates from all corners of civil society have been addressing these harms for a long time, and yet they are all too often excluded or ignored because of their proclaimed or perceived “lack of technical expertise” - a harmful narrative that is pushed to solidify a capitalist and neocolonial approach to ‘expertise’.

"European institutions that monitor human rights, assess the impact of AI systems, and issue recommendations to the EU on how to build human centric and trustworthy AI - these actors need to decolonise themselves and include a wider spectrum of voices from the actual communities that are vulnerable and at actual risk of technological discrimination. There are definitely plenty of Roma who are qualified to meaningfully participate in the discussions across Europe and they can assist, inform or even guide the necessary research, produce the crucial missing pieces of evidence of injustice, and develop strategies that can help shape Europe’s digital future that works also works for Roma people. The same institutions should also provide the necessary assistance, training or education to build the capacity of even more Roma individuals to join the discussions on how to govern AI and other emerging technologies. " - Benjamin Ignac

Importantly, this narrative enables AI developers, deployers, and policymakers to disregard concerns about the risks and harms that AI systems pose to one community because of the proclaimed benefits to other (more powerful) groups. It also misses the opportunity to develop truly useful technology that could help empower communities and address some of the existing systemic problems.

Going beyond the AI buzzword begins with acknowledging the critical role that diverse civil society groups and affected communities play in the design, development and governance of AI technologies. Our collective power is key to continued advocacy for the meaningful inclusion of civic voices.