AI for public participation: hope or hype?

17-12-2024
Explore how to make public engagement truly meaningful and inclusive with the help of AI tools and platforms.
Image: On the left, a small detail of the EU AI Act (a small piece of paper) and a man seen from the side; a speech wave comes from his mouth and leads to a large neural network. In the background, black and white mountains and a dark blue square.

Imagine transforming town halls into thriving digital hubs, where every voice is heard without borders or barriers. Welcome to the era of AI-powered online public participation—where technology bridges the gap between communities and decision-makers, making collaboration smarter, faster, and more inclusive than ever before! Source: ChatGPT 

When asking an artificial intelligence (AI) tool about the future of AI in participation processes, one gets a very utopian answer, to say the least. Indeed, technological developments can offer new opportunities for people to participate in public decision-making processes, as outlined in ECNL’s Research on new dimensions for public participation. Increasingly, online participation platforms are exploring the use of AI tools to enhance their functionalities. 

However, as interest grows in leveraging AI and algorithmic systems for public participation, the question arises: can these technologies really make engagement in policymaking more inclusive and impactful? Or will they deepen divides and harm already excluded voices? The UN Human Rights Council Resolution on the role of good governance in the promotion and protection of human rights states that AI systems can play a significant role in facilitating access to information and participation in public life when they are used responsibly, with adequate safeguards and due diligence in place and consistent with human rights law. What risks must be considered and addressed, and what are “adequate safeguards and due diligence” to ensure AI systems are used responsibly and without causing harm? 

At a recent webinar, we discussed current and emerging trends and examples of how AI tools are used on participation platforms, as well as relevant policies. Based on our research and the insights shared during the webinar, this blog explores how and under what conditions the use of AI can potentially enhance meaningful and inclusive public participation, and concludes with a list of recommendations for those considering piloting AI systems for citizen participation. 

First things first: What is AI? 

When we refer to AI, we follow the definition provided by the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, according to which “artificial intelligence system” means “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.” 

Benefits and considerations 

AI can benefit some groups when used to enhance public participation, but there are also important considerations to take into account. Before developing or deploying an AI system for participation, it is critical to assess and provide evidence that the system is actually effective in strengthening democracy and civic freedoms, including the criteria used to monitor and measure such impact. Importantly, a preliminary assessment and ongoing monitoring must consider which groups benefit from the use of AI systems – and which, if any, are harmed. Indeed, the potential benefits and harms are always context-specific and may vary over time. 

Benefits 

  • Analysis and synthesis of responses: AI tools can automate data analysis, find and extract patterns from data, classify ideas, and help summarise key points. 
  • Deliberative potential: AI can support the deliberation of various opinions and identify areas of consensus (and lack thereof). 
  • Save time and costs: AI can quickly and effectively generate information that would otherwise need to be collected through time-consuming and expensive community forums, focus groups, or interviews. 
  • Visualise results: AI can help to visualise ideas and present them in a user-friendly way. 
  • Enhanced accessibility: AI tools can enhance accessibility, e.g. by providing real-time translation, high-quality text-to-speech assistance and voice navigation, or by simplifying information. This reduces language barriers and facilitates participation of specific groups, e.g. people living with disabilities and older populations. 
  • Scalability: AI tools can facilitate interaction with thousands of participants and help to analyse big data sets such as inputs submitted through public consultations. 
  • Improve user experience: AI tools can suggest topics or discussions tailored to individual interests and provide feedback which can be used to improve platform features.  

Considerations  

  • Privacy and data protection risks: Protecting privacy requires proper safeguards around data collection, purpose limitation, processing and retention – as well as transparency throughout. AI-based systems collect and process large datasets, which often include sensitive data such as people’s political opinions and/or demographics. What’s more, the possibility of data sharing (especially with law enforcement and other public authorities), as well as the ability to infer other sensitive data, compound the risk and impact on people’s privacy. 
  • Discrimination and bias: AI tools perform according to how they have been trained, and this may promote discrimination and bias where the underlying training datasets do not accurately represent diverse populations in their unique contexts. As such, they can not only create new forms of discrimination but also exacerbate and accelerate existing systemic inequality. This scenario can play out in several ways: for example, when analysing input, AI-based tools tend to look for patterns, averages or middle ground and may therefore insufficiently consider input submitted by underrepresented groups or people with dissenting views. This may lead to a "drowning effect", losing fringe voices and their added value to the debate in order to find middle-ground alignment. Furthermore, automated translation tools are usually trained on English data and do not work well for languages of minoritised/marginalised groups or for language reclaimed by these groups. Indeed, they are typically incapable of capturing nuances and understanding different cultural contexts. 
  • Fallibility / inaccuracy: In addition to bias in datasets and algorithms, other factors may cause errors or wrong interpretations. One limitation platform operators may face is the size and balance of the data available to train AI tools. One option that may be used to make up for the lack of representation is to create so-called “digital twins” or “digital personas”, i.e. virtual representations/simulations of physical persons. However, this is a dangerous approach, not least because synthetic data would not accurately capture the views of humans (leaving aside the implications for the right to human dignity of members of these groups if they are represented by digital personas). Another example relates to automated content moderation, which typically amplifies existing risks of content moderation and impacts freedom of expression, assembly and association, especially for already marginalised groups. Automated content moderation often fails to recognise what is legitimate content (especially when it is political or satirical) and what content violates platforms’ internal policies or the law. AI models often misinterpret information and perform poorly in unforeseen scenarios, as they are merely statistical tools. Moreover, algorithmic models perform extremely poorly in multilingual environments, especially for non-dominant languages. 
  • Exclusion of marginalised groups and digital divide: Technology becomes an intermediary and can create more distance between people and governments. Opinions filtered by AI risk silencing people’s voices, including through self-censorship if people do not feel comfortable sharing sensitive views and data with an AI system. Ultimately, this may lead to exclusion instead of inclusion. Limited access to the internet, lack of digital literacy and affordability challenges exclude certain groups, such as those in rural areas, older populations, or economically disadvantaged communities. This disparity risks amplifying existing inequalities, as voices from underrepresented groups may be absent from discussions. 
  • Misinformation and hate speech: Research has found that the algorithms underpinning chatbots can be tricked into producing a nearly unlimited amount of disinformation, hate speech and other harmful content. Moreover, as algorithms typically prioritise content with high user interaction over content accuracy, the use of AI could lead to false information spreading more quickly. Therefore, the use of generative AI models requires thorough testing at the development stage to ensure adequate safeguards are in place to prevent the production of misinformation, hate speech or other harmful content, as well as advanced monitoring systems to detect such content. 
  • Undue influence: AI-facilitated moderation can steer people in certain directions, for example by nudging platform users through questions such as “have you considered….” to avoid confrontational messages. In such cases, the tool risks becoming an agent rather than a facilitator. 
  • Environmental impact: The use of AI models, particularly large-scale ones, has significant environmental implications. Training and running advanced AI systems require substantial computational power, which consumes large amounts of energy. 

Types of tools 

Online platforms can use various AI-assisted tools to support accessibility, communications, moderation and processing of content.

Content moderation and curation

Moderating massive volumes of user-generated content to determine whether it violates a platform’s policies or the law is a very challenging, if not impossible, task. What’s more, malicious users seeking to circumvent platforms’ moderation efforts through approaches like coordinated inauthentic behaviour have become more sophisticated, weakening and hindering traditional human content moderation. If done with proper safeguards, AI-assisted content moderation tools can potentially support or augment human content moderation. Automated content moderation tools should be flexible, so that organisations can set the moderation rules to their specific needs, adjust the sensitivity, or target particular types of behaviour that are relevant to their community. The platform Decidim has been using AI-assisted content moderation on its open-source platform. The “Global moderations” function allows administrators to moderate various kinds of content, with the objective of making dialogue on the platform more democratic and constructive. The function flags potentially violative content, which is subsequently reviewed by a human moderator.
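
To make this flag-and-review pattern concrete, below is a minimal sketch of how such a pipeline can be structured. It is purely illustrative and not Decidim’s implementation: the keyword heuristic stands in for a trained classifier or moderation API, and the threshold represents the adjustable sensitivity an organisation might configure. Nothing is removed automatically; flagged items are queued for a human moderator.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative placeholders: a real deployment would call a trained
# classifier or a moderation API instead of this keyword heuristic.
FLAG_TERMS = {"idiot", "scum"}
REVIEW_THRESHOLD = 0.5  # adjustable "sensitivity" set by the organisation


def toxicity_score(text: str) -> float:
    """Toy scoring function standing in for an ML model (returns 0.0-1.0)."""
    words = text.lower().split()
    hits = sum(1 for word in words if word.strip(".,!?") in FLAG_TERMS)
    return min(1.0, 5 * hits / max(len(words), 1))


@dataclass
class ModerationQueue:
    """Flags content for human review; nothing is taken down automatically."""
    pending_review: List[Dict] = field(default_factory=list)

    def submit(self, post_id: str, text: str) -> str:
        score = toxicity_score(text)
        if score >= REVIEW_THRESHOLD:
            self.pending_review.append({"id": post_id, "text": text, "score": score})
            return "flagged_for_human_review"
        return "published"


queue = ModerationQueue()
print(queue.submit("p1", "Great idea for the new park!"))       # published
print(queue.submit("p2", "Only an idiot would propose this."))  # flagged_for_human_review
```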

Language and content processing
  • Translation services: It is important that language does not create a barrier to participation. AI-assisted translation tools can offer a budget-friendly alternative to traditional translation services and are becoming increasingly popular. For example, Decidim uses machine translation on its open-source web platform, which usually delivers translations within seconds. With Decidim’s support, the European Commission applied automatic translation on its multilingual digital platform for the Conference on the Future of Europe, allowing citizens to share their ideas and comment on others’ contributions in any of the 24 official languages of the EU. However, machine translations often fail to capture cultural nuances or specific terminology and can have high error rates, especially in non-dominant languages and for language reclaimed by minorities. As such, human review is still necessary, especially to review flagged errors or policy violations. Decidim is therefore currently working on a way to report missing and incorrect translations in order to improve them. GoVocal (formerly known as CitizenLab) also uses machine translation to support people, including minority groups, in engaging in discussions. In any case, it is important to meaningfully inform people, in a way that they understand, that they are looking at a machine translation. 
  • Facilitation: AI can be used to facilitate discussions by providing real-time prompts, summarising key points and guiding conversations to support inclusivity. For example, the Stanford Online Deliberation Platform is a video discussion platform designed for small group discussions. One of its many features is an automated moderator that allows participants to form speaking queues and discuss in small groups with timed agendas, with the aim of enabling equitable participation. 
  • Processing, analysing and summarising inputs: Algorithms can process inputs, classify ideas and cluster them (see the sketch after this list). For example, GoVocal used machine-learning algorithms to support civil servants in handling thousands of citizen contributions, in an effort to support decision-making. The platform’s dashboards can classify ideas, highlight emerging topics, summarise trends, and cluster contributions by theme, demographic, or location. According to their website, analysing and understanding input can be up to 50% faster with their AI-powered sense-making tool. The platform provides confidence scores for its outputs, links to the original input, and opportunities to manually correct the outputs, in an effort to make AI-generated summaries more accurate and reliable. Furthermore, GoVocal provides an AI-powered feature called “form sync”, which allows for simultaneous online and offline surveys to improve inclusivity: the tool can scan printed, completed survey forms and import the results into the platform. AI tools can also conduct a broad analysis of information available on the internet and ensure it is considered in the decision-making process. An example is the open-source Policy Synth AI agent project of Citizens Foundation. According to them, the AI system can accelerate collective engagement methods like Smarter Crowdsourcing to solve problems more efficiently. AI tools can be applied to conduct large-scale web research to analyse data, publications and citizen input, and to surface key issues, such as privacy concerns, which can inform policy solutions. For example, Citizens Foundation worked with the State of New Jersey and its AI task force to ask workers for their opinion on how generative AI will impact the State’s workforce. According to Citizens Foundation, the large-scale automated web research helped to identify 200 main issues. The State subsequently presented the issues to the workers using one of the modules, called All Our Ideas, which creates a rank-ordered list based on public input. Participants could select between pairs of statements as many or as few times as they wished. The answer choices covered a range of challenges including job displacement, economic instability and threats to worker power. 
  • Analysis of user engagement: AI tools can provide data on participant practices, which can be used to evaluate and improve approaches to stakeholder engagement. This can, for example, be facilitated by the Pol.is platform, an open-source policy-making and engagement platform which was initially designed for use by the Canadian federal government. 
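
As a concrete illustration of the clustering and theme-grouping mentioned in the list above, the sketch below groups a handful of made-up contributions using off-the-shelf TF-IDF vectors and k-means. It shows the general technique rather than any platform’s actual pipeline; the sample texts, the number of themes and the use of scikit-learn are assumptions, and each grouped item keeps a link back to the original input so that summaries remain verifiable.

```python
# Theme clustering sketch: TF-IDF + k-means over citizen contributions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

contributions = [
    "We need safer bicycle lanes on the main road",
    "More protected bike paths to school, please",
    "The library should have longer opening hours",
    "Extend library opening hours on weekends",
    "Public transport is too expensive for students",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(contributions)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Print the top terms per theme and the contributions assigned to it,
# keeping an index back to the original submission.
terms = vectorizer.get_feature_names_out()
for cluster_id in range(kmeans.n_clusters):
    top_terms = [terms[i] for i in kmeans.cluster_centers_[cluster_id].argsort()[::-1][:3]]
    print(f"Theme {cluster_id}: {', '.join(top_terms)}")
    for idx, label in enumerate(kmeans.labels_):
        if label == cluster_id:
            print(f"  [contribution #{idx}] {contributions[idx]}")
```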

Interaction

  • Chatbots: Participation platforms may use chatbots to facilitate interaction with users. For example, the Zencity platform provides an AI assistant, which includes a chatbot feature where users can ask questions about the platform, how to use it and how to find specific data. Another example is the virtual assistant Gem, a digital helper on the websites of Dutch municipalities that answers questions from residents and businesses. 
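
The sketch below illustrates one baseline safeguard for such chatbots, anticipating the transparency obligation discussed in the regulatory section further down: the assistant discloses up front that the user is interacting with an AI system and routes unanswered questions to a human contact. The FAQ entries and the contact address are made-up placeholders, and a real assistant would typically sit on top of a generative model rather than a keyword lookup.

```python
# Minimal help-chatbot sketch with an explicit AI disclosure and a human fallback.
DISCLOSURE = "You are chatting with an automated AI assistant, not a human."
HUMAN_FALLBACK = "I could not find an answer; a human moderator can help at participate@example.org."

FAQ = {  # placeholder knowledge base
    "deadline": "The consultation closes on 31 January.",
    "language": "You can contribute in any of the platform's supported languages.",
    "privacy": "Submissions are stored under the platform's privacy policy.",
}


def answer(question: str, first_turn: bool = True) -> str:
    reply = next(
        (text for keyword, text in FAQ.items() if keyword in question.lower()),
        HUMAN_FALLBACK,
    )
    # Disclose the AI nature of the assistant at the start of the conversation.
    return f"{DISCLOSURE}\n{reply}" if first_turn else reply


print(answer("What is the deadline for submitting ideas?"))
```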

 

The above list is illustrative rather than exhaustive: these technologies are developing at a rapid pace, and new AI-assisted functionalities keep emerging in efforts to enhance public participation. 

Regulatory framework 

In recent years, there have been multiple policy developments at the EU level that impact the use of AI systems, including in the area of public participation.  Although the implications of these regulations, particularly of the EU AI Act, are limited for online participation platforms, platform operators can consider applying provisions on a voluntary basis to promote responsible use of AI systems. Private sector companies also have the responsibility to carry out human rights due diligence under the UN Guiding Principles on Business and Human Rights. We highlight the most relevant policies below. 

EU AI Act

The European Union adopted the Artificial Intelligence Act, which is the first-ever legal framework on AI. It entered into force on 1 August 2024. The EU AI Act is a risk-based regulation, which imposes different levels of requirements and obligations on providers and deployers of AI-based applications, depending on the risk classification of such systems. The AI Act identifies four levels of risk: unacceptable, high, limited or minimal. AI-based applications with “unacceptable” levels of risk are prohibited. AI-based applications classified as “high-risk” must comply with security, transparency and quality obligations, and undergo conformity assessments. AI-based applications with “limited risk” must only comply with transparency obligations, whereas minimal-risk applications are not regulated. 

“High-risk AI systems” are exhaustively enumerated in the annex to the regulation and include, for example, systems for credit scoring, AI systems used to assess eligibility for benefits or to assess visa and asylum applications, and AI used in policing, recruitment and workers’ management. In the current version of the AI Act, AI systems used by participation platforms are not considered high-risk, so their developers or deployers will not have to apply most of the AI Act’s obligations. However, some provisions might apply: 

  • if the operator of a participation platform develops a General Purpose AI Model – that is, a system that can have a wide range of possible uses and can be applied to many different tasks, rather than a specified and pre-defined purpose (e.g. the generative AI used in chatbots or language processing algorithms) – the platform needs to ensure its models are effective, interoperable, robust and reliable as far as this is technically feasible, and needs to draw up relevant documentation. However, these obligations apply only where platforms develop the systems themselves, not where they purchase and adapt existing systems; 
  • if a participation platform uses a chatbot which is based on AI (particularly generative AI, rather than simple pre-programmed answers), the platform operator needs to inform users that they are interacting with an AI system. 

For more information on the EU AI Act, see our analysis.  

Digital Services Act

The Digital Services Act (DSA) was adopted in 2022 to regulate digital services, particularly platforms which host content produced by individual users (e.g. social media platforms). User-produced content includes responses to consultations; therefore, online participation platforms fall under the scope of the Digital Services Act. In the following scenarios, the DSA imposes obligations on the operators of online participation platforms: 

  • If a platform uses content moderation, the DSA obliges the operator to provide transparency about the terms and conditions that apply, justify decisions to the person whose content has been moderated, and implement an internal appeal system for users whose content has been taken down. With the exception of micro and small enterprises (enterprises which employ fewer than 50 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 10 million), operators also have to publish transparency reports about their content moderation practices, including whether they use any automated tools; if so, they need to publish information about the purposes of the tool, indicators of accuracy, the possible rate of error and any safeguards applied. 
  • If a platform uses recommender systems, for example to recommend new topics, the platform operator should set out in their terms and conditions the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters. Essentially, the platform has to explain why certain information is suggested to the recipient of the service. If there are several options to sort recommendation results, then platforms should also make it possible for users to change their preferred option.  

In addition, the DSA has more robust obligations for so-called “very large online platforms”, such as risk assessments including for fundamental rights, but currently no participation platform has been classified as such.  

Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law

The Council of Europe adopted a Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, which opened for signature on the occasion of the Conference of Ministers of Justice in Vilnius (Lithuania) on 5 September 2024. The Framework Convention establishes principles and obligations for States Parties regarding the use of AI systems by public authorities or private actors acting on their behalf, except for systems related to the protection of national security interests. The fundamental principles outlined by the Convention include: 1) Human dignity and individual autonomy; 2) Equality and non-discrimination; 3) Respect for privacy and personal data protection; 4) Transparency and oversight; 5) Accountability and responsibility; 6) Reliability; 7) Safe innovation. Taking into account these principles, States Parties must adopt or maintain measures for the identification, assessment, prevention and mitigation of risks posed by AI systems, considering actual and potential impacts on human rights, democracy and the rule of law. This obligation covers the use of AI tools on participation platforms operated by or on behalf of States. 

Furthermore, online participation platforms need to abide by other legislation, such as the General Data Protection Regulation, which requires data controllers to assess impacts on rights and freedoms through a data protection impact assessment. 

Recommendations 

From the above-highlighted considerations and regulatory frameworks, it becomes clear that due diligence is required to harness the potential of AI systems responsibly and prevent harm.  

When AI systems are used to enhance public participation through online platforms and tools, operators based in the EU have the following legal obligations:

  1. When using chatbots based on generative AI, they must inform users that they’re interacting with or being subjected to an AI system; 
  2. If they develop General Purpose AI models themselves, they must ensure these models are effective, interoperable, robust and reliable as far as this is technically feasible and draw up relevant documentation; 
  3. If a platform uses content moderation, operators must provide transparency about the terms and conditions that apply, justify their decisions to the person whose content has been moderated, and implement an internal appeal system for users whose content has been taken down. Operators that do not qualify as micro or small enterprises (i.e. those employing 50 or more persons, or with an annual turnover and/or annual balance sheet total exceeding EUR 10 million) also have to publish transparency reports about their content moderation practices; 
  4. If a platform uses recommender systems, operators should set out in their terms and conditions the main parameters used in their recommender systems, as well as any options for users to modify or influence those main parameters. If there are several options to sort recommendation results, then platforms should also make it possible for users to change their preferred option;  
  5. They must ensure compliance with the General Data Protection Regulation to protect personal data. 

Moreover, we recommend that platform operators:

  1. Periodically conduct human rights impact assessments at various stages of the AI lifecycle, and in any case before developing or deploying any AI tool. This includes identifying and assessing the impact on fundamental and human rights, establishing measures to prevent or at least mitigate the risks, and publishing the impact assessments and risk mitigation measures in their entirety. Such an impact assessment requires meaningful engagement of affected communities, especially marginalised groups. 
  2. Make sure that minoritised and marginalised groups are not disproportionately affected by AI-assisted content moderation, and that their input is sufficiently taken into account when AI tools are used to analyse and summarise submissions, to prevent a "drowning effect" of minority voices and dissenting opinions; 
  3. When AI is used for content moderation, refrain from using automated content moderation to enforce binary leave-up/take-down actions and instead develop other machine learning-driven interventions, such as improving notifications to users or flagging potentially violative content for further human review. Instances where legitimate content is wrongly marked as harmful, violative or illegal (false positives) or where violative content is missed (false negatives) should be tracked to improve the moderation function, and findings should be made public. An exception to the need for human review is child sexual abuse material, which should be removed automatically; the rate of false positives is low and any impact on fundamental rights from a false positive is outweighed by the need to address the severe harm of child abuse. Furthermore, efficient internal appeals/grievance mechanisms should be provided to users to request a review of decisions regarding their content; 
  4. When AI tools are used to analyse and summarise input, ensure that relevant references are provided to allow users to verify the sources and the accuracy of the summary; 
  5. Ensure transparency beyond strict legal requirements through rigorous metrics, and make these public: disclose information about the use of any AI tool, publish reports on the accuracy of information provided and on content moderation outcomes (false positives and false negatives), etc.; 
  6. Take appropriate measures to address the shortfalls of automated translation for non-dominant languages. Conduct a risk assessment in high-risk situations with a focus on, and the involvement of, specific communities at risk. Where feasible, strengthen and diversify training datasets to better represent non-dominant languages, hire more diverse human content moderators who speak non-dominant languages and can review and correct AI-generated content, and work with communities who create Natural Language Processing systems for non-dominant languages (such as Masakhane for African languages). 
  7. Test and continuously monitor the accuracy of AI tools through rigorous metrics (a minimal sketch of such tracking follows this list); 
  8. When using chatbots, ensure that users are not required to provide sensitive information and ensure data protection; 
  9. Engage civil society organisations in determining how technologies are developed and used to ensure bottom-up technological innovation. 
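
As referenced in recommendation 7, the sketch below shows one way such accuracy tracking could look in practice for AI-assisted moderation: each automated flag is compared against the final human decision, and false positive and false negative rates are computed for publication. The log format and sample entries are assumptions; a real platform would read these records from its moderation database.

```python
# Compare automated flags against final human decisions and derive
# false positive / false negative rates for public transparency reporting.
moderation_log = [
    # ai_flagged: the tool marked the post as potentially violative
    # human_violative: the human reviewer's final judgement
    {"post_id": "p1", "ai_flagged": True,  "human_violative": True},
    {"post_id": "p2", "ai_flagged": True,  "human_violative": False},  # false positive
    {"post_id": "p3", "ai_flagged": False, "human_violative": True},   # false negative
    {"post_id": "p4", "ai_flagged": False, "human_violative": False},
]

tp = fp = fn = tn = 0
for entry in moderation_log:
    if entry["ai_flagged"] and entry["human_violative"]:
        tp += 1
    elif entry["ai_flagged"] and not entry["human_violative"]:
        fp += 1
    elif not entry["ai_flagged"] and entry["human_violative"]:
        fn += 1
    else:
        tn += 1

false_positive_rate = fp / max(fp + tn, 1)  # legitimate content wrongly flagged
false_negative_rate = fn / max(fn + tp, 1)  # violative content missed
print(f"False positive rate: {false_positive_rate:.0%}")
print(f"False negative rate: {false_negative_rate:.0%}")
```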

 

We are grateful to Stefanos Kotoglou (EU Directorate-General for Digital Services), Robert Bjarnason (Citizens Foundation), Carolina Romero Cruz (Decidim), Koen Grellemprez (Go Vocal) and Tim Hughes (OGP) for sharing valuable information and experiences with the use of AI-assisted tools. We are also grateful for the contribution of the participants of the December 3 webinar. 

 

Image: EU logo (12 gold stars in a circle on a dark blue background) with the text “Funded by the European Union”.

This case study was prepared by the European Center for Not-for-Profit Law Stichting (ECNL) within the framework of the ParticipatiON Project, funded by the European Union.