Artificial intelligence (AI) tools are increasingly being developed or deployed in public participation processes, promising to analyse thousands of citizen submissions efficiently and to democratise policy engagement. Yet this technological optimism must be tempered by rigorous human rights safeguards: the right to participation is an obligation that governments must respect and facilitate.
The stakes are high: poorly designed AI systems risk amplifying existing inequalities and digital divides, silencing minority voices and eroding public trust.
Building on ECNL's analysis in "AI in Public Participation: Hope or Hype" and discussions with experts, this blueprint provides concrete, actionable process recommendations for public institutions and private providers developing or using AI tools in a way that genuinely enhances, rather than undermines, the right to inclusive participation.
1. Human Rights Impact Assessment - Before piloting
A human rights impact assessment (HRIA) is not user testing: it examines how AI systems affect societal structures, power dynamics and fundamental rights across society, centring historically and institutionally marginalised communities. It should be conducted by the responsible public institution (in cooperation with in-house or external developers) at the design stage, before any development begins, not as an afterthought.
Suggested steps:
Establish an assessment team
- Include human rights experts, civil society representatives (especially from marginalised communities), legal, policy and compliance officers, the institution’s Data Protection Officer and technical staff.
- Ensure representation from affected community groups.
- Document composition, expertise areas and potential conflicts of interest.
- Secure buy-in from senior leadership so that the outcomes of the HRIA (risk mitigation and prevention measures, up to and including fundamentally changing the AI system or stopping development) will be implemented in practice.
Map potential rights impacts
- Identify which human rights may be affected, for example freedom of expression, non-discrimination, privacy, participation in public affairs and access to information, among others. Note that the HRIA must take a holistic approach and assess all potentially impacted rights.
- Conduct risk mapping specific to your jurisdiction's legal framework (General Data Protection Regulation (GDPR), EU AI Act Article 27 requirements for high-risk systems, national constitutions, UN Guiding Principles on Business and Human Rights (UNGP)).
- Consider differential impacts on linguistic minorities, persons with disabilities, elderly populations, low-income communities, rural residents, youth, racialised persons, women and non-binary persons and LGBTQI+ people.
- Document the impact on each right, affected groups, severity of potential impact, likelihood and scale.
Meaningful engagement
- Organise consultations, for example focus groups with the communities that will be affected.
- Use accessible formats, for example, in-person meetings in community spaces, multiple languages, sign language interpretation, plain language materials, or hybrid participation models.
- Provide participants with sufficient information before and during the consultation so that they can participate meaningfully. Ask: how might AI help or harm participation? What concerns do diverse groups have?
- Compensate participants for their time and expertise.
- Document the consultation in a report with anonymised feedback and input.
- Afterwards, follow up with participants to share takeaways, what feedback was integrated, what was discarded and why.
Technical risk analysis – illustrative examples
- Analyse training data for bias. For example, which demographics are represented and which are missing? Which viewpoints? Which languages? (See the sketch after this list.)
- Assess algorithm design. For example, who decides what counts as a "key argument"? Can outlier voices be amplified rather than averaged?
- Evaluate transparency mechanisms. For example, can participants trace how their input influenced the final summary?
- Document data bias, algorithmic fairness, transparency gaps and traceability.
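Purely as illustration, here is a minimal Python sketch of the kind of representation check mentioned above, assuming each training record carries self-reported metadata; the field names ("language", "age_group", "region") and the 5% floor are hypothetical choices, not recommended values:

```python
from collections import Counter

def representation_report(records, fields=("language", "age_group", "region")):
    """Count how often each value of each metadata field appears in the
    training data, so gaps (missing languages, age groups, regions) become visible.
    records: list of dicts with self-reported metadata fields."""
    report = {}
    for field in fields:
        counts = Counter(r.get(field, "unknown") for r in records)
        total = sum(counts.values())
        report[field] = {value: count / total for value, count in counts.items()}
    return report

def underrepresented(report, floor=0.05):
    """Flag any group whose share falls below a chosen representation floor."""
    return {
        field: [value for value, share in shares.items() if share < floor]
        for field, shares in report.items()
    }
```

Such a check only surfaces who is missing from the data; deciding what to do about the gaps remains a human, rights-based judgment.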
Develop mitigation strategies
- For each identified risk or negative impact, specify preventive measures, monitoring mechanisms, evaluation benchmarks and remediation processes.
- Document measures with responsible parties and timelines.
Publish the findings
- Make the entire impact assessment public, not just the summary.
- Publish in accessible formats and languages.
Periodic re-assessment
- Evaluate whether the predicted impacts materialised.
- Document unforeseen impacts.
- Update mitigation measures.
- Re-engage communities for feedback.
- Publish updated assessments.
2. Addressing potential pitfalls of AI participation tools
When AI is used to analyse thousands of submissions during a public consultation, for instance to identify "key arguments" and "sentiment", it risks systematically underrepresenting perspectives through:
- Clustering bias: grouping diverse perspectives into dominant themes.
- Sentiment misclassification: misreading cultural communication styles.
- Statistical drowning: prioritising frequency over importance.
To mitigate these potential risks, it would be advisable to convene a diverse group of civil society representatives, subject matter experts, linguists and technical staff to collaboratively define what makes an argument from public participation "key", prioritising not just frequency but also representation of diverse groups, human rights and democracy issues, and challenges to the status quo. Such a group could develop parameters for a consultation project that allow customisation of theme categories, sentiment scales and minority voice amplification, and make these parameters publicly available, with documentation of who decided what the AI tool should take into consideration and why.
This ensures transparency in how AI categorises input and prevents the systematic drowning of minority perspectives through statistical averaging.
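As a minimal sketch, assuming a Python-based analysis pipeline, this is one way such publicly documented parameters could be expressed; every category, weight and name below is an illustrative assumption to be replaced by what the convened group actually decides:

```python
from dataclasses import dataclass, field

@dataclass
class ConsultationAnalysisConfig:
    # Theme categories agreed by the multi-stakeholder group,
    # not inferred solely from frequency by the model.
    theme_categories: list = field(default_factory=lambda: [
        "housing", "environment", "accessibility", "minority rights",
    ])
    # Sentiment scale chosen for this consultation (could be coarser or finer).
    sentiment_scale: tuple = ("negative", "neutral", "positive")
    # Weight applied to submissions from self-identified underrepresented
    # groups when ranking "key arguments", so they are not averaged away.
    minority_amplification_weight: float = 1.5
    # Provenance: who decided these parameters and why, published alongside them.
    decided_by: str = "Multi-stakeholder advisory group (see published minutes)"
    rationale_url: str = "https://example.org/consultation/parameters-rationale"
```

Publishing a configuration like this, together with the reasoning behind each value, makes the categorisation choices contestable rather than hidden inside the tool.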
Similarly, when AI is performing sentiment analysis, it would be advisable to test accuracy across demographic subgroups (cultural background, age, linguistic style) and establish a minimum accuracy benchmark for each subgroup, while documenting the methodology, disaggregated results and identified bias patterns. Conducting a rights-based review includes asking: Does the model recognise indirect speech common in some cultures? Does it distinguish passionate advocacy from aggression? Does it avoid penalising emotionally intense expressions from marginalised communities? And has it been validated by rights holders from affected communities? Continuously monitoring sentiment classification accuracy by demographic group can help flag patterns where certain groups are consistently classified more negatively.
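A hedged sketch of such a disaggregated accuracy check, assuming a labelled evaluation set where each example records the participant's subgroup; the field names and the benchmark value are assumptions for illustration only:

```python
from collections import defaultdict

def accuracy_by_subgroup(labelled_examples, predict, min_accuracy=0.85):
    """labelled_examples: iterable of dicts with 'text', 'label' and 'subgroup'.
    predict: function mapping text -> predicted sentiment label.
    Returns per-subgroup accuracy and the subgroups falling below the benchmark."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in labelled_examples:
        total[ex["subgroup"]] += 1
        if predict(ex["text"]) == ex["label"]:
            correct[ex["subgroup"]] += 1
    accuracy = {group: correct[group] / total[group] for group in total}
    below_benchmark = [g for g, acc in accuracy.items() if acc < min_accuracy]
    return accuracy, below_benchmark
```

Any subgroup falling below the agreed benchmark would trigger further review or retraining rather than silent deployment.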
Finally, an AI tool could be designed to flag underrepresented groups, contradictory perspectives and vulnerable populations. Additionally, there could be customisable detection of disproportionate disagreement from marginalised communities.
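One possible, purely illustrative way to detect such disproportionate disagreement, assuming submissions are tagged with a (self-identified) subgroup and a stance; the threshold is a placeholder that the governance group would set:

```python
from collections import defaultdict

def disproportionate_disagreement(submissions, threshold=0.20):
    """submissions: iterable of dicts with 'subgroup' and boolean 'disagrees'.
    Flags subgroups whose disagreement rate exceeds the overall rate by more
    than `threshold` (an assumed, customisable parameter)."""
    disagree, total = defaultdict(int), defaultdict(int)
    for s in submissions:
        total[s["subgroup"]] += 1
        disagree[s["subgroup"]] += int(s["disagrees"])
    overall = sum(disagree.values()) / max(sum(total.values()), 1)
    return [
        group for group in total
        if disagree[group] / total[group] - overall > threshold
    ]
```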
3. Meaningful human-in-the-loop validation
The human-in-the-loop framework must be substantive, not performative or a “check the box” exercise. This could be achieved, for example, through a robust workflow that requires the AI system to automatically flag certain outputs (such as those with low confidence scores, minority perspectives and content about vulnerable groups) for human review by a diverse team trained in bias and cultural awareness. Reviewers should be able to directly approve, edit or replace AI output while providing explanations, and flag parts needing expert input, ensuring accuracy and representation. In addition, humans should perform a final check on the AI output, cross-referencing the original participants’ contributions to make sure the AI-generated output accurately reflects what was received.
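To make this concrete, here is a brief sketch of what such a review workflow could record, under the assumptions described above (low confidence, minority perspectives and content about vulnerable groups trigger review); all class and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    summary: str
    confidence: float             # model confidence, 0..1
    minority_perspective: bool    # flagged during analysis
    concerns_vulnerable_group: bool

def needs_human_review(output: AIOutput, confidence_floor: float = 0.7) -> bool:
    # Route to human review if confidence is low or the content touches
    # minority perspectives or vulnerable groups.
    return (output.confidence < confidence_floor
            or output.minority_perspective
            or output.concerns_vulnerable_group)

@dataclass
class ReviewDecision:
    action: str           # "approve", "edit" or "replace"
    revised_text: str     # final text that goes into the report
    explanation: str      # why the reviewer changed (or kept) the AI output
    needs_expert_input: bool = False
```

Recording the reviewer's action and explanation alongside the AI output creates an audit trail that shows the oversight was substantive rather than a rubber stamp.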
Such a workflow entails humans double-checking the output of the AI tool. One key “selling point” of AI tools for participation is their promise of time and resource efficiency, so human validation needs to strike a balance: robust, detailed and meaningful, but also efficient and focused on eliminating negative impacts and resolving issues flagged for further consideration. This human validation work must therefore be factored into any time-saving calculations.
4. Civil society engagement: Bottom-up inclusion
Co-creation and inclusion emphasise that civil society organisations and diverse communities should shape both the implementation and fundamental design decisions of AI systems. The governance structure for achieving such a goal could include a multi-stakeholder group with representatives from government, civil society, technical experts, and academia, making decisions by consensus and allowing civil society to veto features that pose human rights risks. Periodic reviews of the process of AI development and use should focus on accuracy, human rights impact, AI model changes, resource allocation and system efficiency. The collaborative development process involves co-design workshops with affected communities, pilot testing by diverse stakeholders, and continuous improvement through performance reviews and community engagement.
5. Transparency, evaluation and sustainability - Beyond the efficiency hype
The AI system should provide two-way transparency and traceability, from the AI output back to participants’ submissions and from individual submissions to their place in the AI output, enabling transparency for both government reviewers and participants. Additionally, the methodology behind the AI analysis must be openly published alongside a summary of the input received, documenting integrated and rejected input and allowing a secondary review period.
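A minimal sketch of such two-way traceability, assuming each submission and each section of the AI-generated output carries a stable identifier; the data layout is an assumption, not a prescribed format:

```python
def build_trace_maps(summary_sections):
    """summary_sections: iterable of dicts with 'section_id' and
    'source_submission_ids'. Returns both directions of the mapping."""
    output_to_submissions, submission_to_outputs = {}, {}
    for section in summary_sections:
        output_to_submissions[section["section_id"]] = section["source_submission_ids"]
        for sub_id in section["source_submission_ids"]:
            submission_to_outputs.setdefault(sub_id, []).append(section["section_id"])
    return output_to_submissions, submission_to_outputs

# A participant can then ask "where did my submission end up?" and a reviewer
# can ask "which submissions support this paragraph?".
```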
When evaluating the efficiency gains of AI tools, measuring success only by how much time the participation process saves misses the broader societal, legal and human rights imperatives.
Societal issues and possible mitigations include:
- Legitimacy erosion: poor AI tool performance undermines trust in public participation; mitigated by high accuracy thresholds and transparency.
- Exclusion amplification: low-tech communities risk being left out; addressed by maintaining non-digital participation channels and outreach efforts.
- ‘Participation washing’: governments ignore AI-generated reports; mitigated by mandating government responses and tracking policy integration.
Metrics for evaluating AI systems could include efficiency, accuracy, reliability and inclusiveness, as well as the actual enhancement of participation rights. In addition, metrics should cover community trust, policy impact, environmental impact and ongoing human rights assessment. Pilot learning questions should address the balance between thorough human review and AI efficiency, the importance of representative and diverse analysis, and the effectiveness of feedback loops.
Organisational sustainability of such tools should involve knowledge transfer and capacity building through training programmes for public servants on AI fundamentals, practical usage and human rights. Financial sustainability would require realistic resources for development costs, assessment costs, infrastructure hosting, human resources, stakeholder engagement, training programmes, and external evaluations, acknowledging that efficiency savings may be offset by investments in thorough human review.
From hype to hope through rights-based rigour, transparency, inclusivity and accountability
For AI to potentially enhance public participation, it must be designed and governed with an unwavering commitment to human rights, inclusiveness, transparency and accountability.
The recommendations in this blueprint demand significant investment by public institutions and AI developers: time for meaningful stakeholder engagement, resources for diverse review teams, ongoing monitoring and adaptation, and willingness to prioritise quality over speed.
The key process requirements are:
- Human rights first: Every design decision must be evaluated through a rights lens. When efficiency conflicts with rights protection, rights win, and the precautionary principle applies: when an identified risk cannot be adequately mitigated or prevented, the system is a no-go.
- Nothing about us without us: Marginalised groups and communities must shape these systems from inception through deployment and beyond.
- Transparency as default: Publish assessments, accuracy metrics, failures and trade-off decisions. Trust is built through openness.
- Human judgment is irreplaceable: AI assists, humans decide. Maintain meaningful human oversight at every stage, even if it is resource-intensive.
- Continuous humility: Assume your AI system has biases or makes errors you haven't detected yet. Build in mechanisms for continuous learning and correction.
Recommendations
For Public Institutions:
- Commit to this rigorous process even when it's slower and more complex than purely technical solutions.
- Allocate sufficient budget for human rights work, not just technical development.
- Establish multi-stakeholder governance before beginning development.
- Establish an adequate grievance mechanism.
For Civil Society Organisations:
- Demand seats at design tables, not just consultation opportunities.
- Insist on published human rights impact assessments before deployment.
- Hold institutions accountable through monitoring and public advocacy.
For Technology Developers:
- Design for transparency and human oversight from the start.
- Prioritise minority voice amplification features over efficiency.
- Engage ethicists and rights experts as core team members, not afterthought reviewers.
For the Public:
- Ask questions: How was my input analysed? Can I trace it in the final report? Who reviewed AI's work?
- Use grievance mechanisms when AI misrepresents your perspective.
- Participate in impact assessments and usability testing.
The promise of AI in public participation is real, but it requires us to move slowly, deliberately and inclusively. This framework provides the roadmap. The question now is whether we have the collective will to follow it.
Additional Resources
- ECNL: AI in Public Participation - Hope or Hype
- EU AI Act Full Text
- Framework for Meaningful Engagement 2.0
- UN Guiding Principles on Business and Human Rights
- Danish Institute for Human Rights: Human Rights Impact Assessment Guidance
This framework is designed to be a living document. As AI technology evolves and we learn from implementation experience, these recommendations should be updated through the same participatory, rights-based process they prescribe.
Co-funded by the European Union. Views and opinions expressed are however those of ECNL only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.