The Framework for Meaningful Engagement 2.0 enables organisations to genuinely involve external stakeholders, particularly civil society organisations and affected communities, when developing and deploying AI systems. Grounded in feminist, decolonial and anti-racist perspectives, this framework moves beyond performative engagement to support the creation of rights-respecting AI systems. Developing such systems requires implementing appropriate measures to mitigate or prevent adverse human rights impacts, yet these impacts cannot be properly identified or addressed without meaningful consultation with those most affected by these technologies.
This framework guides product and service providers in substantively incorporating stakeholder perspectives throughout the AI development lifecycle. It can be applied within human rights impact assessments, risk assessments, compliance processes and other due diligence frameworks.
The core principle recognises that creating rights-respecting AI systems demands more than technical expertise alone: it requires the lived experience, knowledge and perspectives of the communities whose lives these systems will affect.
What are the 3 elements of meaningful engagement?
1. Creating a shared purpose
Engagement must be considered meaningful by both the convening organisation and participating stakeholders. A shared purpose emerges through co-creating a vision that integrates the interests of AI developers, affected stakeholders and the broader public good. This approach fosters deeper, more empowered engagement throughout the AI lifecycle while establishing clarity on who to involve, when and how. During this phase, securing internal buy-in from decision-makers, including executives across all relevant departments, is essential.
2. Establishing a trustworthy process
The engagement process must be inclusive, open, fair and respectful, with facilitators maintaining honesty about any barriers or limitations to delivery. This requires 4 key steps:
Step 1: Understanding and addressing concerns about barriers and limitations
This step involves actively acknowledging and discussing constraints that may affect the engagement's purpose or outcomes, including limits on the scope of possible product changes, constraints on funding, resources, capacity and expertise, and concerns about competitive pressures. Building trust requires honestly confronting past instances where trust was broken, centring the needs and concerns of historically and institutionally marginalised groups, and proactively establishing confidence in the process.
Step 2: Deciding when to engage
Engagement should occur at points where stakeholder contributions can be most influential. This means involving stakeholders not only in user experience considerations but also in shaping the intent, design and implementation of AI systems. Engagement is not a single event but an iterative, dynamic process with varying objectives, target groups and methods throughout the AI lifecycle. For example, the Discord pilot demonstrated that early involvement - before the design phase - can significantly influence product direction and embed best practices, though it requires working with less concrete information initially.
Step 3: Deciding who to engage
When mapping stakeholders, it is important to emphasise diversity across demographics, location, expertise and lived experience. Stakeholders most directly affected by the AI technology should receive priority attention.
Step 4: Choosing engagement methods
Numerous engagement methodologies exist, including one-on-one meetings, discussion groups, action research and polling - conducted either online or in person. Independent facilitators trusted by participants are often best positioned to create safe spaces for consultation. Co-designing the approach with stakeholders, especially marginalised groups, helps shape the process, address safety and privacy concerns, and balance power dynamics. A detailed catalogue of methods, including their purpose, scale, timing and cost, is available through the UK public engagement resource Involve. An effective engagement strategy also includes tools that document each step taken with trusted partners such as civil society organisations, increasing transparency by tracking each action from identification through to testing and evaluation.
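As an illustration only, the kind of step-tracking tool described above could be sketched in a few lines of Python. The stage names, fields and partner name here are hypothetical examples, not prescribed by the framework:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical stages an agreed action moves through (illustrative only).
STAGES = ["identified", "testing", "evaluation"]

@dataclass
class EngagementAction:
    """One documented action agreed with a trusted partner, e.g. a CSO."""
    description: str
    partner: str
    stage: str = "identified"
    history: list = field(default_factory=list)  # (stage, date) transitions

    def advance(self) -> str:
        """Move the action to the next stage, recording the transition
        so the full trail remains visible to all partners."""
        i = STAGES.index(self.stage)
        if i + 1 < len(STAGES):
            self.history.append((self.stage, date.today().isoformat()))
            self.stage = STAGES[i + 1]
        return self.stage

# Example usage with an invented action and partner name.
action = EngagementAction("Add opt-out for new feature", partner="Example CSO")
action.advance()  # identified -> testing
action.advance()  # testing -> evaluation
print(action.stage)  # evaluation
```

Keeping the transition history alongside the current stage is what makes progress auditable by external partners rather than only visible internally.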
3. Visible impact
Meaningful engagement requires clear accountability mechanisms that demonstrate how stakeholder input influences decisions and outcomes. This involves 3 essential steps:
Step 1: Analysing findings and evaluating responses
This stage involves collecting and synthesising feedback from multiple sources, including comments, surveys and reviews, and prioritising insights to inform bug fixes and feature enhancements. Analysis includes determining which policy and design decisions will incorporate stakeholder suggestions, which will not, and the rationale behind these choices.
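Purely as an illustrative sketch, the collect-synthesise-prioritise step could look like the following; the sources, themes and ranking heuristic are invented for illustration and are not drawn from the framework:

```python
from collections import Counter

# Hypothetical feedback items as (source, theme) pairs, drawn from the
# kinds of channels named above: comments, surveys and reviews.
feedback = [
    ("survey", "privacy"),
    ("comments", "privacy"),
    ("reviews", "accessibility"),
    ("survey", "accessibility"),
    ("comments", "privacy"),
]

def prioritise(items):
    """Rank themes by how often stakeholders raised them, and record how
    many distinct sources mentioned each - a crude proxy for breadth."""
    counts = Counter(theme for _, theme in items)
    sources = {t: {s for s, theme in items if theme == t} for _, t in items}
    return sorted(
        ((theme, n, len(sources[theme])) for theme, n in counts.items()),
        key=lambda x: (-x[1], -x[2]),
    )

ranked = prioritise(feedback)
for theme, mentions, breadth in ranked:
    print(theme, mentions, breadth)
```

A real synthesis would of course weight input from the most affected groups rather than counting mentions alone; the point here is only that the rationale for prioritisation should be explicit and reviewable.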
Step 2: Communicating decisions and impact
Engagement becomes meaningful when contributors see their input taken seriously and leading to tangible outcomes, particularly for those most negatively affected. This requires transparent communication with stakeholders explaining the rationale behind decisions, including any necessary trade-offs. In other words, when changes are implemented or rejected, communicating the reasoning back to participants ensures their contributions are recognised and valued. Organisations should clearly articulate how stakeholder feedback influenced product changes, outline the implementation steps required, and describe pathways for continued stakeholder involvement.
Step 3: Reflecting and acting on ongoing input
Gathering stakeholder feedback must continue even after decisions are made and AI systems are deployed, as this input can reveal new insights and identify alternative courses of action, including harm prevention and mitigation measures. Stakeholder feedback should be systematically translated into product development cycles and policy decisions.
Evaluation and next steps
Evaluation provides essential reflection while establishing clear future plans for both stakeholders and internal departments. This process assesses product risks, determines whether original objectives were achieved, and examines what worked well, what didn't, and what improvements are planned for future iterations. It also considers whether continued collaboration is necessary and identifies key factors required to ensure high-quality data and model training, while maintaining transparency, security and participant trust throughout the AI lifecycle. A trustworthy evaluation stage includes publicly sharing all commitments, findings and next steps.
A practical guide to participatory practices throughout the AI lifecycle
The Framework for Meaningful Engagement 2.0 was developed through emerging interdisciplinary research and ECNL case studies that demonstrate the transformative power of genuine stakeholder engagement. It offers a tested, actionable guide that distinguishes deep, substantive engagement from superficial consultation.
By centring the voices of those most affected by AI systems, this rights-based framework ensures their input shapes recommendations and drives concrete, accountable outcomes. Critically, the FME 2.0 directly addresses the barriers and structural limitations that often prevent meaningful participation. Rather than treating these obstacles as incidental challenges, it requires organisations to confront them head-on through acknowledgement, examination, transparency and action. Ultimately, the FME 2.0 provides a pathway for AI companies, institutions, developers and deployers to embed responsible, participatory practices throughout the AI lifecycle - fostering more equitable and accountable AI systems that truly serve the communities they affect.
This summary was prepared within the framework of the ParticipatiON Project, co-funded by the European Union. Views and opinions expressed in this material are however those of ECNL only and do not necessarily reflect those of the European Union or European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.