Framework for Meaningful Engagement 2.0

04-11-2025
The updated FME 2.0 offers practical guidance on how to engage civil society and affected communities in AI development, supporting platforms and AI developers in creating rights-based products.
[Banner: Framework for Meaningful Engagement 2.0, October 2025. Society Inside and ECNL.]

First released in 2023, the Framework for Meaningful Engagement (FME) is a tool for meaningfully engaging civil society and affected communities in AI development. Grounded in a socio-technical approach and enriched by insights from real-world pilots and leading research, it offers actionable steps from incubation through deployment.

Why was the FME updated?

The revised framework draws on two pilot studies carried out from 2023 to 2024. ECNL partnered with Discord to examine how its Safety Machine Learning (ML) team develops models for content flagging, user education and moderation, with a focus on teen safety. For more information, read ECNL’s and Discord’s initial learnings. ECNL also worked with the City of Amsterdam on “scan bikes,” an image recognition service that integrated citizen feedback into its design and development. Read more about the pilot. Building on these pilots, FME 2.0 has been further refined through expert interviews and emerging research at the intersection of computer science, data science and tech policy.

What are the differences between FME 2.0 and 1.0? 

FME 2.0 retains the structure of 1.0, built around three elements of meaningful engagement:

  1. a co-created ‘shared purpose’: a shared vision that balances AI developers’ interests, affected stakeholders’ needs and public interest considerations;
  2. a ‘trustworthy process’: understanding and transparently addressing barriers and limitations, timing engagement where contributions can be most influential, and identifying diverse stakeholders across demographics, location, expertise and lived experience;
  3. ‘visible impact’: analysing and evaluating the feedback received, following up with participants and setting out next steps.

Who is it for?

This framework helps product and service designers who develop algorithmic systems to meaningfully involve external stakeholders in that process. It can also be used in the context of human rights impact assessments, risk assessments and compliance with similar processes and frameworks.

How and when can the FME be used?

Continuously throughout the AI lifecycle, beginning at ideation and extending through deployment.

Who developed it?

Developed by ECNL and SocietyInside, this framework is the result of a co-creation and consultation process involving over 300 individuals and groups from civil society, the private sector and public service across the world.

The framework is centred on feminist, decolonial and anti-racist perspectives, with stakeholder engagement goals to:

  1. collaborate with experts and those with lived experience to proactively identify and address the concerns of marginalised and vulnerable groups affected by the AI system;
  2. move beyond performative engagement and avoid reinforcing existing power imbalances by establishing robust processes that genuinely incorporate stakeholder values into AI development and drive substantive change.

The goal of engagement is to develop rights-respecting AI systems, which requires measures to mitigate or prevent adverse human rights impacts caused by these systems. Stakeholder engagement is only meaningful when it produces concrete outcomes, ranging from significant product modifications to full discontinuation when risks cannot be mitigated.

What’s next?

We are building connections between AI developers and civil society organisations to advance rights-based innovation. Interested in joining the conversation or collaborating? We would love to hear from you!