From theory to practice: How ECNL and Discord pioneered meaningful AI engagement

23-06-2025
Discover how we tested a new approach to involving civil society and affected communities in AI development - and what we learned from the process.

Having this privileged experience with ECNL to engage with multiple stakeholders from around the world and think about how Discord should approach safety-by-design practices for AI model development has been incredibly valuable. The opportunity to collaborate with people grounded in human rights principles is exactly the kind of meaningful engagement I value most, and ECNL made it happen through this partnership. I'd like to extend sincere thanks to everyone who so generously participated in these discussions with us, and to ECNL for not just facilitating but truly making this important work a reality.

- Soomin Jun, Safety by Design Expert for AI/ML at Discord. 

This reflection from a Discord team member captures the essence of what happens when technology companies move beyond performative consultation to genuine, meaningful engagement with the communities they claim to serve. 


The pilot: a bold experiment in AI governance and product development 

In 2024, ECNL and Discord embarked on an ambitious pilot to test the Framework for Meaningful Engagement (FME) - a groundbreaking approach to involving civil society and affected communities in AI development. We were adamant that this wouldn't be just another corporate social responsibility initiative; it was a deliberate experiment in reimagining how AI developers can work with external stakeholders to build technology that respects human rights and strengthens civic space. 

This wasn't a one-sided evaluation. From the outset, ECNL and Discord committed to true co-creation, working collaboratively and openly throughout the process. The pilot aimed to engage diverse stakeholders globally - from digital rights and AI experts to marginalised communities (including youth) and those with lived experiences of online harm. 

Our pilot with Discord focused on a critical application: how the company's Safety ML team builds algorithmic models for content flagging, user education, and moderation of abusive and harassing content online, with a focus on teens. The goal was clear yet challenging - to test whether the FME could enhance Discord's stakeholder engagement processes while providing practical insights for the broader tech industry. 

Our time and work on this project were funded through external grants, with generous support from the Mott Foundation and the Omidyar Network. We did not receive any funding from Discord. That said, Discord covered the direct costs associated with stakeholder engagement, including travel expenses, compensation for online participants, and venue and catering costs. 


ECNL's Framework for Meaningful Engagement: beyond performative consultations 

The FME, developed by ECNL and Society Inside through consultations with nearly 300 diverse experts and community members worldwide, rests on three foundational pillars: 

  1. Shared Purpose - Moving beyond corporate self-interest to embrace genuine public interest 
  2. Trustworthy Process - Creating inclusive, transparent, and respectful engagement mechanisms 
  3. Visible Impact - Ensuring stakeholder input leads to real changes in product development 
[Image: the three key components of the FME - shared purpose, trustworthy process, and visible impact]

The stakeholder engagement process 

Our engagement process spanned continents and communities, with each consultation building on the last. The journey began at MozFest in Amsterdam (June 2024), where we engaged technical experts in digital rights. This was followed by the Global Gathering in Lisbon (September 2024), which brought Global Majority advocates into the conversation. 

A pivotal moment came during our in-person session at Discord HQ in San Francisco (October 2024), where AI and content moderation experts dove deep into technical challenges. We then expanded our reach through a virtual youth consultation (January-February 2025), recognising that young people are key Discord users and often those most affected by content moderation decisions. The process culminated at RightsCon in Taipei (February 2025), where content moderation experts from across the Asia-Pacific region added crucial perspectives. 

Key learnings 

1. The power of diverse and strategic stakeholder mapping 

Success begins with knowing who to engage. We confirmed that effective stakeholder mapping must consider regional diversity, demographic representation, and subject matter expertise. While leveraging existing networks provided a strong foundation, we saw the critical importance of intentionally broadening our reach to include voices traditionally excluded from tech policy conversations. 

2. Beyond public relations: engaging product and engineering teams 

One of our most significant insights was the value of working across multiple teams—Trust & Safety, product policy, legal, and engineering—rather than only interacting with public policy or communications departments. This approach led to honest conversations about technical constraints and design trade-offs, while giving us direct opportunities to influence product design and development.  

3. The importance of corporate partnership architecture and aligned vision 

The pilot reinforced that Step 1 of the FME - establishing shared purpose - is critical. Without genuine alignment on goals and values, even the best-designed engagement process will falter. Our shared commitment to protecting civic space while addressing real safety concerns provided the north star for difficult conversations and trade-off decisions. 

We learned to rely on our corporate partner for cross-functional engagement while maintaining our independence. Discord's willingness to facilitate connections across different teams while respecting ECNL's autonomous voice proved essential for meaningful dialogue. We're grateful for Discord's trust and transparency. 

4. Early integration changes everything 

Perhaps our most important learning was the transformative power of engaging stakeholders at the earliest stages of AI development. Starting during the ideation and design phase allowed ECNL's input to influence multiple product areas and embed best practices from the ground up. This approach stands in stark contrast to applying fixes after products are already built. 

However, engaging stakeholders so early comes with trade-offs. There's less concrete information to share about specific products and less clear direction, since the goal is to hear from affected communities about what kind of product could be helpful rather than harmful. 

Our hope is that Discord maintains some level of engagement throughout the product lifecycle, even as this formal partnership concludes. More broadly, we hope other AI developers and platforms will engage with stakeholders from the very first stages of ideation and design, sustaining that engagement throughout development and deployment.  

Navigating challenges: honest reflections 

Our journey wasn't without obstacles. Limited agency posed a significant challenge - as a small, independent team, ECNL could only make recommendations, not decisions. This dependency on Discord as the core implementer meant we constantly had to exercise influence without authority. 

Capacity constraints meant relying heavily on existing partnerships and knowledge, with limited time for upskilling or building new networks. Resources, too, depended on corporate partner support, creating an inherent imbalance that required careful management. 

Administrative hurdles proved more challenging than anticipated. From coordinating participant outreach to managing logistics and administering compensation (such as gift cards for community participants), these practical details consumed significant time and energy that could have been directed toward substantive engagement. 

Looking forward: harnessing the lessons learned 

As we move forward, three priorities guide our next steps: 

Following up with participants:

The FME's third pillar - visible impact - requires maintaining relationships with the diverse stakeholders who contributed their time and expertise. Informing them about the outcomes of the consultations and enabling their continued engagement remains vital as Discord's products evolve. 

Tracking integration:

Our pilot was never intended as an audit of Discord's practices, but rather as an opportunity to test the FME in real-world conditions and revise it based on those insights. That said, we're committed to observing how the company integrates stakeholder feedback into its products as they move through development and deployment. We'll provide input on product development if requested, and support Discord in communicating how stakeholder input has been integrated - both through public updates and direct engagement with project participants. 

Expanding the model:

Overall, we're deeply satisfied with this collaboration and would enthusiastically pursue similar partnerships with organisations that share our vision and purpose. Finding such partners is easier said than done, and this requirement unfortunately rules out many AI developers that we believe cannot or will not meaningfully engage stakeholders. 

We recommend that any organisations wanting to embark on such projects allocate more resources and establish clearer guidelines for administrative support. The key takeaway isn't that only large companies with massive revenues can do stakeholder engagement well. Instead, every AI developer should allocate resources proportionate to their revenue to meaningfully engage stakeholders during product development. Start-ups and small-to-medium enterprises can also approach this in ways that work for their circumstances. 

The lessons learnt have strengthened the FME, and we're excited to revise the framework in the coming months. We'll incorporate insights from the pilot and include considerations for large language models (LLMs). This pilot has expanded our capacity to support platforms and AI developers in creating rights-based products, and we're grateful for the opportunity to collaborate with like-minded partners such as Discord.  

An open invitation 

ECNL stands ready to support this vital work, both informally with Discord as their AI systems evolve and through new collaborations with mission-aligned platforms and AI developers. The challenges facing our digital civic space are too important for any one organisation to tackle alone. 

If you're an AI developer, platform, or civil society organisation interested in meaningful engagement, we want to hear from you. Together, we can ensure that AI systems don't just avoid harm but actively strengthen our civic space and human rights. 

Ready to move from consultation to genuine engagement? Reach out to us at [email protected] 

The Framework for Meaningful Engagement pilot represents a collaboration between ECNL and Discord, with support from hundreds of stakeholders worldwide (including Society Inside, which co-developed the framework). While the formal pilot has concluded, the work of building human rights-respecting AI continues.