Amsterdam's journey in participatory AI development

04-12-2025
Case study on how the municipality, technical team and citizens worked together to develop responsible AI that genuinely serves public interest, piloting ECNL's Framework for Meaningful Engagement while doing so.

When the City of Amsterdam's Computer Vision Team set out to develop an AI-powered camera vehicle for urban scanning in 2023, they faced a critical question: how to build public trust in a technology that raises fundamental concerns about privacy and surveillance?

The answer lay not in technical specifications or legal compliance alone, but in reimagining how people could shape the technology's design and use.

Using the Framework for Meaningful Engagement (FME), Amsterdam embarked on a participatory AI pilot that would test whether genuine collaboration between the municipality, technologists and people could move beyond tokenistic consultation. The results offer valuable lessons for any organisation tackling the challenge of responsible AI development.

Meaningful engagement matters 

The FME, developed by ECNL and SocietyInside with input from over 300 stakeholders worldwide, rests on three pillars: Shared Purpose, Trustworthy Process, and Visible Impact. These are practical requirements for ensuring that AI systems respect human rights and serve public interests rather than simply optimising for efficiency. The Amsterdam pilot shows why this matters: an AI-powered scanning vehicle could speed up urban planning and street maintenance, but it could also become an instrument of mass surveillance. Without meaningful public input, even well-intentioned projects risk deepening distrust and harming the community.

Three-level approach  

Amsterdam structured engagement across three levels, each serving a distinct purpose. 

  • A broad online survey reached 862 citizens; it identified priority concerns so that subsequent engagement would reflect widely held values. 
  • A focus group of approximately 40 interested respondents enabled deeper deliberation on trade-offs and alternatives.  
  • Finally, a citizen panel of 15 diverse participants selected specifically to include both supporters and critics of AI technology engaged in intensive co-design through multiple design sprint sessions. 

This multi-level approach addressed a key tension: how do you involve enough people to claim legitimacy while enabling the depth of engagement necessary for meaningful impact? Amsterdam allocated €30,000 for participation, a budget commitment that signalled institutional seriousness. Critically, the public tender for AI providers included an obligation to facilitate stakeholder input, embedding participation in the contractual requirements. 

Navigating engagement  

The pilot's most valuable insights emerged not from what went smoothly, but from confronting tensions that such processes inevitably create. For example:  

  1. Focus group participants didn't just offer input on implementation but also questioned whether the scanning vehicle should exist at all. They raised fundamental concerns about surveillance, explored less intrusive alternatives, and challenged the assumption that AI scanning was the appropriate solution. Rather than dismissing these concerns as "out of scope," the project team provided space for this questioning.

This reflects the FME principle that meaningful engagement must allow stakeholders to influence not just how technology is built, but whether it should be built. 

  2. A persistent challenge was the impression that the engagement process was slowing down development rather than improving the outcome. This tension is real: continuous engagement consumed months beyond standard AI development timelines. The pilot experimented with mitigation strategies, such as continuing technical development alongside participation, but couldn't fully resolve the friction between organisational efficiency and the time required for genuine collaboration. Embedding participatory AI as standard practice requires cultural change around what constitutes a "reasonable" project timeline. Speed cannot be the primary metric of success when human rights are at stake. 
  3. Converting citizen concerns into technical specifications required constant dialogue so that technical teams did not simply impose their own interpretations. The pilot employed multiple strategies to address this: using demos to make technical choices easy for non-experts to understand, and documenting verbatim concerns with clear traceability to design decisions. These mechanisms reduced but didn't eliminate power asymmetries, suggesting that such dynamics can only be addressed through ongoing institutional commitment. 

Several concrete practices emerged as particularly effective for translating participation into visible outcomes: 

  1. Traceability and documentation: the project team provided shared meeting summaries after each session, explicitly documented how input influenced design choices (or explained why certain preferences couldn't be accommodated) and used demos to show progress. This transparency built trust by demonstrating that participation mattered. 
  2. Adaptive facilitation: when participants requested less use of Microsoft Teams and more user-friendly communication tools, the team switched to email and WhatsApp. When they emphasised the need to explore low-tech alternatives, the team adjusted its processes. This responsiveness signalled genuine openness to feedback. 
  3. Knowledge translation: through scenarios, design sessions and accessible language, the team bridged gaps between lived experience and technical expertise. Crucially, translation flowed both ways: technical teams learned from citizens about surveillance concerns, while citizens learned about implementation constraints. 
  4. Creating a positive and safe environment: participants evaluated design sprint sessions as "fun and stimulating," with the group's diversity bringing different perspectives. This wasn't accidental: deliberate facilitation choices created space for reflection and an environment where critical views were welcomed. Meaningful participation is a social process of building relationships, not just a technical process of collecting input. 

The pilot produced visible changes to the AI system's design: modified data retention protocols based on privacy concerns, changes to vehicle appearance and identification mechanisms, and adjusted scanning parameters for timing, locations and frequency.  

Beyond changes to the AI system itself, participants expressed increased trust in Amsterdam's willingness to incorporate feedback, in contrast to prior participatory efforts. The process also created institutional knowledge about participatory practices that will benefit future projects.  

Perhaps most importantly, both citizens and project team members viewed the final AI system as more legitimate because its design reflected stakeholder input rather than purely technical or administrative priorities.

Pathway forward 

The pilot demonstrates that meaningful, participatory AI is possible. Whether it's scalable across all AI systems remains an open question: the intensive resources, time and expertise required raise practical concerns about how participatory approaches can become standard practice rather than exceptional efforts. However, Amsterdam's experience suggests that the question isn't whether organisations can afford meaningful participation, but whether they can afford not to.

For organisations committed to responsible AI that genuinely serves public interests, frameworks like ECNL's FME provide guidance for navigating this journey. The Framework for Meaningful Engagement 2.0 offers practical tools for involving civil society and affected communities throughout the AI lifecycle.

 

 


This case study was prepared within the framework of the ParticipatiON Project, co-funded by the European Union. Views and opinions expressed in this material are however those of ECNL only and do not necessarily reflect those of the European Union or European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.