What are the risks of generative artificial intelligence (AI) to our democracies? How can we set up guardrails for the development and use of AI? And can Europe’s legal frameworks help ensure that technology reinforces, rather than undermines, democracy?
These were some of the key questions explored in the webinar “How to Make AI Safe for Democracy?”, organised on October 14, 2025, by Make.org and the European Center for Not-for-Profit Law (ECNL). The discussion was part of ECNL’s wider effort on the intersection of democracy, fundamental rights, and technology. Together with partners like Make.org, ECNL creates spaces for dialogue to connect diverse perspectives and explore how technology can be used responsibly and with respect for fundamental rights.
Moderated by Vanja Skorić, Program Director at ECNL, the webinar featured David Mas (Chief AI Officer, Make.org), Théophile Pénigaud (Post-Doctoral Researcher, CEVIPOF – Sciences Po Paris), and Francesca Fanucci (Senior Legal Advisor, ECNL).
Setting the scene: threat, superpower, or both?
Opening the session, David Mas described the new reality of politics in the AI era. We live in a world where AI is used to spread disinformation, create deepfakes, deliver hyper-personalised campaign messages, and manipulate electoral processes. To respond to these threats, Make.org, in collaboration with Sorbonne University, Sciences Po Paris and CNRS, launched the “AI For Democracy - Democratic Commons”, a global research programme that seeks to create AI systems safe for democracy.
But as David explained: “We should not only protect democracy, we must also reinforce and strengthen it.” He outlined four ways in which AI can enhance public participation, calling these “superpowers for democracy”, which can help build a more informed and connected citizenry:
- Enhance access to information: AI can help people grasp complex issues by summarising documents in clear, accessible ways;
- Enhance expression: AI writing assistants can help people formulate ideas more clearly, making public debates more inclusive;
- Facilitate: Conversational AI can help moderate discussions and expand citizen assemblies online;
- Translate: Translation tools can make multilingual debates possible.
However, David warned that AI systems are far from neutral: built-in biases can easily work against efforts to strengthen democracy. Relevant types of bias include:
- Demographic bias: discrimination by gender, ethnicity, class etc.;
- Information bias: AI “hallucinations” or unreliable data;
- Opinion bias: political leanings encoded in AI outputs;
- Pluralism bias: neglecting minority or less popular viewpoints.
Democratic Commons: redefining AI technology as a force for democracy
To unlock the democratic “superpowers” of AI and guard against its biases, Make.org and several research partners launched the Democratic Commons programme one year ago. Through the programme, Make.org envisions creating common goods for anyone to use. These common goods will include Democratic Principles for AI, bias correction tools (based on research findings on democratic biases), open-source Large Language Models, and open-source participatory platforms, which should all be “safe for democracy”.
One of the first milestones of the programme is the development of the Democratic Principles for AI: a set of principles that generative AI must adhere to. These principles were presented during the webinar by Théophile Pénigaud, one of the researchers leading this work. Before presenting them, he argued that “Democracy is what philosophers call an essentially contested concept,” which should be understood not as a value but as a process.
“As soon as you make democracy a value to be defended, it becomes one value among others, and it will soon be traded off against other values”, he said.
This is also the reason why Make.org speaks of democratic “principles” rather than values: principles are not weighed against each other. We cannot fix the meaning of democracy once and for all, but we can define principles that help us orient ourselves.
The team identified five guiding principles generative AI should adhere to in democratic contexts:
- Participation: inclusion and equal access;
- Political and ethical pluralism: reflection of diverse voices;
- Deliberation: reasoned, informed dialogue;
- Responsibility: accountability and transparency;
- Agreement and disagreement: support for constructive dissent.
These five principles are applied across the core AI functions (the “superpowers” for democracy, which David presented) to assess how well technologies support or undermine democratic processes, through 19 indicators and sub-questions. ECNL provided comments on the draft Democratic Principles to strengthen their rootedness in human rights. Théophile emphasised that these principles are not about ranking AI systems by virtue but about building understanding. “We don’t sell a product, we provide a roadmap,” he said. “Making AI safe for democracy means enabling independent research to help us ask the right questions, irrespective of the conception of democracy we personally favour.”
The researchers’ questions aim to:
- understand how LLMs function,
- develop AI tools that are neutral and do not steer people in certain directions, and
- provide metrics and benchmarks to assess AI models.
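To make the idea of such metrics concrete, here is a purely illustrative sketch (not the actual Democratic Commons benchmark) of how an AI function could be scored against the five principles, with each principle checked through several indicators. All indicator names and scores below are invented for illustration.

```python
# Illustrative sketch: aggregating hypothetical indicator scores (in [0, 1])
# into one score per democratic principle, for a single AI function
# (e.g. "access to information"). Names and values are invented.
from statistics import mean

indicator_scores = {
    "participation": {"equal_access": 0.8, "inclusive_language": 0.7},
    "pluralism": {"viewpoint_diversity": 0.6, "minority_coverage": 0.4},
    "deliberation": {"sourcing_quality": 0.9},
    "responsibility": {"transparency_of_limits": 0.5},
    "agreement_disagreement": {"dissent_support": 0.7},
}

def principle_scores(scores: dict) -> dict:
    """Average the indicator scores under each principle."""
    return {p: round(mean(ind.values()), 2) for p, ind in scores.items()}

print(principle_scores(indicator_scores))
```

In practice, a benchmark like the one the researchers describe would define each indicator far more carefully (via sub-questions and evaluation protocols); this sketch only shows the shape such a principle-by-indicator assessment could take.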
The first tool developed so far based on research findings is the Panoramic AI platform, which facilitates access to information by summarising and simplifying documents on complex topics. In the coming years, Make.org and partners will continue working to contribute to future generations of AI which are inherently aligned with democratic principles and uses.
Civic participation and trust in democracy: leveraging technology regulation in Europe
The final presentation by Francesca Fanucci explored how European legal frameworks can safeguard democracy in the AI era.
The EU AI Act, adopted in 2024, harmonises the regulation of the development and use of AI systems in a way that is consistent with EU values, including “democracy”. As a regulation, it is directly applicable in all 27 EU member states.
The EU AI Act is a risk-based regulation, which imposes different levels of requirements and obligations on providers and deployers of AI-based applications, depending on the risk classification of such systems.
AI tools with unacceptable levels of risk are prohibited. These include, among others, manipulative or deceptive techniques that subvert or impair a person’s autonomy, decision-making, or free choice. AI-based systems used to influence the outcome of an election or a referendum by influencing the voting behaviour of a natural person are classified as high-risk and must comply with security, transparency, and quality obligations, as well as undergo conformity and fundamental rights impact assessments. In addition, the AI Act includes a specific reference to general-purpose AI models and warns about situations in which they are integrated into AI systems in ways that may pose systemic risks, including any actual or foreseeable negative effects on democratic processes. In such cases, even if the AI model does not fall under one of the high-risk categories listed in the AI Act, the heightened protection in terms of transparency and fundamental rights impact assessment applies.
Find out more about the AI Act on ECNL's Learning Center.
The Council of Europe’s Framework Convention on AI, Human Rights, Rule of Law and Democracy is the first international legally binding treaty on this topic. It has not yet entered into force, as it has not been ratified by the required number of countries. So far, the EU itself has signed the Framework Convention, followed by another 16 European and non-European countries. Unlike the EU AI Act, the Framework Convention focuses specifically on the protection of human rights, democracy, and the rule of law and does not regulate the economic and market aspects of AI. The Framework Convention’s explanatory report outlines examples of threats that AI applications pose to democracy and human rights when they serve “as a potent tool for fragmenting the public sphere and undermining civic participation and trust in democracy”, e.g. when they disseminate disinformation and misinformation or when they make prejudiced decisions about individuals, leading to their exclusion or underrepresentation in democratic processes.
Art. 5 imposes an obligation on states to:
- ensure AI systems are not used to undermine the integrity, the independence, and the effectiveness of democratic institutions and processes; and
- protect their democratic processes in the context of activities within the lifecycle of AI systems, including individuals’ ability to freely form opinions (“agency”) and their fair access and participation in public debate (“influence”).
Both the EU AI Act and the CoE Framework Convention require public authorities and other relevant deployers, in specific situations, to conduct fundamental rights impact assessments throughout the lifecycle of the AI tools. Francesca emphasised that this process must be inclusive, and civil society must be engaged in these assessments: “this process should be democratic itself”. She also stressed that it would be good practice for AI developers and deployers to conduct such assessments beyond the situations in which it is mandatory.
Key Takeaways
Across philosophy, technology and law, the speakers agreed there is an urgent need for robust safeguards and meaningful cross-sector collaboration to prevent harm and ensure the “superpowers” of AI are leveraged to reinforce, rather than undermine, democracy. Key takeaways included:
- Embedding democratic principles into AI design and employment is essential to make AI tools “safe for democracy”;
- Independent research, guided by these same principles, can deepen insights into risks and support the development of “safe”, accessible tools;
- Legal frameworks establish foundational safeguards for responsible AI, but they only achieve their purpose when they are properly implemented and when rightsholders and civil society are meaningfully engaged in fundamental rights impact assessments and enabled to hold developers and deployers accountable;
- AI developers and deployers should go beyond statutory obligations in taking precautions to ensure AI tools do not have a negative impact on either fundamental rights or democracy.
We must prevent AI from altering the conditions of public debate before public debate can influence the course of AI development. How this is done in practice will determine whether AI strengthens or erodes our democratic life.
Find out more:
- If you are a developer or deployer of AI tools for public participation processes and want to learn more on how to design and use AI tools that strengthen public trust in democratic processes, see ECNL's Blueprint for rights-centered AI in public participation.
- If you want to know more about the key results of the Democratic Commons research programme, one year after its launch, read the Progress Report or follow the latest activities on its LinkedIn page.
- You can also watch the webinar recording.
Co-funded by the European Union. Views and opinions expressed are however those of ECNL only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.