The coming two years will be critical for the future of Artificial Intelligence (AI) regulation in Europe. While the adoption of the European Union’s AI Act in 2024 was a significant achievement in itself, the implementation and enforcement phase that now follows will decide whether it has a practical impact on how AI is developed and used. Civil society organisations must be active partners in this process to ensure that the regulation becomes an effective tool to challenge the harmful impacts of AI on society and to secure accountability for its use.
Civil society’s success will determine the future direction of AI not just in Europe, but also in the many countries across the globe that look set to follow the EU’s regulatory lead.
In a report commissioned by the European AI & Society Fund, we identify opportunities for civil society to shape the outcomes of the AI Act over this period. The report describes the different stages of implementation, suggests specific activities civil society can undertake, outlines the skills and expertise required, and sets out what funders can do to support this work.
In the months ahead, the institutions that will operationalise the AI Act will be established, the guidelines that specify prohibitions and risks will be drawn up, transparency measures will be drafted and technical standards agreed. With civil society participation, each of these presents an opportunity to implement the Act in line with the public interest, uphold fundamental rights and protect the most vulnerable. This could ensure that bans on the most harmful AI systems, like remote biometric identification (RBI), are tightly drawn, that products like ChatGPT have to address the systemic risks they pose to society, and that exemptions around national security and migration are challenged. Without civil society pushback, these same processes give industry an opportunity to weaken rules and widen loopholes, limiting the final effectiveness of the law.
This period is also a time to prepare to apply the law in practice. The AI Act is complex and sits within a mosaic of established fundamental rights and equalities legislation as well as freshly passed digital markets regulation. Navigating routes to accountability will require the painstaking preparation of test cases, which will put the EU’s claim to be the home of trustworthy AI to its essential test. Recent experience from the Digital Services Act (DSA) demonstrates that civil society can have a tangible impact by shaping the implementation of a law and by working alongside regulators to start holding companies to account. It also highlights the skills and resources needed to make this happen.
Drawing on these lessons, the report presents detailed recommendations on the most important capacities civil society will need to engage in the implementation and enforcement of the AI Act:
- coordination;
- research;
- advocacy on the EU and national levels;
- strategic litigation;
- campaigning and movement building.
These recommendations are based on ECNL’s analysis of the AI Act and engagement with the field. They are offered as a starting point for further discussion and strategising, both among public interest advocates and the funders that support them, and we welcome feedback to refine and improve them.