On 7 May 2026, EU institutions concluded a deal on the AI Omnibus, a law proposed in November 2025 to “simplify” the EU’s landmark Artificial Intelligence Act adopted in 2024. While we remain critical of the final text of the AI Act, the Omnibus threatened to dismantle the few hard-won safeguards before they even began to apply.
Fortunately, some of the most dangerous changes were rejected, but the Omnibus still rolls back fundamental rights protections in the AI context and sets a worrying precedent for other EU digital laws.
A wave of deregulation undermining EU democracy
The AI Omnibus is part of the European Commission's broader so-called “simplification” agenda, which aims to reduce “regulatory burdens” across sectors spanning tech, agriculture, and defence. It was presented in November 2025 alongside the Data Omnibus, which focuses on data protection; together they form the Digital Omnibus package. Additional omnibus laws targeting other areas of the EU digital rulebook may be presented in the coming months.
From the beginning, ECNL and civil society partners voiced concerns that the European Commission was pursuing deregulation under the guise of simplification. Omnibus laws are typically used to introduce small technical changes to legislation. In the case of both the AI and Data Omnibus, however, the proposed amendments went far beyond cosmetic fixes and overhauled substantive provisions. Despite the far-reaching consequences of the changes in both drafts, the Commission presented no proper justification of how they would affect fundamental rights and no clear evidence that they were necessary. Nor did it conduct a robust stakeholder consultation. When preparing the Digital Omnibus, the Commission organised so-called “reality check” meetings, but it primarily invited the private sector, leaving out civil society and other public interest-driven actors. The Commission also bypassed the consultation mechanism that the AI Act itself envisions - the Advisory Forum - which has still not been established. All of this stands in stark contrast to the Commission's own Civil Society Strategy, in which it committed to enhancing its engagement with CSOs.
Overall, the Omnibus process is a blow to democratic lawmaking and public participation in the EU, seemingly undertaken to appease private sector actors. Using omnibus-type legislation to make substantive changes to laws that went through the full legislative process, without an impact assessment or public consultation, sets a worrying precedent for EU democracy.
What is going to change?
The most consequential change, which we opposed from the very beginning, is the delayed application of the AI Act's obligations for high-risk AI systems. As we show in our Learning Center module, the AI Act was designed to apply in stages, with the biggest chunk of obligations - those applying to AI systems posing high risks to fundamental rights - due to take effect in August 2026. The Omnibus delays this much-needed protection against harmful AI by 15 months, to December 2027. Worse, providers of non-compliant systems are likely to rush them to market before the requirements take effect: a clause in the AI Act exempts systems placed on the market before 2 December 2027 from compliance obligations unless those systems later undergo substantial modifications.
Several other changes are also likely to affect fundamental rights protections:
- The Omnibus weakens the obligation on AI providers and deployers to ensure a sufficient level of AI literacy among the people involved in operating their systems. Instead, they will only need to support such literacy efforts, without guaranteeing the outcome. In practice, this change deprioritises AI literacy and weakens “human-in-the-loop” safeguards. In the long term, lower AI literacy can mean that AI operators lack the skills or confidence to challenge AI decisions in socially impactful contexts (such as welfare allocation or predictive policing), and it can undermine the quality of fundamental rights impact assessments.
- The Omnibus expands the ability to process sensitive personal data for bias detection and mitigation to all AI systems, not only those classified as high-risk. This change defies the principles of data minimisation, necessity, and proportionality that underpin EU data protection law. As civil society has pointed out, the use of sensitive data is only one among many measures that can be taken to address algorithmic discrimination. Moreover, the AI Act already includes appropriate measures for bias mitigation in high-risk AI systems.
- AI systems embedded as safety components in machinery - so-called industrial AI - will be excluded from the scope of the AI Act and subject instead to existing sectoral legislation. This undermines the horizontal nature of the AI Act and raises doubts as to whether the existing laws are sufficient.
- The ability to comply with certain AI Act requirements in a simplified manner has been extended to small mid-cap companies, not just small and medium-sized enterprises (SMEs). Small mid-caps are defined as companies with fewer than 750 employees and an annual turnover of under 150 million euros - thresholds that may soon increase to 1,000 employees and 200 million euros. The European Commission has yet to clarify what “simplified” will mean in this context, but we are concerned that this change will limit public scrutiny and accountability of AI systems developed by an even larger number of companies.
Worst changes averted
The AI Omnibus, in its final form, is an example of harmful deregulation, but many of the worst proposed changes ultimately did not prevail. Most importantly, civil society succeeded in stopping the Commission's most dangerous proposal: removing the transparency safeguards around possibly the biggest loophole in the AI Act. The law grants AI providers whose systems would normally fall into the high-risk category the right to escape AI Act requirements if they conclude that their system does not pose significant threats to fundamental rights. During the negotiations in 2024, policymakers added an important safeguard aimed at limiting the risk of abuse and improving public scrutiny: the obligation to register the self-declared exemption in a public database, alongside a justification. The Commission proposed to remove this safeguard entirely, which would have created an incentive for providers to exempt themselves from AI Act requirements without any accountability. Fortunately, both the Parliament and the Council agreed that a total removal of this obligation was unacceptable. The final agreement is nonetheless far from perfect. It strips the database of two critical pieces of information: the Member State where a system is available or in use, and the justification for any exemption claimed. Without these, meaningful scrutiny becomes difficult.
We also welcome the Omnibus's decision to preserve fundamental rights impact assessments (FRIAs), despite proposals from some industry actors to remove the requirement altogether. During negotiations, there was also a risk that FRIAs would effectively be replaced by data protection impact assessments (DPIAs), which are neither published nor, in practice, comprehensive in their coverage of fundamental rights. The final text allows deployers to cross-reference the DPIA - an approach we also recommended in our guide to FRIAs - while still requiring them to assess the impact on fundamental rights before deploying a system.
The bigger threat on the horizon
The real threat to fundamental rights protection in the context of AI lies in the Data Omnibus, the other arm of the Digital Omnibus package, which is still under negotiation. This proposal introduces far-reaching changes to the EU's landmark data protection law, the GDPR. The Commission has proposed revising foundational concepts - including the definition of personal data itself - a change with significant implications for AI, given how many systems depend on personal data processing. The proposal narrows this definition by focusing on the controller's technical ability to identify the individuals in question. This could encourage companies to claim that data processed within an AI system is not personal simply because they lack the means to link it to a specific individual - a position that runs counter to decades of data protection case law and practice. The AI Act draws on the GDPR when it uses terms such as “profiling” to define prohibited or high-risk uses. If AI developers can simply assert that the data in their systems is not personal, and therefore that no profiling is taking place, they gain an easy escape route from AI Act obligations altogether.
The Data Omnibus also creates a new legal basis specifically for AI, defying the GDPR's principle of technological neutrality. Under this proposal, AI companies could rely on their legitimate interest, rather than explicit user consent, to repurpose previously collected personal data for AI training. This would also open the door to mass scraping of personal data from the internet, without giving people any control over the process. Finally, the Data Omnibus allows AI developers to use sensitive data about health, sexual orientation, or political views when removing it from a dataset would require “disproportionate efforts”.
The full consequences of these deregulatory measures remain to be seen. One thing, however, is clear: what is being sold to the public as “simplification” will ultimately serve the AI industry - not the people affected by its systems.