
Digital Omnibus – proposals from the perspective of the AI Act

Arttu Ruukki, Karoliina Vesa
Blogs
December 16, 2025

The European Union has awakened to the need to cut back digital regulation that weakens the region’s competitiveness. In the area of artificial intelligence in particular, it has become clear that simpler and more agile governance is needed, as Europe risks falling behind in the fast‑moving AI race. Drawing on various studies, the Commission issued its proposal aimed at simplifying AI regulation on 19 November 2025. This article outlines the main points of this so‑called digital omnibus package, focusing in particular on the proposed amendments to the AI Act and their practical relevance. The proposed amendments to the Data Act, presented at the same time, are addressed in a separate article.

What on earth is an Omnibus?

Omnibus legislation usually refers to a legislative proposal in which amendments to several different laws are proposed at the same time. Although there is no single universally accepted definition for the concept, omnibus laws are typically characterised by the fact that the topics covered are both similar and broad in scope. 

The newly published digital omnibus package includes proposals for amendments to several different legal instruments. The package is divided into two parts: the first covers various regulations relating to data and personal data, while the second focuses purely on the Artificial Intelligence Act. The purpose of this division may be to ensure that the more urgent part, i.e. the amendments relating to the AI Act, can be processed as quickly as possible.

Proposed amendments to the AI Act

The measures proposed to simplify the AI Act aim to ensure a smooth, timely, and proportionate implementation of the provisions. The objective is to strengthen the Union’s competitiveness and reduce regulatory burden while still ensuring compliance with the Union’s high standards on values. 

Based on stakeholder consultations, the Commission identified certain key implementation challenges. In particular, the slow designation of national competent authorities and conformity assessment bodies, as well as the lack of harmonised standards and guidance for high‑risk AI systems, were considered problematic.  

Requirements for high‑risk AI systems

According to the proposal, the requirements for high‑risk AI systems would enter into force only once the standards and other tools governing their compliance have been published by the authorities. Because standards, common specifications and guidance are not yet available, and because Member States have not yet comprehensively designated their national authorities, complying with the AI Act’s numerous obligations is currently significantly more difficult than intended.

The proposal would introduce a transition period of 6–12 months for the entry into force of the requirements applicable to high‑risk systems, depending on the system’s classification, starting from the moment when the Commission announces that sufficient implementation guidance is available. 

Thus, no exact date is given; instead, the applicability of the obligations is linked to the availability of supporting resources. However, the obligations would start applying at the latest in December 2027 (for use cases listed in Annex III) or August 2028 (for use cases listed in Annex I), regardless of whether adequate support has been published. 

High‑risk systems already on the market before the above deadlines would be exempt from the obligations. However, if such systems are significantly modified after the obligations begin to apply, the obligations would then apply in full. 

The proposal is unusual in that it seeks to amend legislation that has not yet even begun to apply. This reflects the haste with which the AI Act was finalised. Not all impacts and practical challenges may have been fully anticipated. Technology continues to evolve rapidly, meaning that in a large, slow‑moving machinery such as the EU, regulation is almost inevitably based on yesterday’s understanding and circumstances. Extending deadlines and tying them to supervisory materials will ease compliance burdens, but will not eliminate obligations. 

The requirements for high‑risk systems would also be somewhat lightened. For example, the registration burden in the EU database would be eased for high‑risk systems that fall under an exception – for instance because they are used only for limited or procedural tasks (or another exception under Article 6(3)). The exception assessment would still need to be documented and, if necessary, justified to the authority. In practice, the evaluation work remains, but it could be conducted purely internally, for example when introducing auxiliary AI systems used in recruitment. 

AI literacy

The AI Act currently imposes the requirement to promote AI literacy on providers and deployers. Under the proposal, this obligation would shift to the Commission and the Member States. The requirement was among the first obligations to enter into force, already in February 2025, so the proposal seeks to redirect an obligation that already applies.

The content of the AI literacy requirement was rather vague from the outset and seemed more like a declaratory goal than a precise legal obligation. The proposed amendment would ease the burden on organisations, as they would no longer need to guess the meaning and scope of the obligation. Even without a specific legal requirement, well‑run organisations will naturally seek to train their staff and provide opportunities to improve organisational “AI maturity”. Skilled staff are crucial for identifying and realising efficiency improvements in processes enabled by AI, as well as new business opportunities relating to products, services, and operational optimisation. 

Other amendments

The proposal also introduces various other changes, including: 

  • Requirements for labelling AI‑generated content in generative AI systems would be postponed to February 2027, provided the system is on the market before August 2026. 

  • Extension of SME‑specific reliefs to also cover “small mid‑caps”, which would significantly expand the number of organisations benefiting from reduced obligations. 

  • Supervision of systems based on general‑purpose AI models, and of systems used on very large platforms and search engines, would be centralised under the AI Office. 

  • The right to process special categories of personal data for the purpose of detecting and correcting bias would be extended from high‑risk systems to other categories of AI systems as well. Appropriate safeguards would continue to apply. 

  • Regulation on AI testing environments would be expanded. 

Timeline and legislative process

According to current assessments, the proposed amendments are being advanced on a fast‑track schedule to promote legal certainty. Without changes, the obligations relating to high‑risk use cases under the AI Act are set to enter into force in August 2026. The proposed extensions would give providers and deployers of such systems better opportunities to comply with the requirements. In light of the recent media attention on implementation challenges and rumours about halting the rollout of the AI Act, moving these amendments forward is likely high on the EU’s priority list.

Summary

The key objective of the amendments is to reduce the administrative burden arising from compliance with the rules and to support the competitiveness of operators established in the Union in the global AI race. The changes, once in force, would ease the pressures related to meeting the requirements and would slightly postpone their timelines. Overall, however, the AI Act remains a fairly demanding framework for parties developing AI systems, particularly when the intended use falls under the high‑risk category.  

Fondia’s experts at your service

Fondia’s AI regulation experts are here to support you in your AI‑related projects, whether you are developing AI tools yourself or implementing them.  

