According to the proposal, the requirements for high‑risk AI systems would enter into force only once the authorities have published the standards and other tools needed to demonstrate compliance. Because standards, common specifications and guidance are not yet available, and because Member States have not comprehensively designated their national authorities, complying with the AI Act’s numerous obligations is currently significantly more difficult.
The proposal would introduce a transition period of 6–12 months for the entry into force of the requirements applicable to high‑risk systems, depending on the system’s classification, starting from the moment when the Commission announces that sufficient implementation guidance is available.
Thus, no exact date is given; instead, the applicability of the obligations is linked to the availability of supporting resources. However, the obligations would start applying at the latest in December 2027 (for use cases listed in Annex III) or August 2028 (for use cases listed in Annex I), regardless of whether adequate support has been published.
High‑risk systems already on the market before the above deadlines would be exempt from the obligations. If such a system is significantly modified after the obligations begin to apply, however, the obligations would then apply to it in full.
The proposal is unusual in that it seeks to amend legislation that has not yet even begun to apply. This reflects the haste with which the AI Act was finalised: not all of its impacts and practical challenges may have been fully anticipated. Technology continues to evolve rapidly, which means that in a large, slow‑moving apparatus such as the EU, regulation is almost inevitably based on yesterday’s understanding and circumstances. Extending the deadlines and tying them to the availability of supervisory materials will ease compliance burdens, but it will not eliminate the obligations themselves.
The requirements for high‑risk systems would also be somewhat lightened. For example, the registration burden in the EU database would be eased for high‑risk systems that fall under an exception – for instance because they perform only limited or procedural tasks (or qualify for another exception under Article 6(3)). The assessment of whether an exception applies would still need to be documented and, if necessary, justified to the authority. In practice, the evaluation work remains, but it could be conducted purely internally – for example when introducing auxiliary AI systems used in recruitment.