Context
Following the formal endorsement of the Artificial Intelligence Act (the “AI Act” or the “Act”) by the European Parliament in mid-March, the Council formally adopted the AI Act on 21 May 2024. The Act was published in the Official Journal of the EU on 12 July 2024 and will enter into force twenty days after its publication, namely on 1 August 2024. We now have a clear view of the timeline for the phased implementation and enforcement of the AI Act (see below).
The AI Act is widely recognized as the world’s first comprehensive legal framework for AI. It implements a risk-based approach, imposing a different set of obligations on the provision and deployment of AI systems at each risk level. Among other things, the Act aims to enforce human oversight and data governance for these systems, and to enhance transparency so that their operations can be better understood.
The formal adoption and publication of the Act mark important steps in the regulatory process, providing clarity on the applicable regulations and obligations, as well as outlining the concrete timeline for phased implementation. Furthermore, the European Commission is expected to issue guidelines to further clarify and assist in interpreting and adhering to the provisions of the Act.
Phased implementation
The following timeline of phased implementation must be taken into consideration by all those affected by the AI Act:
- Six (6) months after its entry into force – by 2 February 2025 – AI practices posing an unacceptable risk (prohibited AI practices) must be phased out.
- Twelve (12) months after its entry into force – by 2 August 2025 – the obligations (and penalties) for General Purpose AI (GPAI) models become applicable. Moreover, each Member State must appoint its national competent authority, lay down the rules on penalties (including administrative fines), notify those rules to the European Commission, and ensure they are properly and effectively implemented by the date of application of the AI Act.
- Eighteen (18) months after its entry into force – by 2 February 2026 – the European Commission is expected to provide guidelines, together with a comprehensive list of practical examples of use cases, for the classification of AI systems as being high-risk or not.
- Twenty-four (24) months after its entry into force – by 2 August 2026 – all rules of the AI Act become applicable, including the obligations for the high-risk systems defined in Annex III of the Act, such as remote biometric identification systems, AI used as a safety component in critical infrastructure, and AI used in education, employment, credit scoring, law enforcement, migration, and the democratic process.
- Thirty-six (36) months after its entry into force – by 2 August 2027 – the obligations for the high-risk systems defined in Annex I of the Act also take effect. These systems relate to AI intended to be used as a product (or as the safety component of a product) covered by specific EU legislation, such as toys, radio equipment, in vitro diagnostic medical devices, civil aviation, vehicle safety, marine equipment, lifts, pressure equipment, and personal protective equipment.
- By 31 December 2030, requirements will be enforced for certain AI systems that are components of the large-scale IT systems established by EU law in the areas of freedom, security, and justice, such as the Schengen Information System, the Visa Information System, and Eurodac.
As always, our team is available to support you and answer any questions about the AI Act, ensuring your organization is well-informed and prepared for the future.