The European regulation on artificial intelligence entered into force on 1 August 2024, with its obligations phased in through 2027. Since 2 February 2025, AI systems classified as posing an "unacceptable risk" have been banned in the EU.
The AI Act is the first regulation to establish a uniform legal framework for artificial intelligence in the European Union. Presented as a means of fostering innovation while guaranteeing user trust, it imposes compliance requirements based on a risk-based approach.
The regulation aims to create an environment for AI that is secure, ethical and respectful of fundamental rights, thereby supporting the competitiveness of businesses on the European and global markets.
| Pillar | Clarification |
|---|---|
| AI systems | Unacceptable-risk systems are prohibited because they threaten fundamental rights. High-risk systems require a strict conformity assessment. Specific-risk systems are subject to transparency obligations. Minimal-risk systems follow voluntary codes of good practice. |
| General-purpose AI models (GPAI) | The AI Act provides a framework for general-purpose AI models, whose development requires high computing capacity. These models are subject to due-diligence obligations, including transparency, respect for copyright and publication of a summary of the training data. Models deemed to present "systemic risk" are subject to specific risk-assessment and mitigation obligations. |
| Governance | Independent national authorities supervise AI systems, providing regulatory sandboxes and ensuring companies' compliance. At European level, the AI Office, created in February 2024, oversees AI models and draws up codes of good practice, particularly for general-purpose and systemic-risk models. |
"Adapting to new technologies is always a challenge, especially when faced with a revolution of this scale. The AI Act marks an essential step forward by framing AI with common-sense measures, making providers, deployers and distributors accountable. However, the opaque nature of AI systems makes complete control of them illusory, despite good legislative intentions."
The AI Act provides for graduated penalties for non-compliance, with a ceiling of 7% of annual worldwide turnover or €35 million, whichever is higher. Fines vary according to severity: up to 3% of turnover or €15 million for non-compliance with requirements on high-risk systems, and 1.5% or €7.5 million for supplying incorrect information to the authorities. Beyond these ceilings, the Member States are responsible for setting penalties.
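The tiered ceilings above can be sketched as a simple calculation. The following illustrative Python snippet is not part of the regulation; it merely encodes the three tiers mentioned in the text, assuming the "whichever is higher" rule that the AI Act applies to undertakings (the function name and structure are hypothetical).

```python
# Illustrative sketch of the AI Act fine ceilings described in the text.
# Assumption: for undertakings, the applicable ceiling is the HIGHER of
# the turnover share and the fixed amount.

FINE_TIERS = {
    # violation category: (share of worldwide annual turnover, fixed cap in EUR)
    "prohibited_practices": (0.07, 35_000_000),
    "high_risk_requirements": (0.03, 15_000_000),
    "incorrect_information": (0.015, 7_500_000),
}

def fine_ceiling(category: str, annual_turnover_eur: float) -> float:
    """Return the maximum fine for a violation category, given turnover."""
    share, fixed_cap = FINE_TIERS[category]
    return max(share * annual_turnover_eur, fixed_cap)

# A firm with EUR 1 billion turnover: 7% (EUR 70M) exceeds the EUR 35M floor.
print(fine_ceiling("prohibited_practices", 1_000_000_000))  # 70000000.0
```

For smaller firms the fixed amount dominates: at €100 million turnover, 3% is only €3 million, so the high-risk ceiling stays at €15 million.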