Artificial intelligence (AI) is profoundly transforming our societies and economies. To harness this potential and manage the risks, the European Union has introduced the first comprehensive legislation on AI, the AI Act, which entered into force on 1 August 2024.
A few key figures on the AI market
AI has become an essential vector of transformation:
- Global market: estimated at 500 billion dollars by 2028 (Source: Statista).
- Growth: the market is expected to grow by 37% each year from 2024 to 2030 (Source: Hostinger).
- Public perception: 51% of French people consider AI to be both a danger and a step forward (Source: Cluster 17 and Le Point).
These figures demonstrate the strategic importance of AI and the need for appropriate regulation.
Why regulate artificial intelligence?
- Bias and discrimination: risks of reproducing and amplifying prejudices present in training data, leading to discriminatory decisions.
- Protection of privacy: risks associated with profiling and unauthorised access to personal data, as illustrated by the Cambridge Analytica incident.
- Manipulation of information: the spread of deepfakes and the propagation of fake news by bots.
- Impact on employment: transformation of the labour market, creating new opportunities but also potential inequalities.
- Lack of transparency: the opaque operation of AI systems, which are often likened to "black boxes".
- Safety and security: risks of hijacking or malfunction that could lead to accidents or endanger critical infrastructure.
AI Act: how the law regulates AI systems
The AI Act is the world's first general legislation to regulate the development, marketing and use of artificial intelligence systems. Its key points include:
Classification of AI systems by risk
- Unacceptable risk: systems banned since 2 February 2025 (e.g. social scoring systems, real-time emotion detection in the workplace, real-time facial recognition in public spaces for profiling purposes).
- High risk: systems subject to strict requirements (e.g. remote biometric identification, critical infrastructure surveillance systems).
- Specific transparency risk: systems requiring transparency guarantees (e.g. facial recognition systems, recommendation systems).
- Minimal risk: systems such as chatbots or spam filters, for which the adoption of codes of good practice is encouraged.
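The four tiers above can be sketched as a simple lookup. This is an illustrative mapping of the example systems mentioned in this article to their risk tier, not a legal classification tool; the tier names and examples are taken from the text, and the `risk_tier` helper is our own.

```python
# Illustrative only: the four AI Act risk tiers described above,
# mapped to the example systems the article mentions.
RISK_TIERS = {
    "unacceptable": [
        "social scoring",
        "real-time emotion detection in the workplace",
        "real-time facial recognition in public spaces for profiling",
    ],
    "high": [
        "remote biometric identification",
        "critical infrastructure surveillance",
    ],
    "transparency": [
        "facial recognition",
        "recommendation system",
    ],
    "minimal": [
        "chatbot",
        "spam filter",
    ],
}

def risk_tier(system: str) -> str:
    """Return the risk tier for a known example system, or 'unknown'."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    return "unknown"

print(risk_tier("spam filter"))                     # -> minimal
print(risk_tier("remote biometric identification")) # -> high
```

In practice, classifying a real system requires a legal analysis of its intended purpose against the annexes of the regulation; this sketch only captures the tier structure.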
Players in the AI value chain
The AI Act applies to all actors involved in the lifecycle of AI systems, including:
- Providers: those who develop and market AI systems.
- Deployers: those who use AI systems in a professional context.
- Authorised representatives: those located in the European Union who have received a written mandate from an AI provider; the mandate allows the representative to act on the provider's behalf to carry out its obligations and procedures.
- Importers: those located in the EU who place on the market an AI system bearing the name or trademark of a company established in a third country.
- Distributors: those who make AI systems available on the EU market.
Obligations of AI players
The obligations of AI players depend on the level of risk associated with their AI systems. High-risk AI systems entail obligations for all players:
For providers
Providers must identify themselves on the AI system, draw up a declaration of conformity, affix the CE marking, ensure quality management, carry out a conformity assessment before placing the system on the market, and take corrective action in the event of non-compliance.
For deployers
Deployers have specific responsibilities. In organisational and technical terms, they must implement measures to guarantee the security of the system, ensure effective human control over its use, and rigorously check the quality of input data. In terms of communication, they are responsible for informing employees and data subjects, and must carry out impact assessments on fundamental rights.
For authorised representatives
Authorised representatives must verify the conformity of high-risk AI systems, keep the documentation available to the authorities for ten years, cooperate with them to mitigate risks, and comply with registration obligations.
For importers
Importers, for their part, are responsible for ensuring the conformity of AI systems before they are placed on the market, for indicating their identity on the AI system and in its documentation, and for keeping a copy of the certificate issued by the notified body for ten years.
For distributors
Distributors also have their own obligations. They must meticulously check the conformity of documents, apply a precautionary principle before any distribution, and guarantee appropriate storage and transport conditions.
Penalties for non-compliance
The AI Act lays down strict penalties for non-compliance, including:
- Prohibited practices: fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher.
- Other breaches: fines of up to €15 million or 3% of worldwide annual turnover.
- Providing inaccurate information: fines of up to €7.5 million or 1% of worldwide annual turnover.
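The penalty ceilings above combine a fixed amount with a share of worldwide annual turnover, the applicable maximum being the higher of the two. A minimal sketch of that rule, using the figures from this article (the `max_fine` helper and category names are our own, for illustration only):

```python
# Maximum-fine ceilings described above: (fixed amount in EUR,
# percentage of worldwide annual turnover). Simplified illustration.
FINE_CAPS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_breach":        (15_000_000, 0.03),
    "inaccurate_info":     (7_500_000,  0.01),
}

def max_fine(category: str, worldwide_turnover: float) -> float:
    """Ceiling for a breach: the higher of the fixed cap and the turnover share."""
    fixed, pct = FINE_CAPS[category]
    return max(fixed, pct * worldwide_turnover)

# For a company with €1 billion turnover, 7% (about €70M) exceeds €35M,
# so the turnover-based ceiling applies.
print(max_fine("prohibited_practice", 1_000_000_000))
```

For smaller companies the fixed amount dominates; for large groups the turnover percentage does, which is what makes the percentages the binding figure in practice.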
Achieving compliance can be challenging: organisations need to map their uses of AI, identify the associated risks, and prepare the tools for assessment and compliance.
If you would like expert support on these subjects, contact Cloud Temple.