The Artificial Intelligence (AI) Act (“AI Act”) should be formally adopted soon, following the European Parliament’s approval on 13 March 2024 of the political agreement reached with the Council in December 2023.
It will be the world’s first comprehensive legislation governing AI.
The main purpose of the AI Act is to define the rules applicable to the placing on the market and use of an AI system and to prohibit certain AI practices whose risk is considered unacceptable.
How can you determine whether your activities are affected by these new rules?
A company should be able to assess the impact of the AI Act based on two questions:
Do you use an AI tool subject to the AI Act in your business?
AI systems – risk-based approach
The AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that […] infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
AI systems are classified into four categories according to the risk they pose: unacceptable risk (prohibited practices), high risk, limited risk (subject to transparency obligations) and minimal risk.
High-risk AI systems can fall into two categories:
- AI systems that constitute a product or a safety component of a product subject to a third-party conformity assessment in accordance with several EU legislative acts (e.g. medical devices, machines, toys).
- AI systems in certain areas, such as education and vocational training, remote biometric identification, critical infrastructure (e.g. electricity, water, gas), and the administration of justice and democratic processes.
By way of exception, AI systems which do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons and which do not perform profiling are not considered high-risk AI systems. This would be the case, for example, of an AI system intended to perform a narrow procedural task.
General purpose AI systems and models
A general purpose AI system is “an AI system which is based on a general purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems”.
A general purpose AI model is defined as “an AI model, including when trained with a large amount of data using self-supervision at scale, that […] is capable to competently perform a wide range of distinct tasks […] and that can be integrated into a variety of downstream systems or applications”.
Among general purpose AI models, those with high-impact capabilities (i.e. a cumulative amount of computing power used in their training greater than 10^25 FLOPs) qualify as general purpose AI models with systemic risk.
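To make the threshold concrete, here is a minimal sketch of that compute test. The function name and structure are illustrative assumptions, not part of the Act; only the 10^25 FLOPs figure comes from the text above.

```python
# Hypothetical helper: does a general purpose AI model cross the AI Act's
# training-compute threshold for "systemic risk"?
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # threshold stated in the AI Act

def has_systemic_risk(training_flops: float) -> bool:
    """Return True if cumulative training compute exceeds 10^25 FLOPs."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(has_systemic_risk(5e25))  # True: above the threshold
print(has_systemic_risk(1e24))  # False: below the threshold
```

In practice the classification also depends on factors beyond raw compute, so this check is only a first-pass indicator.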
What are you doing with this AI system?
You will be subject to the requirements of the AI Act in any of the following cases:
- as a supplier, if you place on the market/put into service in the European Union (EU) a general purpose AI model or AI system;
- as a supplier established/located in a third country, if the AI system that you place on the market/put into service generates output that is used in the EU;
- as a deployer, if you use under your own authority for professional purposes an AI system and you are established/situated in the EU (or if you are established outside the EU but the results (output) of the AI system are used within the EU);
- as an importer, if you place on the market/put into service an AI system in the EU under the name or trademark of a third party established outside the EU;
- as a distributor, if you make an AI system available on the EU market;
- as a product manufacturer, if you place on the market/put into service an AI system together with your product in the EU under your own name or trademark;
- as an authorised representative in the EU of a supplier not established in the EU, if you have been appointed as such by that supplier.
What type of obligations will be applicable to your business?
The extent and type of obligations that will be applicable to you depend on your role (supplier, deployer…) and the AI tool concerned.
For instance, as a supplier of a high-risk AI system, you will be required to draw up technical documentation and a quality management system, or to submit that AI system to a conformity assessment (carried out by a third party, where applicable).
As a deployer of an AI system that generates or manipulates content (e.g. a deep fake), you will have to disclose that the content has been artificially generated or manipulated.
When will these new requirements apply?
As a reminder, the AI Act has not yet been formally adopted. It is therefore not yet possible to determine its precise effective date.
Once in force, the AI Act is expected to apply after 24 months. However, certain provisions will follow a different timeline: the prohibitions on AI practices posing an unacceptable risk will apply 6 months after entry into force, the rules on general purpose AI models 12 months after entry into force, and the obligations for certain high-risk AI systems 36 months after entry into force.
Our advice:
With the adoption of the AI Act fast approaching, it is essential to start your compliance process as soon as possible. The first players on the market will benefit from a significant competitive advantage. Our team of specialists is available to answer your questions and support you throughout the process.
For more information about the AI Act, sign up for our earlegal training.