Last Updated on July 16, 2024
On 12 July 2024, Regulation (EU) 2024/1689 on artificial intelligence was published in the Official Journal of the European Union. It enters into force on the twentieth day following publication and will apply from 2 August 2026, with certain exceptions.
The aim of the measure is to promote both innovation and the uptake of reliable, human-centric artificial intelligence (hereinafter referred to as AI) while ensuring a high level of protection of health, safety and fundamental rights.
Prohibited AI practices and high-risk AI systems
The Regulation identifies different levels of risk to people’s health and fundamental rights including, in particular, the “unacceptable risk” and “high risk” arising from prohibited AI practices and high-risk AI systems, respectively.
Prohibited practices include the placing on the market and use of AI systems intended to infer the emotions of a person in the workplace, except where intended for medical or safety reasons.
As for high-risk AI systems, Annex III to the Regulation contains a varied list that includes systems operating in the areas of “employment, worker management and access to self-employment”, namely:
1. AI systems used for recruitment or selection, in particular for posting job advertisements, filtering applications and assessing candidates;
2. AI systems used to make decisions regarding working conditions, promotion or termination of employment, to assign tasks on the basis of individual behaviour or personal traits and characteristics, or to monitor and evaluate workers’ performance and behaviour.
As Recital 57 points out, throughout the recruitment process and in the evaluation, promotion or, in general, the conduct of employment relationships, high-risk AI systems — if improperly designed and used — may perpetuate discrimination against women, certain age groups, persons with disabilities, or persons of certain racial origins or sexual orientation. AI systems used to monitor the performance and behaviour of such persons may also undermine their fundamental rights to data protection and privacy.
Obligations of deployers
The Regulation stipulates obligations not only for providers, importers and distributors of high-risk AI systems, but also for their deployers, i.e. those who “use an AI system under their own authority”.
Deployer obligations include the following:
- entrust human oversight of AI systems to persons with the necessary competence, training and authority;
- monitor the operation of the AI system based on the instructions for use;
- promptly inform the provider or distributor and the relevant market surveillance authority and suspend use of the system if there is reason to believe that it may present a risk to the health, safety or fundamental rights of persons;
- prior to use, inform workers’ representatives and the affected workers that they will be subject to the use of an AI system.
The Regulation also recognises a right to explanation of individual decision-making: any person affected by a decision taken by the deployer based on the output of a high-risk AI system “which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken”.
Toffoletto De Luca Tamajo is at your disposal for any support you may need with compliance with a view to fully benefitting from the new technologies linked to the development of artificial intelligence.
For further information: comunicazione@toffolettodeluca.it