Does your company use high-risk AI?
Patrick Gordinne Perez, 2024-12-27
You have automated part of your recruitment process. Did you know that this may be high-risk AI and that, depending on the data you handle and how you handle it, you may be obliged to take certain measures?
See how to identify if this is the case and what you should do.
European Regulation on Artificial Intelligence (AI)
First of all, let’s remember that the European Regulation on Artificial Intelligence (RIA) sets out rules to ensure the safe and ethical use of AI in business (avoiding bias, discrimination, etc.) and determines who is liable if the tool is misused.
One of the key points to consider is whether you are using AI systems that are considered “high risk”; if so, you must comply with the regulation.
What is high-risk AI?
Given that the objective is to safeguard the fundamental rights of individuals and their security, the RIA categorises certain AI systems as “high risk” because of their potential impact on these rights.
These are categorised as high risk, among others:
- AI systems used in labour recruitment (to select candidates).
- AI used to determine access to essential services and benefits (e.g. loans or grants).
- AI used in credit risk assessment or medical decisions.
AI and Personnel
At the business level, the RIA treats AI as high risk when it affects employment decisions: recruitment and selection of staff (job advertisements, screening and evaluation of candidates), working conditions and even dismissals.
So if your company uses AI whose decisions relate to these areas (screening CVs, shortlisting employees for promotion…), you are using a high-risk system and must take action.
How to comply with regulations if you use high-risk AI?
Step 1
Identification of the risk
When the system acts in sensitive areas (health, security, privacy of persons…), check whether it falls within the scope of the regulation.
Annex III of the RIA provides a detailed list of these systems.
Step 2
Conformity assessment
You should subject the system to a conformity assessment before putting it into operation to verify that it meets the requirements of the RIA in terms of security and protection of rights.
This assessment can be carried out either by the supplier itself or by an external body.
Step 3
Execution
With the system in place, it must meet the following requirements:
- Transparency:
Among other obligations in this area, you must inform users that they are interacting with an AI. Include a warning on the screen to indicate this.
You should also explain the general criteria on which the system bases its decisions, as well as how and why each decision is made.
- Records:
You should also keep detailed records of the system’s operations, including technical documentation on how it was designed, how it was tested and how its performance is monitored.
Keeping clear documentation will protect you against possible audits or inspections.
- Continuous monitoring:
High-risk AI systems should be constantly inspected to ensure that they comply with safety and ethical standards at all times.
Conduct periodic audits to ensure that they do not deviate from the initial parameters.
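To make the three requirements above more concrete, here is a minimal, purely illustrative Python sketch. Nothing in it is prescribed by the RIA: the class, field names, baseline figure and tolerance threshold are all assumptions. It shows a transparency notice shown to the user, a decision log that records the criteria behind each decision, and a simple periodic audit that checks scores have not drifted from the parameters observed at deployment.

```python
import json
import statistics
from datetime import datetime, timezone

# Transparency: tell users they are interacting with an AI system.
AI_NOTICE = "Notice: this screening step is performed by an AI system."

class ScreeningLog:
    """Keeps an auditable record of every automated screening decision."""

    def __init__(self, baseline_mean, tolerance=0.15):
        self.records = []                   # records (record-keeping duty)
        self.baseline_mean = baseline_mean  # score level measured at deployment
        self.tolerance = tolerance          # allowed drift before an audit alert

    def record_decision(self, candidate_id, score, criteria, outcome):
        # Store what was decided, when, and on which general criteria,
        # so each decision can be explained and audited later.
        self.records.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "candidate_id": candidate_id,
            "score": score,
            "criteria": criteria,
            "outcome": outcome,
        })

    def drift_audit(self):
        # Continuous monitoring: has the average score drifted from the
        # parameters observed when the system was put into service?
        mean = statistics.mean(r["score"] for r in self.records)
        return abs(mean - self.baseline_mean) <= self.tolerance

log = ScreeningLog(baseline_mean=0.60)
print(AI_NOTICE)  # shown before the user interacts with the system
log.record_decision("C-001", 0.72, ["experience", "language skills"], "shortlisted")
log.record_decision("C-002", 0.55, ["experience", "language skills"], "rejected")
print(json.dumps(log.records[0]["criteria"]))
print("within tolerance:", log.drift_audit())
```

The point of the sketch is the separation of duties: the notice covers transparency, the structured records support audits and inspections, and the drift check gives a periodic signal that the system is deviating from its initial parameters.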
Non-compliance
Be aware that breaching the RIA can lead to fines running into millions of euros (up to EUR 35 million or 7% of worldwide annual turnover for the most serious infringements), as well as damage to your company’s reputation.
Consult Annex III of the Regulation to identify whether your company uses high-risk AI and to comply with the obligations it imposes.
This will help you avoid penalties and ensure ethical use of this technology.