What are high-risk AI systems?
The AI Act distinguishes between four risk classes. High-risk AI systems are those AI applications that can have a significant impact on health, safety, or fundamental rights.
According to Article 6 of the AI Act, systems are considered high-risk if they
- are covered by the EU harmonization legislation listed in Annex I, or
- are used in one of the areas of application listed in Annex III.
This applies, for example, to applications for selecting or evaluating applicants, credit rating systems, or AI used in critical infrastructure.
A detailed overview of the risk categories can be found in our article on risk classification under the AI Act.
Important: The high-risk classification applies to AI applications, not to AI models. This means that you must perform a separate assessment for a tool such as Microsoft Copilot for each area of application. The same AI tool can therefore fall into different risk classes in different contexts.
When is my AI application classified as high-risk AI?
Whether an AI application falls within the scope of high-risk AI for a deployer depends on a legal assessment based on several criteria.
The following questions will help with classification; a simplified screening sketch follows the list:
- Is the AI system used as a product or a safety-related component of a product that is subject to an EU conformity assessment in accordance with Annex I?
- Does the application fall within one of the fields of application defined in Annex III of the AI Act, for example: employment, education, critical infrastructure, or creditworthiness?
- Do exemptions or reverse exemptions apply in accordance with Article 6(3) of the AI Act?
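As an illustration, this screening can be expressed as simple boolean logic. The following Python sketch is a simplified triage aid based on our reading of Article 6, not a legal assessment; the class and field names are our own constructs.

```python
from dataclasses import dataclass

@dataclass
class RiskScreening:
    """Illustrative screening record for one AI application (not legal advice)."""
    annex_i_safety_component: bool  # product or safety component subject to Annex I conformity assessment
    annex_iii_use_case: bool        # falls within an Annex III area (employment, education, ...)
    art_6_3_exemption: bool         # a documented Art. 6(3) exemption applies (see next section)

def is_high_risk(s: RiskScreening) -> bool:
    """Rough triage: in scope via Annex I or Annex III, minus a documented exemption."""
    in_scope = s.annex_i_safety_component or s.annex_iii_use_case
    return in_scope and not s.art_6_3_exemption

# Example: an applicant-screening tool (Annex III, employment), no exemption documented
print(is_high_risk(RiskScreening(False, True, False)))  # -> True
```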
Which exemptions and reverse exemptions may apply?
Not every AI application within a high-risk area is automatically a high-risk system. The AI Act provides for exemptions under certain circumstances if the risk can be classified as low.
According to Article 6(3) of the AI Act, an AI system can be classified as non-high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. This is particularly the case if the system does not have a significant influence on the outcome of the decision-making process in individual cases.
At least one of the following criteria must be met:
- The AI performs a narrowly defined procedural task without influencing the overall system decision.
- The AI improves the results of previously completed human activities, for example through data aggregation or error checking.
- The AI is used solely to detect decision-making patterns or deviations from previous patterns, without replacing or influencing human judgment.
- The AI is used for preparatory tasks, such as sorting information or automatic classification, without directly influencing decisions.
These clear criteria are intended to prevent companies from being disproportionately burdened if their AI systems only perform supporting functions. At the same time, the AI Act requires this distinction to be documented in a transparent and comprehensible manner, especially in the case of systems that potentially perform multiple functions or are used in different areas.
However, if the application performs profiling of natural persons, a reverse exemption applies, which means that the application is considered high-risk AI after all. A simplified sketch of this decision logic follows below.
This assessment is highly context-dependent. Companies should document it carefully and regularly review whether new use cases or contexts of use have arisen that require reassessment.
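To make the interplay of exemption criteria and the reverse exemption concrete, here is a minimal sketch that encodes the Article 6(3) test, assuming the four criteria and the profiling rule as described above. The parameter names are illustrative, and a real assessment requires legal review.

```python
def art_6_3_exemption_applies(
    narrow_procedural_task: bool,    # (a) performs only a narrowly defined procedural task
    improves_human_activity: bool,   # (b) improves the result of a completed human activity
    detects_patterns_only: bool,     # (c) detects decision patterns/deviations, judgment stays human
    preparatory_task_only: bool,     # (d) purely preparatory, no direct influence on the decision
    performs_profiling: bool,        # reverse exemption: profiling of natural persons
) -> bool:
    """Simplified Art. 6(3) test: at least one criterion must hold, unless profiling."""
    if performs_profiling:
        return False  # reverse exemption: the system is considered high-risk in any case
    return any([narrow_procedural_task, improves_human_activity,
                detects_patterns_only, preparatory_task_only])

# Example: a purely preparatory document-sorting step that performs no profiling
print(art_6_3_exemption_applies(False, False, False, True, False))  # -> True
```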
Who is considered a deployer under the AI Act?
The AI Act defines various roles for actors, including providers, distributors, importers, and deployers. The assignment to a role is crucial because each role is associated with different obligations.
A deployer is any natural or legal person who uses an AI system under their own authority, unless the system is used in the course of a purely personal, non-professional activity (Art. 3(4) AI Act).
Practical example:
- Company "B" uses an AI-supported applicant screening tool provided by an external provider.
- "B" uses the tool in its own recruiting process without modifying the underlying model.
- In this case, "B" qualifies as a deployer under the law.
We have explained further examples of roles and their distinctions in detail in the webinar "Understanding the roles in the AI Act and identifying AI assets." The German-language recording of this webinar is available free of charge.
Role change: When deployers become providers
The classification as a deployer of a high-risk AI system is not static. A change in role and, consequently, in obligations may occur when certain conditions arise.
Specifically, a deployer can legally become a provider within the meaning of the AI Act if one of the following cases occurs:
Substantial modification of the system
A change of role takes place in accordance with Art. 25(1) AI Act if a company makes a substantial modification to a high-risk AI system that has already been placed on the market or put into service.
This includes, for example:
- Interventions in the underlying AI model
- Modification of the system logic or training data
Placing the system on the market under the company's own name or trademark likewise triggers the role change (Art. 25(1)(a)).
In these cases, additional and significantly stricter obligations apply, as the company is now considered a provider. What counts as a substantial modification is measured against the conformity assessment originally carried out by the provider and should always be examined in detail.
Upgrade or change of purpose
A company also becomes a provider if it changes the intended purpose of a non-high-risk AI system in such a way that it now falls within the scope of high-risk AI.
This may be the case, for example, if an originally internal assistance system is now used for applicant assessments or risk classifications - i.e., it transitions into a high-risk environment.
The legal basis for this case can be found in Article 25(1)(c) of the AI Act.
Practical tip: Document changes
In order to be able to provide supervisory authorities or internal audits with verifiable evidence of whether and when a role change has taken place, companies should document every significant technical or functional change to the system.
The following is recommended (an illustrative change-log sketch follows the list):
- Versioning technical changes and maintaining a change history
- Providing a comprehensible explanation of why a change was not considered substantial
- Regular review of usage contexts, especially in changing fields of application
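One way to implement such documentation is an append-only change log. The sketch below shows a possible minimal record format; the fields and the JSON Lines file are our own suggestion, not a format prescribed by the AI Act.

```python
import json
from datetime import date

def log_system_change(path: str, system: str, version: str,
                      description: str, substantial: bool, rationale: str) -> None:
    """Append one change record to a JSON Lines file (illustrative format)."""
    record = {
        "date": date.today().isoformat(),
        "system": system,
        "version": version,                       # versioning of technical changes
        "description": description,
        "substantial_modification": substantial,  # would trigger a role change (Art. 25)
        "rationale": rationale,                   # why the change was (not) considered substantial
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example entry for a cosmetic update that leaves model, logic, and training data untouched
log_system_change("ai_changes.jsonl", "applicant-screening-tool", "2.4.1",
                  "Updated UI labels only", substantial=False,
                  rationale="No intervention in model, system logic, or training data")
```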
The 17 obligations for deployers of high-risk AI systems
Deployers are subject to a variety of organizational, technical, and documentary obligations. These can be divided into five categories:
Training obligations:
1. Ensuring AI literacy:
All employees who work with AI systems must have sufficient knowledge to use them safely (Art. 4 AI Act). A training concept for AI competence is recommended for this purpose.
2. Training of supervisors:
Persons who take on human oversight require specific training and powers (Art. 26(2) AI Act).
Control obligations:
3. Implement technical and organizational measures to comply with the instructions for use:
Use must be in accordance with the provider's specifications (Art. 26(1)).
4. Human oversight:
Oversight and control must be carried out by qualified persons. These persons must have the necessary authority within the company (Art. 26(2), Art. 14(3)(b)).
5. Check input data:
The input data must be relevant and sufficiently representative in view of the system's intended purpose (Art. 26(4)).
6. Operational monitoring:
Ongoing operations must be continuously monitored in order to identify risks at an early stage (Art. 26(5), Recital 91).
7. Suspension of operations in the event of risk:
If there is reason to suspect a risk within the meaning of Art. 79(1), use of the system must be suspended (Art. 26(5), second sentence).
Information obligations:
8. Towards the provider:
Deployers must pass on security-related observations to the provider (Art. 26(5) in conjunction with Art. 72).
9. Towards employees:
Employees must be informed about the use of AI systems (Art. 26(7), Recital 93).
10. Towards data subjects:
Persons who are affected by the system's decisions or whose data it processes must be informed about the use of AI (Art. 26(11), Recitals 93 and 171).
11. In the event of incidents:
Serious incidents or risks must be reported immediately to the provider, distributors or importers, and the market surveillance authorities (Art. 26(5)).
12. Cooperation with authorities:
Deployers are obliged to actively cooperate with supervisory authorities (Art. 26(12)).
Documentation requirements:
Note: Obligations 14 to 17 only apply under the specific circumstances described below.
13. Logging:
Automatically generated system logs must be retained for at least six months (Art. 26(6)).
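A simple way to operationalize this is an automated retention check before any log cleanup. The sketch below assumes a six-month floor per Art. 26(6); note that other Union or national law may require longer retention.

```python
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # ~six months, the statutory floor (Art. 26(6))

def may_delete(log_created_at: datetime, now: datetime | None = None) -> bool:
    """True only once the minimum retention period has elapsed (illustrative check)."""
    now = now or datetime.now(timezone.utc)
    return now - log_created_at >= MIN_RETENTION

# Example: a log from 1 January may not be purged on 1 May of the same year
print(may_delete(datetime(2025, 1, 1, tzinfo=timezone.utc),
                 now=datetime(2025, 5, 1, tzinfo=timezone.utc)))  # -> False: keep
```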
14. Data protection impact assessment:
A DPIA may be required when processing personal data (Art. 26(9)).
15. Fundamental rights impact assessment (FRIA):
In certain cases, a fundamental rights impact assessment must also be carried out (Art. 27). This obligation only applies to the following groups of deployers:
- Public law institutions
- Private institutions providing public services
- Deployers who use an AI system to assess the creditworthiness or credit rating of natural persons
- Deployers who use AI systems for risk assessment or pricing in life or health insurance for natural persons
16. Notification requirements for the FRIA:
The results of these analyses must be communicated to market surveillance authorities (Art. 27(3)).
17. Registration requirement in the EU database:
Deployers that are public authorities or Union institutions, bodies, offices, or agencies must register the use of the system in the relevant EU database (Art. 49(3) in conjunction with Art. 26(8) AI Act).
Further recommendations:
Review of prohibited practices:
Deployers must ensure that no prohibited AI practices are used (Art. 5).
Self-assessment of role changes:
If a substantial modification is made, it must be checked whether provider obligations now apply (Art. 25(1)).
Recommendations for deployers of high-risk AI
The path to compliance does not begin with the first audit by the authorities; it starts now. Companies should create structures at an early stage to meet the requirements.
Our recommendations:
- Make an inventory of all AI systems in use. An internal AI inventory helps you maintain an overview and assign responsibilities (see the sketch after this list).
- Evaluate each system individually. Check whether it is classified as high risk and document any possible exceptions.
- Work closely with data protection and IT. The interfaces with GDPR, IT security, and governance are crucial.
- Create an action plan. This should include responsibilities, deadlines, and training measures.
- Establish an AI governance system. A practical framework such as ISO/IEC 42001 provides guidance for setting up an internal control system.
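For the inventory, a lightweight structured register is often sufficient to start with. The following sketch shows what a minimal entry could capture; the fields are our own suggestion and should be adapted to your governance framework.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """Minimal illustrative entry for an internal AI inventory."""
    name: str
    vendor: str
    role: str                               # e.g. "deployer" or "provider" under the AI Act
    use_cases: list[str] = field(default_factory=list)
    risk_class: str = "unassessed"          # e.g. "high-risk", "limited", "minimal"
    owner: str = ""                         # accountable business owner
    last_reviewed: str = ""                 # ISO date of the last classification review

# Example: the same tool may need one entry (and one assessment) per context of use
inventory = [
    AIInventoryEntry("Microsoft Copilot", "Microsoft", "deployer",
                     use_cases=["drafting internal documents"],
                     risk_class="minimal", owner="IT", last_reviewed="2025-01-15"),
]
print(inventory[0].risk_class)
```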
Download a free checklist with all obligations for deployers of high-risk AI
To ensure you don't overlook any obligations, we have summarized all 17 deployer obligations with the corresponding articles of the AI Act in a user-friendly Excel checklist.