Obligations under the AI Act: An overview by risk class

25 March 2026
5 minutes

The EU's AI Act takes a risk-based approach to ensure the safe and responsible use of AI systems. The obligations that providers and deployers of AI systems must fulfill vary depending on the risk class. In this article, we provide a structured overview of the obligations per risk class and offer a comprehensive PDF checklist for download.


What are the obligations under the AI Act?

Compliance with the AI Act depends largely on two key factors:

  1. What role does your company play?

    The AI Act distinguishes between five central roles, each of which entails specific obligations:

    • Provider
    • Deployer
    • Authorized representative
    • Distributor
    • Importer

    Each of these roles entails different responsibilities in the life cycle of an AI system.

  2. Which risk class does your AI system or model fall into?

    The AI Act is based on a risk-based approach and defines five risk classes, each with specific compliance requirements:

    • Prohibited practices
    • High-risk AI
    • Limited risk
    • Systemic risk
    • Low risk

    For a structured overview of all five risk classes and the process for correctly classifying your AI applications, we recommend our article on AI risk classification.

Relevant roles and risk classes in practice

In business practice, the roles of the provider and deployer, as well as the risk categories of high-risk AI, limited risk, and systemic risk, are of central importance. Therefore, we will focus on the specific obligations in these areas below.

It should also be noted that the obligations under the AI Act are cumulative: an AI system can fall into several risk classes at the same time, in which case the respective compliance requirements add up.

What obligations apply to high-risk AI systems?

The AI Act defines extensive requirements for high-risk AI systems. A distinction is made between the obligations for providers and deployers.

Obligations for providers of high-risk AI systems:

Providers of high-risk AI systems are subject to the most comprehensive obligations. The most important requirements include:

  • Introduction of a quality management system (Art. 17 AI Act)
  • Creation, updating, and storage of technical documentation (Art. 11 in conjunction with Art. 18 AI Act)
  • Logging obligations and retention of records (Art. 19 AI Act)
  • Conducting conformity assessment procedures and affixing the CE marking (Art. 16 lit. f-h in conjunction with Art. 43, 47, 48 AI Act)
  • Proof of conformity to supervisory authorities (Art. 16 lit. i in conjunction with Art. 49 AI Act)
  • Registration of the AI system in the EU database (Art. 16 lit. k AI Act)

In addition, providers must ensure:

  • Implementation of a risk management system (Art. 9 AI Act)
  • Responsible data and data governance practices (Art. 10 AI Act)
  • Ensuring accuracy, robustness, and cybersecurity (Art. 15 AI Act)
  • Creation of exhaustive instructions for use (Art. 13 AI Act)
  • Ensuring human oversight of the system (Art. 14 AI Act)

Obligations for deployers of high-risk AI systems:

Deployers of high-risk AI systems also bear considerable responsibility. Their obligations include, among other things:

  • Implementing appropriate technical and organizational measures (TOM) to comply with the instructions for use (Art. 26(1) AI Act)
  • Ensuring human oversight of the AI system (Art. 26(2) AI Act)
  • Using appropriate input data in accordance with the intended purpose (Art. 26(4) AI Act)
  • Continuous monitoring of AI operations in accordance with the instructions for use (Art. 26(5) AI Act)
  • Fulfillment of information obligations towards users and authorities (Art. 26(5), (7), (11); Art. 50(4) AI Act)
  • Retention of logs when the deployer has control over the system (Art. 26(6) AI Act)
  • Consultation with employee representatives when implementing high-risk AI in the work environment (Art. 26(7) AI Act)
  • Conducting a data protection impact assessment if personal data is processed (Art. 26(9) AI Act)
  • Preparation of a fundamental rights impact assessment evaluating the impact of the AI system on the rights of affected persons, where the deployer is a body governed by public law or a private entity providing public services (Art. 27 AI Act)

What obligations apply to limited-risk AI systems?

Obligations for providers of limited-risk AI systems

Providers of AI systems that fall under Article 50 of the AI Act are subject to specific transparency obligations. These include:

  • Information obligation towards users:
    Providers must ensure that data subjects are clearly informed that they are interacting with an AI system.
  • Labeling of synthetic content:
    If the AI system generates synthetic content (e.g., text, images, audio, or video material), the output must be marked as artificially generated in a machine-readable format. The technical solutions used for this marking must be effective, interoperable, robust, and reliable, as far as technically feasible.
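As a purely illustrative sketch (the AI Act does not prescribe any particular format, and the field names below are assumptions, not a standard), a provider might attach a machine-readable marker to generated output, for example as metadata in a JSON envelope that downstream systems can detect programmatically:

```python
import json

def mark_as_ai_generated(output_text: str, model_name: str) -> str:
    """Wrap generated text in a JSON envelope carrying a
    machine-readable 'AI-generated' marker.

    Illustrative only: the envelope layout and field names
    ('ai_generated', 'generator') are hypothetical, not mandated
    by the AI Act or any standard."""
    envelope = {
        "content": output_text,
        "metadata": {
            "ai_generated": True,        # machine-readable marker
            "generator": model_name,     # hypothetical provenance field
        },
    }
    return json.dumps(envelope)

# A downstream system can now detect the marker without parsing the content itself
marked = mark_as_ai_generated("Hello, world!", "example-model-v1")
assert json.loads(marked)["metadata"]["ai_generated"] is True
```

In practice, providers would more likely rely on an interoperable provenance standard (e.g., embedded content credentials or watermarking) rather than an ad-hoc envelope like this one.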

Obligations for deployers of limited-risk AI systems

Deployers of limited-risk AI systems are also subject to certain transparency requirements under Article 50 of the AI Act:

  • Disclosure of deepfakes:
    Deployers must ensure that deepfake content generated by the AI system is disclosed as such.
  • Information on emotion recognition:
    If an AI system is used for emotion recognition, the persons concerned must be informed that this technology is being used.

What obligations apply to AI models with systemic risk?

Providers of general-purpose AI models (GPAI) are subject to specific obligations under Art. 51 ff. of the AI Act. These requirements are significantly expanded if the model is classified as an AI model with systemic risk.

Obligations for providers of general-purpose AI models

Providers of general-purpose AI (GPAI) models must fulfill the following obligations:

  • Preparation of technical documentation:
    Detailed description of the model architecture, training methods, and performance characteristics.
  • Provision of information to downstream providers:
    Provision of all necessary information and documentation to companies wishing to integrate GPAI models into their own systems.
  • Ensuring compliance with copyright law:
    Development and implementation of a policy to prevent copyright infringements arising from the use of the AI model.
  • Transparency regarding training data:
    Provision of a sufficiently detailed summary of the content used for training. The AI Office provides a standardized template for this purpose.
  • Eased transparency obligations under open-source licenses:
    Providers can be partially exempted from these transparency requirements by making their model available under a free and open-source software (FOSS) license (Art. 53(2) AI Act).

Additional obligations for providers of AI models with systemic risk

If an AI model is classified as posing a systemic risk, extended obligations apply in accordance with Art. 55 AI Act:

  • Extended technical documentation and evaluation:
    Conducting comprehensive model evaluations, including stress tests and attack simulations (adversarial testing), to identify vulnerabilities.
  • Implementation of risk mitigation measures:
    Developing specific strategies and control mechanisms to minimize potential systemic risks.
  • Incident reporting:
    Obligation to report serious incidents and possible corrective measures to the AI Office and, where appropriate, national supervisory authorities without undue delay.
  • Enhanced cybersecurity measures:
    Implementation of robust security mechanisms to prevent manipulation, misuse, or unauthorized use of the model.

Comprehensive overview of the obligations under the AI Act

We have summarized all obligations under the AI Act for the roles and risk classes mentioned in a clearly structured PDF file. This overview serves as a practical guide to quickly and easily understand the specific requirements.

Download the free list with all obligations now


Article written by

Dennis Kurpierz Co-Founder & COO

Dennis Kurpierz is co-founder and Chief Operating Officer of caralegal. Thanks to his many years of experience as a senior consultant and lead project manager at ISiCO Datenschutz GmbH, he is familiar with customer needs, pain points, and challenges in data protection management. As product owner, he applies this expertise to product development at caralegal.
