What is the AI Act?
The AI Act is the EU's regulatory response to the rapid advances in the field of artificial intelligence. It is a comprehensive set of EU rules aimed at making the use of artificial intelligence (AI) safe and trustworthy. It is the first legal framework of its kind and is intended to ensure that AI systems are used responsibly and comply with European values and fundamental human rights. Different measures are required depending on the risk posed by an AI system.
The AI Act has several core objectives:
- Strengthening trust in AI technologies;
- Ensuring that AI systems are safe, transparent, and traceable;
- Protecting fundamental rights, such as privacy and non-discrimination;
- Promoting innovation through a clear legal framework.
Horizontal and risk-based regulation
A core element of the AI Act is its horizontal approach: the rules apply across all sectors in which AI systems are used. AI systems are classified into different risk categories – from minimal to unacceptable risk. This classification determines which legal requirements must be met. The greater the risk a system poses to society, the stricter the requirements of the AI Act for companies. The aim is to minimize risks while promoting innovation.
The market location principle and extraterritorial application
The AI Act applies not only to AI systems developed or used within the EU, but also to those imported into the EU. The market location principle ensures that all AI systems used in the EU are subject to the same standards, regardless of where they were developed.
What are the links between the AI Act and the GDPR?
The EU AI Act and the GDPR are designed to coexist. There are situations in which companies must comply with the requirements of both regulations at the same time:
- when personal data is used to train or test an AI system during its development,
- when personal data is processed in the use of artificial intelligence.
In these cases, providers must take particular care to comply with both regulations. Both sets of rules require transparency, security measures, and risk minimization. Companies should ensure that their AI systems meet both GDPR and AI Act requirements.
What is an AI system?
The AI Act uses a fairly broad definition of artificial intelligence. According to the definition in Art. 3 of the AI Act, AI systems are "machine-based systems designed to operate with varying degrees of autonomy and generate outputs such as predictions, recommendations, or decisions that affect physical or virtual environments."
In summary, AI systems are adaptive and autonomous according to the EU definition. Examples include tools for automated text generation or systems that filter and evaluate job applications.
When will the AI Act come into force?
The AI Act was proposed by the EU Commission in April 2021 and adopted by the European Parliament and the Council in December 2023. It entered into force on August 1, 2024. From February 2, 2025, the prohibitions on certain AI practices will apply in accordance with Article 5 of the AI Act.
This includes, in particular, a general ban on the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions). Also prohibited is "social scoring" – a practice in which AI evaluates people's behavior, resulting in social disadvantages such as exclusion from public services.
AI Act timeline: When do which rules apply?
The AI Act will be implemented in stages, with phased transition periods for various requirements:
- From February 2, 2025: The first binding requirements will take effect. These include general provisions and the prohibition of certain AI practices, such as manipulative or discriminatory systems.
- From August 2, 2026: The AI Act will apply comprehensively, in particular to providers and deployers of high-risk AI systems in accordance with Annex III of the AI Act.
- From August 2, 2027: Additional specific obligations for high-risk AI systems will come into force, in particular the requirements set out in Annex I of the AI Act.
What do companies need to do to prepare for the AI Act?
To best prepare for the requirements of the AI Act, we recommend a six-step approach:
Identify and document AI assets and AI use cases
Companies should identify and document all AI systems in use and their use cases. You can find out how to do this systematically in our article on identifying AI assets – including a free AI identification questionnaire to download.
Once you have identified your AI assets, the next step is to create an AI inventory (or “AI registry”) to ensure greater transparency and comply with regulatory requirements. Our blog post explains how to set up such a register and what information it should contain. It also includes an Excel template that you can use for your AI inventory to get started right away.
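To make the inventory idea concrete, here is a minimal Python sketch of what a single register entry might look like. The field names (asset_name, vendor, internal_owner, use_cases, and so on) are illustrative assumptions rather than a prescribed schema, so adapt them to the template you actually use:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryEntry:
    """One row of an AI inventory. Field names are illustrative only."""
    asset_name: str                  # e.g. "Microsoft Copilot"
    vendor: str                      # external supplier or internal team
    internal_owner: str              # who is accountable for the asset
    description: str                 # what the system does
    use_cases: list = field(default_factory=list)   # purposes, assessed separately later
    date_recorded: date = field(default_factory=date.today)

inventory = [
    AIInventoryEntry(
        asset_name="Microsoft Copilot",
        vendor="Microsoft",
        internal_owner="IT",
        description="Generative assistant integrated into office applications",
        use_cases=["text generation", "applicant screening"],
    ),
]

for entry in inventory:
    print(f"{entry.asset_name}: {', '.join(entry.use_cases)}")
```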
Understanding AI use cases
Another crucial aspect is the consideration of AI use cases. This step lays the groundwork for classification into the various AI risk classes, because a single AI asset can be used for several different purposes.
Microsoft Copilot, for example, can be used for both text generation and applicant screening. Each of these purposes represents an AI use case and must be reviewed and evaluated separately. Only by recording AI use cases can you ensure that all regulatory requirements for each AI asset can be adequately assessed.
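One simple way to model this separation, sketched below under the assumption that a use case is an (asset, purpose) pair, is to record one entry per purpose so that each use carries its own risk assessment. The structure and field names are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIUseCase:
    """One (asset, purpose) pair -- the unit that gets risk-assessed."""
    asset: str
    purpose: str
    risk_class: Optional[str] = None   # filled in during the later risk assessment

# The same asset appears once per purpose, so each use is evaluated on its own.
use_cases = [
    AIUseCase("Microsoft Copilot", "text generation"),
    AIUseCase("Microsoft Copilot", "applicant screening"),
]

for uc in use_cases:
    status = uc.risk_class or "assessment pending"
    print(f"{uc.asset} / {uc.purpose}: {status}")
```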
Determine your own role within the AI Act
Identifying your own role is important for AI compliance. The AI Act defines roles and actors along the AI value chain and assigns regulatory obligations based on these roles.
Determining your own role can be challenging in some cases, and the role may change under certain circumstances. For example, a deployer may become a provider as a result of the way in which they use or further develop the AI system.
The roles within the AI Act
- Providers: Develop and market AI systems. They are responsible for ensuring that the systems are brought to market in a safe and compliant manner.
- Deployers: Use AI systems under their own authority, but not for purely personal, non-professional purposes.
- Distributors, importers, and authorized representatives of the provider: These roles complement the ecosystem and ensure the correct distribution of the systems.
Important: In some cases, a company can be both a provider and a deployer of an AI system.
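As a small illustration of this point, the sketch below models the roles as combinable flags, since one company can hold several roles for the same system at once. The class and member names are our own choice, not terminology from the regulation:

```python
from enum import Flag, auto

class AIActRole(Flag):
    """Roles along the AI value chain, modeled as flags because a
    company can hold more than one role for the same system."""
    PROVIDER = auto()
    DEPLOYER = auto()
    DISTRIBUTOR = auto()
    IMPORTER = auto()
    AUTHORIZED_REPRESENTATIVE = auto()

# Example: a company that substantially adapts a purchased system
# for its own operations may act as both provider and deployer.
our_roles = AIActRole.PROVIDER | AIActRole.DEPLOYER
print(AIActRole.PROVIDER in our_roles)   # True
print(AIActRole.IMPORTER in our_roles)   # False
```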
Determining the risk classes of AI systems
The risk assessment step is crucial in determining which regulatory requirements must be met under the AI Act.
Each AI system – or each of its use cases – can be assigned to one or more risk classes under the AI Act, ranging from prohibited applications to high-risk AI to systems with medium or low risk. A distinction is made between five risk classes:
- Prohibited practices (Art. 5 AI Act): These AI use cases are generally prohibited.
- High-risk AI systems (Art. 6 ff., Annexes I and III): Subject to special regulation and extensive requirements.
- Systemic risk (Art. 55 AI Act): Applies to general-purpose AI models whose use entails particular risks.
- Medium risk (Art. 50 AI Act): Transparency requirements apply, but no conformity assessment.
- Low risk (Art. 95 AI Act): No specific regulatory requirements, voluntary commitments possible.
The classification depends on the potential threat to fundamental rights, security, and public order. High-risk AI systems are particularly strictly regulated, for example in the areas of law enforcement, credit assessment, or personnel decisions.
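The sketch below illustrates this triage logic as a first pass only. The keyword lists are hypothetical placeholders; an actual classification requires legal analysis of the articles and annexes cited above, and systemic risk for general-purpose AI models cannot be determined from a use-case purpose at all:

```python
# Hypothetical keyword lists for a first-pass triage. The actual
# classification requires legal review of Art. 5, Art. 6 ff. and
# Annexes I and III; systemic risk (Art. 55) concerns general-purpose
# AI models and is not covered by this purpose-based check.
PROHIBITED = {"social scoring", "real-time remote biometric identification"}
HIGH_RISK = {"applicant screening", "credit assessment", "law enforcement support"}
MEDIUM_RISK = {"chatbot", "text generation", "image generation"}

def rough_risk_class(purpose: str) -> str:
    """Return a preliminary risk class for one AI use case."""
    if purpose in PROHIBITED:
        return "prohibited (Art. 5)"
    if purpose in HIGH_RISK:
        return "high-risk (Art. 6 ff., Annexes I and III)"
    if purpose in MEDIUM_RISK:
        return "medium risk (Art. 50)"
    return "low risk (Art. 95) -- confirm manually"

print(rough_risk_class("applicant screening"))  # high-risk
print(rough_risk_class("text generation"))      # medium risk
```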
A detailed overview of the individual risk classes and the specific criteria for their classification can be found in our blog post on the risk classification of AI systems.
Deriving the obligations under the AI Act
Based on the previously identified role and risk class, the next step is to derive the requirements under the AI Act that are particularly relevant for providers and deployers.
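As a minimal sketch of this derivation step, the lookup below maps a (role, risk class) pair to an abbreviated, illustrative list of duties. The entries merely echo the subsections that follow and are far from an exhaustive statement of the law:

```python
# Abbreviated, illustrative mapping -- not an exhaustive statement of
# the law. The duties listed are summarized from the sections below.
OBLIGATIONS = {
    ("provider", "high-risk"): [
        "register in the EU database and affix the CE mark",
        "prepare technical documentation and instructions for use",
        "operate risk and quality management systems",
        "design for effective human oversight",
    ],
    ("deployer", "high-risk"): [
        "operate the system according to the instructions for use",
        "monitor performance and keep records",
        "inform employees and data subjects",
        "assign human oversight to a qualified person",
    ],
    ("provider", "medium"): [
        "inform users that they are interacting with an AI system",
        "make artificially generated content recognizable",
    ],
    ("deployer", "medium"): [
        "disclose emotion recognition and biometric categorization",
        "label deepfakes as such",
    ],
}

def obligations_for(role: str, risk_class: str) -> list:
    return OBLIGATIONS.get((role, risk_class),
                           ["no specific obligations identified -- verify manually"])

for duty in obligations_for("deployer", "high-risk"):
    print("-", duty)
```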
Requirements for AI systems with medium risk
Providers of medium-risk AI systems must fulfill information obligations toward data subjects. For example, data subjects must be informed that they are interacting with an AI system. If images, videos, or audio recordings are artificially generated, this must be recognizable. Deployers of such AI systems also have transparency obligations towards data subjects: they must inform data subjects if they use AI systems for emotion recognition or biometric categorization, and they must label deepfakes as such.
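A minimal sketch of such a disclosure follows, assuming a simple text prefix is an acceptable form of labeling in your context; the AI Act requires recognizability, not any particular format or wording:

```python
def label_ai_content(content: str, kind: str = "text") -> str:
    """Prefix AI-generated output with a visible disclosure.

    The wording and format of this label are an illustrative choice,
    not prescribed by the regulation.
    """
    return f"[AI-generated {kind}] {content}"

print(label_ai_content("Hello! How can I help you today?"))
```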
Requirements for high-risk AI
In the case of high-risk AI, both providers and deployers are subject to numerous requirements. All of these serve the purpose of systematically managing the risks of such AI systems and providing all stakeholders with the necessary information. Providers of high-risk AI have a number of obligations; we have compiled some of the key requirements below:
- Registration, instructions, and verification: Providers must register their product in an EU database and affix a CE mark. They must also prepare technical documentation and operating instructions for their AI product. The conformity of the AI system with the AI Act must be demonstrated to the supervisory authority.
- Traceability and minimum standards: Providers must ensure that the functioning of AI systems is traceable (see the logging sketch after this list). This includes, among other things, technical documentation explaining all relevant parameters and decisions. AI systems must also meet certain minimum requirements in terms of accuracy, robustness, and cybersecurity. The training, validation, and test data used must meet certain quality criteria.
- Risk management and security: Providers must check high-risk AI for potential risks and evaluate it continuously. They are obliged to establish procedures and systematic measures for this purpose. This includes a quality management system for quality control and assurance. But it also includes a risk management system that continuously and iteratively monitors the risks of the AI system.
- Human oversight: The AI Act stipulates that humans must always retain control over AI. Providers must design and develop AI systems in such a way that they can be effectively supervised by a human being.
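To illustrate the traceability point from the list above, here is a minimal sketch of a decision log entry. The schema and field names are assumptions; the AI Act requires that high-risk systems support automatic recording of events, but it does not prescribe this format:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(system_id: str, inputs: dict, output: str, overseer: str) -> str:
    """Serialize one decision event of a high-risk AI system.

    The field names are illustrative assumptions, not a schema
    prescribed by the AI Act.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,
        "output": output,
        "human_overseer": overseer,
    }
    return json.dumps(record)

print(log_ai_decision("hr-screening-v2", {"application_id": "A-1042"},
                      "shortlisted", "jane.doe"))
```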
Deployers of high-risk AI must also meet a number of requirements. Many of these relate to the requirements of providers. This ensures that the AI system remains transparent and secure even during use.
These are the most important requirements for deployers:
- Monitoring and documentation: Deployers enforce the minimum AI standards in day-to-day operation (a monitoring sketch follows this list). For example, they monitor performance metrics such as accuracy and robustness based on the operating instructions, and they use only appropriate input data. Depending on the type of data used by the AI system, the deployer takes appropriate technical and organizational measures (TOM) to comply with the instructions for use and, where required, carries out a data protection and/or fundamental rights impact assessment.
- Information obligations: Deployers must inform employees about the AI systems used. Data subjects must also be informed if AI systems make or support specific decisions.
- Human oversight: Deployers assign human oversight of the AI system to a qualified person and enforce oversight measures (if specified by the provider).
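As announced in the monitoring item above, here is a minimal sketch of a deployer-side performance check. The metric, threshold, and escalation path are hypothetical stand-ins for whatever the provider's instructions for use actually specify:

```python
# Hypothetical threshold taken from the provider's instructions for use.
MIN_ACCURACY = 0.92

def check_accuracy(correct: int, total: int) -> None:
    """Compare observed accuracy with the documented minimum."""
    accuracy = correct / total
    if accuracy < MIN_ACCURACY:
        print(f"ALERT: accuracy {accuracy:.1%} is below the documented "
              f"minimum of {MIN_ACCURACY:.0%} -- escalate to the provider "
              "and the internal AI owner")
    else:
        print(f"OK: accuracy {accuracy:.1%}")

check_accuracy(correct=450, total=500)   # 90.0% -> triggers the alert
```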
A complete and detailed overview of all obligations can be found in our article "All obligations under the AI Act: Overview by risk class." There, we show in a structured way which specific requirements providers, deployers, and users will face, supplemented by a practical PDF checklist for download to help you with implementation.
In addition, it is important that providers and deployers of AI systems ensure their AI competence. This requirement applies to AI systems of all risk classes. AI competence refers to the ability to use AI systems competently and to be aware of their opportunities and risks. Companies fulfill this requirement by providing targeted training for their employees.
Establish structures and processes for AI governance in the company
To ensure compliance with the AI Act, companies should create their own set of rules that clearly define processes, responsibilities, and control mechanisms for dealing with AI systems. The term AI governance is often used for this purpose.
Such a set of rules is necessary to ensure continuous AI compliance. Only the assignment of responsibilities and the definition of processes for controlling AI systems can guarantee compliance with regulatory requirements.
There is no one-size-fits-all approach to how companies should address this issue. It depends, for example, on the size of the company and the governance structures already in place. Fundamentally, companies can approach this issue from two perspectives:
- From a regulatory perspective: identify the obligations under the law and determine the necessary responsibilities and processes from them.
- From the perspective of existing standards, concepts, and frameworks: these include, in particular, a series of ISO standards (ISO/IEC 42001, ISO/IEC 38507) as well as other frameworks such as NIST AI 600-1 (the Generative AI Profile of NIST's Artificial Intelligence Risk Management Framework) or the Six Pillars of Responsible AI from EqualAI.
In principle, AI governance can be integrated into various existing governance structures, for example, into corporate governance, data governance, or even a risk management system.
Concrete first elements of AI governance can include the creation of an AI policy and the establishment of clear responsibilities.
- Creating an AI policy
A sensible starting point can be the creation of an AI policy. Although such a policy is not legally required, it allows companies to define principles that guide all activities related to AI. The AI policy can thus serve as the anchor point for all further governance structures.
- Establishing clear responsibilities
In parallel or subsequently, companies should define clear responsibilities for AI. We recommend addressing the topic at management level where possible, because AI already has, and will continue to have, a very significant social and economic impact. It is also important to clearly define responsibilities at levels below management, as many different employees come into contact with AI systems. A fundamental distinction can be made here between technical and organizational roles.
Conclusion
The AI Act is a forward-looking piece of regulation aimed at the safe use of AI systems in Europe. It creates a regulatory framework that gives providers more legal certainty with regard to AI products and enables productive innovation. Companies should analyze their systems, define roles and responsibilities, and implement the necessary processes.
To provide companies with optimal support in implementing the requirements of the AI Act, caralegal has developed AI Flow, a comprehensive solution for AI compliance. The platform was specifically designed to map the complex requirements of the AI Act in a practical and efficient manner. AI Flow ideally complements our established data protection software Privacy Flow – for all organizations that want to leverage synergies between data protection and AI governance.