Introduction to the EU AI Act
The European Union Artificial Intelligence Act (EU AI Act), which entered into force on August 1, 2024, represents the world's first comprehensive horizontal legal framework specifically regulating artificial intelligence systems. This landmark regulation establishes a risk-based approach to AI governance that has become the global reference model for AI regulation.
💡 Historical Context
The EU AI Act was proposed by the European Commission in April 2021, following extensive consultation and the publication of the 2020 White Paper on AI. After nearly three years of negotiations, including trilogues between the Commission, Parliament, and Council, the final text was adopted in March 2024.
Regulatory Objectives
The EU AI Act pursues several key objectives:
- Safety and Fundamental Rights: Ensuring AI systems placed on the EU market are safe and respect fundamental rights and EU values
- Legal Certainty: Providing clear, harmonized rules for AI development and deployment across all 27 member states
- Innovation Promotion: Facilitating investment and innovation in AI while managing associated risks
- Single Market: Preventing fragmentation of the internal market through unified requirements
- Global Standard-Setting: Establishing the EU as a global leader in trustworthy AI governance
Scope and Territorial Application
The EU AI Act applies to:
- Providers placing AI systems on the EU market or putting them into service in the EU, regardless of where they are established
- Deployers of AI systems that have their place of establishment or are located within the EU
- Providers and deployers located outside the EU where the output of the AI system is used in the EU
- Importers and distributors of AI systems in the EU
- Product manufacturers placing products with AI systems on the market under their name
⚠ Extraterritorial Reach
Similar to GDPR, the EU AI Act has significant extraterritorial application. Non-EU companies whose AI systems produce outputs used within the EU are subject to the regulation, regardless of where the AI processing occurs.
The Risk-Based Classification Framework
The cornerstone of the EU AI Act is its risk-based approach, which categorizes AI systems into four tiers based on the level of risk they pose to health, safety, and fundamental rights. This pyramid structure determines the applicable regulatory requirements.
The four tiers, from most to least restricted:
- Unacceptable risk: prohibited outright
- High risk: strict requirements
- Limited risk: transparency obligations
- Minimal risk: no specific requirements
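To make the tiering concrete, here is a minimal Python sketch that models the four tiers and the headline consequence attached to each. The enum and mapping are illustrative constructs for this article, not terminology from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (illustrative model)."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # strict lifecycle requirements
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific requirements

# Illustrative mapping from tier to headline regulatory consequence.
TIER_CONSEQUENCE = {
    RiskTier.UNACCEPTABLE: "Banned from the EU market",
    RiskTier.HIGH: "Conformity assessment plus lifecycle requirements",
    RiskTier.LIMITED: "Disclosure and labeling duties",
    RiskTier.MINIMAL: "Voluntary codes of conduct encouraged",
}
```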
Understanding the Risk Categories
1. Unacceptable Risk (Prohibited AI Practices)
Certain AI practices are deemed to pose unacceptable risks to fundamental rights and are strictly prohibited. These represent AI applications that conflict with EU values:
- Subliminal Manipulation: AI systems deploying subliminal techniques beyond a person's consciousness to materially distort behavior in a manner that causes or is reasonably likely to cause significant harm
- Exploitation of Vulnerabilities: Systems exploiting vulnerabilities of specific groups (age, disability, social/economic situation) to materially distort behavior causing harm
- Social Scoring: AI-based evaluation or classification of persons based on social behavior or personality characteristics, leading to detrimental or unfavorable treatment
- Real-Time Remote Biometric Identification: Use in publicly accessible spaces for law enforcement, with narrow exceptions
- Emotion Recognition: Inferring emotions in the workplace and educational institutions (except for medical or safety reasons)
- Biometric Categorization: Systems using biometric data to infer sensitive attributes (e.g., race, political opinions, sexual orientation)
- Facial Recognition Databases: Untargeted scraping of facial images from internet or CCTV to build recognition databases
- Predictive Policing: Individual risk assessments for criminal offending based solely on profiling or personality traits
2. High-Risk AI Systems
High-risk AI systems are subject to the most stringent requirements. A system is classified as high-risk through two pathways:
Annex I: Product Safety Legislation - AI systems that are safety components of products or are themselves products covered by EU harmonization legislation requiring third-party conformity assessment:
- Machinery and equipment
- Toys
- Medical devices
- In vitro diagnostic devices
- Civil aviation
- Motor vehicles
- Marine equipment
Annex III: Specific Use Cases - AI systems in critical areas:
| Category | Examples |
| --- | --- |
| Biometric Identification | Remote biometric identification systems (excluding verification); emotion recognition; biometric categorization |
| Critical Infrastructure | AI as safety components in the management and operation of road traffic and the supply of water, gas, heating, and electricity |
| Education & Training | Determining access to educational institutions; evaluating learning outcomes; monitoring prohibited behavior during tests |
| Employment & Workers | Recruitment; promotion and termination decisions; task allocation; performance monitoring |
| Essential Services Access | Credit scoring; emergency services dispatch; health and life insurance assessment |
| Law Enforcement | Individual risk assessment; polygraphs; evidence reliability evaluation; profiling during investigations |
| Migration & Asylum | Polygraphs at borders; application assessment; risk assessment; document authenticity verification |
| Justice & Democracy | Assistance in interpreting facts and the law; application in alternative dispute resolution; influencing elections or referendums (research exemption applies) |
💡 High-Risk Exception
An Annex III AI system is NOT considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing decision-making outcomes. This exception does not apply if the system performs profiling of natural persons.
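The interaction between the Annex III listing, the significant-risk exception, and the profiling carve-out can be expressed as simple decision logic. The Python sketch below encodes exactly the rule described above; the parameter names are illustrative, and the real Article 6(3) assessment must be documented, not reduced to three booleans.

```python
def is_high_risk_annex_iii(in_annex_iii_category: bool,
                           poses_significant_risk: bool,
                           performs_profiling: bool) -> bool:
    """Encode the Annex III classification rule described above."""
    if not in_annex_iii_category:
        return False  # this pathway does not apply at all
    if performs_profiling:
        return True   # profiling of natural persons voids the exception
    # Exception: no significant risk of harm means not high-risk
    return poses_significant_risk

# An Annex III recruitment tool that profiles candidates is always high-risk:
assert is_high_risk_annex_iii(True, False, True)
```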
3. Limited Risk AI Systems
These systems are subject primarily to transparency obligations (a labeling sketch follows this list):
- Chatbots and Conversational AI: Users must be informed they are interacting with an AI system
- Emotion Recognition Systems: Persons exposed must be informed of system operation
- Biometric Categorization: Subject to transparency requirements
- Deep Fakes and Synthetic Content: Must be clearly labeled as artificially generated or manipulated
- AI-Generated Text: When published for public information on matters of public interest, must be labeled
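In practice, the chatbot and synthetic-content obligations reduce to disclosing and labeling at the point of output. A minimal sketch of both follows; the wording and field names are assumptions, since the Act requires that the disclosure be made and that markings be machine-readable but prescribes no specific phrasing or schema.

```python
def wrap_chatbot_reply(reply: str) -> str:
    """Prepend an AI-interaction disclosure to a chatbot response."""
    return "[You are interacting with an AI system]\n" + reply

def label_synthetic_content(metadata: dict) -> dict:
    """Attach a machine-readable marker to AI-generated media metadata."""
    return {**metadata, "ai_generated": True}
```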
4. Minimal Risk AI Systems
The vast majority of AI systems fall into this category and face no specific regulatory requirements under the AI Act. Examples include AI-enabled video games, spam filters, and inventory management systems. However, providers are encouraged to voluntarily apply codes of conduct.
Requirements for High-Risk AI Systems
High-risk AI systems must comply with an extensive set of requirements throughout their lifecycle. These requirements represent the most demanding compliance obligations under the regulation.
Risk Management System (Article 9)
Providers must establish, implement, document, and maintain a continuous, iterative risk management system (one possible register structure is sketched after this list):
- Identification and analysis of known and reasonably foreseeable risks
- Estimation and evaluation of risks from intended use and reasonably foreseeable misuse
- Adoption of risk management measures based on state-of-the-art technology
- Testing to identify appropriate risk management measures
- Evaluation of residual risks and their acceptability
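One common way to operationalize such a system is a risk register that is re-scored after each mitigation, iterating until residual risk is acceptable. The sketch below assumes a 1-5 severity/likelihood scale and a numeric acceptability threshold; both are assumptions, as the Act does not prescribe any scoring method.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in an iterative risk register (illustrative structure)."""
    description: str
    severity: int    # 1 (negligible) to 5 (critical); scale is an assumption
    likelihood: int  # 1 (rare) to 5 (frequent); scale is an assumption
    mitigations: list = field(default_factory=list)

    def residual_score(self) -> int:
        # Naive model: each documented mitigation lowers the raw
        # severity x likelihood score by one step, floored at 1.
        return max(1, self.severity * self.likelihood - len(self.mitigations))

ACCEPTABLE_RESIDUAL = 6  # acceptability threshold is an assumption

def open_risks(register: list) -> list:
    """Risks whose residual score still exceeds the acceptability threshold."""
    return [r for r in register if r.residual_score() > ACCEPTABLE_RESIDUAL]
```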
Data Governance (Article 10)
Training, validation, and testing data sets must be subject to appropriate data governance practices (an example bias check follows this list):
- Relevant design choices and data collection processes
- Formulation of assumptions regarding intended purpose
- Assessment of availability, quantity, and suitability of data sets
- Examination for possible biases likely to affect fundamental rights
- Identification of relevant data gaps and how to address them
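Bias examination has no single mandated metric. One elementary check is comparing positive-outcome rates across demographic groups in the training data; a large gap is a signal to investigate further. A sketch, assuming hypothetical `group` and `label` record fields:

```python
from collections import defaultdict

def positive_rate_by_group(records: list) -> dict:
    """Rate of positive labels per demographic group (one bias signal)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for rec in records:
        counts[rec["group"]][0] += rec["label"]
        counts[rec["group"]][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

data = [{"group": "A", "label": 1}, {"group": "A", "label": 0},
        {"group": "B", "label": 0}, {"group": "B", "label": 0}]
print(positive_rate_by_group(data))  # {'A': 0.5, 'B': 0.0}
```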
Technical Documentation (Article 11)
Comprehensive technical documentation must be prepared before market placement:
- General system description and intended purpose
- Detailed description of system elements and development process
- Monitoring, functioning, and control mechanisms
- Description of hardware requirements
- Risk management system description
- Description of changes throughout lifecycle
- List of applied harmonized standards
Record-Keeping (Article 12)
Systems must technically allow the automatic recording of events (logs) throughout their lifetime. For remote biometric identification systems, the logs must capture at minimum (a sketch follows this list):
- The period of each use (start and end date and time)
- The reference database against which input data has been checked
- The input data for which the search has led to a match
- The identification of the natural persons involved in verifying the results
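As a sketch of what such logging might look like in practice, here is one structured log entry covering those four minimum fields. The JSON field names are assumptions; the Act fixes the required content of the logs, not their format.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system_audit")

def log_biometric_event(reference_db: str, matched_input_id, verifier: str) -> None:
    """Emit one structured log entry covering the four minimum fields above."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),  # period of use
        "reference_database": reference_db,  # database checked against
        "matched_input": matched_input_id,   # input that led to a match, if any
        "verified_by": verifier,             # person verifying the result
    }))

log_biometric_event("watchlist_v3", "frame_0142", "officer_jd")
```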
Transparency and Information (Article 13)
Systems must be designed so that deployers can interpret their output and use it appropriately. The accompanying instructions for use must provide:
- Clear, adequate instructions for use
- Identity and contact details of provider
- System characteristics, capabilities, and limitations
- Relevant changes over system lifecycle
- Human oversight measures
- Expected lifetime and maintenance requirements
Human Oversight (Article 14)
Systems must be designed to enable effective human oversight:
- Ability to fully understand system capacities and limitations
- Capability to properly monitor operation
- Ability to interpret system output
- Capacity to decide not to use, disregard, or override output
- Ability to intervene or interrupt operation through a "stop" button or similar procedure (one possible mechanism is sketched below)
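One possible realization of the interrupt requirement is a thread-safe stop flag wrapped around the model. The following is a minimal sketch, not a prescribed mechanism; the Act requires that oversight measures be effective, not this particular design.

```python
import threading

class OverseenSystem:
    """Wraps a model callable with a human-operable interrupt (illustrative)."""

    def __init__(self, model):
        self.model = model
        self._stopped = threading.Event()

    def stop(self) -> None:
        """The human overseer's 'stop button': halt all further processing."""
        self._stopped.set()

    def predict(self, x):
        if self._stopped.is_set():
            raise RuntimeError("Interrupted by human overseer")
        return self.model(x)
```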
Accuracy, Robustness, and Cybersecurity (Article 15)
- Accuracy: Appropriate levels of accuracy, as declared in the instructions for use
- Robustness: Resilience to errors, faults, or inconsistencies
- Cybersecurity: Protection against unauthorized modification and data manipulation
Administrative Penalties
The EU AI Act establishes a tiered penalty structure reflecting the severity of violations:
| Violation | Maximum fine |
| --- | --- |
| Prohibited AI practices | EUR 35M or 7% of worldwide annual turnover, whichever is higher |
| Non-compliance with high-risk system requirements | EUR 15M or 3% of worldwide annual turnover, whichever is higher |
| Supply of incorrect, incomplete, or misleading information to authorities | EUR 7.5M or 1.5% of worldwide annual turnover, whichever is higher |
💡 SME Considerations
For small and medium-sized enterprises, including startups, each fine is instead capped at the lower of the fixed amount and the percentage of turnover, with specific provisions for proportionality.
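The arithmetic of the cap can be stated compactly: the applicable maximum is the higher of the fixed amount and the turnover percentage in the general case, and the lower of the two for SMEs and startups. A worked sketch:

```python
def max_fine(fixed_cap_eur: float, pct: float, turnover_eur: float,
             is_sme: bool) -> float:
    """Maximum administrative fine for one violation tier.

    Encodes the rule described above: higher of the two amounts in
    general, lower of the two for SMEs and startups.
    """
    pct_amount = turnover_eur * pct
    if is_sme:
        return min(fixed_cap_eur, pct_amount)
    return max(fixed_cap_eur, pct_amount)

# Prohibited-practice tier: EUR 35M or 7% of worldwide annual turnover.
print(max_fine(35_000_000, 0.07, 1_000_000_000, is_sme=False))  # 70000000.0
print(max_fine(35_000_000, 0.07, 1_000_000_000, is_sme=True))   # 35000000.0
```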
Factors in Penalty Assessment
When determining penalty amounts, authorities consider:
- Nature, gravity, and duration of the infringement
- Intentional or negligent character of the infringement
- Actions taken to mitigate damage
- Previous infringements
- Level of cooperation with authorities
- Size and market share of the infringing entity
- Financial benefits gained or losses avoided