Part 4.3 of 7

GDPR & AI Intersection

📚 2-2.5 hours 🎯 Intermediate 📅 Updated January 2026

GDPR and AI: The Regulatory Intersection

The General Data Protection Regulation (GDPR) and the EU AI Act operate as complementary regulatory frameworks. While the AI Act specifically regulates AI systems, GDPR governs the processing of personal data, which is fundamental to most AI applications. Understanding their intersection is critical for AI compliance.

💡 Complementary, Not Conflicting

The EU AI Act explicitly states that it is "without prejudice" to GDPR. Both regulations apply simultaneously to AI systems that process personal data. Compliance with the AI Act does not exempt organizations from GDPR obligations, and vice versa.

Key GDPR Provisions Affecting AI

  • Article 5 - Principles: Lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity, accountability
  • Article 6 - Lawful Bases: Consent, contract, legal obligation, vital interests, public task, legitimate interests
  • Article 9 - Special Categories: Additional protections for sensitive data (health, biometrics, race, etc.)
  • Articles 13-14 - Information: Transparency requirements for data subjects
  • Article 22 - Automated Decision-Making: Rights related to solely automated decisions
  • Article 35 - DPIA: Impact assessment requirements

Article 22: Automated Decision-Making

Article 22 is perhaps the most directly relevant GDPR provision for AI systems. It establishes fundamental rights regarding automated individual decision-making, including profiling.

Article 22(1): The General Right

"The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

Key Elements:

  • "Solely automated": No meaningful human involvement in the decision process
  • "Including profiling": Automated processing to evaluate personal aspects (work performance, economic situation, health, preferences, interests, reliability, behavior, location, movements)
  • "Legal effects": Decisions affecting legal status or rights (contract termination, denial of social benefits, refused entry)
  • "Similarly significantly affects": Substantial impact on circumstances, behavior, or choices (credit denial, job rejection, differential pricing)
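
The two-part test above can be sketched as a simple triage helper. This is an illustrative sketch only: the field names are assumptions for teaching purposes, and whether an effect is "legal" or "similarly significant" is a case-by-case legal question, not a boolean flag.

```python
from dataclasses import dataclass

@dataclass
class DecisionProfile:
    """Simplified, hypothetical fields; real assessments need legal review."""
    solely_automated: bool        # no meaningful human involvement in the decision
    legal_effect: bool            # e.g. contract termination, benefit denial
    similarly_significant: bool   # e.g. credit denial, job rejection, differential pricing

def article_22_1_applies(d: DecisionProfile) -> bool:
    # Article 22(1) is engaged only when the decision is solely automated
    # AND produces a legal or similarly significant effect.
    return d.solely_automated and (d.legal_effect or d.similarly_significant)

# Human-reviewed credit decision: outside Article 22(1)
print(article_22_1_applies(DecisionProfile(False, True, True)))   # False
# Fully automated loan denial with significant effect: inside Article 22(1)
print(article_22_1_applies(DecisionProfile(True, False, True)))   # True
```

Note that the conjunction matters: a solely automated decision with only trivial effects, or a significant decision with meaningful human review, falls outside Article 22(1).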

Article 22(2): Exceptions

The prohibition does not apply in three situations:

📄 Contract Necessity

The decision is necessary for entering into or performance of a contract between the data subject and the controller.

Example: Automated credit scoring for loan applications where it is genuinely necessary for the service.

⚖ Legal Authorization

The decision is authorized by EU or Member State law which also lays down suitable safeguards.

Example: Automated fraud detection required by anti-money laundering regulations.

✓ Explicit Consent

The decision is based on the data subject's explicit consent.

Note: Must meet GDPR's high standard for explicit consent - freely given, specific, informed, unambiguous, clear affirmative action.

Article 22(3): Safeguards Required

Where the contract-necessity exception (Article 22(2)(a)) or the explicit-consent exception (Article 22(2)(c)) applies, the controller must implement suitable measures to safeguard the data subject's rights, freedoms, and legitimate interests:

  • Right to obtain human intervention on the part of the controller
  • Right to express their point of view
  • Right to contest the decision

⚠ Special Category Data

Automated decisions under Article 22 involving special category data (Article 9) are only permitted where explicit consent exists OR processing is necessary for substantial public interest under Member State law with appropriate safeguards. Contract necessity alone is NOT sufficient.
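
The exception logic, including the stricter rule for special category data, can be summarised in a hedged sketch. The basis labels here are illustrative strings chosen for readability, not statutory terms of art:

```python
# Article 22(2) exceptions: contract necessity, legal authorization, explicit consent.
ART_22_2_BASES = {"contract_necessity", "legal_authorization", "explicit_consent"}

def automated_decision_permitted(basis: str, special_category_data: bool) -> bool:
    if special_category_data:
        # Stricter rule for special category data: contract necessity alone is
        # NOT sufficient; only explicit consent, or substantial public interest
        # under Member State law with appropriate safeguards.
        return basis in {"explicit_consent", "substantial_public_interest"}
    return basis in ART_22_2_BASES

# Contract necessity works for ordinary personal data...
print(automated_decision_permitted("contract_necessity", special_category_data=False))  # True
# ...but not when special category data (e.g. health, biometrics) is involved.
print(automated_decision_permitted("contract_necessity", special_category_data=True))   # False
```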

Understanding Profiling in AI Context

Article 4(4) GDPR defines profiling as "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person."

Profiling Categories

  • General Profiling: profiling without automated decision-making affecting individuals. Article 22 applies? No (but GDPR principles still apply)
  • Decision-Supporting Profiling: profiling informs a human decision-maker who makes the final determination. Article 22 applies? No (if the human involvement is meaningful)
  • Solely Automated with Legal/Significant Effect: profiling leads directly to an automated decision with legal or similarly significant effect. Article 22 applies? Yes, full protections apply

What Constitutes "Meaningful" Human Involvement?

To avoid Article 22, human involvement must be more than symbolic:

  • Authority: The human must have genuine decision-making authority
  • Competence: Ability to understand and evaluate the AI system's output
  • Access to information: All relevant information available for review
  • Ability to override: Genuine capacity to reach different decision
  • Time and resources: Sufficient opportunity for meaningful review

💡 Rubber-Stamping Warning

Simply having a human "approve" or "sign off" on automated decisions without genuine review does not constitute meaningful human involvement. Controllers cannot circumvent Article 22 through token human oversight.
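
The five criteria above can be expressed as an all-or-nothing checklist. A minimal sketch, assuming each criterion reduces to a yes/no self-assessment (in practice each requires qualitative judgment, and the criterion names below are assumptions drawn from the list):

```python
# All five criteria must hold for human involvement to count as "meaningful".
MEANINGFUL_CRITERIA = (
    "authority",           # genuine decision-making authority
    "competence",          # ability to understand and evaluate the AI output
    "information_access",  # all relevant information available for review
    "can_override",        # genuine capacity to reach a different decision
    "time_resources",      # sufficient opportunity for meaningful review
)

def involvement_is_meaningful(review: dict) -> bool:
    # A criterion that is absent or False fails the test.
    return all(review.get(c, False) for c in MEANINGFUL_CRITERIA)

# Rubber-stamping: the reviewer has sign-off authority but nothing else.
rubber_stamp = {"authority": True}
print(involvement_is_meaningful(rubber_stamp))  # False
```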

Data Subject Rights in AI Processing

Data subjects retain all GDPR rights when their data is processed by AI systems, with some specific applications:

  • 🔍 Right to Information: be informed of AI processing, including meaningful information about the logic involved, its significance, and the envisaged consequences
  • 👁 Right of Access: access the personal data processed and information about automated decision-making
  • ✏ Right to Rectification: correct inaccurate data used in AI training or inference
  • 🚫 Right to Erasure: have data deleted from AI systems (challenging with trained models)
  • 🛑 Right to Object: object to profiling, especially for direct marketing purposes
  • 👤 Human Intervention: obtain human review of solely automated decisions

Transparency Requirements (Articles 13-14)

When processing involves automated decision-making including profiling, controllers must provide:

  • The existence of automated decision-making, including profiling
  • Meaningful information about the logic involved
  • The significance and envisaged consequences of such processing for the data subject

⚠ Explainability Challenge

"Meaningful information about the logic" creates an explainability requirement that can be technically challenging for complex AI models (black boxes). Organizations must balance trade secret protection with transparency obligations.

Data Protection Impact Assessments for AI

Article 35 GDPR requires DPIAs when processing is "likely to result in a high risk to the rights and freedoms of natural persons." AI systems frequently trigger this requirement.

Mandatory DPIA Triggers for AI

  • Systematic and extensive evaluation of personal aspects based on automated processing, including profiling, used for decisions producing legal or similarly significant effects
  • Large-scale processing of special categories of data
  • Systematic monitoring of publicly accessible areas on a large scale
  • Processing meeting two or more criteria from supervisory authority lists (new technologies, profiling, automated decision-making, sensitive data, large scale, matching/combining datasets, vulnerable subjects, innovative use)
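
The triggers above can be combined into a first-pass screening function. A hedged sketch: the inputs are assumed self-assessment flags, and a positive screen means "conduct a DPIA", not that the Article 35 analysis is complete:

```python
def dpia_required(systematic_profiling_with_effects: bool,
                  large_scale_special_categories: bool,
                  large_scale_public_monitoring: bool,
                  supervisory_list_criteria_met: int) -> bool:
    """Return True if any mandatory DPIA trigger fires.

    supervisory_list_criteria_met counts criteria from supervisory authority
    lists (new technologies, profiling, automated decision-making, sensitive
    data, large scale, matching datasets, vulnerable subjects, innovative use);
    two or more is treated as a trigger.
    """
    return (systematic_profiling_with_effects
            or large_scale_special_categories
            or large_scale_public_monitoring
            or supervisory_list_criteria_met >= 2)

# Example: an AI recommender combining a new technology with large-scale
# profiling meets two supervisory-list criteria, so a DPIA is indicated.
print(dpia_required(False, False, False, supervisory_list_criteria_met=2))  # True
```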

AI-Specific DPIA Elements

  • Description of Processing: model architecture, training methodology, inference process, data flows, third-party components
  • Necessity & Proportionality: Why AI? Could less intrusive methods achieve the purpose? Is the accuracy improvement proportionate to the privacy impact?
  • Risk Assessment: bias risks, accuracy errors, security vulnerabilities, re-identification risks, downstream harms
  • Mitigation Measures: bias testing, fairness constraints, human oversight, access controls, audit trails, monitoring systems
  • Training Data: source, lawful basis, representativeness, bias assessment, retention

GDPR-AI Act DPIA/FRIA Coordination

The EU AI Act's Fundamental Rights Impact Assessment (FRIA) requirement complements GDPR DPIA obligations:

  • Both may be required for the same AI system
  • DPIA focuses on personal data processing risks; FRIA addresses broader fundamental rights
  • Can be combined into single comprehensive assessment
  • Must satisfy requirements of both regulations

Lawful Bases for AI Training Data

Using personal data to train AI models requires a valid lawful basis under Article 6 GDPR. The choice of basis has significant implications:

Consent (Article 6(1)(a))

  • Advantages: Broadest permission if truly specific and informed
  • Challenges: Must describe future uses; withdrawal right creates difficulties for trained models; "specific" purpose requirement conflicts with general-purpose models
  • AI Considerations: Consent for training may not cover all inference uses

Legitimate Interests (Article 6(1)(f))

  • Advantages: Flexible; can cover evolving uses
  • Requirements: Balancing test required; document legitimate interest, necessity, and balance against data subject interests
  • AI Considerations: Commonly used for training; must reassess as uses evolve

✓ Best Practice: Layered Approach

Many organizations use a layered approach: legitimate interests for general training with consent for sensitive applications or secondary uses. Always document the basis and reassess when model use cases expand.

📚 Key Takeaways

  • GDPR and the EU AI Act are complementary frameworks - both may apply simultaneously to AI systems processing personal data
  • Article 22 restricts solely automated decisions with legal or significant effects - exceptions require explicit consent, contract necessity, or legal authorization plus safeguards
  • Human involvement must be meaningful (authority, competence, information, ability to override) - rubber-stamping is insufficient
  • Transparency requirements mandate meaningful information about AI logic, significance, and consequences
  • DPIAs are required for most significant AI applications and should address AI-specific risks like bias, accuracy, and training data governance
  • Lawful bases for training data must be carefully selected and documented, with legitimate interests commonly used but requiring balancing assessment