Module 7 - Part 2 of 6

Lawful Basis for AI Processing

📚 Estimated: 2-2.5 hours 🎓 Advanced Level ⚖ Legal Focus

📜 Introduction

All processing of personal data in an AI system requires a valid lawful basis under GDPR Article 6. For AI applications, selecting and documenting the appropriate legal basis presents unique challenges that differ significantly from traditional data processing scenarios.

This part examines each lawful basis through the lens of AI processing, highlighting practical considerations, common pitfalls, and strategies for establishing robust legal foundations for AI systems.

⚠ Critical Consideration

The lawful basis must be identified and documented before processing begins. Retrospectively applying a legal basis to existing AI processing is non-compliant and may constitute a data protection violation. Organizations must conduct this analysis during AI system design and procurement.

The Six Lawful Bases for AI Processing

Article 6(1) GDPR provides six alternative lawful bases; a controller should identify the single most appropriate basis for each processing purpose. Their applicability and practical utility vary significantly for AI systems.

Consent

Article 6(1)(a)

Freely given, specific, informed, and unambiguous agreement to processing for defined AI purposes.

Challenging for AI

📄 Contract

Article 6(1)(b)

Processing necessary for performance of a contract with the data subject.

Limited AI Use

Legal Obligation

Article 6(1)(c)

Processing required to comply with a legal obligation to which the controller is subject.

Narrow Application

Vital Interests

Article 6(1)(d)

Processing necessary to protect someone's life or physical integrity.

Emergency Only

🏛 Public Task

Article 6(1)(e)

Processing necessary for tasks in the public interest or exercise of official authority.

Public Sector AI

Legitimate Interests

Article 6(1)(f)

Processing necessary for legitimate interests, balanced against data subject rights.

Most Used for AI
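For organizations building record-of-processing tooling, the six bases above can be captured as a simple enumeration. This is a hypothetical sketch: the `LawfulBasis` class and its member names are illustrative, not part of any standard library or regulatory schema.

```python
from enum import Enum

class LawfulBasis(Enum):
    """The six GDPR Article 6(1) lawful bases, keyed by sub-paragraph."""
    CONSENT = "6(1)(a)"
    CONTRACT = "6(1)(b)"
    LEGAL_OBLIGATION = "6(1)(c)"
    VITAL_INTERESTS = "6(1)(d)"
    PUBLIC_TASK = "6(1)(e)"
    LEGITIMATE_INTERESTS = "6(1)(f)"

# Example record entry: the basis chosen for a commercial fraud-detection model.
chosen = LawfulBasis.LEGITIMATE_INTERESTS
print(f"Basis: Article {chosen.value}")  # Basis: Article 6(1)(f)
```

Recording the basis as structured data, rather than free text, makes it easier to audit that every AI processing purpose has exactly one documented basis.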

Consent Challenges in AI

While consent appears straightforward, applying it to AI processing presents significant practical and legal challenges that often make other lawful bases more appropriate.

❌ Common Consent Failures in AI

  • Not specific enough: Vague consent to "AI processing" or "machine learning" fails the specificity requirement
  • Not truly informed: Data subjects cannot meaningfully understand complex AI processing
  • Power imbalance: Employee, patient, or customer relationships create unfree consent
  • Cannot be withdrawn: Once data is used to train a model, withdrawal may be technically impossible
  • Bundled consent: Making service conditional on AI consent is invalid

📖 When Consent May Work

Consent can be appropriate for AI when: (1) the individual genuinely has choice without detriment, (2) the AI purpose is specific and clearly explained, (3) model architecture allows data removal if consent is withdrawn, (4) there is no power imbalance, and (5) the processing is not a condition of service. Example: A photo editing app asking users to optionally contribute anonymized images to improve filters.

Consent requirements mapped to AI challenges and mitigations:

  • Specific. Challenge: AI purposes evolve and model improvements are hard to anticipate. Mitigation: define specific use cases; limit scope; obtain fresh consent for new uses.
  • Informed. Challenge: complex AI processing is hard to explain simply. Mitigation: layered notices; plain language; specific examples.
  • Freely given. Challenge: consent is often bundled with a service or employment. Mitigation: genuine opt-in; no service degradation if refused.
  • Withdrawable. Challenge: ML models retain learned patterns after training. Mitigation: machine unlearning; retraining without the data; rely on an alternative basis.
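The four consent requirements above lend themselves to a simple compliance checklist. The sketch below is illustrative only: the `AIConsentRecord` class, its field names, and the vague-purpose check are assumptions about how an organization might structure its own consent records, not an established API.

```python
from dataclasses import dataclass

@dataclass
class AIConsentRecord:
    """Hypothetical consent record mirroring the four requirements above."""
    purpose: str                 # "specific": a concrete AI use case, not just "AI processing"
    plain_language_notice: bool  # "informed": layered, plain-language explanation provided
    optional: bool               # "freely given": the service works identically if refused
    withdrawal_mechanism: str    # "withdrawable": e.g. retraining without the record

    def gaps(self) -> list[str]:
        """Return the consent requirements this record fails to evidence."""
        issues = []
        too_vague = {"ai processing", "machine learning", "ai"}
        if self.purpose.strip().lower() in too_vague:
            issues.append("purpose too vague to be 'specific'")
        if not self.plain_language_notice:
            issues.append("no plain-language notice: not 'informed'")
        if not self.optional:
            issues.append("bundled with the service: not 'freely given'")
        if not self.withdrawal_mechanism:
            issues.append("no withdrawal path: not 'withdrawable'")
        return issues

# A vague, bundled consent with no withdrawal path fails all four checks.
bad = AIConsentRecord("AI processing", False, False, "")
print(len(bad.gaps()))  # 4
```

The photo-editing-app example from the text above would pass: a specific purpose, plain-language notice, genuine optionality, and a deletion-and-retrain withdrawal path.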

Legitimate Interests for AI

Legitimate interests is the most commonly relied-upon basis for commercial AI processing. However, it requires a documented legitimate interests assessment (LIA): a balancing test that weighs organizational interests against data subject rights.

💡 Three-Part Test

A legitimate interests assessment must address three questions:

  • Purpose Test: Is there a legitimate interest being pursued? Is it lawful, clearly articulated, and real (not speculative)?
  • Necessity Test: Is the AI processing necessary to achieve that interest? Are there less intrusive alternatives?
  • Balancing Test: Are the organization's legitimate interests overridden by the interests, rights, and freedoms of the data subjects?

Examples of Legitimate Interests for AI

  • Fraud detection and prevention systems
  • Cybersecurity threat detection
  • Service personalization and improvement
  • Business analytics and operational efficiency
  • Product recommendation systems
  • Quality assurance and process optimization

📋 Legitimate Interest Assessment Template for AI

1 Identify the Legitimate Interest

Describe the specific business or organizational interest. Example: "Improving fraud detection accuracy to protect customers and reduce financial losses from fraudulent transactions."

2 Necessity Assessment

Explain why AI processing is necessary. What alternatives were considered? Why is this the least intrusive effective method?

3 Data Subject Impact Analysis

What personal data is processed? What are potential negative impacts? Would processing be unexpected? Are there vulnerable individuals affected?

4 Safeguards and Mitigations

Document safeguards: pseudonymization, access controls, retention limits, transparency measures, opt-out mechanisms, human oversight.

5 Balancing Conclusion

Conclude, with reasoning, whether data subject rights override the legitimate interests. If the balance favors processing, document why it should proceed and which safeguards support that conclusion.
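The five-step template above maps naturally onto a structured record, which helps ensure no section is skipped before processing begins. This is a minimal sketch assuming an internal record-keeping convention; the class and field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class LegitimateInterestAssessment:
    """Five sections mirroring the LIA template steps above."""
    interest: str        # 1. the specific business interest pursued
    necessity: str       # 2. why AI is the least intrusive effective method
    impact: str          # 3. data processed, potential harms, vulnerable groups
    safeguards: list     # 4. e.g. pseudonymization, opt-out, human oversight
    conclusion: str      # 5. reasoned balancing outcome

    def is_complete(self) -> bool:
        # Accountability: every section must be filled before processing begins.
        return all([self.interest, self.necessity, self.impact,
                    self.safeguards, self.conclusion])

# Illustrative fraud-detection LIA, echoing the example in step 1 above.
lia = LegitimateInterestAssessment(
    interest="Improve fraud detection to protect customers and reduce losses",
    necessity="Rule-based checks miss novel fraud; model runs on pseudonymized data",
    impact="Transaction metadata; risk of false positives delaying payments",
    safeguards=["pseudonymization", "human review of declines", "90-day retention"],
    conclusion="Rights not overridden given safeguards; proceed with monitoring",
)
print(lia.is_complete())  # True
```

A completeness check like this only verifies that each section was addressed; the substantive quality of the balancing reasoning still requires human legal review.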

📄 Contractual Necessity for AI

Contract as a lawful basis is narrowly interpreted by regulators. AI processing is covered only if it is genuinely necessary to perform the contract, not merely useful or commercially desirable.

⚠ Strict Interpretation

The EDPB has clarified that "necessary" means the processing must be objectively necessary, not just helpful. Many AI enhancements fail this test because the core service could be delivered without them. Including AI processing in contract terms does not automatically make it "necessary."

Contractual necessity for common AI processing:

  • AI chatbot for contracted support: potentially yes, where support is a contractual obligation and AI is the delivery method.
  • Personalized recommendations: usually no; the service can typically function without personalization.
  • AI credit scoring for a loan: maybe; the creditworthiness assessment is necessary, but the AI method may not be.
  • AI fraud check on a transaction: potentially yes; security may be integral to the payment service.

🏛 Public Interest and Official Authority

Public sector organizations often rely on public task or official authority for AI processing. This basis requires a clear legal foundation in law for the task being performed.

📋 Requirements for Public Task

  • The task must be laid down in law (statute, regulation, or legal instrument)
  • The processing must be necessary for performing that specific task
  • AI as the method of processing should be proportionate
  • Appropriate safeguards must be implemented
  • Article 22 protections apply to automated decision-making

📖 Public Sector AI Examples

Tax authority: AI to detect tax fraud may rely on public task where fraud detection is a statutory function.

Healthcare: AI diagnostic support for NHS services where providing healthcare is the statutory task.

Law enforcement: AI for crime pattern analysis where policing functions are legally established.

Note: Even with public task basis, proportionality and necessity must be demonstrated, and Article 22 safeguards apply.

🔒 Special Category Data in AI

When AI processes special category data (Article 9) or criminal offence data (Article 10), an additional condition is required alongside the Article 6 lawful basis.

❌ Special Categories Requiring Additional Basis

  • Racial or ethnic origin
  • Political opinions
  • Religious or philosophical beliefs
  • Trade union membership
  • Genetic data
  • Biometric data for identification
  • Health data
  • Sex life or sexual orientation

⚠ AI Inference Risk

AI systems can infer special category data even when not directly collected. For example, browsing patterns may reveal health conditions, or language analysis may infer ethnic origin. When AI makes such inferences with reasonable certainty, special category protections apply to those inferred data points.

Article 9(2) Conditions for AI

  • Explicit consent, Article 9(2)(a): must meet a high bar, specific to the AI use, genuinely voluntary, and withdrawable.
  • Employment and social security, Article 9(2)(b): HR AI processing where authorized by law with appropriate safeguards.
  • Healthcare, Article 9(2)(h): medical AI operated under professional secrecy obligations.
  • Public health, Article 9(2)(i): epidemiological AI and disease surveillance systems.
  • Research, Article 9(2)(j): scientific AI research with appropriate safeguards.
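The dual-basis rule can be expressed as a simple validation: a record for special category processing is incomplete unless it names both an Article 6 basis and an Article 9(2) condition. The function below is a hypothetical sketch; the condition set is limited to the conditions discussed above (Article 9(2) lists further conditions, (a) through (j)).

```python
from typing import Optional

# Conditions from the list above; Article 9(2) contains others as well.
AI_RELEVANT_ARTICLE_9_CONDITIONS = {"9(2)(a)", "9(2)(b)", "9(2)(h)",
                                    "9(2)(i)", "9(2)(j)"}

def basis_is_documented(article_6_basis: str,
                        special_category: bool,
                        article_9_condition: Optional[str] = None) -> bool:
    """Special category data needs an Article 9(2) condition on top of Article 6."""
    if not article_6_basis:
        return False  # no processing without an Article 6 basis
    if special_category:
        return article_9_condition in AI_RELEVANT_ARTICLE_9_CONDITIONS
    return True

# A health-data model documented only under legitimate interests is incomplete:
print(basis_is_documented("6(1)(f)", special_category=True))                # False
print(basis_is_documented("6(1)(f)", True, article_9_condition="9(2)(h)"))  # True
```

Because AI can infer special category data, as noted above, such a check should be applied to inferred data points as well as directly collected ones.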

📚 Key Takeaways

  • Determine Before Processing: Legal basis must be identified and documented before AI processing begins
  • Consent is Challenging: High requirements often make consent impractical for AI; consider alternatives
  • Legitimate Interests is Common: Most commercial AI relies on LI, but requires documented balancing test
  • Contract is Narrow: Must be genuinely necessary for contract performance, not just useful
  • Public Task Needs Legal Basis: Must be grounded in specific legal provisions
  • Special Categories Need More: Additional Article 9 condition required alongside Article 6 basis
  • Document Everything: Records of legal basis analysis are essential for accountability