Introduction
A Data Protection Impact Assessment (DPIA) is a mandatory process under GDPR Article 35 for processing that is "likely to result in a high risk" to individuals' rights and freedoms. Most AI systems processing personal data will trigger DPIA requirements due to their inherent characteristics.
This part provides practical guidance on conducting DPIAs specifically for AI systems, from initial screening through risk assessment, mitigation planning, and consultation with supervisory authorities.
💡 Key Insight
A DPIA is not merely a compliance document; it is a risk management tool that should genuinely inform AI system design. Effective DPIAs identify real risks and drive meaningful privacy improvements. Retrospective DPIAs that rubber-stamp existing systems miss the point and may not satisfy regulatory expectations.
When is a DPIA Required?
Article 35(1) requires a DPIA where processing, "in particular using new technologies," is likely to result in high risk. The EDPB guidelines identify criteria that, when two or more are present, generally trigger DPIA requirements.
- Automated Decision-Making: decisions producing legal or similarly significant effects for individuals
- Systematic Evaluation: systematic evaluation of personal aspects, including profiling and prediction
- Large Scale Processing: processing of personal data on a large scale
- Sensitive Data: processing of special category data or data relating to criminal convictions
- Matching/Combining: combining datasets in ways that exceed data subjects' reasonable expectations
- Vulnerable Subjects: processing of data concerning vulnerable individuals
- New Technologies: use of innovative technologies such as AI/ML
- Preventing Rights: processing that prevents individuals from exercising rights or accessing services
❌ AI Systems Almost Always Require DPIA
Most AI systems processing personal data will meet at least two criteria: (1) they use new technology (AI/ML), and (2) they involve systematic evaluation or profiling. Combined with factors such as scale, automation, or sensitive data, AI systems almost always require a DPIA. Assume a DPIA is needed unless a clear analysis demonstrates otherwise.
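As a rough illustration of how this screening step can be operationalised, the sketch below counts how many of the criteria above apply to a proposed system. The record fields mirror the criteria list; the class, function, and threshold names are illustrative rather than a prescribed format.

```python
from dataclasses import dataclass, fields

# Hypothetical screening record; each flag mirrors one criterion from the list above.
@dataclass
class DpiaScreening:
    automated_decisions: bool = False    # legal or similarly significant effects
    systematic_evaluation: bool = False  # profiling / evaluation of personal aspects
    large_scale: bool = False
    sensitive_data: bool = False         # special category or criminal conviction data
    matching_combining: bool = False
    vulnerable_subjects: bool = False
    new_technologies: bool = False       # AI/ML counts as an innovative technology
    prevents_rights: bool = False

def dpia_required(screening: DpiaScreening, threshold: int = 2) -> bool:
    """Two or more criteria present generally indicates a DPIA is required."""
    met = sum(getattr(screening, f.name) for f in fields(screening))
    return met >= threshold

# Example: an ML-based credit scoring system meets three criteria.
screening = DpiaScreening(automated_decisions=True, systematic_evaluation=True,
                          new_technologies=True)
print(dpia_required(screening))  # True
```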
DPIA Methodology for AI
A DPIA must contain the minimum elements specified in Article 35(7), but for AI systems, additional AI-specific considerations should be incorporated throughout the assessment.
1. Systematic Description
Document the processing operations, purposes, legitimate interests, data flows, and the AI system architecture. Include training data sources, model type, inputs, outputs, and decision logic.
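Keeping the systematic description as structured data makes it easier to review and compare across systems. The sketch below is one hypothetical way to capture the elements listed above; the field names and example values are assumptions, not a required schema.

```python
from dataclasses import dataclass, field

# Hypothetical record for the systematic description step; field names are illustrative.
@dataclass
class ProcessingDescription:
    system_name: str
    purposes: list[str]
    legitimate_interests: list[str]
    data_categories: list[str]        # categories of personal data processed
    training_data_sources: list[str]  # provenance of training data
    model_type: str                   # e.g. "gradient-boosted trees"
    inputs: list[str]
    outputs: list[str]
    decision_logic_summary: str       # plain-language account of how outputs are used
    data_flows: list[str] = field(default_factory=list)

description = ProcessingDescription(
    system_name="loan-scoring-v2",
    purposes=["creditworthiness assessment"],
    legitimate_interests=["responsible lending", "fraud prevention"],
    data_categories=["income", "repayment history"],
    training_data_sources=["internal loan book 2018-2023"],
    model_type="gradient-boosted trees",
    inputs=["application form fields", "credit bureau data"],
    outputs=["default risk score between 0 and 1"],
    decision_logic_summary="Scores above 0.7 are referred to a human underwriter.",
)
```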
2. Necessity & Proportionality
Assess whether AI processing is necessary and proportionate. Could the purpose be achieved with less personal data, simpler methods, or stronger safeguards?
3. Risk Assessment
Identify and assess risks to data subjects' rights and freedoms. Consider AI-specific risks: bias, accuracy, opacity, function creep, and impacts on vulnerable groups.
4. Risk Mitigation
Document measures to address identified risks. Include technical measures, organizational safeguards, and governance controls specific to AI.
5. Stakeholder Consultation
Seek views of data subjects or their representatives where appropriate. Document consultation outcomes and how views were addressed.
6. Review & Sign-Off
The DPO must provide advice, and the accountable decision-maker must sign off, accepting any residual risks. Document the decision and any conditions attached to it.
AI-Specific Risks to Assess
Beyond standard data protection risks, AI systems present unique risks that must be assessed within the DPIA framework.
📈 Algorithmic Bias & Discrimination
AI systems may perpetuate or amplify biases present in training data, leading to discriminatory outcomes. Assess: potential for discrimination against protected groups, training data representativeness, testing for disparate impact, monitoring mechanisms.
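Testing for disparate impact can start with a simple comparison of favourable-outcome rates across groups. The sketch below applies the widely used four-fifths rule of thumb; the column names, data, and the 0.8 threshold are illustrative, and the rule is a screening heuristic rather than a legal standard.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest favourable-outcome rate across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Illustrative decisions: 1 = favourable outcome.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})
ratio = disparate_impact_ratio(decisions, "group", "approved")
print(f"{ratio:.2f}")  # 0.33 here; values below ~0.8 warrant closer review
```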
🔎 Opacity & Lack of Transparency
Complex ML models may operate as "black boxes," making it difficult to explain decisions to data subjects. Assess: model interpretability, ability to provide meaningful explanations, documentation of decision factors, impact on data subject rights.
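On the explanation side, even a coarse record of which inputs drive a model's outputs supports meaningful transparency. The sketch below uses scikit-learn's permutation importance as one such signal; the dataset and model are placeholders, and per-decision explainers such as SHAP or LIME (mentioned in the mitigation table later in this part) give finer-grained explanations.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Placeholder data and model; in a real DPIA this would be the production model and a holdout set.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global view of which inputs drive the model's outputs, to support documented explanations.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top_factors = sorted(zip(X.columns, result.importances_mean),
                     key=lambda item: item[1], reverse=True)[:5]
for name, importance in top_factors:
    print(f"{name}: {importance:.3f}")
```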
🎯 Accuracy & Error Rates
AI systems make errors that can significantly impact individuals. Assess: model accuracy metrics, error rates across different groups, impact of incorrect decisions, mechanisms for error detection and correction.
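Aggregate accuracy can hide group-level failures, so error rates are worth monitoring per group. A minimal sketch, where the column names and example data are assumptions:

```python
import pandas as pd

def per_group_error_rates(df: pd.DataFrame, group_col: str,
                          actual_col: str, predicted_col: str) -> pd.Series:
    """Share of incorrect predictions within each group."""
    errors = (df[actual_col] != df[predicted_col]).astype(int)
    return errors.groupby(df[group_col]).mean()

results = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "actual":    [1,   0,   1,   1,   0,   0],
    "predicted": [1,   0,   0,   0,   1,   0],
})
print(per_group_error_rates(results, "group", "actual", "predicted"))
# A: 0.33, B: 0.67 -- a gap this large should trigger investigation
```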
🔬 Function Creep & Secondary Use
AI systems may be repurposed beyond original intentions. Assess: scope boundaries for system use, controls preventing unauthorized applications, governance for scope changes, monitoring for drift.
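Purpose binding can be partly enforced in code by checking each request against the purposes documented in the DPIA and logging anything outside that scope. The sketch below is illustrative; the purpose registry and gateway function are hypothetical.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model_gateway")

# Hypothetical registry of the purposes documented and assessed in the DPIA.
APPROVED_PURPOSES = {"credit_scoring", "fraud_detection"}

def check_purpose(purpose: str) -> None:
    """Refuse and log any request whose stated purpose was not assessed in the DPIA."""
    if purpose not in APPROVED_PURPOSES:
        log.warning("Blocked request for unapproved purpose: %s", purpose)
        raise PermissionError(f"Purpose '{purpose}' is outside the assessed scope")
    log.info("Request accepted for purpose: %s", purpose)

check_purpose("credit_scoring")      # accepted and logged
# check_purpose("marketing_lookup")  # would raise PermissionError and be logged
```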
🔒 Inference Risks
AI may infer sensitive information not directly collected. Assess: potential sensitive inferences, whether special category protections apply, controls on inference generation, data subject awareness.
Risk Assessment Matrix
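A risk matrix plots each identified risk by likelihood and severity, with the combined score determining whether it is acceptable, needs further mitigation, or may require Article 36 consultation. A minimal sketch of that scoring logic, assuming a three-level scale with illustrative band boundaries:

```python
# Illustrative likelihood x severity scoring on a three-level scale; bands are assumptions.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood: str, severity: str) -> str:
    score = LEVELS[likelihood] * LEVELS[severity]
    if score >= 6:
        return "high"    # candidate for Article 36 consultation if it cannot be mitigated
    if score >= 3:
        return "medium"
    return "low"

risk_register = {
    "algorithmic bias":     ("medium", "high"),
    "sensitive inferences": ("low", "high"),
    "function creep":       ("medium", "medium"),
}
for risk, (likelihood, severity) in risk_register.items():
    print(f"{risk}: {risk_level(likelihood, severity)}")
```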
Risk Mitigation Measures
For each identified risk, document specific mitigation measures that reduce either the likelihood or the severity of potential harms.
| Risk Category | Mitigation Measures |
|---|---|
| Bias/Discrimination | Bias testing, fairness metrics monitoring, diverse training data, regular audits, appeal mechanisms |
| Opacity | Explainability tools (SHAP, LIME), model documentation, decision factor logging, plain-language explanations |
| Accuracy Errors | Confidence thresholds, human review for edge cases, error monitoring, feedback loops, rectification processes |
| Function Creep | Purpose documentation, technical access controls, change management, use case governance, audit logging |
| Sensitive Inferences | Inference audits, output filtering, special category protections, privacy-preserving techniques |
| Data Security | Encryption, access controls, model security, adversarial testing, incident response planning |
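As a concrete example of the accuracy mitigations in the table above, low-confidence predictions can be routed to a human reviewer rather than acted on automatically. A minimal sketch, with an arbitrary threshold that would in practice be calibrated on validation data:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str       # "approve", "decline", or "refer"
    automated: bool
    confidence: float

# Hypothetical threshold; in practice it would be calibrated on validation data.
REVIEW_THRESHOLD = 0.85

def route_decision(score: float, confidence: float) -> Decision:
    """Act automatically only when the model is confident; otherwise refer to a human."""
    if confidence < REVIEW_THRESHOLD:
        return Decision("refer", automated=False, confidence=confidence)
    outcome = "approve" if score >= 0.5 else "decline"
    return Decision(outcome, automated=True, confidence=confidence)

print(route_decision(score=0.72, confidence=0.91))  # automated approval
print(route_decision(score=0.72, confidence=0.60))  # referred for human review
```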
AI DPIA Template
📋 Data Protection Impact Assessment: AI System
1. System Identification
2. Processing Description
3. Necessity & Proportionality
4. Risk Assessment
5. Mitigation Measures
6. Sign-Off
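Some teams also keep the template as structured data so that completeness can be checked automatically before sign-off. The sketch below mirrors the six sections above; the field names and the check are illustrative, not a prescribed format.

```python
# Illustrative machine-readable version of the six template sections above.
DPIA_TEMPLATE_SECTIONS = {
    "system_identification": ["system name", "owner", "DPO contact", "version"],
    "processing_description": ["purposes", "data categories", "data flows", "model details"],
    "necessity_proportionality": ["lawful basis", "alternatives considered", "data minimisation"],
    "risk_assessment": ["risk register with likelihood and severity per risk"],
    "mitigation_measures": ["measures, owners and residual risk per identified risk"],
    "sign_off": ["DPO advice", "decision-maker approval", "review date"],
}

def missing_sections(dpia: dict) -> list[str]:
    """Flag template sections that are absent or left empty in a draft DPIA."""
    return [section for section in DPIA_TEMPLATE_SECTIONS if not dpia.get(section)]

draft = {"system_identification": {"system name": "loan-scoring-v2"}, "risk_assessment": {}}
print(missing_sections(draft))  # every section except system_identification still to complete
```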
Consultation Requirements
Article 36 requires prior consultation with the supervisory authority where a DPIA indicates that processing would result in high risk in the absence of measures taken by the controller.
⚠ When to Consult the Supervisory Authority
You must consult the supervisory authority if, after implementing mitigation measures, residual risks remain high. This typically occurs when:
- No adequate mitigation measures exist for the identified risks
- Mitigation would undermine the purpose of the processing
- The system is a novel AI application with an uncertain risk profile
- Sensitive data is processed at large scale using new AI approaches
Submit to the supervisory authority: (1) the complete DPIA, (2) description of controller and processor roles, (3) purposes and means of processing, (4) safeguards and mechanisms for data subject protection, (5) DPO contact details, and (6) any other relevant information requested. The authority has 8 weeks (extendable by 6 weeks) to respond with written advice.
Key Takeaways
- AI Usually Requires DPIA: Most AI systems meet threshold criteria; assume DPIA needed
- Conduct Early: DPIA should inform design, not rubber-stamp existing systems
- Address AI-Specific Risks: Bias, opacity, accuracy, and inference risks are distinct AI concerns
- Document Thoroughly: Comprehensive documentation demonstrates accountability
- Implement Real Mitigations: Measures must genuinely reduce risk, not just check boxes
- Consult When Required: High residual risk requires supervisory authority consultation
- Review Regularly: DPIAs should be living documents updated as AI systems evolve