The General Data Protection Regulation (GDPR) and the EU AI Act operate as complementary regulatory frameworks. While the AI Act specifically regulates AI systems, GDPR governs the processing of personal data, which is fundamental to most AI applications. Understanding their intersection is critical for AI compliance.
The EU AI Act explicitly states that it is "without prejudice" to GDPR. Both regulations apply simultaneously to AI systems that process personal data. Compliance with the AI Act does not exempt organizations from GDPR obligations, and vice versa.
Article 22 is perhaps the most directly relevant GDPR provision for AI systems. It establishes fundamental rights regarding automated individual decision-making, including profiling.
"The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."
The prohibition does not apply when the decision is:

(a) Necessary for entering into, or performance of, a contract between the data subject and the controller. Example: automated credit scoring for loan applications where automated processing is genuinely necessary to provide the service.

(b) Authorized by EU or Member State law which also lays down suitable safeguards for the data subject's rights, freedoms, and legitimate interests. Example: automated fraud detection required by anti-money laundering regulations.

(c) Based on the data subject's explicit consent. Note: this must meet GDPR's high standard for explicit consent: freely given, specific, informed, and unambiguous, signified by a clear affirmative action.
Where exceptions (a) or (c) apply, the controller must implement suitable measures to safeguard the data subject's rights, freedoms, and legitimate interests, including at least the right to obtain human intervention on the part of the controller, to express his or her point of view, and to contest the decision (Article 22(3)).
Automated decisions under Article 22 involving special category data (Article 9) are only permitted where explicit consent exists OR processing is necessary for reasons of substantial public interest under EU or Member State law with appropriate safeguards (Article 22(4)). Contract necessity alone is NOT sufficient.
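The exception logic above can be expressed as a simple check. This is an illustrative sketch only, not a compliance tool; the type names and the `is_permitted` function are invented for this example, and whether a given exception actually holds is a legal question, not a boolean flag.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class LawfulException(Enum):
    CONTRACT_NECESSITY = auto()   # Art. 22(2)(a)
    AUTHORISED_BY_LAW = auto()    # Art. 22(2)(b)
    EXPLICIT_CONSENT = auto()     # Art. 22(2)(c)

@dataclass
class AutomatedDecision:
    """A solely automated decision with legal or similarly significant effect."""
    exception: Optional[LawfulException]
    uses_special_category_data: bool       # Art. 9 data (health, biometrics, ...)
    substantial_public_interest_law: bool  # the Art. 9(2)(g) route with safeguards

def is_permitted(decision: AutomatedDecision) -> bool:
    # No applicable exception: the Art. 22(1) prohibition stands.
    if decision.exception is None:
        return False
    # Art. 22(4): special category data narrows the options to explicit
    # consent or a substantial-public-interest law with safeguards.
    if decision.uses_special_category_data:
        return (decision.exception is LawfulException.EXPLICIT_CONSENT
                or decision.substantial_public_interest_law)
    # Otherwise any of the three Art. 22(2) exceptions suffices.
    return True
```

Note how the special-category branch refuses contract necessity on its own, mirroring the rule stated above.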
Article 4(4) GDPR defines profiling as "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements."
| Type | Description | Article 22 Applies? |
|---|---|---|
| General Profiling | Profiling without automated decision-making affecting individuals | No (but GDPR principles apply) |
| Decision-Supporting Profiling | Profiling informs human decision-maker who makes final determination | No (if human involvement is meaningful) |
| Solely Automated + Legal/Significant Effect | Profiling leads directly to automated decision with legal or significant effect | Yes - Full Article 22 protections |
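The classification in the table can be sketched as a small decision function. The names here (`ProfilingType`, `classify`) are illustrative, and the hard part in practice is the middle branch: whether human review is "meaningful" is a substantive legal question, not a flag a controller can simply set.

```python
from enum import Enum, auto
from typing import Tuple

class ProfilingType(Enum):
    GENERAL = auto()              # no decision directly affecting individuals
    DECISION_SUPPORTING = auto()  # informs a human who makes the final call
    SOLELY_AUTOMATED = auto()     # candidate for Article 22

def classify(produces_decision: bool,
             meaningful_human_review: bool,
             legal_or_significant_effect: bool) -> Tuple[ProfilingType, bool]:
    """Return (profiling type, does Article 22 apply?) per the table above."""
    if not produces_decision:
        # General profiling: Article 22 does not apply,
        # but GDPR principles still do.
        return ProfilingType.GENERAL, False
    if meaningful_human_review:
        return ProfilingType.DECISION_SUPPORTING, False
    return ProfilingType.SOLELY_AUTOMATED, legal_or_significant_effect
```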
To avoid Article 22, human involvement must be more than symbolic:
Simply having a human "approve" or "sign off" on automated decisions without genuine review does not constitute meaningful human involvement. Controllers cannot circumvent Article 22 through token human oversight.
Data subjects retain all GDPR rights when their data is processed by AI systems, with some specific applications:
When processing involves automated decision-making, including profiling, controllers must provide: the existence of such decision-making, meaningful information about the logic involved, and the significance and envisaged consequences of the processing for the data subject (Articles 13(2)(f), 14(2)(g), and 15(1)(h)).
"Meaningful information about the logic" creates an explainability requirement that can be technically challenging for complex AI models (black boxes). Organizations must balance trade secret protection with transparency obligations.
Article 35 GDPR requires DPIAs when processing is "likely to result in a high risk to the rights and freedoms of natural persons." AI systems frequently trigger this requirement.
| Element | AI-Specific Considerations |
|---|---|
| Description of Processing | Model architecture, training methodology, inference process, data flows, third-party components |
| Necessity & Proportionality | Why AI? Could less intrusive methods achieve purpose? Is accuracy improvement proportionate to privacy impact? |
| Risk Assessment | Bias risks, accuracy errors, security vulnerabilities, re-identification risks, downstream harms |
| Mitigation Measures | Bias testing, fairness constraints, human oversight, access controls, audit trails, monitoring systems |
| Training Data | Source, lawful basis, representativeness, bias assessment, retention |
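The DPIA elements in the table can be captured as a simple record with a completeness check. This is a minimal sketch under the assumption that each element is summarized as free text; the class and function names are invented for illustration, and an empty-field check obviously says nothing about the quality of the assessment.

```python
from dataclasses import dataclass, fields
from typing import List

@dataclass
class AiDpiaRecord:
    """One free-text field per row of the table above."""
    processing_description: str     # architecture, training, inference, data flows
    necessity_proportionality: str  # why AI, less intrusive alternatives
    risk_assessment: str            # bias, accuracy, security, re-identification
    mitigation_measures: str        # testing, oversight, controls, monitoring
    training_data: str              # source, lawful basis, representativeness

def missing_elements(record: AiDpiaRecord) -> List[str]:
    """Crude completeness check: names the elements left empty."""
    return [f.name for f in fields(record)
            if not getattr(record, f.name).strip()]
```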
The EU AI Act's Fundamental Rights Impact Assessment (FRIA) requirement complements GDPR DPIA obligations: where an obligation is already met through a DPIA conducted under Article 35 GDPR, Article 27 of the AI Act provides that the FRIA complements that assessment rather than duplicating it.
Using personal data to train AI models requires a valid lawful basis under Article 6 GDPR. The choice of basis has significant implications: consent can be withdrawn at any time, legitimate interests requires a documented balancing test, and the basis chosen shapes which data subject rights apply to the training data.
Many organizations use a layered approach: legitimate interests for general training with consent for sensitive applications or secondary uses. Always document the basis and reassess when model use cases expand.
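The documentation discipline described above can be sketched as a small register that maps each processing purpose to its recorded basis and fails loudly when a use case expands beyond what was documented. The class name and string conventions are illustrative assumptions, not a standard tool.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class LawfulBasisRegister:
    """Documented Art. 6 (or Art. 9) basis per processing purpose."""
    purposes: Dict[str, str] = field(default_factory=dict)

    def record(self, purpose: str, basis: str) -> None:
        self.purposes[purpose] = basis

    def basis_for(self, purpose: str) -> str:
        # Raising on an undocumented purpose forces the reassessment
        # mentioned above before any new use proceeds.
        if purpose not in self.purposes:
            raise LookupError(f"no documented lawful basis for: {purpose!r}")
        return self.purposes[purpose]
```

A layered setup would then record, for example, `"general model training"` under legitimate interests (Art. 6(1)(f)) and a sensitive application under explicit consent, each as a separate documented entry.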