The Six Landmines: Where GDPR and AI Collide
Beyond the legal basis question, there are six specific areas where AI and GDPR create the most significant conflicts. These are the landmines that organisations consistently step on.
Landmine 1: Purpose Limitation
Data collected for one purpose cannot simply be reused for another without a fresh legal basis or a formal compatibility assessment. AI training is a new purpose — and it requires justification.
Landmine 2: Data Minimisation
GDPR requires that only data "adequate, relevant and limited to what is necessary" be processed. AI systems, particularly deep learning models, often perform better with more data — creating direct tension with the minimisation principle. You must be able to justify every data field used in training.
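One way to make "justify every field" operational is to refuse training data that lacks a documented justification. The sketch below is purely illustrative; the field names and the `FIELD_JUSTIFICATIONS` registry are assumptions, not part of any standard.

```python
# Minimal sketch: every feature must carry a documented justification
# before it enters a training set. Field names are illustrative only.
FIELD_JUSTIFICATIONS = {
    "transaction_history": "Directly predictive of credit risk; core to the model's purpose.",
    "postcode": "Used only at region level to adjust for local market conditions.",
}

def assert_fields_justified(training_fields):
    """Raise if any proposed training field lacks a recorded justification."""
    missing = [f for f in training_fields if f not in FIELD_JUSTIFICATIONS]
    if missing:
        raise ValueError(f"No documented justification for: {missing}")
```

A check like this can run in the data pipeline itself, so minimisation is enforced at ingestion rather than audited after the fact.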
Landmine 3: Article 22 — Automated Decision-Making
Article 22 gives individuals the right not to be subject to solely automated decisions that produce legal or similarly significant effects. This is one of the most commonly triggered provisions in enterprise AI.
- Automated loan decisions without human review → Article 22 applies
- AI-generated performance reviews that directly determine pay or promotion → Article 22 applies
- Algorithmic candidate screening that produces a hire/no-hire output → Article 22 applies
- Insurance premium pricing set entirely by algorithm → Article 22 applies
The key word in Article 22 is "solely". Many organisations believe a rubber-stamp human approval takes them outside its scope; regulators and courts have consistently rejected that reading. If a human routinely approves 95%+ of AI outputs without meaningful review, the decision is effectively automated regardless of the label.
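Whether oversight is "meaningful" is ultimately a legal judgment, but decision logs can at least flag processes that look like rubber-stamping. The following is a minimal sketch under assumed log fields (`ai_recommendation`, `human_decision`, `review_seconds`); the thresholds are illustrative, not regulatory values.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    ai_recommendation: str   # e.g. "approve" / "reject"
    human_decision: str      # the final decision after human review
    review_seconds: float    # time the reviewer spent on this case

def oversight_looks_meaningful(records, max_agreement=0.95, min_review_seconds=30.0):
    """Crude heuristic: flag processes where humans almost always agree
    with the AI output, or spend negligible time reviewing each case."""
    if not records:
        return True  # nothing decided yet; nothing to flag
    agreements = sum(r.ai_recommendation == r.human_decision for r in records)
    agreement_rate = agreements / len(records)
    median_review = sorted(r.review_seconds for r in records)[len(records) // 2]
    return agreement_rate < max_agreement and median_review >= min_review_seconds
```

A process that fails this check is not automatically in breach, but it is a strong signal that the "human in the loop" may not survive regulatory scrutiny.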
Landmine 4: The Right to Erasure
Individuals have the right to request deletion of their personal data. For data in a database, this is straightforward. For data that was used to train an AI model, it is technically very difficult — the data's influence is embedded in model weights and cannot be selectively removed without retraining.
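Because influence on model weights cannot be selectively deleted, a practical mitigation is provenance tracking: recording which data subjects' records fed each model version, so an erasure request at least identifies which models would need retraining. A hypothetical sketch (class and model-version names are assumptions):

```python
from collections import defaultdict

class TrainingProvenance:
    """Track which data subjects' records fed each model version, so an
    erasure request can identify the models that need retraining."""

    def __init__(self):
        self._subjects_by_model = defaultdict(set)

    def record_training(self, model_version, subject_ids):
        self._subjects_by_model[model_version].update(subject_ids)

    def models_affected_by_erasure(self, subject_id):
        return sorted(model for model, subjects in self._subjects_by_model.items()
                      if subject_id in subjects)
```

This does not solve the unlearning problem itself, but without provenance an organisation cannot even say which models an erasure request touches.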
Landmine 5: Transparency & Explainability
GDPR requires that individuals receive "meaningful information about the logic involved" when automated decisions affect them. For deep learning models, providing a genuine explanation of the decision logic is technically challenging — but legally required.
Landmine 6: Data Protection Impact Assessments (DPIAs)
A DPIA is mandatory before any processing that is "likely to result in a high risk" to individuals. The ICO's guidance makes clear that AI systems processing personal data at scale, using novel technologies, or making automated decisions about individuals will almost always require a DPIA. Many organisations deploy AI without one.
For every AI system processing personal data:
1. Identify the lawful basis.
2. Document the purpose and confirm data minimisation.
3. Check whether Article 22 applies and, if so, ensure genuine human oversight.
4. Assess the right-to-erasure implications.
5. Build explainability into the model design.
6. Conduct a DPIA.
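The six steps above lend themselves to a per-system compliance record. A minimal sketch, in which the flag names are illustrative rather than a prescribed schema:

```python
def outstanding_gdpr_steps(system):
    """Return which of the six steps remain open for an AI system record.
    `system` is a dict of boolean flags; the flag names are illustrative."""
    checks = [
        ("lawful_basis_identified", "Identify the lawful basis"),
        ("purpose_and_minimisation_documented", "Document purpose and confirm data minimisation"),
        ("article_22_assessed", "Check Article 22 and ensure genuine human oversight"),
        ("erasure_implications_assessed", "Assess right-to-erasure implications"),
        ("explainability_designed_in", "Build explainability into the model design"),
        ("dpia_completed", "Conduct a DPIA"),
    ]
    return [label for flag, label in checks if not system.get(flag, False)]
```

Run across an AI inventory, a check like this turns the checklist into a live gap report rather than a one-off document.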
