2.4 · Accountability, Transparency & Algorithmic Auditing

When AI Causes Harm — Who Answers?

12 min · Course 02

When an AI system causes harm — a biased hiring decision, a discriminatory credit refusal, a misdiagnosis, a financial loss — the question of legal liability is complex, contested, and rapidly evolving. Understanding the current legal landscape is essential for anyone deploying AI in a business context.

The Liability Gap

Traditional product liability law was designed for physical products. When a toaster catches fire, the manufacturer is liable under established frameworks. When an AI model makes a discriminatory decision, liability is far less clear — it may sit with the developer, the deployer, the data supplier, or be distributed across all three.

  • AI Developer — Liable if the harm results from a defect in the model, its training data, or the documentation supplied with it
  • AI Deployer — Liable if the harm results from inappropriate deployment, insufficient human oversight, or failure to follow the developer's instructions
  • Data Supplier — Potentially liable if poisoned, biased, or incorrect training data caused the harm
  • User — In professional contexts, the person or organisation relying on AI output may bear liability for how they use it

The EU AI Liability Directive

The EU AI Liability Directive, currently progressing through the legislative process, proposes a "rebuttable presumption of causality" — meaning that if an AI system violated the AI Act's requirements and harm occurred, it will be presumed the AI caused the harm unless the defendant proves otherwise. This shifts the burden of proof significantly.

Existing UK Law: What Applies Now

In the UK, AI-related liability currently flows through existing frameworks:

  • Consumer Rights Act 2015 — AI outputs may be considered services; if substandard, consumers may have remedies
  • Equality Act 2010 — Discriminatory AI decisions (in employment, services, housing) are actionable regardless of whether AI or humans made them
  • GDPR / UK GDPR — Individuals affected by unlawful automated decisions may claim compensation for material and non-material damage
  • Common law negligence — Professionals who rely on AI outputs without appropriate care may be liable for negligent advice

The Practical Implication

As an AI deployer, you bear significant liability for what your AI systems do — even if someone else built them. Documenting your due diligence, bias testing, human oversight processes, and governance decisions is not bureaucracy: it is your legal defence.
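The documentation duty described above can be made concrete. The sketch below shows one possible shape for an AI decision audit record in Python; the class name, field names, and values are all illustrative assumptions for this course, not a legal or regulatory standard — adapt them to your own governance framework.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One audit entry documenting an AI-assisted decision.

    Illustrative only: field names are assumptions, not a prescribed schema.
    """
    system_name: str       # which AI system produced the output
    system_version: str    # model or release version deployed
    decision_summary: str  # what the AI output or recommendation was
    human_reviewer: str    # who exercised human oversight
    human_action: str      # e.g. "accepted", "overridden", "escalated"
    bias_test_ref: str     # reference to the bias-testing report relied on
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AIDecisionRecord, log: list) -> None:
    """Append a plain-dict snapshot of the record to the audit log."""
    log.append(asdict(record))

# Example: recording a human-reviewed hiring decision
audit_log: list = []
log_decision(
    AIDecisionRecord(
        system_name="cv-screening-model",   # hypothetical system name
        system_version="2.3.1",
        decision_summary="Candidate shortlisted for interview",
        human_reviewer="j.smith",
        human_action="accepted",
        bias_test_ref="BT-2024-07",         # hypothetical report reference
    ),
    audit_log,
)
```

Kept per decision, records like this are exactly the evidence of due diligence and human oversight that the frameworks above ask deployers to show.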