The Autonomy Trap
A US hospital introduced an AI triage system to help allocate specialist care resources. The model was trained on healthcare spend data, a widely used proxy for medical complexity. The system began deprioritising Black patients for specialist referrals. Race was never an explicit input, nobody designed the system to discriminate, and nobody noticed it was discriminating for over a year.
The explanation was structural. Black patients had historically received less specialist care — not because they needed it less, but because they had less access to it. The model interpreted lower historical healthcare spend as lower medical need and allocated resources accordingly. A proxy that seemed technically neutral was socially harmful.
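The mechanism above can be sketched in a few lines. This is a hypothetical illustration with invented numbers, not the hospital's actual model: two groups of patients have identical medical need, but one group's historical spend understates that need because of restricted access. Ranking by the spend proxy then diverges from ranking by need.

```python
# Hypothetical data: (patient_id, true_need, historical_spend).
# Group A had full access to care, so spend tracks need.
# Group B had restricted access, so spend understates need.
patients = [
    ("A1", 8, 12000),
    ("A2", 5, 7500),
    ("B1", 8, 6000),   # same need as A1, far lower spend
    ("B2", 5, 3700),
]

# Rank by the proxy, as the deployed model effectively did.
by_proxy = [p[0] for p in sorted(patients, key=lambda p: p[2], reverse=True)]

# Rank by the quantity the hospital actually cared about.
by_need = [p[0] for p in sorted(patients, key=lambda p: p[1], reverse=True)]

print(by_proxy)  # ['A1', 'A2', 'B1', 'B2'] -- B1 falls below A2
print(by_need)   # ['A1', 'B1', 'A2', 'B2'] -- B1 ties A1 for priority
```

Patient B1 needs specialist care as urgently as A1, but under the proxy ranking is deprioritised below A2. No step in the code mentions group membership; the harm enters entirely through the choice of proxy.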
The hospital had given the AI too much autonomy over a decision with serious consequences, and had no mechanism for a human to catch and override its outputs. The lesson isn't that AI can't be used in healthcare. It's that the level of autonomy given to any AI system must be proportionate to the stakes of the decision it influences.
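The proportionality principle can be made concrete as a routing rule: the higher the stakes of a decision, the more human involvement its handling requires. The sketch below is a minimal illustration of that idea; the stakes categories, function names, and routing outcomes are all invented for this example, not part of any real deployment.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    recommendation: str
    stakes: str  # assumed to be pre-classified as "low", "medium", or "high"

def route(decision: Decision) -> str:
    """Route an AI recommendation according to the stakes of the decision."""
    if decision.stakes == "low":
        return "auto-apply"           # AI acts alone
    if decision.stakes == "medium":
        return "apply-with-audit"     # AI acts; humans sample-check later
    return "human-review-required"    # AI only recommends; a human decides

print(route(Decision("B1", "defer specialist referral", "high")))
# human-review-required
```

Under a rule like this, the triage system's referral decisions would have been classified as high-stakes and queued for human review rather than applied automatically, which is the oversight mechanism the hospital lacked.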
What is the Autonomy Trap?
The Autonomy Trap is what happens when an organisation gives an AI system more decision-making power than it has earned, whether because the tool seems impressive, because it reduces headcount, or simply because nobody explicitly designed the oversight structure. Once an AI is deployed autonomously, the instinct to trust its outputs grows over time, and retracting that autonomy becomes politically difficult even when the evidence suggests it should be retracted.
Autonomy is easy to grant and hard to retract. Once an AI makes decisions without human review, the humans who would have reviewed those decisions are typically reassigned or made redundant. If the AI later needs human oversight reintroduced, the institutional knowledge to provide that oversight may no longer exist.
The next lesson introduces the Autonomy Spectrum — a five-level model that gives you a structured way to decide how much autonomy any AI system should have, and what human oversight mechanisms must exist at each level.
