Design Your Oversight Model
Pick one AI use case your organisation currently runs or is planning to deploy. Work through the following steps to design or audit its oversight model.
Identify your use case. Name the specific AI system or planned deployment you're working with. What does it do? What decisions does it influence?
Place it on the Autonomy Spectrum. Based on the five levels, where does this AI currently operate — or where is it planned to operate? Write down your assessment and your rationale.
Map the decision points. List every type of decision or output this AI produces. For each one, note whether a human currently reviews it or approves it, or whether it is acted on with no review at all.
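A decision-point map of this kind is just a small inventory of decisions and their current review status. A minimal sketch in Python, where the system, decision names, and review statuses are all invented examples, not prescriptions:

```python
# Hypothetical decision-point inventory for an illustrative AI system
# (a customer-support triage model). All names below are invented.
decision_points = [
    {"decision": "route ticket to queue",    "human_review": "none"},
    {"decision": "draft reply to customer",  "human_review": "agent approves before send"},
    {"decision": "issue refund under $50",   "human_review": "none"},
    {"decision": "flag account for closure", "human_review": "supervisor reviews weekly"},
]

# Surface the fully autonomous decisions: these are the candidates
# for the oversight-gap review in the next step.
autonomous = [d["decision"] for d in decision_points if d["human_review"] == "none"]
print(autonomous)
```

Even a flat list like this makes the gaps visible: any decision marked "none" should have a documented reason for being autonomous.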
Identify the oversight gaps. Are there decision types that are currently autonomous but shouldn't be, given the stakes? Are there human reviews that add delay without adding value? Document both.
Design your oversight protocol. For each decision type, specify: who reviews it, on what cadence, with what authority to override, and how overrides are recorded. If you're moving to a higher autonomy level, specify what evidence of model reliability would justify that move.
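The protocol fields above map naturally onto a structured record, one entry per decision type. A minimal sketch using a Python dataclass, with every field value invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class OversightProtocol:
    """One entry per decision type; fields mirror the protocol spec above."""
    decision_type: str
    reviewer: str                 # who reviews it
    cadence: str                  # on what cadence
    override_authority: str       # who may override, and how far
    override_log: str             # how overrides are recorded
    evidence_for_more_autonomy: str = "n/a"  # reliability evidence that would justify raising autonomy

# Illustrative entry (all values invented):
refund_protocol = OversightProtocol(
    decision_type="issue refund under $50",
    reviewer="finance ops lead",
    cadence="weekly sample of 5% of decisions",
    override_authority="reviewer may reverse any refund within 30 days",
    override_log="entries appended to the oversight register",
    evidence_for_more_autonomy="error rate below 1% over two consecutive quarters",
)
print(refund_protocol.decision_type)
```

The point of the structure is that no field may be left blank: a protocol entry without a named reviewer or a recorded override path is an oversight gap, not a protocol.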
Define your monitoring regime. How will you know if the model starts performing below acceptable thresholds? What metrics trigger a review? Who is responsible for monitoring them?
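A monitoring regime of this kind reduces to three things: metrics, thresholds, and a named owner who is alerted when a threshold is crossed. A minimal sketch, with invented metric names, thresholds, and owners:

```python
# Invented metrics, thresholds, and owners for illustration.
# "owner" is the person who receives the review task when a threshold is crossed.
thresholds = {
    "override_rate":  {"max": 0.10, "owner": "model risk lead"},
    "accuracy":       {"min": 0.92, "owner": "model risk lead"},
    "complaint_rate": {"max": 0.02, "owner": "customer ops lead"},
}

def breached(metric: str, value: float) -> bool:
    """True if the observed value crosses its threshold, triggering a review."""
    t = thresholds[metric]
    return value > t.get("max", float("inf")) or value < t.get("min", float("-inf"))

# Accuracy drifting below its floor triggers a review; a healthy override rate does not.
print(breached("accuracy", 0.90))       # True
print(breached("override_rate", 0.04))  # False
```

The design choice worth noting is that each metric carries its own owner: a dashboard nobody is obliged to act on is monitoring in name only.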
The deliverable is a written oversight model for one real AI use case: the kind of document that should exist before any AI system is deployed, and that most organisations don't have. It is also the foundation of your AI register from Course 2.
