Draft Your AI Prohibition Policy
Every AI governance programme needs a documented prohibition policy — a clear statement of what AI applications your organisation will not deploy, and why. This assignment walks you through drafting one.
List your legal prohibitions. Start with the EU AI Act's list of prohibited AI practices (Article 5). For each practice, note whether it is immediately relevant to your industry and whether any current tools or planned projects might touch these areas.
Identify your organisational red lines. Beyond legal requirements, list AI applications you would not deploy based on your brand values, stakeholder relationships, or long-term risk assessment. Aim for 3–5 specific categories (e.g., "Emotion inference on customer calls", "Autonomous employment decisions").
Document the rationale for each red line. For each prohibition, write 1–2 sentences explaining why. This rationale is the audit trail that demonstrates ethical deliberation and protects the organisation if a prohibition is later challenged.
Define the exception process. What would need to happen for your organisation to reconsider a red line? Who has authority to grant an exception, and what review process must be completed first?
Assign ownership and review cadence. Who owns this policy? How often is it reviewed: annually, or whenever relevant regulation changes? Who approves changes? The policy should sit with your AI governance committee or equivalent.
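The steps above produce a prose policy, but the entries themselves can also be tracked as a structured register, which makes ownership and review cadence easy to check mechanically. A minimal sketch in Python; the `Prohibition` fields, the example entry, and the `overdue` helper are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Prohibition:
    """One red-line entry in a hypothetical prohibition register."""
    category: str            # e.g. "Emotion inference on customer calls"
    basis: str               # "legal" (e.g. EU AI Act) or "organisational"
    rationale: str           # the 1-2 sentence audit-trail justification
    owner: str               # accountable body, per the ownership step
    exception_authority: str # who may grant an exception
    last_reviewed: date
    review_interval_days: int = 365  # annual review by default


def overdue(entry: Prohibition, today: date) -> bool:
    """True if the entry has passed its review cadence."""
    return today > entry.last_reviewed + timedelta(days=entry.review_interval_days)


# Illustrative register with a single entry.
register = [
    Prohibition(
        category="Emotion inference on customer calls",
        basis="legal",
        rationale="Falls within EU AI Act prohibited practices in some "
                  "contexts, and erodes customer trust even where lawful.",
        owner="AI Governance Committee",
        exception_authority="Board risk committee",
        last_reviewed=date(2024, 1, 15),
    ),
]

# Flag entries due for re-review as of a given date.
print([e.category for e in register if overdue(e, date(2025, 6, 1))])
# -> ['Emotion inference on customer calls']
```

A register like this does not replace the policy document; it simply keeps the owner, rationale, and review dates in one queryable place.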
Deliverable: a draft AI Prohibition Policy that defines both your legal floor and your ethical ceiling. This is one of the five core documents any organisation needs to demonstrate AI governance maturity.
