The Risk Tiers Explained
The EU AI Act organises AI systems into four risk tiers. The tier your system falls into determines your obligations — and the consequences of non-compliance. Understanding this classification system is the foundation of your entire compliance programme.
Tier 1: Unacceptable Risk — Prohibited
These AI applications are banned outright in the EU from 2 February 2025, with no grace period. The only carve-outs are the narrowly defined exceptions the Act itself spells out, such as the strictly limited law-enforcement uses of real-time biometric identification noted below.
- Subliminal manipulation of behaviour — AI that exploits unconscious vulnerabilities to change decisions without the person's awareness or meaningful consent
- Exploitation of vulnerable groups — AI that targets children, elderly people, or people with disabilities in ways that impair rational decision-making
- Social scoring — systems that assess people's trustworthiness based on behaviour across unrelated contexts; the final Act applies this ban to private actors as well as public authorities
- Real-time remote biometric identification in public spaces by law enforcement (with narrow, strictly defined exceptions)
- Emotion recognition in workplace and educational settings (except where deployed for medical or safety reasons)
- Biometric categorisation systems that infer sensitive characteristics (political views, sexual orientation, race, religion)
Before dismissing this tier as irrelevant, review your marketing technology and personalisation tools. Several major ad-tech and customer engagement platforms have come under scrutiny for subliminal manipulation and emotion inference features that may fall within prohibited categories.
Tier 2: High Risk — Heavy Obligations
High-risk AI systems are permitted but subject to strict requirements before they can be placed on the market or put into service. The Act defines two routes into this tier:
- Annex I — Safety components in regulated products: AI in machinery, medical devices, vehicles, aircraft, toys, and other products covered by EU product-safety legislation
- Annex III — Standalone high-risk systems: AI used in biometric identification, critical infrastructure, education, employment, essential services (credit, insurance), law enforcement, migration, and the administration of justice
Obligations for high-risk AI include: risk management systems, data governance requirements, technical documentation, transparency measures, human oversight, accuracy and robustness standards, and mandatory registration in the EU database.
Tier 3: Limited Risk — Transparency Requirements
Limited-risk AI systems must meet targeted transparency obligations. The key examples:
- Chatbots and conversational AI must inform users they are interacting with an AI
- Deepfakes and AI-generated content must be labelled as artificially generated
- Emotion recognition and biometric categorisation systems must inform subjects
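The chatbot disclosure duty is simple enough to enforce in code at the application layer. The sketch below is a minimal, hypothetical example; the function name, the disclosure wording, and the first-turn-only policy are all assumptions, not requirements drawn from the Act's text.

```python
# Illustrative only: the disclosure wording and placement policy are
# design choices for the deploying organisation, not mandated text.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human. "

def wrap_chatbot_reply(reply: str, first_turn: bool) -> str:
    """Prefix the first reply of a session with an AI disclosure notice."""
    return (AI_DISCLOSURE + reply) if first_turn else reply
```

In practice, teams often surface the notice in the UI rather than in the message body; the point is that the obligation is a testable product behaviour, not just a policy document.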
Tier 4: Minimal Risk — No Requirements
The vast majority of AI applications fall into this category — spam filters, AI in video games, recommendation systems, most productivity tools. No mandatory requirements apply, though providers are encouraged to adopt voluntary codes of conduct.
The classification determines everything. Before you can build a compliance plan, you must inventory and classify every AI system your organisation builds or buys. That is exactly what the next lesson's assignment will walk you through.
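The classification exercise can be sketched as a lookup over an internal AI inventory, checking the most severe tier first. This is a planning aid only: the rule sets below are illustrative placeholders I have invented for the example, and a real determination requires legal analysis of the Act's actual definitions.

```python
from dataclasses import dataclass, field
from enum import Enum

class Tier(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Placeholder rule sets for illustration -- not the Act's legal criteria.
PROHIBITED_PRACTICES = {"subliminal_manipulation", "social_scoring",
                        "workplace_emotion_recognition"}
ANNEX_III_AREAS = {"biometric_identification", "employment",
                   "credit_scoring", "law_enforcement", "education"}
TRANSPARENCY_TRIGGERS = {"chatbot", "deepfake_generation"}

@dataclass
class AISystem:
    name: str
    practices: set = field(default_factory=set)

def classify(system: AISystem) -> Tier:
    """Assign a risk tier, testing the most severe category first."""
    if system.practices & PROHIBITED_PRACTICES:
        return Tier.PROHIBITED
    if system.practices & ANNEX_III_AREAS:
        return Tier.HIGH
    if system.practices & TRANSPARENCY_TRIGGERS:
        return Tier.LIMITED
    return Tier.MINIMAL
```

For example, a CV-screening tool tagged with `employment` lands in the high-risk tier, while a plain spam filter with no tagged practices falls through to minimal risk, mirroring the ordering of the four tiers above.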
