Legal Red Lines vs Organisational Red Lines
There are two categories of AI prohibition. Legal red lines are established by law — crossing them exposes your organisation to regulatory action, fines, and legal liability. Organisational red lines are the limits your organisation chooses to set, beyond what the law requires, based on your values, your brand, and your risk appetite.
Legal Red Lines: What the EU AI Act Prohibits
The EU AI Act's prohibited AI practices (in force from February 2025) include:
- Subliminal manipulation — AI that exploits unconscious psychological vulnerabilities to influence decisions without the person's awareness
- Exploitation of vulnerable groups — AI that targets people because of their age, disability, or social or economic situation in ways that impair rational decision-making
- Social scoring — AI that rates individuals' trustworthiness based on social behaviour or personal characteristics, leading to detrimental treatment in unrelated contexts (the final Act extends this ban beyond public authorities to private actors)
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
- Emotion recognition in workplace and educational settings (with narrow medical and safety exceptions)
- Biometric categorisation to infer sensitive characteristics (political views, sexual orientation, race, religion)
Several commercial tools — including some video interview platforms and customer service monitoring software — market what they call "engagement analysis" or "sentiment detection." Some of these tools fall within what the EU AI Act classifies as emotion recognition in workplace settings. If your HR or customer service tech stack includes any such tool, check its documentation against the Act's definitions without delay: the prohibition has applied since February 2025, and breaching it carries the Act's highest tier of penalties.
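That check is easier to sustain if the tool inventory is kept as structured data rather than a one-off review. The sketch below shows one way to flag deployed tools whose vendor-documented capabilities overlap the Act's prohibited categories. All tool names and category labels are hypothetical placeholders for illustration; matching a label here is a prompt for legal review, not a legal classification.

```python
# Hypothetical labels standing in for the EU AI Act's prohibited-practice
# categories. Mapping a real tool to these labels requires legal analysis.
PROHIBITED_CATEGORIES = {
    "subliminal_manipulation",
    "exploitation_of_vulnerable_groups",
    "social_scoring",
    "realtime_remote_biometric_id",
    "emotion_recognition_workplace_education",
    "biometric_categorisation_sensitive_traits",
}

# Hypothetical inventory: each deployed tool mapped to the capability
# labels assigned to it after reading its vendor documentation.
tool_inventory = {
    "video-interview-platform": {"emotion_recognition_workplace_education"},
    "crm-sentiment-addon": {"customer_sentiment_aggregate"},
}

def flag_tools(inventory):
    """Return tools whose documented capabilities overlap a prohibited category."""
    return {
        tool: sorted(caps & PROHIBITED_CATEGORIES)
        for tool, caps in inventory.items()
        if caps & PROHIBITED_CATEGORIES
    }

for tool, reasons in flag_tools(tool_inventory).items():
    print(f"REVIEW: {tool} -> {', '.join(reasons)}")
```

In this example only the video interview platform is flagged for review; the sentiment add-on's aggregate-level label does not match a prohibited category, though whether a real tool's capability genuinely falls outside the Act is a question for counsel, not a set intersection.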
Organisational Red Lines: Going Beyond the Law
Legal compliance is the floor, not the ceiling. Smart organisations define their own prohibition list — AI applications they will not deploy even if technically legal — based on three factors:
- ◆Brand alignment — would deploying this AI be inconsistent with how we present ourselves to customers, employees, or the public?
- ◆Stakeholder trust — would our customers, employees, or partners object if they knew we were using this AI? Could we defend this publicly?
- ◆Long-term risk — even if legal today, is this in an area where regulation is tightening? Are we building systems we may have to unwind under future requirements?
The process of defining organisational red lines is as important as the list itself. It forces leadership to articulate what the organisation stands for in the context of AI — and it creates a documented record of ethical deliberation that can be produced as evidence if the organisation's choices are later challenged.
Every organisation deploying AI needs two lists: what the law prohibits, and what we prohibit for ourselves. The first is determined by regulators. The second is determined by you. Both need to be documented, approved at board level, and communicated across the business.
