Healthcare, Infrastructure & Public Sector
Beyond financial services, three other sectors face particularly significant AI regulatory complexity: healthcare, critical infrastructure, and the public sector.
Healthcare AI: The Highest Stakes
AI in healthcare carries the highest regulatory burden of any sector. Under Article 6(1) of the EU AI Act, AI systems are automatically classified as high-risk when they are safety components of products covered by the Union harmonisation legislation listed in Annex I — which includes regulated medical devices. Additionally:
- Clinical AI must comply with the Medical Device Regulation (MDR) or In Vitro Diagnostic Regulation (IVDR)
- AI processing health data faces the strictest GDPR rules — health data is special category data under Article 9
- Clinical validation requirements may demand evidence of efficacy across diverse patient populations
- The MHRA (UK) is developing AI-specific guidance for medical device software, and the EMA (EU) has published guidance on AI use across the medicines lifecycle
If your software makes clinical recommendations, assists diagnosis, or informs treatment decisions, it may qualify as a medical device even if you have never sought regulatory approval. Many health tech startups and digital health platforms discover this late, leaving them exposed to significant compliance risk.
Critical Infrastructure: The Resilience Dimension
AI used in energy, transport, water, and telecommunications is subject to both the EU AI Act (high-risk classification) and sector-specific critical infrastructure protection requirements. Key considerations:
- AI systems managing critical infrastructure must have documented human oversight and override capabilities
- Failure modes and resilience under adversarial conditions must be tested and documented
- The NIS2 Directive (Network and Information Security) applies to operators of essential services and imposes cybersecurity obligations that extend to AI systems
Public Sector AI: Accountability and Algorithmic Transparency
Government and public sector use of AI faces unique accountability demands:
- The EU AI Act prohibits real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, except in narrowly defined circumstances
- AI systems used in benefits decisions, immigration, and justice are all classified as high-risk
- The Public Sector Equality Duty (UK, Equality Act 2010) requires public bodies to actively consider equality impacts — including those caused by AI systems
- Freedom of Information obligations may require disclosure of AI systems and their decision logic in some circumstances
Whatever sector you operate in, map your regulatory stack before you start building your AI governance programme. Horizontal frameworks are the foundation; sector-specific requirements are the walls. You need both.
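One way to make that mapping concrete is to treat it as a first-pass triage checklist. The sketch below is purely illustrative — the `AISystem` attributes and the rule set are hypothetical simplifications of the frameworks discussed above, not legal advice, and a real assessment needs counsel and far more nuance:

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Hypothetical, simplified profile of an AI deployment."""
    sector: str                              # e.g. "healthcare", "energy", "public"
    processes_health_data: bool = False      # GDPR Art. 9 special category data
    safety_component: bool = False           # safety component of a regulated product
    informs_clinical_decisions: bool = False # diagnosis / treatment support
    essential_service_operator: bool = False # in scope of NIS2
    public_body: bool = False                # PSED / FOI exposure (UK)


def regulatory_stack(system: AISystem) -> list[str]:
    """Return a first-pass list of frameworks to investigate further."""
    # Horizontal frameworks apply regardless of sector.
    stack = ["EU AI Act (horizontal baseline)", "GDPR"]
    if system.processes_health_data:
        stack.append("GDPR Article 9 (special category data)")
    if system.safety_component or system.informs_clinical_decisions:
        stack.append("MDR / IVDR (medical device qualification)")
    if system.essential_service_operator:
        stack.append("NIS2 Directive")
    if system.public_body:
        stack.append("Public Sector Equality Duty (UK)")
        stack.append("Freedom of Information obligations")
    return stack


# Example: a clinical decision-support tool triggers the healthcare stack.
clinical = AISystem(sector="healthcare",
                    processes_health_data=True,
                    informs_clinical_decisions=True)
print(regulatory_stack(clinical))
```

The point of the exercise is not the code but the habit: enumerate the horizontal baseline first, then let each sector-specific attribute add walls on top of the foundation.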
