1.3 · Data Leakage, Poisoning & Supply Chain Risk

AI Supply Chain Risk – Third-Party Models & Vendors

โฑ 11 minCourse 01

Most organisations no longer build AI models from scratch. They rely on pre-trained foundation models, open-source libraries, third-party APIs, and cloud AI services. Each of these introduces supply chain risk – the possibility that a vulnerability, backdoor, or compromise in an upstream component affects your downstream systems.

The Dimensions of AI Supply Chain Risk

  • Pre-trained model risk – Open-source models downloaded from repositories like Hugging Face may contain embedded backdoors. Researchers have demonstrated this is not theoretical: poisoned models have been uploaded to public repositories and downloaded thousands of times before detection.
  • Dependency risk – AI frameworks (PyTorch, TensorFlow, scikit-learn) and their dependencies carry traditional software supply chain risks: malicious packages, typosquatting, compromised maintainer accounts.
  • Third-party data risk – Datasets sourced from external providers may be poisoned, biased, or contain content that creates legal liability.
  • API vendor risk – If you rely on a third-party AI API (OpenAI, Anthropic, Azure AI, Google Vertex), that vendor's security posture, uptime, and model behaviour directly affect your systems.
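One practical control against tampered model artefacts is to pin a cryptographic digest at review time and verify it before every load. The sketch below (file path and digest are illustrative, not from the source) streams the file so large weight files never need to fit in memory:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream-hash a file in 1 MiB chunks, so multi-GB weight files are fine."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifact(path: Path, pinned_digest: str) -> bool:
    """Compare a downloaded artefact against the digest recorded at security review."""
    return sha256_of(path) == pinned_digest.lower()
```

If verification fails, refuse to load the model and re-run the review – a silent digest mismatch is exactly the upstream compromise this section describes.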
  • 68% of enterprises use at least one third-party pre-trained model in production
  • 12× increase in malicious ML packages since 2021
  • 43% of AI teams don't vet third-party models before deployment

Third-Party AI Vendor Due Diligence

When evaluating AI vendors, go beyond the standard security questionnaire. Specific questions to ask:

  • Is my data used to train your models? Under what terms?
  • What happens to my data if I terminate the contract?
  • Do you hold SOC 2 Type II, ISO 27001, or equivalent certification?
  • What is your model update cadence – and how do you notify me when underlying model behaviour changes?
  • Do you conduct adversarial testing on your models before updates?
  • What are your data residency and sovereignty commitments?

The Model Update Problem

Third-party AI vendors update their models regularly – sometimes without notice. A model that was safe and accurate when you evaluated it may behave differently six months later. Build continuous monitoring of vendor model behaviour into your AI governance programme, not just point-in-time evaluation.
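Continuous monitoring can start very simply: keep a "golden set" of prompts with the answers the vendor model gave at evaluation time, re-run them on a schedule, and alert when the change rate crosses a threshold. A minimal sketch (the prompt/answer data and the 10% threshold are assumptions for illustration):

```python
def drift_rate(golden: dict[str, str], current: dict[str, str]) -> float:
    """Fraction of golden prompts whose answer differs from the recorded baseline."""
    changed = sum(
        1 for prompt, expected in golden.items()
        if current.get(prompt) != expected
    )
    return changed / len(golden)


def drift_alert(golden: dict[str, str], current: dict[str, str],
                threshold: float = 0.10) -> bool:
    """True when the vendor model's behaviour has shifted beyond tolerance."""
    return drift_rate(golden, current) > threshold
```

In practice `current` would be populated by calling the vendor API with each golden prompt; the comparison logic stays the same whichever vendor you use.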

✓ What to Do Now

Build an AI Vendor Register – a simple inventory of every third-party AI model, API, dataset, or tool your organisation relies on, with the vendor name, data sharing terms, last security review date, and business criticality. This single artefact will transform your visibility into supply chain risk.
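The register need not be sophisticated to be useful. A sketch of the fields listed above, plus a helper that flags entries whose security review is stale (field names, the example entry, and the 365-day window are illustrative assumptions, not prescribed by this course):

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AIVendorEntry:
    vendor: str
    component: str              # model, API, dataset, or tool
    data_sharing_terms: str     # e.g. "no training on our data"
    last_security_review: date
    business_criticality: str   # e.g. "high", "medium", "low"


def overdue_reviews(entries: list[AIVendorEntry], today: date,
                    max_age_days: int = 365) -> list[AIVendorEntry]:
    """Entries whose last security review is older than the allowed window."""
    return [
        e for e in entries
        if (today - e.last_security_review).days > max_age_days
    ]


register = [
    AIVendorEntry("ExampleVendor", "hosted LLM API",
                  "no training on our data", date(2023, 1, 15), "high"),
]
```

Even a spreadsheet with these columns achieves the same goal; the point is that the inventory exists and the review dates are queryable.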