AI Is Getting Smarter. Is Your Supply Chain Keeping Up?

Written by

People are using AI to boost productivity and make decisions, but at what cost? What happens if the very AI tools you rely on become the source of your next data breach or cyberattack?

IBM’s 2025 report revealed that 13% of organizations reported AI related breaches and 97% of those lacked proper AI access controls. In the US, the average cost of a breach hit $10.22 million.

As AI adoption accelerates, organizations pull models and tools from many sources. Some are proprietary and closed, while others are open source, transparent and easy to try. We often test the end product and use it if it looks safe on the surface.

But we rarely stop to question the “recipe” behind it, which includes source code, dependencies, data, training, packaging and deployment. While we still need to watch for traditional software threats, the new challenge today is that AI models hold onto what they learn and mistakes can spread quickly across an organization.

Here’s the way to address this issue: the SLSA framework an open source industry standard, provides a structured approach to ensure software is built and delivered securely, with integrity checks at every stage.

Adopting a similar framework for AI workflows can help mitigate risks related to compromised data, dependencies, and training pipelines. To understand why this approach matters, it helps to first look more closely at where the real risks arise in the AI supply chain.

New AI Risks

An machine learning workflow may seem familiar as it involves building, testing and shipping models, but the AI supply chain is broader and includes additional assets and activities. Data is collected, cleaned and fine-tuned for training. Training produces model weights, which along with the model architecture are pushed to a registry for use.

Each step in model production opens new attack vectors. If data is poisoned, the model can shift in ways that are hard to see upfront. Even NIST identifies data poisoning as a key supply chain risk, warning that adversaries can introduce corrupted data to make harmful or erroneous decisions.

Real incidents demonstrate these risks. In 2023, researchers modified a popular open source model to push targeted misinformation and uploaded it to a public hub where it appeared legitimate. It passed normal tests but triggered harmful behavior only on specific prompts, revealing a backdoor in the wild.

And in 2022, PyTorch nightly builds were compromised by a dependency confusion attack that left systems vulnerable to compromise. This provides a real world example of model swaps and backdoors in the software supply chain.

This is not just about models and data. Traditional infrastructure risks such as vulnerable containers and OS images, leaked secrets, over permissioned integrations, misconfigured networks, weak authentication or access controls are all still in play.

Ten Ways to Fix the AI Supply Chain

This approach draws on insights from real incident response, operational deployments, and continuous improvement. The following steps combine proven field practices with NIST AI security guidance to support effective security integration across the AI/ML lifecycle.

  1. Inventory everything: Generate SBOMs (Software Bill of Materials) and maintain an inventory of models and datasets with versions and sources. Treat models as software artifacts by running automated scans and enforcing standard checks.

  2. Prove provenance in practice: Ensure model training is transparent, traceable and verifiable. Promote models only after validating provenance, checking the signature, authorized signer and recent timestamp.

  3. Trust, but verify model sources with Zero Trust: Pull models only from trusted sources. Always pin versions and verify signatures before deployment and block unsigned or unverifiable models.

  4. Sign what you build: Sign model archives, containers, wheels and even policy files. Verify signatures both before deployment and at runtime, and block anything that doesn't match the allow list.

  5.  Track and lock down data flows: Log data origins, approvals and transformations with tamper evident trails. Use automated filters to remove malicious or outlier data before training.

  6. Reduce blast radius: Use minimal base images, enforce hard tenant isolation, avoid running as root, mount read only filesystems, enforce strict network policies, separate service accounts by roles, securely store secrets and block unsigned images at admission.

  7. Test like an adversary: Test new models in an isolated and controlled environment with limited access. Use canary prompts and targeted tests to detect jailbreaks or backdoors before production use.

  8. Continuous behavioral monitoring: Verify model digests on load/reload, monitor outputs for sudden shifts or unexpected activity. Configure alerts to quickly detect and investigate anomalies.

  9. Drills and incident playbooks: Develop incident response playbooks to disable registries, revoke keys, roll back models and rebuild from clean data snapshots. Additionally, implement continuous canary testing on live models and conduct drills to ensure readiness.

  10. Compliance and visibility: Track model and data licenses for compliance. Block unknown or conflicting licenses and maintain thorough audit trails of usage.

The Importance of Layered Defense

Think of your security as layers of Swiss cheese. Each slice is full of holes, its own weaknesses and blind spots. Relying on just one slice means those holes become open doors that attackers can slip through without being detected. But when you stack enough slices together, including SBOMs, provenance checks, attestations, signed artifacts, data lineage, pinned digests, admission controls, network policies and runtime verifications, the holes rarely line up.

This layered defense creates a challenge for even the most determined attackers. To succeed, they must navigate through every hole in every slice all at once. Because of this, breaches become much harder to pull off, far riskier for attackers and significantly less likely to succeed.

The Swiss cheese model is more than just a metaphor, it’s a proven strategy for turning scattered vulnerabilities into a nearly impenetrable fortress.

What’s Hot on Infosecurity Magazine?