Cisco UK & Ireland Blog

Securing the Future: Moving from AI Supply Chain Visibility to Decisive Action

This is a guest post from James Dunne, Director, Defence & National Security, Cisco UK & Ireland.

As the UK cyber community convenes in Glasgow for CYBERUK 2026, the conversation has shifted from the potential of Artificial Intelligence to the practical challenge of securing it at scale. Our white paper, “AI Supply Chain Security: From Visibility to Action,” developed in collaboration with Plexal and HMG partners as part of the Laboratory for AI Security Research (LASR) programme, addresses part of this challenge.

As AI adoption accelerates across the public sector and critical national infrastructure, the security of the underlying AI supply chain has become a defining challenge.

The Challenge: Why Traditional Security Isn’t Enough 

For decades, we have secured software by auditing code and managing known dependencies. However, AI systems operate on a fundamentally different paradigm. In traditional systems, software consumes data; in AI, data defines the system itself.  

The white paper highlights that AI supply chains are far more complex and fragmented than their traditional counterparts. They span global data providers, model developers, cloud infrastructure, and orchestration platforms. This structural fragmentation creates “blind spots” where responsibility is unclear, and data provenance is difficult to trace.  

When we rely on static transparency mechanisms such as traditional Software Bills of Materials (SBOMs) or model cards, we are often left with a snapshot of a system that has already changed. In an era where models are updated frequently and agents interact dynamically with third-party tools, a pre-deployment assessment is often outdated by the time it is completed. 
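To make the staleness problem concrete, here is a minimal, illustrative sketch (the component names, contents, and manifest format are hypothetical, not any real SBOM standard): a snapshot of component hashes taken at assessment time no longer describes the live system after a silent model update, which is exactly the drift that continuous validation is meant to catch.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content hash of a supply-chain component (weights, dataset shard, config)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical components of an AI system at assessment time.
components = {
    "model_weights": b"weights-v1",
    "training_manifest": b"dataset-snapshot-2025-01",
    "tool_config": b"allowed-tools: search,calculator",
}

# A static SBOM-style snapshot records the hashes once, pre-deployment.
sbom_snapshot = {name: fingerprint(blob) for name, blob in components.items()}

# Later, the live system has drifted: the model was silently updated.
components["model_weights"] = b"weights-v2"

def verify(snapshot: dict, live: dict) -> list:
    """Continuous validation: re-hash live components and report any drift."""
    return [name for name, blob in live.items()
            if fingerprint(blob) != snapshot[name]]

drifted = verify(sbom_snapshot, components)
print(drifted)  # ['model_weights'] — the static snapshot is already out of date
```

A one-off assessment is equivalent to recording `sbom_snapshot` and never calling `verify` again; the continuous-validation posture the paper argues for amounts to running the check on every change.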

Key Insights: From Visibility to Insight 

The paper identifies four critical dimensions that define the AI supply chain security challenge: 

  1. Data as System Definition: Because model behaviour emerges from training data, data governance is now a core supply chain risk. Without granular lineage tracking, organisations cannot easily distinguish between benign data quality issues and deliberate poisoning.
  2. Expanded Attack Surface: AI systems involve a multi-layered architecture, from compute and storage to runtime environments, that traditional software security tools were not designed to monitor.
  3. Structural Fragmentation: The sheer number of parties involved in an AI lifecycle means that no single entity holds a complete view, creating accountability gaps when incidents occur.
  4. Weak Attribution Mechanisms: Unlike software, where we can trace vulnerabilities back to specific code commits, AI models often lack the robust attribution needed to identify the source of a compromise, making remediation a complex, often manual, process.

These factors compound to create systemic vulnerabilities. As the paper explains:  

“Organisations cannot secure what they cannot see, and current transparency mechanisms do not enable action.” 

What This Means for Organisations 

For leaders in the public sector and for those responsible for critical infrastructure more broadly, the stakes are clear. Sovereign AI capability depends on more than just hosting resources domestically. It requires genuine operational control: the ability to verify vendor claims, trace component lineage, and maintain service continuity when external dependencies fail. 

If we treat transparency as a “check-the-box” compliance exercise, we remain reactive. To build resilient AI, we must shift our focus to continuous validation and enforceable remediation pathways. We need to be able to detect anomalies in real-time and, when necessary, block, isolate, or re-route workloads to ensure that a failure in one part of the ecosystem does not cascade into a national-level disruption. 

Cisco’s Commitment: Enabling Secure AI at Scale 

At Cisco, we believe that security should be an enabler of innovation, not a barrier. We help organisations bridge the gap between understanding their supply chain and taking decisive action. 

We are focused on providing the tools that allow for real-time monitoring of infrastructure dependencies and agentic behaviours. By moving toward automated, continuous validation, we help our customers shrink the window in which a compromise can spread undetected, ensuring that even as AI systems evolve, the underlying infrastructure remains secure, resilient, and trustworthy. We view our role as providing the operational clarity that procurement and security teams need to make informed, safe decisions in an increasingly complex landscape.

A Collaborative Path Forward 

This white paper is the result of a year-long collaboration with the Laboratory for AI Security Research (LASR). This partnership reflects the practical, systems-level thinking required to support the UK’s AI Opportunities Action Plan. By bringing together the best minds from academia, industry, and government, we are working to ensure that the rapid deployment of AI across the public sector is matched by a robust, secure, and sovereign-ready foundation. 

Conclusion: Take the Next Step 

The transition from visibility to action is the most important journey an organisation can take in the AI era. We invite you to read the full report to understand how these challenges apply to your specific environment, and to get in touch to see how you can begin building, with Cisco, the operational controls necessary for a secure AI future.

Download the White Paper: AI Supply Chain Security: From Visibility to Action 

This work was supported by the Laboratory for AI Security Research (LASR). The views expressed in this paper are those of the authors and do not necessarily reflect the position of LASR or His Majesty’s Government. 
