
Building Trustworthy AI: Cisco supports LASR’s Open Call for Secure Innovation
The Critical Need for AI Security
Artificial Intelligence is no longer a vision of the future. It’s a transformative force shaping industries, economies, and daily life. From real-time decision-making to autonomous systems, AI is becoming increasingly powerful and interconnected. However, this rapid evolution introduces new security challenges, including adversarial manipulation, memory poisoning, and vulnerabilities in data pipelines.
As AI systems grow more autonomous, traditional cybersecurity boundaries are being redefined. The rise of agentic AI – where intelligent systems act, collaborate, and learn with minimal human oversight – demands a proactive and resilient approach to security. To ensure trust and integrity in this new era, security must be embedded into agentic AI systems by design.
Introducing LASR: A Collaborative Approach to AI Security
The Laboratory for AI Security Research (LASR), backed by the UK Government, will address the growing and strategic security challenges posed by AI systems. This initiative unites the public and private sectors to conduct cutting-edge research and develop innovative AI security solutions supporting UK national security and economic growth.
As LASR’s first Technology Partner, Cisco is proud to play a pivotal role in advancing its mission to make AI security a first-class discipline. Cisco is excited to support LASR’s new Opportunity Call, a flagship initiative led by innovation partner Plexal to accelerate the development of secure and trustworthy AI solutions.
The LASR Opportunity Call: Empowering Startups to Tackle Agentic AI Security
The LASR Opportunity Call invites startups and SMEs at the forefront of AI and cybersecurity to join our growing ecosystem dedicated to secure AI innovation. Participants will gain access to funding, mentorship, a secure testbed for validation, and a workspace at the LASR Hub in London.
The programme focuses on three critical areas:
- Agent Trust and Discovery: developing secure methods for autonomous agents to find, authenticate, and collaborate.
- Infrastructure Hardening: securing the platforms and pipelines that build and run AI systems.
- Data Protection and Reasoning: safeguarding data integrity and privacy in AI workflows.
Applications are open now, with a deadline of July 9, 2025, and the programme will commence in August 2025.
Why This Matters
At Cisco, we believe that security is the foundation of responsible AI innovation. Supporting the LASR Opportunity Call is a step toward building a future where AI systems are not only powerful but also safe, transparent, and resilient. By fostering collaboration between startups, researchers, and industry leaders, we aim to create an ecosystem where innovation thrives without compromising security.
Cisco’s Role in LASR: Driving Innovation and Leadership
Cisco’s partnership with LASR showcases how science, innovation, and industry can come together to build resilience. Announced at CyberUK in May, our collaboration will span multiple dimensions, from sharing strategic insight and aligning investments to supporting innovation challenges and showcasing AI security capabilities. By engaging with startups, government stakeholders, and global audiences, Cisco is reinforcing its position as a trusted and value-driven technology leader.
A Call to Action
If you’re building solutions to secure AI systems, we want to hear from you. Join us in shaping the future of secure AI innovation.
Apply now: LASR Opportunity Call
Together, let’s build a future where AI systems are not only transformative but also trustworthy.