Cisco UK & Ireland Blog

Putting secure AI firmly on the agenda


March 4, 2024


Over the past month, I’ve had the chance to speak to organisations, both public and private, that consistently raise the same challenge: the rise in AI usage and the potential security risks it poses.

Having attended several industry events where conversations led to the same concerns, I wanted to share some perspectives on how fast we have been moving as an industry, and to set the context in which organisations now need to be thinking about their digital security posture.

As Jeff Campbell, Cisco’s SVP & Chief Government Strategy Officer, recently put it, “Tech companies and governments must unite behind cybersecurity and join forces to develop advanced AI detection systems, ensuring a safer online environment”.

There is a clear case for AI to be harnessed at scale to finally tip the balance in favour of defenders over cyber threat actors. However, to bring this to life, more work must be done to counter the rising threat that AI-enabled disinformation poses to people, companies, and society.

What the data tells us

Advances in AI technology make it faster, easier, and cheaper than ever to manipulate and abuse digital content with the aim of misleading and deceiving on a massive scale. This is an area where those developing, using, and regulating the technology all have an important role to play if we hope to realise the potential benefits of AI while effectively managing the new risks it inevitably introduces.

Cisco’s Cybersecurity Readiness Index revealed that only 15% of organisations are in a mature state of readiness to remain resilient when faced with a cybersecurity threat, and just 22% are in a mature state of readiness to protect data. While it’s clear that the pressure is on to leverage AI capabilities, the recent Cisco AI Readiness Index showed that 86% of organisations around the world are not fully prepared to integrate AI into their businesses.

This year, we will see organisations take considerable strides to address these dual challenges. They will focus their attention on developing systems to reliably detect AI and mitigate the associated risks.

In her 2024 tech predictions, Cisco Chief Strategy Officer and GM of Applications Liz Centoni summed it up: “Inclusive new AI solutions will guard against cloned voices, deepfakes, social media bots, and influence campaigns. AI models will be trained on large datasets for better accuracy and effectiveness. New mechanisms for authentication and provenance will promote transparency and accountability.”

To date, detecting AI-generated written content has proven stubbornly difficult. AI detection tools have managed only low levels of accuracy, often misclassifying AI-generated content as human-written while returning false positives for genuinely human-written text. This has obvious implications for those in areas that may disallow AI (more on this later).

One such example is in schools, where students may be penalised if the content they have personally written ‘fails’ an AI detector’s algorithm.

To strengthen their guard against AI-based subversion, we can expect tech companies to invest further in this area—improving the detection of all forms of AI output. This may take the form of developing mechanisms for content authentication and provenance, allowing users to verify the authenticity and source of AI-generated content.
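As a rough illustration of what content authentication and provenance can look like in practice, here is a minimal sketch in Python (standard library only) that signs a piece of content together with its origin metadata so a consumer can later verify that neither has been altered. The field names and the shared-key signing are illustrative assumptions, not a description of any Cisco mechanism; production schemes such as C2PA use public-key signatures and richer metadata.

    import hashlib
    import hmac
    import json

    # Illustrative shared key; real provenance schemes use asymmetric key pairs.
    SIGNING_KEY = b"publisher-secret-key"

    def sign_content(content: str, source: str, created_at: str) -> dict:
        """Bundle content with origin metadata and an integrity signature."""
        record = {"content": content, "source": source, "created_at": created_at}
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify_content(record: dict) -> bool:
        """Recompute the signature; any edit to content or metadata invalidates it."""
        payload = json.dumps(
            {k: v for k, v in record.items() if k != "signature"},
            sort_keys=True,
        ).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(record.get("signature", ""), expected)

    signed = sign_content("Quarterly results statement...",
                          source="newsroom.example.com",
                          created_at="2024-03-04T09:00:00Z")
    print(verify_content(signed))             # True
    signed["content"] = "A fabricated claim"  # tampering
    print(verify_content(signed))             # False

The principle is the same at any scale: the origin travels with the content, and tampering becomes detectable.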

Underpinning all of this is data. As my colleague Vijoy Pandey, SVP and GM for @OutshiftbyCisco, put it in a piece he recently wrote, “Data is the backbone and differentiator”.

AI has been, and will continue to be, front-page news in the year to come, and that means data will also be in the spotlight. Whilst data is the backbone and the differentiator for AI, it is also the area where readiness is weakest.

The Cisco AI Readiness Index revealed that 81% of all organisations report some degree of siloed or fragmented data. This poses a critical challenge due to the complexity of integrating data held in different places and formats.

While siloed data has long been understood as a barrier to information sharing, collaboration, and holistic insight and decision making in the enterprise, AI adds a new dimension. As data complexity rises, it becomes harder to coordinate workflows and keep them synchronised and efficient. Leveraging data across silos will also require data lineage tracking, so that only approved and relevant data is used and AI model output can be explained and traced back to its training data.
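To make the lineage idea concrete, here is a minimal, hypothetical sketch (Python, standard library only) of the record-keeping involved: each training dataset carries an approval flag and a content hash, and a model’s lineage records exactly which approved datasets it was trained on, so its output can later be explained and traced back to them. The class and field names are illustrative, not taken from any particular product.

    import hashlib
    from dataclasses import dataclass, field

    @dataclass
    class Dataset:
        name: str
        owner: str
        approved: bool
        content: bytes

        def fingerprint(self) -> str:
            # A hash proves the data has not changed since it was approved.
            return hashlib.sha256(self.content).hexdigest()

    @dataclass
    class ModelLineage:
        model_name: str
        training_sets: list = field(default_factory=list)

        def add_training_set(self, ds: Dataset) -> None:
            if not ds.approved:
                raise ValueError(f"{ds.name} is not approved for training")
            self.training_sets.append(
                {"dataset": ds.name, "owner": ds.owner, "sha256": ds.fingerprint()}
            )

        def explain(self) -> str:
            sources = ", ".join(t["dataset"] for t in self.training_sets)
            return f"{self.model_name} was trained on: {sources}"

    crm = Dataset("crm_export_q1", owner="sales-ops", approved=True, content=b"...")
    lineage = ModelLineage("churn-predictor-v2")
    lineage.add_training_set(crm)
    print(lineage.explain())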

To address this issue, businesses will turn more and more to AI in the coming year as they look to unite siloed data, improve productivity, and streamline operations. In fact, this may be the year we look back on as the beginning of the end of data silos.

Moving Security from Complex to Conversational

There’s a lot to get started with today. Whilst the data problem is being solved, an area that is starting to receive more attention is the opportunity of AI Assistants. Imagine if everyone working in Security Operations (SecOps) had a sidekick to help them stay on top of things, make them more productive, and ultimately help them thwart issues faster. That’s exactly the challenge Jeetu Patel, EVP & GM of Cisco’s Security and Collaboration Business Units, posed.

And that’s exactly what our engineering teams have been hard at work on. We’ve invested heavily in cutting-edge AI innovations that will augment security staff by simplifying operations and increasing efficacy. If we can simplify tasks like policy management for SecOps, we can directly improve threat response.

Creating and managing effective security policy is an often extremely complex but critical part of cybersecurity hygiene. There is little room for error, and the process of making even simple edits that won’t interfere with or override previous rules is time consuming and technically challenging. The volume, inconsistency, and complexity of maintaining all these policies across all these systems create significant risk and open the door to attacks.
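To show why even a ‘simple’ edit is hard to get right by hand, here is a deliberately simplified, hypothetical sketch of one check an assistant can automate: before a new firewall rule is added, it is compared against existing rules to see whether an earlier rule already shadows it, meaning the new rule would never take effect under first-match evaluation. Real firewall policy models are far richer; the rule fields here are illustrative only.

    from dataclasses import dataclass
    from ipaddress import ip_network
    from typing import Optional

    @dataclass
    class Rule:
        action: str            # "allow" or "deny"
        src: str               # source network in CIDR notation
        dst: str               # destination network in CIDR notation
        port: Optional[int]    # None means any port

    def shadows(earlier: Rule, later: Rule) -> bool:
        """True if `earlier` matches everything `later` matches, so `later`
        can never fire when rules are evaluated first-match."""
        src_covered = ip_network(later.src).subnet_of(ip_network(earlier.src))
        dst_covered = ip_network(later.dst).subnet_of(ip_network(earlier.dst))
        port_covered = earlier.port is None or earlier.port == later.port
        return src_covered and dst_covered and port_covered

    policy = [Rule("deny", "10.0.0.0/8", "0.0.0.0/0", None)]
    proposed = Rule("allow", "10.1.2.0/24", "203.0.113.0/24", 443)

    for existing in policy:
        if shadows(existing, proposed):
            print(f"Proposed rule would be shadowed by: {existing}")

Multiply this by thousands of rules across dozens of enforcement points and the value of automating the analysis becomes obvious.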

Our latest Generative AI Assistant addresses these problems by enabling security and IT administrators to describe granular security policies and evaluate how to best implement them across different aspects of their security infrastructure.

The AI Assistant can reason with the existing firewall policy set to implement and simplify rules within the Cisco Secure Firewall Management Center. It is the first of many examples of how generative AI can reimagine the way we manage our security posture.

Augmenting Analysts with Machine Speed and Scale

Threat detection and response is another complex, high-stakes responsibility of the security operations team, where time is of the essence and analysts must rapidly build an understanding of complex systems at machine scale.

Our AI Security Operations Centre (SOC) Assistant will augment security analysts with the context to make the right decisions at the right time. The SOC Assistant will provide a comprehensive situation analysis for analysts, correlating intel across the Cisco Security Cloud platform, relaying potential impacts, and providing recommended actions, significantly reducing the time SOC teams need to respond to potential threats.
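As a hypothetical illustration of what ‘correlating intel’ means in practice, the sketch below groups alerts from different telemetry sources by the asset they touch, ranks the resulting incidents by combined severity, and suggests a next step. The alert fields, sources, and thresholds are made up; an actual SOC assistant works over far richer data and uses models rather than hard-coded rules.

    from collections import defaultdict

    alerts = [
        {"source": "email_gateway", "asset": "host-17", "severity": 3,
         "detail": "phishing attachment opened"},
        {"source": "endpoint",      "asset": "host-17", "severity": 7,
         "detail": "suspicious PowerShell execution"},
        {"source": "network",       "asset": "host-42", "severity": 2,
         "detail": "port scan observed"},
    ]

    # Correlate: group alerts that touch the same asset.
    by_asset = defaultdict(list)
    for alert in alerts:
        by_asset[alert["asset"]].append(alert)

    # Summarise each incident, highest combined severity first.
    incidents = sorted(by_asset.items(),
                       key=lambda kv: sum(a["severity"] for a in kv[1]),
                       reverse=True)

    for asset, related in incidents:
        score = sum(a["severity"] for a in related)
        print(f"{asset}: {len(related)} related alerts, combined severity {score}")
        for a in related:
            print(f"  [{a['source']}] {a['detail']}")
        action = "isolate host and open an investigation" if score >= 8 else "monitor"
        print(f"  Recommended action: {action}")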

How do I know who is using what and when?

We know the usage of Generative AI applications will only increase. Helping organisations use them safely within their environments to increase employee productivity, without adding security risk, has become a priority as organisations look to harness the potential benefits.

But who is using what, and how? How do you manage the potential data loss or intellectual property loss that could occur through misuse? We can now help organisations with AI Data Loss Prevention (DLP) functionality and secure the use of Generative AI applications via discovery, block/allow controls, granular control, and inline data loss prevention.

Now you can:

  • Discover and control the use of 70+ Generative AI apps, including Bing AI, Google Gemini, and ChatGPT: who is trying to use them, how frequently, and where.
  • Block or allow multiple Generative AI applications.
  • Enable the safe use of ChatGPT:
    • Granularly control which functions are allowed, and for whom.
    • Use DLP to ensure sensitive data is not leaked to the AI platform.
    • Use DLP to block the download of unsafe content from ChatGPT and notify the user.
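To make the control model concrete, here is a minimal, hypothetical sketch of an inline check: a policy maps each Generative AI app to an allow or block decision and a set of permitted functions, and prompts are scanned for sensitive patterns before they are allowed through. The app names, patterns, and policy structure are illustrative; this is not the configuration syntax of any Cisco product.

    import re

    # Illustrative policy: per-app decision plus the functions users may call.
    POLICY = {
        "ChatGPT":       {"decision": "allow", "functions": {"chat"}},
        "Google Gemini": {"decision": "allow", "functions": {"chat", "summarise"}},
        "UnvettedAIApp": {"decision": "block", "functions": set()},
    }

    # Simple sensitive-data patterns; real DLP uses far richer classifiers.
    SENSITIVE = {
        "payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "API key":      re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    }

    def check_request(app: str, function: str, prompt: str) -> str:
        rule = POLICY.get(app, {"decision": "block", "functions": set()})
        if rule["decision"] == "block":
            return f"blocked: {app} is not an approved application"
        if function not in rule["functions"]:
            return f"blocked: '{function}' is not permitted for {app}"
        for label, pattern in SENSITIVE.items():
            if pattern.search(prompt):
                return f"blocked: prompt appears to contain {label} data"
        return "allowed"

    print(check_request("ChatGPT", "chat", "Summarise our public press release"))
    print(check_request("ChatGPT", "chat", "Customer card 4111 1111 1111 1111"))
    print(check_request("UnvettedAIApp", "chat", "hello"))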

Policy management is done via a single, unified dashboard, so while it strengthens security, it also keeps things simple for the SOC and IT teams.

Our Stance: Responsible AI is Non-Negotiable

When it comes to AI, trust is paramount. Ultimately, our customers trust us with their data because we view data privacy as a fundamental human right. That’s why we built governance tools that measure our data management, data provenance (where data originated and its movement), and how it’s being leveraged in the models.

None of what I have written above matters if there is a lack of transparency, because that leaves the door open for privacy loss, algorithm bias, and data manipulation. Any organisation using AI should be asking the questions: “What data sets are you training your AI on?” and “Does any of my data become public domain because of your use of AI?”

But with governance in place, there is a path forward for every organisation. As previously mentioned, at Cisco, we have both Responsible AI Principles and a Framework to guide our approach. We have also developed a Generative AI Policy on the acceptable use of these tools. Before we allow the use of GenAI tools with confidential information, we conduct an AI Impact Assessment to identify and manage AI-specific risks. Once we’ve validated that a tool sufficiently protects our confidential information and we’re comfortable with the security and privacy protections in place, the tool is opened for employees to explore and innovate further.

We really are just getting started with the impact AI will have on each one of us. We need to approach this new technology with excitement and humility – there’s so much we don’t yet know. New concerns are being raised every day. Organisations and governments will need to be agile and adaptable to changing regulations, consumer concerns, and evolving risks.

More than ever before, there will need to be a strong partnership across public and private sectors. AI has tremendous potential for good, but it will take industry, government, developers, deployers, and users all working together to promote responsible innovation without compromise to privacy, security, human rights, and safety.

Here’s to a year where we see Secure AI become something we all put on the agenda.

Let me know your thoughts and, until next time, let’s all push for Secure AI!

@chintancsco
