RSAC: Why Cybersecurity Professionals Have a Duty to Secure AI

Written by

Cybersecurity professionals have an urgent duty to secure AI tools, ensuring these technologies are only used for social good, was a strong message at the RSA Conference 2024.

AI bring enormous promise in the real-world setting, such as diagnosing health conditions faster and with more accuracy.

However, with the pace of innovation and adoption of AI accelerating at an unprecedented rate, security guardrails must be put in place early to ensure they deliver on their enormous promise was the call by many.

This has to be done with concepts like privacy and fairness in mind.

“We have a responsibility to create a safe and secure space for exploration,” emphasized Vasu Jakkal, corporate vice president, security, compliance, identity, and management at Microsoft, highlighted the.

Separately, Dan Hendrycks, founder of the Center for AI Safety, said there are an enormous amount of risks with AI, and these are societal as well as technical, given its growing influence and potential in the physical world.

“This is a broader social-technical problem than just a technical problem,” he stated.

Bruce Schneier, security technologist, researcher, and lecturer, Harvard Kennedy School, added: “Safety is now our safety, and that’s why we have to think about these things more broadly.”

Threats to AI Integrity

Employees are utilizing publicly available generative AI tools, such as ChatGPT for their work, a phenomenon Dan Lohrmann, CISO at Presidio referred to as “Bring Your Own AI.”

Mike Aiello, chief technology officer at Secureworks, told Infosecurity that he sees an analogy with when software-as-a-subscription (SaaS) services first emerged, which led to many employees throughout enterprises creating subscriptions.

“Organizations are seeing the same thing with AI usage, such as signing up for ChatGPT, and it’s a little bit uncontrolled in the enterprise,” he noted.

This trend is giving rise to numerous security and privacy concerns for businesses, such as sensitive company data being inputted into these models – which could make the information publicly available.

Other issues threaten the integrity of the outputs of AI tools. These include data poisoning, whereby the behavior of the models are changed either accidently or intentionally by altering the data they are trained on, and prompt injection attacks, in which AI models are manipulating into performing unintended actions.

Such issues threaten to undermine trust in AI technologies, causing issues like hallucinations and even bias and discrimination. This in turn could limit their usage, and potential to solve major societal issues.

AI is a Governance Issue

Experts speaking at the RSA Conference advocated that organizations treat AI tools like any other applications they need to secure.

Heather Adkins, vice president, security engineering at Google, noted that in essence AI systems are the same as other applications, with inputs and outputs.

“A lot of the techniques we have been developing over the past 30 years as an industry apply here as well,” she commented.

At the heart of securing AI systems is a robust system of risk management governance, according to Jakkal. She set out Microsoft’s three pillars for this:

  1. Discover: Understand what AI tools are used in your environment and how employees are using them
  2. Protect: Mitigate risk across the systems you have, and implement
  3. Governance: Compliance with regulatory and code of conduct policies, and training the workforce in using AI tools safely

Lohrmann emphasized that the first step for organizations to take is visibility of AI across their workforce. “You’ve got to know what’s happening before you can do something about it,” he told Infosecurity.

Secureworks’ Aiello also advocated keeping humans very much in the loop when entrusting work to AI models. While the firm uses tools for data analysis, its analysts will check this data, and provide feedback when issues like hallucinations occur, he explained.


We are at the early stages of understanding the true impact AI can have on society. For this potential to be realized, these systems must be underpinned by strong security, or else risk facing limits or even bans across organizations and countries.

Organizations are still grappling with the explosion of generative AI tools in the workplace and must move quickly to develop the policies and tools that can manage this usage safely and securely.

The cybersecurity industry’s approach to this issue today is likely to heavily influence AI’s future role.

What’s hot on Infosecurity Magazine?