How to Discover the Right AI Cybersecurity Tools for Your Security Strategy

The public release of OpenAI’s generative artificial intelligence (AI) tool ChatGPT in November 2022 ramped up the conversation about how these technologies can be used in cybersecurity – by both attackers and defenders.

In response, cybersecurity vendors have expanded their product offerings that partially or wholly leverage AI capabilities. A March 2024 report by Technavio estimated that the AI-based cybersecurity market grew by 19.5% from 2022 to 2023, and is forecast to increase by $28.29bn by 2027.

Meanwhile, a recent survey of 200 cybersecurity professionals by Infosecurity Europe found that 54% planned to integrate AI as part of their organization’s cybersecurity strategy in the next 12 months.

Harman Singh, Managing Consultant and Director at consultancy Cyphere, told Infosecurity: “Subsets of AI are evolving very fast as tech advances, with lots of new and exciting possibilities. There will be solutions in stealth that we may see later this year.”

While AI provides tremendous possibilities for cybersecurity teams, the enormous hype around the technology is making it challenging for organizations to leverage these tools effectively in their cybersecurity operations.

Ian Hill, Director of Information and Cyber Security at UPP, warned that while AI will be a game-changer in cybersecurity, in the current environment organizations could be drawn into wasting a lot of money on products that are not necessarily appropriate or effective for their business risk requirements.

“Nothing more than glorified automation tools”

“Everyone is jumping on the AI bandwagon because they see £/$ signs, so many vendors are trying to put an AI spin on existing, or as they’ll sell it, ‘updated’ products, that in some cases are nothing more than glorified automation tools,” he outlined.

It is crucial that cyber professionals and business leaders build a good understanding of the AI cybersecurity product market, and how to identify the specific solutions that will work for their organization.

The Importance of Utilizing AI in Cybersecurity

Where AI is Proving Most Impactful in Cybersecurity so Far

AI has been used in cybersecurity for many years now, with large language models (LLMs) adding to these capabilities since around 2022. With AI weaponized by cyber adversaries, it is essential these technologies are harnessed by defenders too.

Hanah Darley, Director of Threat Research at AI cybersecurity firm Darktrace, commented: “SOC teams need a growing arsenal of defensive AI to effectively protect an organization in the age of offensive AI.”

A major impact of AI in cybersecurity currently is in alleviating the workload of SOC teams, which is especially pertinent amid the rising cyber skills gap.

Singh explained: “SOC work is famous for tiring SOC analysts due to the amount of work required to detect, analyze, triage and remediate security issues. With AI automation at hand to remove repetitive tasks, it’s enabling SOC teams to focus on critical aspects and adding a boost to overall efficiency and quality of the work.”
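
As a simple illustration of the kind of repetitive work this automation can absorb, the sketch below deduplicates and ranks raw alerts before an analyst ever sees them. It is a minimal sketch only: the alert fields, severity weights and triage logic are illustrative assumptions, not any particular SOC platform’s schema.

    from collections import defaultdict

    # Illustrative severity weights -- real SOC platforms tune these per environment.
    SEVERITY_SCORES = {"low": 1, "medium": 3, "high": 7, "critical": 10}

    def triage(alerts):
        """Deduplicate raw alerts and rank them so repeated firings collapse
        into one line and the riskiest items surface first."""
        grouped = defaultdict(list)
        for alert in alerts:
            # Collapse repeats of the same rule firing on the same host.
            grouped[(alert["rule"], alert["host"])].append(alert)

        ranked = []
        for (rule, host), group in grouped.items():
            score = max(SEVERITY_SCORES[a["severity"]] for a in group)
            ranked.append({"rule": rule, "host": host, "count": len(group), "score": score})

        # Highest risk first; high repeat counts break ties.
        return sorted(ranked, key=lambda r: (r["score"], r["count"]), reverse=True)

    if __name__ == "__main__":
        sample = [
            {"rule": "failed_login", "host": "web-01", "severity": "medium"},
            {"rule": "failed_login", "host": "web-01", "severity": "medium"},
            {"rule": "malware_hash_match", "host": "hr-laptop-7", "severity": "critical"},
        ]
        for row in triage(sample):
            print(row)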

Well-trained AI models are also far more effective than humans in analyzing data, enhancing security teams’ ability to detect and even predict future attacks.

Indy Dhami, Financial Services Cyber Security Partner at KPMG UK, said: “Through the analysis of these patterns, predicting future threats can be modelled with greater confidence.” 

“Generative AI can also be used to simulate cyber-attacks, helping security firms understand network components, vulnerabilities and data flows, highlighting potential attack paths in a continuous manner to both protect and test system and data resilience,” he added.
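
To make the pattern-analysis idea concrete, here is a hedged sketch using scikit-learn’s IsolationForest to flag a login session that deviates from a learned baseline. The features, values and contamination setting are invented for illustration; production detection models are trained on far richer telemetry.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Illustrative features per login session: hour of day, data moved (KB),
    # and count of distinct resources accessed.
    rng = np.random.default_rng(seed=42)
    baseline_sessions = np.column_stack([
        rng.normal(10, 2, 500),    # mostly business-hours activity
        rng.normal(200, 50, 500),  # typical data volumes
        rng.normal(5, 2, 500),     # typical resource spread
    ])

    # Learn what "normal" looks like, allowing ~1% outliers in the training data.
    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(baseline_sessions)

    # A 3 a.m. session moving 5 MB across 40 resources deviates sharply from the baseline.
    suspicious_session = np.array([[3, 5000, 40]])
    print(model.predict(suspicious_session))  # [-1] marks an anomaly; [1] would be an inlier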

AI’s ability to rapidly identify and classify attacks can also significantly enhance security teams’ incident response capabilities. A March 2024 report by ReliaQuest found that AI and automation helped organizations respond to security incidents up to 99% faster in 2023 compared to 2022.

Generative AI, in particular, is offering new opportunities for improving the efficiencies and capabilities of security teams. LLM chatbots can assist in a variety of ways, as demonstrated by Microsoft’s Copilot for Security tool.

Copilot for Security was made generally available worldwide on April 1, 2024, following the conclusion of its early access program.

The LLM-powered tool is designed to assist security teams with a variety of functions, including classifying and responding to incidents, writing investigation reports, creating secure code and scripts, and analyzing an organization’s internal and external attack surface.

Dhami expects generative AI will also be ideally suited to enhancing cyber governance, including tracking compliance with existing security protocols and third-party risk management.

“For example, by automating due diligence, it is possible to monitor what is changing with every vendor in the supply chain and assess potential risks,” he noted.

Protecting Against the Risks of Generative AI

A number of solutions developed in the past year focus on mitigating the data security risks posed by generative AI tools when used for general operational purposes in organizations.

These risks include the accidental leakage of data such as company source code, a concern that led Samsung to ban its employees from using generative AI apps in the workplace in 2023.

Singh explained: “Sensitive data can be shared intentionally, where staff may choose not to sanitize data first, or unintentionally, such as via clipboard pasting or document summarization where sensitive data is included.”
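
As a minimal illustration of the “sanitize first” step Singh describes, the sketch below screens text for obviously sensitive strings before it is sent to an external generative AI tool. The regex patterns are illustrative assumptions, not a complete data loss prevention policy.

    import re

    # Illustrative patterns only -- a production DLP policy would be far broader.
    SENSITIVE_PATTERNS = {
        "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def screen_prompt(text: str) -> list[str]:
        """Return the names of any sensitive patterns found in text destined
        for an external generative AI tool."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

    prompt = "Summarize this: contact jane@corp.com, token sk-abc123def456ghi789"
    hits = screen_prompt(prompt)
    if hits:
        print(f"Blocked: prompt contains {', '.join(hits)}")  # Blocked: prompt contains api_key, email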

Another concern is vulnerabilities within generative AI tools themselves. For example, in March 2023, OpenAI provided details of a data breach caused by a bug in an open-source library, which exposed some customers’ payment-related information and allowed titles from some active users’ chat history to be viewed.

Several cybersecurity vendors have developed solutions to protect organizations against such dangers.

One example is Culture AI’s solution, which monitors and flags employees’ use of generative AI in the workplace to promptly identify instances of sensitive data being shared with these tools, as well as offering real-time education on using such tools safely.

Additionally, AI Security Labs recently launched Mindguard, a solution designed to help engineers evaluate the cyber risks to AI systems, such as ChatGPT, including machine learning attacks.

How to Cut Through a Noisy AI Security Market

Adding AI to Cybersecurity Not a Silver Bullet

While AI is a crucial weapon for cybersecurity professionals, it should not be viewed as a silver bullet for tackling growing cyber-threats. In addition, organizations must be selective about the type of AI tool they decide to employ.

Chris Stouff, Chief Security Officer at Armor, believes that organizations should be very cautious about using standalone AI solutions. This is because some AI tools are unreliable if the data they are trained on is tainted and/or contains biases.

“AI lacks the ability to contextualize. It doesn’t have human-like situational awareness, judgment, or prioritization abilities. It doesn’t understand the nuances of the wider environment it’s being used in, the industry or market context,” he warned.

Darktrace’s Darley acknowledged that there is currently a trend of “AI-washing,” with many businesses applying AI to systems and solutions that simply are not suited to them.

“Even if AI is used, it may not be the right AI for the right problem, leading to gaps in the efficacy of the solution,” she noted.

It’s important to recognize that the term ‘AI’ can cover a range of technologies, from generative AI chatbot tools like ChatGPT to automation and machine learning capabilities.

“The right AI technique must be applied to the right challenge”

“Generative AI is just one type of AI – the right AI technique must be applied to the right challenge,” explained Darley.

“Business leaders must invest the time to understand the different types of AI in their stack and ensure investments in the right types of AI are being applied to the right use cases,” she added.

Hill expressed concern that in some cases, AI-based products have only served to put prices up.

“I have implemented AI-based security tools, one being a famous and well-publicized early adopter, that for me just didn’t seem to bring anything I couldn’t already achieve with existing tools for a lot less money, and that didn’t live up to all its hype,” he outlined.

How to Find the Right AI Tool

At a strategic level, AI-based cyber tools are no different from other solutions, and the fundamental risks remain unchanged.

“While AI may enhance certain aspects of cybersecurity, it won't replace the need for a comprehensive risk strategy or the expertise of internal IT security teams and external SOCs,” said Stouff.

It is important to recognize that as with other solutions, AI tools have a business purpose, and that purpose is to protect the business.

Hill observed: “Too many take a ‘bottom up’ solutionized approach to cybersecurity, rather than a ‘top down’ risk approach, aligned to the business goals and objectives.”

Additionally, AI-based tools are not effective in isolation, and need to be integrated effectively into existing capabilities.

Hill urged CISOs to take a holistic view of their entire security posture before implementing a new AI-based tool, ensuring they consider the following questions:

  1. Is this tool actually needed?
  2. Where would it fit within the existing strategy and capability?
  3. What would need to change for it to be effective?

Organizations must also ask vendors the right questions to ensure the solution is effective, safe, and a good fit for the problem they are trying to solve, according to Darley.

“To cut through the noise, security leaders should be asking questions about which specific AI techniques are used, how the organization mitigates risks of data poisoning and model tampering, and how they perform quality assurance on their models to ensure valid, unbiased outputs,” she stated.

These questions should be ingrained within a stringent buying process that enables organizations to make an informed choice.

Singh advised: “Just like every network, every organization is unique. Leaders should be making data-driven purchasing decisions to separate the wheat from the chaff. This should include asking for capabilities and not believing buzzwords, demanding measurable results, considering integration and expertise, and focusing on ROI.”

Conclusion

AI offers enormous potential in cybersecurity now and in the future. With cybercriminals leveraging these technologies to increase the volume and sophistication of attacks, it is even more essential that defenders also utilize AI in response.

However, the availability of generative AI tools has ramped up the hype cycle around AI, creating a lot of noise about how it should be employed in security teams. Security and business leaders must learn how to cut through the noise and develop the processes to make appropriate purchasing decisions for AI-based tools.
