Why Relying on AI for Automated Detection and Response Is Dangerous

One of the major use cases for artificial intelligence (AI) in cybersecurity is using the technology to automate threat detection and response. With so much hype around how AI can improve detection rates and replace security analysts in the investigation process (an especially attractive promise amid the Great Resignation and the ongoing cybersecurity skills shortage), many companies are going all-in on AI for threat detection and response. Yet, should they be?

AI in Threat Detection and Response: Reality or Pipe Dream?

For AI systems to work as intended, users need to train the technology by classifying data as either “good” or “bad.” This is feasible in many industries, but cybersecurity isn’t one of them. The reason is that, in a security investigation, data on its own doesn’t present a clear distinction between good and bad activity. You need the underlying context around that data to make the right classification – and AI cannot always decipher this context.
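
To make this concrete, here is a minimal sketch in Python of the kind of supervised “good vs. bad” training this approach assumes. The features, labels and the scikit-learn model are illustrative assumptions, not a real detection pipeline:

    # A minimal sketch of supervised "good vs. bad" training on security
    # events. Features and labels here are hypothetical illustrations.
    from sklearn.linear_model import LogisticRegression

    # Each event is reduced to raw, context-free features:
    # [failed_logins_last_hour, bytes_transferred_mb, off_hours (0/1)]
    events = [
        [0, 1, 0],    # labeled "good" (0)
        [1, 2, 0],    # labeled "good" (0)
        [8, 50, 1],   # labeled "bad" (1)
        [12, 80, 1],  # labeled "bad" (1)
    ]
    labels = [0, 0, 1, 1]

    model = LogisticRegression().fit(events, labels)

    # The model can only score what it sees: identical raw numbers always
    # produce the same verdict, regardless of who the user is, where they
    # logged in from, or what business process was running at the time.
    print(model.predict([[8, 50, 1]]))  # -> [1], flagged "bad"

The limitation is baked into the setup: everything the model will ever know has to be squeezed into those feature columns, and the context that decides a security case usually isn’t in them.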

For example, AI may be able to pull data showing failed user login attempts, but without broader context – the user’s identity, the business case, whether the login location and systems are typical for them, and so on – it’s impossible to know whether those failed attempts were benign or malicious. Meanwhile, the actual attack would pass undetected: an attacker who has already obtained the employee’s credentials, likely through means outside the organization’s purview, logs in successfully and never trips the failed-login alert. So while AI systems generate noisy alerts, the unpleasant reality is that even with the advances of AI, a human analyst is needed to review and triage this background information, make a proper assessment given the context and take the appropriate action.
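
As a rough illustration of that point (the field names, context values and triage rule below are all hypothetical), the same raw failed-login count can warrant opposite verdicts once context is attached:

    # Sketch: one raw event, opposite verdicts depending on context.
    # The context fields and thresholds are invented examples of what a
    # human analyst would gather during triage.
    event = {"user": "jsmith", "failed_logins": 8, "source_ip": "203.0.113.7"}

    def triage(event, context):
        """Toy triage rule: the raw count alone cannot decide; context can."""
        if event["failed_logins"] < 5:
            return "benign"
        if context["known_location"] and context["password_rotated_today"]:
            # e.g. a user mistyping a freshly changed password from the office
            return "benign"
        return "escalate to analyst"

    # Same raw event, two different contexts:
    print(triage(event, {"known_location": True,  "password_rotated_today": True}))
    # -> benign
    print(triage(event, {"known_location": False, "password_rotated_today": False}))
    # -> escalate to analyst

    # Note what never appears here: an attacker using valid stolen
    # credentials generates zero failed logins and no event at all.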

Because of this, I believe that hoping AI will suddenly detect an attack automatically, understand its complexity and then take an automated response action is an aspirational goal – not a reality. The truth is that AI technology just isn’t there yet when it comes to cybersecurity’s complexities. So, relying on it to automate threat detection and response – no matter how badly you want it to work – is unreliable and dangerous. Organizations that forge ahead anyway are rewarded with volumes of false positives that security teams must investigate, which greatly increases the risk of missing the actual threats that need to be acted on. Not to mention, this approach further overloads already overburdened security analysts suffering from alert fatigue.

Applying AI to Other Areas of Security Investigations

This isn’t to say that AI will never fulfill its promise to automate threat detection and response or that the concept of using AI in security investigations has failed. Rather, we simply need to rethink where we use AI in the process and what goals we set for it. The areas of security investigations that AI can positively impact today are extracting data from diverse data silos and then automating that data’s normalization, summarization and visualization to assist analysts.

Without AI, the typical data extraction process consists of analysts manually combing through endless logs and events to detect and pull data indicative of an anomaly or threat – an arduous, time-consuming process that requires resources and training today’s security teams just don’t have. Doing this across multiple data sources compounds the problem further. AI can be used to extract, correlate and bubble up entities of interest along with their contextual business interactions, so analysts can quickly review, triage, pivot and make decisions in a highly efficient manner. This means AI extracting entities of interest and their interactions from siloed, heterogeneous sources: authentication data, network activity, endpoint port and process information, user identity and application context, and threat information from public data sources.
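
A minimal sketch of that normalize-and-correlate step, assuming the invented log schemas shown below (real sources would each have their own formats and far more fields):

    # Sketch: normalize heterogeneous log records into one schema, then
    # correlate them per entity so an analyst sees a single summarized
    # timeline. All source formats and field names here are invented.
    from collections import defaultdict

    auth_logs = [{"user": "jsmith", "event": "login_failed", "ts": 1}]
    network_logs = [{"src_user": "jsmith", "dst": "10.0.0.5", "ts": 2}]
    endpoint_logs = [{"account": "jsmith", "process": "powershell.exe", "ts": 3}]

    def normalize(record, user_key, detail_keys):
        """Map a source-specific record onto a common (user, ts, detail) shape."""
        return {
            "user": record[user_key],
            "ts": record["ts"],
            "detail": {k: record[k] for k in detail_keys},
        }

    normalized = (
        [normalize(r, "user", ["event"]) for r in auth_logs]
        + [normalize(r, "src_user", ["dst"]) for r in network_logs]
        + [normalize(r, "account", ["process"]) for r in endpoint_logs]
    )

    # Correlate: bubble up everything tied to one entity, in time order,
    # so the analyst can review, triage, pivot and decide quickly.
    by_entity = defaultdict(list)
    for rec in sorted(normalized, key=lambda r: r["ts"]):
        by_entity[rec["user"]].append(rec["detail"])

    print(by_entity["jsmith"])
    # [{'event': 'login_failed'}, {'dst': '10.0.0.5'}, {'process': 'powershell.exe'}]

The drudgery – reconciling schemas, joining on entities, ordering events – is exactly the work AI and automation handle well; the judgment about what the resulting timeline means stays with the analyst.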

While the data sources and use cases may be the same, the key difference lies in how you use AI: is your cybersecurity tool trying to build you a self-driving car, or giving you a car with assistive controls that makes driving easier? Your cybersecurity solution should let you investigate freely, giving you a summary and visuals of the interacting entities while letting you drill down to the supporting raw data and easily pivot in whatever direction you want to investigate further. Used this way, AI will empower security analysts to conduct more accurate investigations and act on real threats faster, with a high degree of confidence in the outcomes.
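
One way to picture that “assistive controls” design is a summary that never hides its evidence. The structure below is a hypothetical sketch, not any particular product’s API:

    # Sketch: an "assistive" summary that keeps pointers to raw evidence.
    # Every summarized claim links back to the records that support it,
    # so the analyst can drill down or pivot at any time.
    summary = {
        "entity": "jsmith",
        "headline": "8 failed logins, then PowerShell launched",
        "supporting_raw": [
            {"source": "auth", "raw": "09:12 login_failed user=jsmith ip=203.0.113.7"},
            {"source": "endpoint", "raw": "09:15 proc=powershell.exe user=jsmith"},
        ],
        "pivots": ["all activity for 203.0.113.7", "jsmith's peer group"],
    }

    # The analyst, not the model, decides what happens next.
    for record in summary["supporting_raw"]:
        print(record["source"], "->", record["raw"])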

Maintaining the Human Element

AI in cybersecurity is still in its infancy, and we cannot expect it to run security operations fully. Rather, we need to think about it as “assistive AI,” where the technology is assisting – but not replacing – analysts. It’s this human and machine collaboration – with analysts remaining in the driver’s seat – that will enable companies to conduct simpler, faster and more accurate security investigations.
