
How AI and Machine Learning Will Win or Lose the War in Cyber

The headlines are filled with fear-inducing reports of massive data losses across almost every industry, yet every security vendor proclaims that the golden age of AI and machine learning has arrived to defend our corporate networks. How can we reconcile these competing narratives?

The constant drumbeat of news stories about breaches at companies with talented, diligent security organizations shows that adversaries hold the advantage today. A successful adversary campaign need only find a single flaw in an enterprise defense, while security teams must contend with the growing complexity of ever more instrumentation, tools, data and alerts, all pushed as the only way to protect against threats and detect successful intrusions.

AI and machine learning are preached as groundbreaking technologies that will turn the tide, but in reality they can exacerbate existing problems and perpetuate the disadvantaged posture of today's security teams. Three common AI weaknesses can degrade defenses:

Weakness #1: Increasing the Noise
Rampant implementation of AI to detect additional problems leads to an increase in alerts that security teams must add to workloads that are already maxed out. It is easy to build models that detect new potential threats, indicators of compromise or anomalous behaviors.

On the surface, it appears that these provide additional security, but in reality, this just generates more false positives that distract overburdened security operations teams from seeing real threats.
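The noise problem above is, at its core, base-rate arithmetic: when real intrusions are rare, even a seemingly accurate detector buries the true alerts under false ones. The following sketch uses purely illustrative numbers (event volume, rates and attack prevalence are assumptions, not measurements from any real deployment):

```python
# Base-rate arithmetic: a detector that is "99% accurate" still floods
# analysts with false positives when malicious events are rare.
# All figures are illustrative assumptions.

events_per_day = 1_000_000      # assumed events scored per day
true_positive_rate = 0.99       # detector catches 99% of real attacks
false_positive_rate = 0.01      # and mislabels 1% of benign events
malicious_fraction = 0.00001    # assume 1 in 100,000 events is malicious

malicious = events_per_day * malicious_fraction
benign = events_per_day - malicious

true_alerts = malicious * true_positive_rate
false_alerts = benign * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"Alerts per day: {true_alerts + false_alerts:,.0f}")
print(f"Fraction of alerts that are real: {precision:.2%}")
```

Under these assumptions, roughly ten thousand alerts fire daily and well under one percent of them represent real threats, which is exactly the distraction the paragraph above describes.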

Weakness #2: Static Understanding
While AI is sold as intelligence that detects sophisticated new patterns, most AI systems actually only provide a moderate extension beyond previous rule and signature-based approaches. AI is only as powerful as the data it is provided, and most implementations of AI distribute generic models that don’t understand the networks they are deployed to and are easy for adversaries to evade.

When pattern detection is static across time and networks, adversaries can profile the detections and easily update tools and tactics to avoid the defenses in place.

Weakness #3: Black Box Results
Most AI systems today produce arbitrary scores and don’t explain why. This leads to a breakdown in trust and understanding with the humans that need to consume and act on the results. When AI isn’t able to support “sophisticated” detections with explanations that security analysts can understand, this adds to the cognitive load of the analyst, rather than making them more efficient and effective.

How AI Can Win the War in Cyber
AI and machine learning can be powerful tools in improving enterprise defenses, but success requires a strategic approach that avoids the weaknesses of most of today’s implementations. There are three key strategies that will amplify the ability of security teams to work with AI, rather than adding to their problems.

Strategy #1: Focus on Adversary Objectives
AI systems are only as good as the goal they are asked to achieve. An effective system requires an ambitious goal that reduces the workload of the security team and automates investigation with a focus on the full adversary objective.

Rather than detecting ancillary artifacts of adversary activity such as the tool used or the tactic employed, AI systems that uncover the core behaviors an adversary has difficulty avoiding will present security teams with a small number of true business risks to investigate. Effective solutions should have very low false positive rates, generating fewer than 10 high-priority investigations per week, not the hundreds or thousands of events produced by current approaches.

Strategy #2: Adaptive Understanding
A focus on core adversary objectives forces adversaries to evolve and shift their approach to better hide in the environments they attack. Adversaries traditionally have the advantage because they can profile an environment and avoid the detections in place. AI systems can gain the advantage by understanding the environment better than the adversary.

A system that understands the specifics of an environment can identify unusual behaviors with context that adversaries could only gain with complete access to the full (and constantly updating) internal data feeds that the AI system is provided to learn with.
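One minimal way to picture an environment-specific baseline is a detector that learns each host's own history rather than applying one static rule everywhere. The data, host names and thresholds below are hypothetical, and real systems would use far richer behavioral models; this is only a sketch of the adaptive principle:

```python
# Sketch of a per-host adaptive baseline: "unusual" is defined relative to
# each host's own observed history, not a generic, static signature.
from collections import defaultdict
from statistics import mean, stdev

class PerHostBaseline:
    """Flags activity only when it deviates from that host's own history."""

    def __init__(self, min_history=5, z_threshold=3.0):
        self.history = defaultdict(list)   # host -> past observations
        self.min_history = min_history
        self.z_threshold = z_threshold

    def observe(self, host, value):
        """Record an observation; return True if anomalous for this host."""
        past = self.history[host]
        anomalous = False
        if len(past) >= self.min_history and stdev(past) > 0:
            z = (value - mean(past)) / stdev(past)
            anomalous = z > self.z_threshold
        past.append(value)                 # the baseline keeps adapting
        return anomalous

# Hypothetical daily outbound byte counts (in MB) for one server:
baseline = PerHostBaseline()
for day in [100, 110, 95, 105, 102, 98]:
    baseline.observe("db-server-01", day)
print(baseline.observe("db-server-01", 5000))  # exfil-sized spike -> True
```

Because the baseline is rebuilt continuously from the environment's own data, an adversary would need the same complete, constantly updating view of the network to predict what the defense considers normal.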

Strategy #3: Interpretable and Actionable Results
AI systems must provide results that automate typical analyst workloads and explain those results in a way that builds trust and, over time, accelerates the skill and experience development of the humans who use AI tools.

The talent shortage security teams face today means that AI tools must help fill skills gaps with automation but then also provide interpretability and situational awareness that will help grow the skills of security teams while also making day-to-day operations more efficient and impactful.
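The interpretability idea above can be sketched as a scorer that returns evidence alongside its number, rather than a bare black-box score. The risk factors and weights here are invented for illustration; a real system would derive explanations from the model itself:

```python
# Hedged sketch: a risk score accompanied by the per-factor evidence
# behind it, so an analyst can see *why* an alert fired.

def score_with_explanation(observed_factors):
    """Return a risk score plus the human-readable evidence behind it."""
    # Hypothetical weights for observed behaviors (illustrative only).
    weights = {
        "new_external_destination": 0.3,
        "off_hours_activity": 0.2,
        "large_outbound_transfer": 0.5,
    }
    contributions = {f: weights[f] for f in observed_factors if f in weights}
    score = sum(contributions.values())
    explanation = ", ".join(f"{f} (+{w:.1f})" for f, w in contributions.items())
    return score, explanation

score, why = score_with_explanation(
    ["large_outbound_transfer", "off_hours_activity"]
)
print(f"risk={score:.1f} because: {why}")
```

An alert framed this way doubles as a teaching artifact: a junior analyst sees which behaviors drove the score, which is precisely the situational awareness the paragraph above calls for.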

Gaining the Advantage
Basic AI and machine learning technology has reached a maturity that is providing unprecedented results across a rapidly expanding set of problems facing enterprises today. Organizations have an opportunity to gain the advantage against cyber adversaries by strategically deploying this technology with an approach that leaves behind the deafening noise generated by today’s security technologies.

Instead, a strategic focus on the key business risks with adaptive and transparent results will empower security teams to act effectively against the increasingly aggressive attacks they face.
