#HowTo: Vet AI Threat Detection Solutions for Stronger Cybersecurity

Traditional indicator-based cybersecurity approaches look at the "how" of an attack. This means scouring a network for specific hashes, patterns and chains of code execution that point to particular exploits.
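
To make the contrast concrete, here is a minimal sketch of what indicator matching boils down to: comparing observed artifacts against a known-bad list. The hash and family name below are placeholders, not real indicators.

```python
import hashlib

# Hypothetical indicator feed: known-bad SHA-256 hashes mapped to the
# exploit or malware family they identify. The entry below is just the
# hash of an empty file, used as a placeholder.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": "ExampleDropper.A",
}

def match_indicator(path: str) -> str | None:
    """Return the indicator name if the file's SHA-256 is on the known-bad list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return KNOWN_BAD_HASHES.get(digest)
```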

In contrast, a behavioral analytics-based approach looks at motives rather than tactics. It cuts closer to the heart of an attack, leaving fewer unanswered questions about what the hackers are trying to achieve. It is also more effective against unknown threats, such as custom-built malware that can lurk undetected by traditional solutions for years.
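
A minimal sketch of the underlying idea, assuming a single numeric behavior metric per user; production engines correlate many signals, but the baseline-deviation logic looks something like this (the example figures are invented):

```python
from statistics import mean, stdev

def deviates_from_baseline(history: list[float], observed: float,
                           threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    from this user's own historical baseline (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Invented example: a user's nightly transfer volumes (MB), then a spike.
history = [40.0, 55.0, 48.0, 52.0, 45.0]
print(deviates_from_baseline(history, 5000.0))  # True -> worth investigating
```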

Adopting a behavioral approach also requires a shift in thinking, especially around how to evaluate solutions. The standard toolkit of scenarios to run and tests to administer is unlikely to capture the biggest benefit of behavioral analytics security solutions: their ability to deal with the unusual and the unanticipated.

Rather than applying a simple comparative matrix, CIOs need a more in-depth look at the technology itself to understand the true value of an AI-based threat detection system. The following steps can help them evaluate AI-driven threat detection solutions and find ones that will actually have an impact.

Use real-life examples to test. CIOs should use complex examples to test the depth of the engine’s analysis. The best way to compare security solutions is through red team tests, which simulate the conditions of a real attack as closely as possible. In addition, try to test the solution on a real-life network, or at least in a good simulation environment, rather than only in a lab. This degree of rigor is necessary for the superior platform to show its strengths; otherwise, it’s like test-driving a Maserati and a Hyundai in a parking lot and concluding that they each drive about the same.
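
As one illustration of how such a bake-off might be scored, the sketch below computes detection rate and false-positive rate across labelled red-team scenarios. The scenario names and labels are invented; a shallow test set would leave two very different solutions looking identical.

```python
# Hypothetical red-team scorecard: each scenario is labelled malicious
# or benign, and each candidate solution reports what it flagged.
scenarios = {
    "lateral-movement-1": True,   # True = genuinely malicious
    "custom-dropper-2":   True,
    "admin-bulk-copy":    False,  # benign but unusual activity
    "patch-rollout":      False,
}

def score(flagged: set[str]) -> tuple[float, float]:
    """Return (detection rate over malicious scenarios,
    false-positive rate over benign scenarios)."""
    malicious = {s for s, bad in scenarios.items() if bad}
    benign = set(scenarios) - malicious
    return (len(flagged & malicious) / len(malicious),
            len(flagged & benign) / len(benign))

print(score({"lateral-movement-1", "admin-bulk-copy"}))  # (0.5, 0.5)
```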

Ask the vendor to describe the depth of the engine’s analysis. A strong behavioral analytics cybersecurity solution should arrive at its conclusions via numerous, complex deduction chains. The simple conclusion that a user is malicious could be driven by hundreds of different data points. Truly sophisticated platforms have a human-like ability to contextualize: even when a behavior deviates from a baseline of normalcy, the platform searches for mitigating factors before assuming malicious intent.
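
A toy sketch of what weighing many signals against mitigating context can look like; every signal name and weight here is invented for illustration, not drawn from any vendor's engine.

```python
# Toy risk scoring: many weighted signals contribute to a verdict, and
# mitigating context (e.g. a scheduled maintenance window) offsets them.
SIGNAL_WEIGHTS = {
    "off_hours_login": 2.0,
    "new_device": 1.5,
    "bulk_file_access": 3.0,
}
MITIGATIONS = {
    "maintenance_window": -3.0,
    "hr_confirmed_travel": -1.5,
}

def risk_score(signals: set[str], context: set[str]) -> float:
    raw = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)
    offset = sum(MITIGATIONS.get(c, 0.0) for c in context)
    return max(raw + offset, 0.0)

# Same anomalous behavior, different verdicts once context is weighed.
print(risk_score({"off_hours_login", "bulk_file_access"}, set()))                   # 5.0
print(risk_score({"off_hours_login", "bulk_file_access"}, {"maintenance_window"}))  # 2.0
```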

Evaluate explainability. A good behavioral engine explains its conclusions in as close to human terms as possible. More importantly, it explains how it arrived at those conclusions, taking users step-by-step down the deduction path all the way to the raw data. Opacity is the enemy in AI, since it makes it hard to evaluate how well a solution is working or how its performance changes over time.
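
One shape such an explanation trace might take, sketched below with an invented log line; the structure is an assumption for illustration, not any particular vendor's format.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One link in a deduction chain: a conclusion plus the raw
    evidence (log lines, events) that supports it."""
    conclusion: str
    evidence: list[str] = field(default_factory=list)

def explain(chain: list[Step]) -> None:
    """Walk the deduction path from the verdict down to raw data."""
    for depth, step in enumerate(chain):
        print("  " * depth + f"- {step.conclusion}")
        for item in step.evidence:
            print("  " * (depth + 1) + f"raw: {item}")

# Example trace with invented log data.
explain([
    Step("User flagged as likely exfiltrating data"),
    Step("Transfer volume 100x above baseline",
         ["2024-01-07T02:13 scp 4.8GB to external host"]),
])
```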

Look to the future. AI solutions are expected to evolve. A good vendor should be able to illustrate how that evolution will take place, ensuring the platform remains relevant even as the threat landscape changes.

AI threat detection with behavioral analytics continues to gain popularity as a cybersecurity solution. However, the technology won’t reach its full potential unless CIOs take the time to understand it—and the optimal ways to test it—a little bit better. If they address these issues, they will greatly strengthen their defenses against attackers’ full arsenal.
