Robots vs False Positives: AI is Future Protection for Applications and APIs

The goal for any security solution is to correctly detect whether an incoming API request is malicious, regardless of the solution type. However, the tools and processes we most commonly rely on may actually leave systems vulnerable. The question is, why are they suddenly failing and what new solutions can take their place? The answer may lie in automating security with artificial intelligence.

The main problem is that legacy solutions are increasingly inaccurate and consume heavy resources without delivering commensurate protection.

What’s wrong with tried-and-true security solutions?
We have to start at the beginning to answer why widespread, accepted solutions are failing. The most obvious answer: systems are changing too quickly for regular updates to keep pace, and detection logic stretched across thousands of individual rules cannot be maintained in a timely, comprehensive way. Even threat identification is stymied by old processes and tools, which makes threat response weaker than your tools may lead you to believe.

Before you can protect an application or API, you have to be able to determine what properties make a request malicious. It should be simple: a request is malicious when active content dresses up as benign data (i.e. an attack payload) in order to take over the targeted system. It’s like crashing an embassy ball as an MI5 agent in any number of spy films, there expressly to plant a bomb, assassinate a key diplomat, take hostages, or steal information.
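
To make that concrete, here is a minimal Python sketch (entirely hypothetical, deliberately vulnerable code) showing how an attack payload dresses up as an ordinary query parameter:

```python
# Hypothetical, deliberately vulnerable endpoint logic: the parameter
# value is interpolated straight into SQL, so "data" can become code.
def build_query(name: str) -> str:
    return f"SELECT * FROM users WHERE name = '{name}'"

print(build_query("alice"))
# SELECT * FROM users WHERE name = 'alice'

# The same parameter carrying an attack payload rewrites the query's logic:
print(build_query("alice' OR '1'='1"))
# SELECT * FROM users WHERE name = 'alice' OR '1'='1'
```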

The security landscape no longer fits that embassy ball analogy. Nowadays, it’s more like trying to protect a country with hundreds of borders from millions of physical and virtual threats, foreign and domestic. Think of the way a virus multiplies and mutates faster than vaccines can be produced: it makes you wish for an adversary as tidy as MI5 or the NSA by comparison.

The old ways are obsolete
The traditional way to detect malicious requests and attack payloads has been to use signatures and regular expressions. Truth be told, signature-based security cannot scale to handle larger and more complex inputs: signatures require too many updates to stay current. Even if they could be updated regularly, the detection logic breaks down as regular expressions are stretched across thousands of individual rules. It simply cannot keep up.
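
A toy sketch (with invented rules, not taken from any real WAF) shows both how signature detection works and how easily it is bypassed:

```python
import re

# Hypothetical signature list: each known attack pattern needs its own
# regular expression, and evasions multiply faster than rules.
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),  # classic SQL injection
    re.compile(r"(?i)<script[\s>]"),    # reflected XSS
    re.compile(r"\.\./"),               # path traversal
]

def is_malicious(request_body: str) -> bool:
    return any(sig.search(request_body) for sig in SIGNATURES)

print(is_malicious("id=1 UNION SELECT password FROM users"))      # True
# Trivial obfuscation already slips past every rule above:
print(is_malicious("id=1 UNION/**/SELECT password FROM users"))   # False
```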

Smarter security solutions: weed out false positives with AI
Another major problem eating up security resources is the high volume of false positives, which require human involvement through processes like CAPTCHA challenges or manual support based on request IDs. With AI, we can automatically reduce the number of triggered detections that turn out to be false positives.

First, you need to get around the signature problem. The most effective solution for grammar-based detection is a signature-free approach, built as a library. The libdetection framework, a library that can define grammars in a formal way and then apply them to detect attacks and generate payloads, is one exceptional example that gets around these traditional problems.

The library supports a variety of contexts (parser states), which allows attack details to be shared for subsequent analysis and attributed to vulnerabilities in the code. With each attack, your security solution builds on itself.
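
To illustrate the concept, here is a toy Python sketch of grammar-based detection. It is not libdetection’s actual API; it simply shows the core idea of parsing input under a grammar instead of pattern-matching it:

```python
import re

# A toy SQL-like grammar, not libdetection's actual interface: tokenize
# input and flag any value that, embedded where a quoted literal belongs,
# parses as more than a single string token.
TOKEN = re.compile(r"""\s*(?:
      (?P<string>'(?:[^']|'')*')                        # quoted literal
    | (?P<number>\d+(?:\.\d+)?)                         # numeric literal
    | (?P<keyword>(?i:or|and|union|select|from|where))\b
    | (?P<op>=|<>|<|>|\(|\)|,|;|--)
    | (?P<word>\w+)
    )""", re.VERBOSE)

def tokenize(text: str) -> list:
    pos, tokens = 0, []
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if not m:
            break  # unrecognized character: stop tokenizing
        tokens.append(m.lastgroup)
        pos = m.end()
    return tokens

def is_injection(value: str) -> bool:
    # Place the value in a quoted-string context (one of many contexts a
    # real grammar engine would track). Benign data stays a single string
    # literal; an attack escapes into the surrounding syntax.
    return tokenize(f"'{value}'") != ["string"]

print(is_injection("alice"))             # False: still one literal
print(is_injection("O''Brien"))          # False: properly escaped quote
print(is_injection("alice' OR '1'='1"))  # True: value escapes its context
```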

There are also good examples of open-source projects that use AI, specifically recurrent neural networks, to detect false positives. AI can deliver broader detection while reducing resource-heavy, tedious human labor, increasing security and boosting DevSecOps morale.
 
Another way AI can help sort through the complexity isn’t in dealing with heavy data loads; it’s in learning what qualifies as a legitimate threat, what is a false positive, and what value or status to attribute to a blocked or flagged event. After all, not all false positives are equal.

Machine learning means that as an AI runs, it gets better at telling a false positive from a real cybersecurity threat. Unlike a human (or human team), it can store, recall, and pattern-match against vast libraries of identified payloads.
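
As a minimal sketch of the idea (the open-source projects mentioned above use recurrent neural networks; a simple scikit-learn text classifier keeps this example short, and the tiny labelled dataset is invented purely for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples of triggered detections, labelled by an analyst:
# 1 = confirmed attack, 0 = false positive (benign text that tripped a rule).
payloads = [
    "id=1 UNION SELECT password FROM users",      # attack
    "q=<script>alert(document.cookie)</script>",  # attack
    "name=alice' OR '1'='1",                      # attack
    "comment=the SELECT committee will meet",     # false positive
    "title=how to write a script for a play",     # false positive
    "bio=union organizer since 1999",             # false positive
]
labels = [1, 1, 1, 0, 0, 0]

# Character n-grams tolerate the obfuscation tricks that defeat
# word-level features (e.g. UNION/**/SELECT).
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(payloads, labels)

# Each newly triggered detection is scored before a human sees it;
# low-scoring events can be dismissed automatically.
for event in ["q=<script src=//evil.example>", "comment=select a union rep"]:
    print(event, "->", model.predict_proba([event])[0][1])
```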
  
In many cases, the application context is critical to detection. This is the next frontier for security experts and innovators. For example, sending a payload to someone else’s website would be an attack, but posting the same payload to my own personal blog, say as a code sample in a security write-up, would not be.
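
A toy sketch of what such context-aware detection could look like (all names and the ownership map here are hypothetical):

```python
# Hypothetical resource-ownership map: who owns which path.
OWNERS = {"/blog/alice": "alice"}

def verdict(user: str, path: str, payload_detected: bool) -> str:
    """Same payload, different verdict depending on application context."""
    if not payload_detected:
        return "allow"
    if OWNERS.get(path) == user:
        # Posting a payload to your own blog (say, as a code sample in a
        # security write-up) is data, not an attack.
        return "allow"
    return "block"

print(verdict("alice", "/blog/alice", True))    # allow: her own blog
print(verdict("mallory", "/blog/alice", True))  # block: someone else's app
```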

The future of detection automation
Given today’s cybersecurity climate and the influx of malicious online activity, it’ll be critical for organizations of all sizes to quickly analyze incoming API requests to accurately determine intent. That just doesn’t seem possible without AI, even if a company has unlimited resources to throw at the problem.

The world out there is big and still growing. Growing with it means letting it in rather than being eaten by it, and taming it as it enters your domain. There is no choice but to automate.
