What Should Frighten Us About AI-Based Malware?

Of all the cybersecurity industry’s problems, one of the most striking is the way attackers are often able to stay one step ahead of defenders without working terribly hard. It’s an issue whose root causes are mostly technical: the prime example is software vulnerabilities, which cyber-criminals have a habit of finding out about before vendors and their customers do, leading to the almost indefensible zero-day phenomenon that has propelled many famous cyber-attacks.

A second is that organizations struggling with the complexity of new and unfamiliar technologies make mistakes, inadvertently leaving vulnerable ports and services exposed. Starkest of all, perhaps, is the way techniques, tools, and infrastructure set up to help organizations defend themselves (Shodan, for example, but also numerous pen-test tools) are now just as likely to be turned against businesses by attackers who tear into networks with the aggression of red teams gone rogue.

Add to this the polymorphic nature of modern malware, and attackers can appear so conceptually unstoppable that it’s no wonder security vendors increasingly emphasize not so much blocking attacks as responding to them as quickly as possible.

The AI fightback
Some years back, a wave of mostly US-based start-ups mounted a counter-attack against the doom and gloom with a brave new idea: security powered by artificial intelligence (AI) and machine learning (ML) algorithms. In an age of big data this makes complete sense, and the idea has since been taken up by all manner of systems used for anti-spam, malware detection, threat analysis and intelligence, and Security Operations Centre (SOC) automation, where it has been proposed to help patch skills shortages.
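To make the idea less abstract, here is a minimal, purely illustrative sketch of the kind of classifier that sits behind many such products, assuming scikit-learn and a toy labelled corpus; real systems use far richer features, vastly more training data, and more sophisticated models.

```python
# Minimal sketch of an ML-based detection pipeline (illustrative only).
# Assumes scikit-learn; the sample strings and labels are invented toy data
# standing in for email bodies or extracted file/behaviour features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: labels mark known-bad (1) and known-good (0) samples.
samples = [
    "invoice attached please enable macros immediately",
    "meeting notes from today's project review",
    "your account is locked click here to verify password",
    "quarterly budget spreadsheet for finance team",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a linear classifier: the 'black box' is often
# less mysterious than the marketing suggests, but its blind spots are real.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(samples, labels)

# Score a previously unseen sample; a threshold on this probability is what
# ultimately decides block vs. allow.
print(model.predict_proba(["click here to verify your locked account"])[0][1])
```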

I’d rate these as useful advances, but there’s no getting away from the controversial nature of the theory, which has been branded by some as the ultimate example of technology as a ‘black box’ nobody really understands. How do we know that machine learning is able to detect new and unknown types of attack that conventional systems fail to spot? In some cases, it could be because the product brochure says so. 

Then the even bigger gotcha hits you – what’s stopping attackers from outfoxing defensive ML with even better ML of their own? If this were possible, even some of the time, the industry would find itself back at square one.

This is pure speculation, of course, because to date nobody has detected AI being used in a cyber-attack, which is why our understanding of how it might work remains largely based on academic research such as IBM’s proof-of-concept DeepLocker malware project.

What might malicious ML look like?
It would be unwise to ignore the potential for trouble. One of the biggest hurdles faced by attackers is quickly understanding what works, for example when sending spam, phishing and, increasingly, political disinformation.

It’s not hard to imagine that big data techniques allied to ML could hugely improve the efficiency of these threats by analyzing how targets react to and share them in real time. This implies the possibility that such campaigns might one day evolve in a matter of hours or minutes, a timescale defenders would struggle to counter using today’s technologies.
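To illustrate the speed concern, here is a conceptual, non-operational sketch of that kind of feedback loop: a generic epsilon-greedy choice over abstract content variants, with an invented simulated response function standing in for live telemetry. The same pattern drives everyday marketing A/B testing, which is precisely why it would be unwise to assume it is out of attackers’ reach.

```python
# Conceptual sketch of a feedback-driven campaign loop (illustrative only).
# The variant names, rates and simulated responses are all hypothetical.
import random

variants = ["variant_a", "variant_b", "variant_c"]
sent = {v: 0 for v in variants}
responses = {v: 0 for v in variants}
EPSILON = 0.1  # fraction of traffic spent exploring alternatives

def pick_variant():
    # Mostly exploit the best-performing variant so far, occasionally explore.
    if random.random() < EPSILON or not any(sent.values()):
        return random.choice(variants)
    return max(variants, key=lambda v: responses[v] / sent[v] if sent[v] else 0.0)

def record(variant, responded):
    # Update counts from observed feedback in near real time.
    sent[variant] += 1
    responses[variant] += int(responded)

# Simulated run: hidden 'true' response rates stand in for real-world feedback.
true_rates = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.01}
for _ in range(10_000):
    v = pick_variant()
    record(v, random.random() < true_rates[v])

# Within a short run the loop has already concentrated on the best performer.
print({v: round(responses[v] / sent[v], 3) for v in variants if sent[v]})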

A second scenario is one that defenders would never even see: cyber-criminals might simulate the defenses of a target using their own ML to gauge the success of different attacks (a technique already routinely used to evade anti-virus). Once again, this exploits the advantage that attackers always have sight of the target, while defenders must rely on good guesses.

Or perhaps ML could simply be used to crank out far larger quantities of new and unique malware than is possible today. Whichever of these approaches is taken – and this is only a sample of the possibilities – it jumps out at you how awkward it would be to defend against even relatively simple ML-based attacks. About the only consolation is that if ML-based AI really is a black box that nobody understands then, logically, the attackers won’t understand it either and will waste time experimenting.

Unintended consequences
If we should fear anything it’s precisely this black box effect. There are two parts to this, the biggest of which is the potential for ML-based malware to cause something unintended to happen, especially when targeting critical infrastructure.

This phenomenon has already come to pass with non-AI malware: Stuxnet in 2010 and NotPetya in 2017 are the obvious examples, both of which infected thousands of organizations not on their original target list after unexpectedly ‘escaping’ into the wild.

When it comes to powerful malware exploiting multiple zero-days, there’s no such thing as a reliably contained attack. Once released, this kind of malware remains pathogenically dangerous until every system it can infect is patched or taken offline, which might be years or decades down the line.

Another anxiety is that because the expertise to understand ML is still thin on the ground, engineers could come to rely on it without fully understanding its limitations, both when deploying it in defense and when estimating its usefulness in attack. The mistake, then, might be that too many organizations over-invest in it on the strength of marketing promises, consuming resources better deployed elsewhere. Once a more realistic assessment takes hold, ML could end up as just another tool that is good at solving certain very specific problems.

Conclusion
My contradictory-sounding conclusion is that perhaps ML and AI make no fundamental difference at all. They are just another stop on a journey computer security has been making since the beginning of digital time. The problem is overcoming our preconceptions about what the technology is and what it means. Chiefly, we must overcome the tendency to think of ML and AI as mysteriously ‘other’ because we don’t understand them and therefore find it difficult to process the concept of machines making complex decisions.

It’s not as if attackers aren’t breaching networks already with today’s pre-ML technology, or that well-prepared defenders aren’t regularly stopping them using the same technology. What AI reminds us is that the real difference lies in how organizations are defended, not in whether they or their attackers use ML and AI. That has always been what separates survivors from victims. Cybersecurity remains a working demonstration of how the devil takes the hindmost.
