AI, Machine Learning: Not Ready for Prime Time

Artificial intelligence (AI) and machine learning (ML) have been marketed as game-changing technologies amid a climbing number of breaches, the increased prevalence of non-malware attacks and the waning efficacy of legacy antivirus (AV). Yet doubts persist, especially when these technologies are used in silos. For now, it appears to be a fledgling space.

According to Carbon Black’s Behind the Hype report on the subject, nearly two-thirds (64%) of security researchers said they’ve seen an increase in non-malware attacks since the beginning of 2016, and the vast majority (93%) said non-malware attacks pose more of a business risk than commodity malware attacks.

This group of attacks includes remote logins (55%); WMI-based attacks (41%); in-memory attacks (39%); PowerShell-based attacks (34%); and attacks leveraging Office macros (31%).
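These categories are often called “fileless” techniques because they abuse legitimate system tools rather than dropping a malicious binary, which is why file-scanning legacy AV struggles with them. As an illustration only, a crude heuristic for flagging such activity from process command lines might look like the sketch below; the pattern names and rules are this article's assumptions, not any vendor's detection logic.

```python
import re

# Illustrative patterns for the non-malware techniques named above.
# These are assumptions for demonstration, not production signatures.
SUSPICIOUS_PATTERNS = {
    "powershell_encoded": re.compile(r"powershell.*-enc(odedcommand)?\s", re.I),
    "wmi_process_create": re.compile(r"wmic\s+process\s+call\s+create", re.I),
    "office_spawns_shell": re.compile(r"(winword|excel)\.exe.*(cmd|powershell)", re.I),
}

def flag_command_line(cmdline: str) -> list[str]:
    """Return the names of any suspicious patterns the command line matches."""
    return [name for name, pattern in SUSPICIOUS_PATTERNS.items()
            if pattern.search(cmdline)]
```

A real endpoint product would combine many such signals with behavioral context; a lone regex heuristic is trivially bypassed, which is part of the report's point about the limits of any single technique.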

Against this backdrop, two-thirds of security researchers said they were not confident that legacy AV could protect an organization from non-malware attacks, such as those seen in the recent WikiLeaks CIA data dump—opening the door for new approaches. Yet, three-quarters (74%) of researchers said AI-driven cybersecurity solutions are still flawed and 87% of security researchers said it will be longer than three years before they trust AI to lead cybersecurity decisions.

“AI technology can be useful in helping humans parse through significant amounts of data,” the report noted. “What once took days or weeks can be done by AI in a matter of minutes or hours. That’s certainly a good thing. A key element of AI to consider, though, is that it is programmed and trained by humans and, much like humans, can be defeated. AI-driven security will only work as well as it’s been taught to…While AI is being used to effectively highlight nonobvious relationships in data sets, it still appears to be in its nascent stages.”

As a result, only 13% of these researchers indicated they will look to implement AI-driven cybersecurity solutions at their organizations over the next three years.

On the ML front, 70% of security researchers said attackers can bypass ML-driven security technologies; and nearly one-third (30%) said it’s easy to do so.

“Any reasonable ML approach to endpoint security is going to face the problem of obtaining training data at scale. If you’re looking at files, you’ll need a lot of files,” Carbon Black noted. “If you’re looking at behavior, you’re going to need a lot of behavior. Unfortunately, obtaining many examples of real attacks as they happen isn’t always feasible.”

Carbon Black recommends that users assemble a massive body of baseline data, a torrent of detonation data, and statistical comparisons among behaviors for validation.

“Collectively, these approaches will give you a powerful set of tools to generate patterns of malicious behavior,” the report said.
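To make the baseline-comparison idea concrete, here is a minimal sketch that scores observed endpoint behaviors against baseline frequencies. All names, the add-one smoothing, and the log-ratio scoring are illustrative assumptions, not the report's actual method.

```python
from collections import Counter
import math

def behavior_anomaly_scores(baseline_counts, observed_counts):
    """Log-ratio score of observed vs. baseline behavior rates.

    Behaviors that are rare in the baseline but common in the observed
    window score high. Add-one smoothing (an assumption) keeps
    never-before-seen behaviors from dividing by zero.
    """
    total_base = sum(baseline_counts.values())
    total_obs = sum(observed_counts.values())
    scores = {}
    for behavior, obs in observed_counts.items():
        base_rate = (baseline_counts.get(behavior, 0) + 1) / (total_base + 1)
        obs_rate = obs / total_obs
        scores[behavior] = math.log(obs_rate / base_rate)
    return scores

# Hypothetical data: a week of benign activity vs. today's window.
baseline = Counter({"browser_start": 900, "doc_open": 100})
observed = Counter({"browser_start": 10, "powershell_spawn": 5})
scores = behavior_anomaly_scores(baseline, observed)
```

In this toy run, a behavior never seen in the baseline (`powershell_spawn`) scores far above a routine one (`browser_start`), which is the kind of nonobvious deviation the report suggests these statistical comparisons should surface.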

Bottom line? This is a nascent space. While AI- and ML-driven security solutions can be effective components of a cybersecurity program, they should not yet be relied upon as sole protections.

“According to a majority of security researchers, cybersecurity will continue to be, at least for the next five years, a battle of human vs. human, where AI and ML can be used to augment and empower human reasoning, not replace it,” the report concluded.
