How AI is Becoming Essential to Cyber-Strategy


The cybersecurity landscape is evolving at breakneck speed:

  • Threat levels are higher than ever.
  • The volume of data being protected is quadrupling every 36 months.
  • Computing power and data transfer speeds are increasing at least as fast.

Added to this, the diversity of internet and network-connected technologies is following an even faster curve. There are some hard truths that many organizations ignore at their own peril.

Infosec budgets are not matching the pace of change
Most security departments will acknowledge that their resources are already spread too thinly, and now there is an expectation to do much more with even less. What is the answer? A recent Infosecurity webinar discussed the impact of artificial intelligence on cyber-resilience. Could AI be the answer to extending the value and efficacy of cybersecurity?

Until 2014, AI, especially AI for security purposes, was for the most part just a marketing term. AI was far from a mature technology, and multi-layer artificial neural networks had only recently become practical to build and train.

Techies understand that programming is a process of humans providing conditional instructions to software: if this happens, then do that. A traditional software program has no AI. In other words, it cannot decide anything for itself; it is just executing a set of pre-defined instructions.
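As a toy illustration (the extensions, size threshold and file names below are invented for the example), a purely rule-based scanner might look like this, with every decision a pre-written condition:

```python
# A toy, rule-based "scanner": every decision is a pre-defined,
# human-written condition. Nothing here is learned or adaptive.
SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".vbs"}
MAX_NORMAL_SIZE = 50_000_000  # 50 MB: an arbitrary, human-chosen threshold

def is_suspicious(filename: str, size_bytes: int) -> bool:
    """Flag a file using fixed if/then rules only."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in SUSPICIOUS_EXTENSIONS and size_bytes > MAX_NORMAL_SIZE

print(is_suspicious("invoice.exe", 60_000_000))  # True: a rule matches
print(is_suspicious("photo.jpg", 60_000_000))    # False: no rule covers this
```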

Prior to the dawn of artificial neural networks, the few programs that used AI were based on a rudimentary approach known as machine learning.

What machine learning does is clever, but it still requires a significant amount of human input. For the most part, machine learning requires humans to point out which “features” the software should observe and analyze. Examples of features could be file size, file extension, normal data transfer patterns and so on.

With machine learning, the AI is set loose to observe those specific features across a set of training data. During the training stage, humans tell the machine learning program when it reaches the right decision or conclusion and when it does not. This allows the program to gradually refine how it analyzes those features to come to the right conclusion.
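A minimal sketch of that workflow, using scikit-learn with invented data and features a human has chosen up front, might look like this:

```python
# Minimal sketch of classical, feature-based machine learning.
# Humans picked the three features; the training data is invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [file_size_kb, has_exec_extension, transfers_per_hour]
X_train = [
    [120, 0, 3],    # benign
    [80, 0, 1],     # benign
    [4500, 1, 40],  # malicious
    [3900, 1, 55],  # malicious
]
y_train = [0, 0, 1, 1]  # the human-supplied "right answers"

model = DecisionTreeClassifier().fit(X_train, y_train)

# The trained model can score new samples, but only through the
# lens of the three features it was given.
print(model.predict([[4100, 1, 48]]))  # -> [1], flagged as malicious
```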

The problem with machine learning is that the AI is limited to the features that it has been taught to expect. Fooling a machine learning security system is as simple as adding an unexpected, unprogrammed feature to the exploit. Imagine a card trick such as “find the lady”, where the machine learning software expects the dealer to operate inside the given parameters (the dealer is only moving around these three cards), but the dealer is cheating by having a fourth card. Because the concept of the fourth card is outside the expected features, the program can be defeated.
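Continuing the toy sketch above, that fourth card is easy to demonstrate: a sample that is only malicious through an attribute the model cannot see sails straight through.

```python
# The model above only sees [file_size_kb, has_exec_extension,
# transfers_per_hour]. An attack that keeps those three values normal
# while abusing something unmodeled (say, a macro inside a small
# document) is invisible to it: the fourth card.
stealthy_sample = [[95, 0, 2]]         # indistinguishable from benign
print(model.predict(stealthy_sample))  # -> [0], waved through
```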

What artificial neural networks do is allow an AI to determine for itself which features it uses to reach a conclusion. An artificial neural network still requires some degree of human input to confirm when a conclusion is incorrect, but it effectively self-organizes how it reviews and manages the data it has access to.
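A rough sketch of that shift, again with invented data: instead of hand-picked features, the network receives a raw representation (here, a 16-bin per-file byte histogram) and its hidden layers work out which combinations matter.

```python
# Sketch: feed the network raw inputs and let it learn its own features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_raw = rng.random((200, 16))  # stand-in for raw per-file byte histograms
y = (X_raw[:, 3] + X_raw[:, 11] > 1.0).astype(int)  # hidden pattern to discover

# No one tells the network that columns 3 and 11 matter; the hidden
# layers organize that for themselves during training.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X_raw, y)
print(f"training accuracy: {net.score(X_raw, y):.2f}")
```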

As an example, an AI looking for new types of viruses can sense everything happening in a computer and then identify, based on everything it sees, whether a program or even an activity in memory is doing something unwelcome. It does not need to have seen the behavior before; it only has to recognize the outcome, or potential outcome.
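One way to picture this is anomaly detection: fit a detector on normal behavior only, and anything sufficiently unlike normal is flagged even if that exact behavior has never been seen before. A toy sketch with invented telemetry:

```python
# Outcome-based detection sketch: the detector is fitted on normal
# telemetry only, so novel behavior is flagged without a signature.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Invented telemetry: [memory_mb, syscalls_per_sec, outbound_kb_per_sec]
normal = rng.normal(loc=[200, 50, 30], scale=[20, 5, 5], size=(500, 3))

detector = IsolationForest(random_state=1).fit(normal)

never_seen_before = [[210, 55, 900]]  # plausible memory/syscalls, wild egress
print(detector.predict(never_seen_before))  # -> [-1], flagged as anomalous
```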

The level of computing power artificial neural networks require is currently too much for this to happen on the average laptop or phone. These AI security programs instead use small local programs (agents) to act as a local representative while the heavy analysis happens on more powerful remote systems, but the outcome is the same.

An example of this can be seen with any smart speaker: Alexa does not live inside the $30 speaker, but it is connected to it, able to receive and process instructions relayed by the local program.
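In code, that agent pattern might look like the sketch below; the scoring URL and response format are hypothetical stand-ins for whatever a vendor's cloud service actually exposes.

```python
# Agent-pattern sketch: a tiny local program gathers telemetry and
# defers the heavy neural-network analysis to a remote service.
# The endpoint URL and response format are hypothetical.
import json
import urllib.request

CLOUD_SCORING_URL = "https://example.com/api/v1/score"  # hypothetical

def collect_telemetry() -> dict:
    """Cheap local collection; real agents hook process/file/network events."""
    return {"host": "laptop-01", "events": [{"proc": "a.exe", "egress_kb": 900}]}

def ask_cloud(telemetry: dict) -> dict:
    req = urllib.request.Request(
        CLOUD_SCORING_URL,
        data=json.dumps(telemetry).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # heavy lifting happens remotely
        return json.load(resp)

# verdict = ask_cloud(collect_telemetry())  # e.g. {"verdict": "quarantine"}
```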

Why this matters to managing security
The progression of, and investment in, artificial neural network technology means that some security software technologies have now reached a level of competency that was unthinkable 10 years ago. Understanding and blocking rogue identity and access activities, identifying and quarantining malware, preventing data loss, adaptively configuring the security policies on devices: all of these are activities that the most progressive new AI technologies can perform with few or no errors.

For some SIEM (Security Information and Event Management) environments, this means that the security technologies they use can inspect, alert and block based on analysis that would be impossible to achieve manually. These AI technologies perform the equivalent of years of manual security work every minute, allowing the SIEM team to coordinate and orchestrate responses to threats faster and more efficiently than ever.
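A toy illustration of that triage loop (the risk scoring and thresholds below are invented): confident cases are blocked automatically, and only ambiguous ones are queued for a human.

```python
# Machine-speed SIEM triage sketch: score every event, auto-block the
# confident cases, queue the ambiguous ones. Everything here is a toy.
def score(event: dict) -> float:
    """Stand-in for an AI model's risk score between 0 and 1."""
    return min(1.0, event.get("egress_kb", 0) / 1000)

def triage(events: list[dict]) -> None:
    for ev in events:
        risk = score(ev)
        if risk > 0.9:
            print(f"BLOCK  {ev['src']} (risk {risk:.2f})")  # automated response
        elif risk > 0.5:
            print(f"ALERT  {ev['src']} (risk {risk:.2f})")  # human review queue
        # low-risk events are logged and dropped with no human effort

triage([{"src": "10.0.0.5", "egress_kb": 980},
        {"src": "10.0.0.9", "egress_kb": 620},
        {"src": "10.0.0.2", "egress_kb": 40}])
```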

As these AI technologies become more stable and usable, the price point is also moving lower; for example, the average AI anti-malware solution for home use now comes in at less than $1 per device per month. Although they can seem daunting, my own experience using these technologies is that they are incredibly helpful: they take away a lot of stress and bring success and recognition to the team.

AI security technologies still require the human component, but the transition is moving security professionals away from extensive manual checking and configuration and into roles of oversight and strategy.

The biggest problem in the future is likely to be how to prevent hackers from using variations of the same AI capabilities to perform intrusions and exploits.
