
Interview: Dave Palmer, Darktrace

Since its launch a few years ago, and since this writer first met the company, UK-based Darktrace has watched machine learning and AI rise significantly across the industry. The company now offers technologies built around the concept of a network immune system, which detects unusual and anomalous activity, and has also moved into the autonomous response space.

Speaking to Infosecurity, Darktrace director of technology Dave Palmer said that these moves were a “stepping stone for where we take joined-up response,” where the machine can see if 100 other people are affected by something, to ultimately “move from responding to individual attacks to campaigns” and reduce the workload on response teams.

Darktrace was one of the first companies to explore the use of machine learning in cybersecurity some five years ago. So does Palmer think that the growth of this trend is a positive thing, and what does he think when he sees it being adopted by vendors across the board? 

Palmer's answer was that, having seen the prevalence of vendors offering machine learning-based technology at Infosecurity Europe this year, he felt that every technology conference would soon feature offerings riding the machine learning trend.
 
“Do you remember when people would talk about mobile and every conference talked about payments or shopping on mobile? I think AI is going to do the same thing as we now have AI conferences,” he said. “To us as practitioners, AI is an activity you use to reach the goals you’re trying to get to.” 

Palmer added that the AI hype cycle is not over yet; soon it will become so normalized that everyone will simply assume that everyone else is doing AI, as it is the future of the way we do programming.

He said that the big change came in 2014/2015; before then, doing anything meaningful in production with AI meant building it from the ground up. “Now Amazon and Google work on TensorFlow services, which means you can do a few months of online learning and be good at it, as the tool that underpins it is pretty decent.”

So what about AI and machine learning being everywhere? Palmer said that there is a slight frustration about it, as there is not enough education on what is useful versus what intuitively seems OK but may well fail.

“I think what has happened is people wanted to do AI and gravitated towards AI where the thinking is ‘what have we got that is a big difficulty for humans and let’s apply AI to it’ and not enough of ‘what’s AI good at, let’s start from that principle’ and see where it is the best fit in cybersecurity.”

Palmer said that what he wanted to see from AI was the capability to understand complexity and make decisions based on that context. He cited the example of firewalls making better decisions: if something unexpected happens, why doesn’t the firewall determine that it is unusual and stop it?

“At the moment firewalls only do what they are told and in future could be more clever,” he pointed out.

He said the other way AI can be useful is in reducing the number of alerts. Typically, someone collects logs and uses the SIEM to decide what can be correlated, what can be matched together and what needs to be stored, and AI is run on top of that.

“I get that tons of people are overwhelmed with logs and alerts, but putting AI in at the end is intuitively useful to sort the noise, but you need it at the edge deciding what is relevant where there is all the richness and depth and it can look at a billion events a day, but what it cannot do is sort three trillion events a day in the middle,” he said.
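Palmer's "at the edge" point can be sketched as a filter that scores events where full detail is still available and forwards only the small fraction worth an analyst's attention. The event fields and scoring rule below are made-up stand-ins for a learned model, purely to illustrate the architecture.

```python
def edge_filter(events, score, threshold=0.8):
    """Yield only events whose anomaly score clears the threshold.

    In Palmer's framing this runs at the edge, before the SIEM,
    so downstream systems see alerts rather than raw noise.
    """
    for event in events:
        if score(event) >= threshold:
            yield event

# Hypothetical events; a real deployment would score packets or flows.
events = [
    {"user": "alice", "bytes_out": 1_200},
    {"user": "bob", "bytes_out": 9_800_000},  # unusually large outbound transfer
    {"user": "carol", "bytes_out": 4_500},
]

# Toy scorer standing in for a model: large outbound transfers look anomalous.
score = lambda e: 1.0 if e["bytes_out"] > 1_000_000 else 0.1

alerts = list(edge_filter(events, score))
print(len(alerts))        # 1
print(alerts[0]["user"])  # bob
```

Three events in, one alert out: the filtering happens where the event detail lives, rather than after everything has already been shipped to a central store.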

Palmer acknowledged that “some things win out and some things fade away,” but he compared today’s AI to the early days of WAP, when a mobile phone was forced to render a full website. Now the question is what AI can do and how it can benefit the industry, rather than adding it in too late, which he likened to putting a sticking plaster over a problem as opposed to redesigning conceptually for the new era of programming.

AI and machine learning have made many predictions lists, and according to Gartner, they can “provide value in simple tasks and elevating suspicious events for human analysts,” and by 2025 “machine learning will be a normal part of security solutions and will offset ever-increasing skills and staffing shortages.” 

Perhaps, as Palmer said, AI and machine learning have a place in cybersecurity, but their application needs to be reconsidered.
