Fighting Fire With Fire: AI's Role in Cybersecurity

Last year, Gartner predicted that Artificial Intelligence (AI) would be implemented in almost every new software product by 2020. As AI capabilities grow at a rapid pace, such technologies are becoming critical not only for protecting against cyber-attacks but also for launching them.

AI’s ability to make automated decisions about cyber-threats means it is revolutionizing the cybersecurity landscape as we know it, from both a defensive and an offensive perspective.

As with any double-edged sword, it will be important for security teams to be aware of both sides of the picture in order to guard against the weaponization of AI.

AI in cyber defense teams 
A subdivision of AI, Machine Learning (ML), eases the burden of detection for many cyber defense teams. By monitoring network traffic, it can establish a baseline of normal activity within a system and use that baseline to flag suspicious behavior, drawing on the vast amounts of security data collected by businesses. Anomalies are then fed back to security teams, who make the final decision on how to react.
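As a rough illustration of this idea (not a description of any particular product), the sketch below trains a generic anomaly detector, scikit-learn's IsolationForest, on a handful of made-up network flow features and flags new flows that deviate from the learned baseline. The feature names and figures are purely illustrative assumptions.

# Minimal sketch (assumed setup): learn a baseline of normal traffic and
# flag deviations for analysts to review. Feature values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical historical flows presumed normal: [bytes, packets, duration_s]
baseline_flows = np.array([
    [1500, 10, 0.8],
    [2300, 14, 1.1],
    [900, 6, 0.4],
    [1800, 12, 0.9],
])

# Fit the baseline model on the presumed-normal traffic
model = IsolationForest(contamination=0.05, random_state=42)
model.fit(baseline_flows)

# Score newly observed flows; -1 marks anomalies to escalate to the team
new_flows = np.array([
    [1600, 11, 0.7],       # resembles the baseline
    [980000, 4000, 0.2],   # sudden burst of traffic
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    print(flow, "anomaly - escalate" if label == -1 else "normal")

The point of the sketch is the workflow the article describes: the model learns what "normal" looks like, and anything it cannot reconcile with that baseline is surfaced for a human analyst to judge.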

Machine learning can also classify malicious activity at different layers. For example, at the network layer it can be applied within an intrusion detection system (IDS) to categorize classes of attack such as spoofing, Denial of Service (DoS) and data modification.

It can likewise be applied at the web application layer (for example within a Web Application Firewall, or WAF) and at the endpoint layer to pinpoint malware, spyware and ransomware.
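To make the classification idea concrete, the sketch below trains a simple multi-class model on invented flow features labelled by attack type. The features, labels and figures are placeholders rather than a real IDS dataset.

# Minimal sketch (assumed setup): a supervised classifier that sorts flows
# into attack categories. Training data is invented for illustration only.
from sklearn.ensemble import RandomForestClassifier

# Each row: [packets_per_second, avg_payload_bytes, distinct_dst_ports]
X_train = [
    [20, 500, 3],     # normal
    [15, 450, 2],     # normal
    [5000, 40, 1],    # DoS flood
    [4800, 60, 1],    # DoS flood
    [30, 300, 150],   # probing / port scan
    [25, 280, 200],   # probing / port scan
]
y_train = ["normal", "normal", "dos", "dos", "probe", "probe"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify a newly observed flow
print(clf.predict([[5100, 35, 1]]))  # expected: ['dos']

In practice the same pattern applies at each layer: the model is trained on labelled examples of the attack classes relevant to that layer, then used to categorize new activity as it arrives.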

It goes without saying, then, that machine learning – if it is not already – will become a key component in a security team's toolbox over the next few years, particularly given that attacks are becoming more frequent and more targeted.

AI and cyber-criminals  
Yet, the implementation of AI for cyber defense is a case of fighting fire with fire, as hackers are armed with the very same ammunition and capabilities, creating a seemingly never-ending arms race. 

At the beginning of 2018, The Malicious Use of Artificial Intelligence Report warned that AI can be exploited by hackers for malicious purposes, possessing the ability to target entire states and alter society as we know it. The authors highlight that globally, we are at “a critical moment in the co-evolution of AI and cybersecurity, and should proactively prepare for the next wave of attacks”.  

It’s no surprise that cyber experts are concerned. After all, for hackers, AI presents the ideal tool to enable scale and efficiency. Similar to the way machine learning can be used to monitor network traffic and analyze data for cyber defense, it can also be used to make automated decisions on who, what and when to attack.

There is potential for hackers to use AI to alter an organization’s data, as opposed to stealing it outright, causing serious damage to a brand’s reputation, profits and share price. In fact, cybercriminals can already use AI to mould personalized phishing attacks by collecting information on targets from social media and other publicly available sources.

To provide a tangible example, ZeroFOX recently ran an experiment to determine who was more effective at getting Twitter users to click on harmful links: humans or AI. The AI, named SNAP_R, sent simulated spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, hoodwinking 275 individuals. In the end, the AI was more effective at the job.
 
Guarding against the weaponization of AI
To protect against AI-launched attacks, there are three key steps that security teams should take to cement a strong defense. For clarity, I have laid these out below.

  • Understand what is being protected. Once teams lay this out clearly, the appropriate solutions can be implemented for patch management, threat and vulnerability management, encrypting important data and providing visibility into the whole environment. It is vital to be able to change course rapidly when it comes to defense, since the target is always moving.
  • Have clearly defined processes in place. Organizations can have the best technology in the world, yet it is only as effective as the process it operates within. The key here is to make sure both security teams and the wider organization understand procedures, and it is the responsibility of security teams to educate employees on cybersecurity best practice.
  • Know exactly what is normal for the environment. Having context around attacks is crucial, and this is often where companies fail. A clear understanding of assets and how they communicate allows organizations to isolate events that aren’t normal and investigate them. Ironically, as mentioned above, machine learning is an extremely effective tool for providing this context.

As technological capabilities grow day by day, so will the tactics used by cybercriminals. Organizations must build a robust architecture on which the technology operates, and be mindful that all of this is useless without the right education internally.
