Pandora’s Box Is Open – AI Won’t Get Back In

The next generation of cybersecurity adversaries is not just vandals defacing websites or petty criminals stealing credit card numbers. They are nation states advancing their political and economic goals; quasi-professional hacking groups working for unscrupulous businesses to spy on the competition; and large crime syndicates earning profits from cyber-attacks on a level comparable to the drug trade, but with a fraction of the risk.
 
Soon these adversaries will employ a new generation of Artificial Intelligence (AI) algorithms to support their attacks, using advanced techniques including Deep Reinforcement Learning, Attention mechanisms and Self-Supervised Learning. This new generation may well bypass existing traditional security measures, and only defenses based on comparable technology will have a fighting chance against them.
 
The problem with the current generation of AI technology used in cybersecurity is that it is largely based on supervised learning or anomaly detection techniques – these are unlikely to meet clients’ expectations in the long term. They are akin to the Maginot Line at the start of WWII: a defensive structure designed for the last war, but totally insufficient for the next, and one that an agile force could simply move around.

Supervised learning requires enormous amounts of data, hand-labelled by experts, which is an expensive exercise, so most deployed systems are undertrained. Anomaly detection is usually very fragile and hard to port between different environments, as most organizations are unique. So, what separates the new techniques from the old?
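The portability problem is easy to see in a toy version of anomaly detection. The sketch below (illustrative only; the data and threshold are invented for the example) flags values that sit far from a learned baseline – the classic z-score recipe – and shows why a detector trained in one environment misbehaves in another.

```python
import statistics

def zscore_anomalies(baseline, observations, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean -- a classic anomaly-detection recipe."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    return [x for x in observations if abs(x - mean) > threshold * stdev]

# Daily login counts learned from one organization's environment.
baseline = [98, 102, 100, 97, 103, 101, 99, 100]

# A burst of 500 logins stands out against this baseline...
print(zscore_anomalies(baseline, [101, 500]))  # -> [500]

# ...but moved unchanged to a larger organization where 500 logins a day
# is routine, the same detector would flag normal traffic instead.
```

The baseline statistics encode one organization's "normal", which is exactly why such detectors are fragile when ported between environments.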
 
A changing AI attack landscape
The new breed of highly organized and advanced perpetrators outlined at the start of this article are real. These organizations are well funded, determined and often stealthy when attacking, and known to build attack arsenals, particularly in the form of zero-day vulnerabilities. 
 
A variety of AI approaches are proving applicable to all sorts of problems. Deep Reinforcement Learning, for example, uses large artificial neural networks to plan long-term, goal-oriented actions in changing environments, often in the presence of an adversary. It has achieved super-human performance in a variety of games from Chess to Go, and even the popular real-time strategy title StarCraft. These same capabilities could be used to penetrate computer networks as well as to defend them.
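The core idea – an agent learning goal-directed behavior from trial and error – can be sketched in a few lines. This is tabular Q-learning on a toy corridor, not the deep variant used in the systems above, and all names and parameters here are illustrative:

```python
import random

random.seed(0)
N_STATES = 5              # cells 0..4; the agent starts at 0, reward at 4
ACTIONS = (1, -1)         # move right or left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every cell.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # -> [1, 1, 1, 1]
```

Deep Reinforcement Learning replaces the table `Q` with a neural network, which is what lets the same loop scale from a five-cell corridor to Go or StarCraft – or, in principle, to probing a network for a path to a target host.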

Attention mechanisms, which allow an AI model to weigh the relationships between words that may be far apart in a sentence to help make sense of it, are producing state-of-the-art results in natural language processing. Self-supervised learning applies supervised learning algorithms to unlabeled data by exploiting the data's inherent structure.
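Stripped of the surrounding neural network, attention is a short computation: each position scores every other position, the scores become weights, and the output is a weighted blend. A minimal sketch of scaled dot-product attention, with invented two-dimensional token vectors purely for illustration:

```python
import math

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key, the
    scores become softmax weights, and the output is a weighted blend of
    the values -- letting one position 'attend' to distant positions."""
    d = len(keys[0])
    out = []
    for q in queries:
        weights = softmax([dot(q, k) / math.sqrt(d) for k in keys])
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# One query and three key/value pairs; the query is most similar to the
# third key, so the output leans most heavily on the third value vector.
Q = [[1.0, 0.0]]
K = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
V = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
print(attention(Q, K, V))
```

In a real language model the queries, keys and values are learned projections of word embeddings, and this operation is repeated across many heads and layers.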
 
Both attention and self-supervised learning are used in OpenAI's newest language model. It can produce original English prose, sometimes on a level hardly distinguishable from what a human would write.

The results were so astonishing that the OpenAI researchers broke their tradition of sharing all of their progress and refused to provide details of the new algorithm, for fear it would be used for harmful purposes. Even so, the basic principles are known, and less capable versions of the same algorithm are in the public domain, so replicating the results would not be out of reach for a group of committed hackers.

Trained on appropriate data, such an algorithm could be used for a variety of malicious tasks, from writing convincing phishing emails to large-scale propaganda, fraud, or even planning attacks and the tactics to be used.

Systems – doing it for themselves!
AI is evolving rapidly, and the security market is moving quickly to try and capitalize on the benefits the technology can bring to their solutions for enterprises. We will undoubtedly see an explosion of solutions on offer over the coming months and years. The best way to apply AI to enterprise security is yet to be established – what form will systems take that can truly self-preserve and outwit the determined attacker?

There are three things that security professionals can do now to prepare for this exciting, if somewhat uncertain, future:

  • Education – Don’t dismiss AI as a flash in the pan, and don’t believe everything you hear or read. Do your homework. Investigate the market and the intentions of the vendors that matter to you as they begin to show their hand.
  • Change your mindset – To embrace self-preserving systems, you need to relinquish some control, but there is a line to be drawn. Where that line sits will be a function of the trust you place in AI and how much control and authority you want to maintain. If you try to maintain full human control, are you sure you can react fast enough to an AI-enabled attack? 
  • Accept inevitability – AI is going to be part of your security make-up. You have a simple choice. Are you going to stand idly by while the AI attackers weaponize against you, or are you going to fight fire with fire?

Without doubt it is a very exciting time to be in the security sector. Through our work with academic institutions in the arena of CyberAI, from Imperial College and Edinburgh University in the UK, to MIT in the USA, we get to see some of the most exciting projects unfolding in the field. There are great opportunities ahead for security professionals to be on the leading edge of an AI-driven, enterprise security revolution. The hackers are already starting to believe that – and that is all any security professional should need to get motivated.
