Generative AI: Friend or Foe?

In just six months, ChatGPT has changed artificial intelligence (AI) forever, putting potentially game-changing technology into the hands of the masses for the first time. It has left the world wondering what the implications might be, for better or worse.

For those in cybersecurity, the implications have been no less profound. While vendors have been steadily building AI into their offerings for years, security professionals are now wondering how attackers might use the new capabilities at their disposal. Indeed, attackers have been wondering much the same thing: according to NordVPN, new posts about ChatGPT on the dark web grew seven-fold between January and February, while threads on the topic rose in popularity by 145%.

AI Will Drastically Reduce the Basic Errors Attackers Make

I do not doubt that generative AI has completely changed the security landscape. In the past, small errors have often blunted the impact of attacks: a slight grammatical slip in a phishing email, for instance, or a few lines of poorly written code that rendered an attack ineffective. Generative AI drastically reduces these mistakes, putting native-level language and coding capability in the hands of attackers.

We have seen barriers to entry for criminals falling for a number of years now, most notably via attack kits that are easy to buy on the dark web. Europol, however, indicates that it's now possible to create basic tools without having to risk buying them from dubious sources.

While Europol notes that these tools are currently 'basic,' they 'provide a start for cybercrime as it enables someone without technical knowledge to exploit an attack vector on a victim's system.' This capability can only be expected to improve over time, and it's reasonable to expect both a growth in wannabe criminals and an increase in the sophistication of the attacks they can carry out.

AI Enables Security Professionals to Sift Through the Noise

Thankfully for security professionals, generative AI also represents a powerful step forward that could negate some of the above. Microsoft was among the first to introduce the capability to its solutions via Security Copilot. Announced in March, Microsoft claimed the AI "can process 1,000 alerts and give you the two incidents that matter in seconds."

At ReliaQuest, we have been assessing the effectiveness of GPT-3.5 and the newer GPT-4 and found them to be up to 90% accurate at diagnosing the most common threats. This could play a big role in upskilling security professionals, particularly those at an early career stage, and in covering gaps in knowledge. Using our own AI to automate tasks across the detection, investigation and response workflow, we have found it's typically possible to reduce response times by up to 90%, helping security analysts get answers faster than ever. This provides better context for security incidents, cutting through the noise and false-positive alerts that clutter a security analyst's day.
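
To make the idea of AI-assisted triage concrete, the sketch below shows one way an LLM might be asked to summarize and prioritize a single alert. It is a minimal illustration using the OpenAI Python client, not ReliaQuest's or Microsoft's actual implementation; the model choice, prompt and alert fields are assumptions invented for the example.

```python
# Illustrative sketch only, not any vendor's actual implementation.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY set in the environment.
import json

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a SOC triage assistant. For the alert provided, state a "
    "severity (low/medium/high), a one-line summary and a suggested next step."
)


def triage_alert(alert: dict) -> str:
    """Return the model's advisory triage for a single alert."""
    response = client.chat.completions.create(
        model="gpt-4",  # model choice is an assumption for illustration
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": json.dumps(alert)},
        ],
    )
    # The output is advice only; an analyst still makes the final call.
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical EDR alert, invented for this example.
    alert = {
        "source": "EDR",
        "rule": "powershell_encoded_command",
        "host": "finance-ws-042",
        "detail": "powershell.exe -enc <base64 payload>",
    }
    print(triage_alert(alert))
```

In a production workflow, the model's answer would feed an analyst's queue rather than trigger any action directly, which is what lets it cut noise without adding risk.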

Enabling security professionals to focus only on what really matters and fix real issues brings huge benefits for keeping organizations safe, and it is ultimately more rewarding for the security professional too. In an industry beset by burnout and skills shortages, retaining these vital people is key: whatever developments we see in AI, humans must remain in the loop.

Beware of its Limitations 

There are limits to generative AI technology today. It can't 'think'; it can only interpret knowledge it has already been trained on. It's of little use against a zero-day attack and will make errors because its understanding is incomplete. For now, while the technology can usefully automate certain workflows, it should only be used to advise on a course of action; ultimately, any decision should be made by a human. It's particularly important to be aware of the environment in which it's used, and caution should be exercised in a hospital or other critical environment. Over time, some of this caution could be relaxed as the models are embedded with more up-to-date and better contextual data, ironing out errors and making them more reliable.
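
One common way to enforce this 'advise, don't decide' principle is an explicit approval gate between the model's recommendation and any action. The sketch below is a minimal illustration of that pattern under assumed names; the action, target and rationale shown are hypothetical, and the execution step is a placeholder.

```python
# Minimal sketch of a human-in-the-loop approval gate. The AI only
# recommends; an analyst must approve before anything runs. The action
# names and recommendation contents here are hypothetical.
from dataclasses import dataclass


@dataclass
class Recommendation:
    action: str     # e.g. "isolate_host"
    target: str     # e.g. "finance-ws-042"
    rationale: str  # the model's explanation, shown to the analyst


def execute(rec: Recommendation) -> None:
    """Placeholder for the real response action (EDR isolation, etc.)."""
    print(f"Executing {rec.action} on {rec.target}")


def human_gate(rec: Recommendation) -> bool:
    """Show the AI's recommendation and require explicit approval."""
    print(f"AI recommends: {rec.action} on {rec.target}")
    print(f"Rationale: {rec.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"


if __name__ == "__main__":
    rec = Recommendation(
        action="isolate_host",
        target="finance-ws-042",
        rationale="Encoded PowerShell matching known loader behavior.",
    )
    if human_gate(rec):
        execute(rec)
    else:
        print("Recommendation declined; no action taken.")
```

In a critical environment such as a hospital, the same gate might require two approvers or restrict the set of permitted actions, which is one way the caution described above can be tuned to context.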

Finally, vendors have a responsibility not to oversell what generative AI can do today. It's important to be honest with customers and disclose the technology's limitations so they don't become over-reliant on it. Armed with that honesty, customers can take the necessary steps to remediate known weaknesses and ensure errors don't creep into their decision-making process.
