How AI Will Change Red Teaming

Red teaming allows ethical hackers to flex their cognitive muscles when testing systems. The practice sees an offensive team attempt to emulate the tactics, techniques and procedures that you’d see in a real cyber-attack, so during the exercise there might be feints and pivoting as the team seeks to find and exploit vulnerabilities.

Because red teaming requires human ingenuity, it might seem an unlikely candidate to benefit from artificial intelligence (AI). In fact, it’s a great example of how we can use the technology to boost testing capabilities.

Generative AI tools such as ChatGPT, Bard and Bing Chat are all built on large language models (LLMs): they draw on vast amounts of training data and online resources to provide custom responses to questions or prompts, allowing the user to refine and build upon a query. In a red teaming context, this can be readily applied to writing tasks such as drafting red teaming policy documents, as recently illustrated on the Hacker Thoughts substack, although manual adjustments still had to be made to the sections covering the rules of engagement and scope.

Testing Tools

These AI technologies can also be used during the testing process itself. Testers use ChatGPT during reconnaissance, for example, to identify potential CVEs to exploit or avenues to explore when attacking particular systems. This has been shown to dramatically reduce the lead time on bespoke payload creation, and to cut development time in general when implementing evasion techniques such as event tracing for Windows (ETW) patching and dynamic link library (DLL) unhooking, as well as other obfuscation techniques.
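As a rough illustration (not drawn from any specific engagement), the sketch below shows how such a reconnaissance prompt might be scripted, assuming the official OpenAI Python client and a placeholder model name; the model's suggestions are leads to verify manually, not confirmed findings.

```python
# Hypothetical sketch: scripting a reconnaissance prompt against an LLM API.
# Assumes the official OpenAI Python client (openai>=1.0); the model name and
# banner are placeholders for an authorized engagement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

banner = "Apache/2.4.49 (Unix) OpenSSL/1.1.1f"  # example service banner

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You assist an authorized red team engagement."},
        {"role": "user", "content": (
            f"The target exposes this service banner: {banner}. "
            "List publicly known CVEs that may apply and what to verify manually."
        )},
    ],
)

# Treat the answer as a lead to validate, not a confirmed vulnerability.
print(response.choices[0].message.content)
```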

Because AI is already widely used to look for coding errors, it can be effective at detecting weak spots. However, its interpretation of their severity and exploitability has been questioned, so these assessments must be validated by the tester.
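A minimal sketch of that workflow, again assuming the OpenAI Python client and an invented code snippet, might look like the following; the severity ratings it returns are hypotheses for the tester to verify, not verdicts.

```python
# Hypothetical sketch: asking an LLM to flag weak spots in a code snippet.
# The snippet and model name are illustrative; severity ratings need human validation.
from openai import OpenAI

client = OpenAI()

snippet = """
def login(user, pwd):
    query = "SELECT * FROM users WHERE name='" + user + "' AND pwd='" + pwd + "'"
    return db.execute(query)
"""

review = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Review this code for security weaknesses and rate the severity "
            f"and exploitability of each finding:\n{snippet}"
        ),
    }],
)

print(review.choices[0].message.content)  # ratings are hypotheses, not verdicts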

Generative AI has built-in safeguards, so it cannot be used directly to create malicious code. Attempts to circumvent these controls using jailbreaks, such as the infamous DAN (Do Anything Now), contravene the use policies and have been quickly closed down by OpenAI. A number of researchers have nonetheless shown how it could be used to create backdoors or run scripts automatically. So, while the capability exists, red teamers will still need to write their own exploit code.

Deceptively Real

Other attack methods do lend themselves to being automated via AI. It is well documented that it can be used to create convincing phishing campaigns and to mine social media to spear-phish individuals. AI has even been shown to adopt the speech style of particular groups or individuals, which means it can be used for CEO fraud, for instance. And here we come to its real strength: its ability to sustain these conversations, which makes it a great tool for social engineering. A red teamer could use ChatGPT, for example, to converse with a target employee over a chat platform while pretending to be a colleague.
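To show what sustaining the conversation means in practice, here is a minimal sketch, assuming the OpenAI Python client; the persona and messages are invented for illustration, and the point is simply that the running message history keeps the model consistent across turns of a sanctioned exercise.

```python
# Hypothetical sketch: sustaining a multi-turn persona during an authorized
# social-engineering exercise. Assumes the OpenAI Python client; names are invented.
from openai import OpenAI

client = OpenAI()

# The accumulated history is what lets the model stay in character across turns.
history = [
    {"role": "system", "content": (
        "You are role-playing a helpdesk colleague for a sanctioned red team exercise. "
        "Stay consistent with everything said earlier in the conversation."
    )},
]

def reply(incoming: str) -> str:
    """Append the target's message, get the model's next turn, and keep both in history."""
    history.append({"role": "user", "content": incoming})
    completion = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = completion.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(reply("Hi, it's Sam from IT - did you get my ticket about the VPN update?"))
print(reply("Great. Which build of the VPN client are you on?"))
```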

Finally, generative AI can help with composing the report: it can produce useful summaries of the vulnerabilities the red team identified, where security controls need to be implemented or policy tightened, and where defenses are strong. The outcomes can then be interpreted and tailored to different audiences, making them more intelligible to the board, helping to bridge the security-business gap and informing business decisions.
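As a final hedged sketch, assuming the OpenAI Python client and invented placeholder findings, report summarization might be scripted along these lines, with the prompt restated for each audience (board, IT operations, and so on).

```python
# Hypothetical sketch: turning raw red team findings into an audience-specific summary.
# Assumes the OpenAI Python client; the findings below are invented placeholders.
from openai import OpenAI

client = OpenAI()

findings = """
- Initial access via phishing; several targeted users submitted credentials.
- Lateral movement to a file server through an unpatched SMB service.
- EDR alerted on payload execution, but no analyst follow-up occurred.
"""

summary = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You write concise executive summaries of security assessments."},
        {"role": "user", "content": (
            "Summarize these red team findings for a non-technical board, focusing on "
            f"business risk and recommended actions:\n{findings}"
        )},
    ],
)

print(summary.choices[0].message.content)
```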

No Putting the Lid Back On

Over time, AI will undoubtedly improve red teaming, helping to evolve attack scenarios and enabling clients to realize more value from the exercise. However, its relevance will, to a large extent, depend upon just how far those safeguards go. While it’s great that AI creators are exerting control, there’s a fine line between preventing harm and limiting the capabilities of a tool. Lock AI down too far, and we may fail to realize its potential.   

The predominant focus in the media and among regulators has been on how cyber-criminals can use AI, which has concentrated attention on restricting its capabilities, when the reality is that it is a tool that serves both camps. Pandora's box has been opened, and as white hats, we now need to ensure we utilize this tool to the full.

Today, while AI can help facilitate or automate some of the ‘less bespoke’ tasks, such as creating HTML templates/phishing pages, starting conversations with targets, etc., we are still a long way off from an AI being able to generate a bespoke implant that bypasses a client’s anti-virus or Endpoint Detection and Response (EDR). Whether AI ever gets that far remains to be seen, but it will always depend upon the prompting and foresight of a human, making the human red teamer indispensable.
