Will ChatGPT Democratize the Development of Evasive Malware?

ChatGPT has garnered significant attention since its release on November 30, 2022.

Dubbed the world’s most advanced chatbot, this powerful new artificial intelligence (AI) system developed by OpenAI can support a range of tasks in a highly sophisticated and human-like manner, from answering questions to writing emails and even code.

The natural language processing tool’s capabilities are causing a major stir. OpenAI CEO Sam Altman revealed that ChatGPT had more than a million users within five days of launching, while in December, Elon Musk tweeted, “ChatGPT is scary good. We are not far from dangerously strong AI”. 

Not all the feedback has been positive. Indeed, many are expressing concerns over how AI chatbots could undermine education. For example, ChatGPT is able to create an entire essay within seconds, making it easier for students to cheat.

A more immediate concern, however, is whether such advanced AI models pose a cyber threat.

According to research from BlackBerry, ChatGPT may already have been used in nation-state cyber-attacks. A survey of 1,500 IT professionals revealed that 71% believe foreign states are likely already using the technology for malicious purposes against other nations.

CyberArk also published findings detailing how it was able to create polymorphic malware using ChatGPT. Additionally, Check Point Research found examples of cyber-criminals using the AI platform for malicious ends, from crafting convincing spear-phishing emails to sharing information-stealing malware code, developed with the chatbot, on criminal forums.

Querying ChatGPT on HEAT

To explore this in more detail, we asked ChatGPT if it could be misused by threat actors looking to develop highly evasive adaptive threat (HEAT) attacks. These are a new and rapidly growing category of attack methods that can break through multiple layers of security software, including firewalls, secure web gateways and phishing detection.

ChatGPT initially responded to this query as follows: “It is possible for ChatGPT or any other language model to be misused to generate malware. If a malicious actor has access to the model and trains it on malware samples, it could generate malicious code or be used to create phishing campaigns or other nefarious activities. In this sense, ChatGPT-generated malware could be considered a HEAT threat, as AI-generated malware can be sophisticated, adaptive and difficult to detect.”

In this same response, the chatbot stated that it is important to emphasize that creating malware using AI language models is not an inherent capability of the models themselves but rather the result of malicious use.

Interestingly, when we asked ChatGPT a similar question several times afterwards, it offered a range of responses suggesting that it did not pose a HEAT threat.

One response stated: “ChatGPT itself is not capable of generating malware. It is simply a language model developed by OpenAI that generates text based on the input it receives. Misusing ChatGPT, such as using it to generate phishing scams or spreading false information, can be unethical, but it wouldn’t result in the creation of malware.” 

A second, similar response read: “While it is possible for a threat actor to misuse ChatGPT for malicious purposes, such as generating misleading or false information, it is not capable of generating malware on its own.”

Combatting Democratized Cybercrime

ChatGPT itself states that its responses can be incorrect and should not be taken as a source of advice, as the platform can only respond based on the data it has been trained on so far.

With respect to the question of whether it could pose a cyber threat, however, the danger lies in the fact that the platform is built on machine learning: the more inputs it receives, the better its outputs become.

This is perhaps the most worrying aspect. Indeed, research has revealed that some of those already using it for malicious purposes have limited-to-no cyber skills, raising concerns that ChatGPT could democratize the development of evasive malware and of more credible social engineering or phishing emails.

We’ve seen the effects that democratized cybercrime can have in the case of ransomware-as-a-service. Therefore, it is imperative that organizations work to protect themselves from such an eventuality.

Indeed, ChatGPT has even advised the following: “To mitigate the risk of ChatGPT-generated malware, it is crucial to follow good security practices, such as access controls, monitoring and secure data storage.”

At Menlo Security, we advise organizations to adopt isolation technology within their cybersecurity strategies – a technology that ensures all active content is executed in an isolated, cloud-based browser rather than in one on the user’s end device. It shifts the focus away from post-compromise detection and towards prevention, working to ensure that malicious payloads never have the opportunity to reach the target endpoint.
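To illustrate the principle – and this is a toy sketch, not Menlo Security’s actual implementation – the Python example below shows a hypothetical isolation gateway with a single assumed /browse endpoint, built on the Flask, Requests and BeautifulSoup libraries. The requested page is fetched and stripped of active content server-side, so that scripts never execute on the user’s device.

```python
# A minimal sketch of the browser isolation idea: fetch and sanitize
# pages away from the endpoint so active content never runs on it.
# Hypothetical example; not a production remote browser isolation service.
from flask import Flask, request, abort
import requests
from bs4 import BeautifulSoup

app = Flask(__name__)

# Tags that can carry executable (active) content.
ACTIVE_TAGS = ["script", "object", "embed", "applet", "iframe"]

@app.route("/browse")
def browse():
    url = request.args.get("url")
    if not url or not url.startswith(("http://", "https://")):
        abort(400, "A valid http(s) 'url' parameter is required")

    # Fetch the page inside the isolation environment, not on the endpoint.
    upstream = requests.get(url, timeout=10)
    soup = BeautifulSoup(upstream.text, "html.parser")

    # Remove elements that could execute code in the user's browser.
    for tag in soup.find_all(ACTIVE_TAGS):
        tag.decompose()

    # Strip inline event handlers (onclick, onload, ...) and javascript: links.
    for element in soup.find_all(True):
        for attr in list(element.attrs):
            if attr.lower().startswith("on"):
                del element.attrs[attr]
        href = element.attrs.get("href", "")
        if isinstance(href, str) and href.lower().startswith("javascript:"):
            del element.attrs["href"]

    # Only the sanitized, render-safe markup ever reaches the endpoint.
    return str(soup)

if __name__ == "__main__":
    app.run(port=8080)
```

A real remote browser isolation service goes much further, rendering pages in a full, disposable cloud browser and streaming only safe display updates to the client; the sketch simply demonstrates the core design choice of keeping active content off the endpoint.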

With HEAT attacks on the rise, and the potential for future implications to arise from the abuse of intelligent systems such as ChatGPT, organizations must update their defenses to protect against sophisticated, modern attack methods.
