Experts Warn ChatGPT Could Democratize Cybercrime

A wildly popular new AI bot could teach would-be cyber-criminals how to craft attacks and even write ransomware, security experts have warned.

ChatGPT was released by artificial intelligence R&D firm OpenAI last month and has already passed one million users.

The prototype chatbot, trained on vast volumes of data from across the internet, answers questions in natural language with apparent authority. It can even be creative, for example by writing poetry.

However, its undoubted talents could be used to lower the barrier to entry for budding cyber-criminals, warned Picus Security co-founder, Suleyman Ozarslan.

He was able to use the bot to create a believable World Cup phishing campaign and even write some macOS ransomware. Although the bot flagged that phishing could be used for malicious purposes, it still went ahead and produced the script.

Additionally, although ChatGPT is programmed not to write ransomware directly, Ozarslan was still able to get what he wanted.

“I described the tactics, techniques and procedures of ransomware without describing it as such. It’s like a 3D printer that will not ‘print a gun,’ but will happily print a barrel, magazine, grip and trigger together if you ask it to,” he explained.

“I told the AI that I wanted to write a software in Swift, I wanted it to find all Microsoft Office files from my MacBook and send these files over HTTPS to my webserver. I also wanted it to encrypt all Microsoft Office files on my MacBook and send me the private key to be used for decryption. It sent me the sample code, and this time there was no warning message at all, despite being potentially more dangerous than the phishing email.”

Ozarslan said the bot also wrote "effective virtualization/sandbox evasion code," which could help hackers evade detection and response tools, as well as a SIGMA detection rule.
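For readers unfamiliar with the format, Sigma is an open, vendor-agnostic YAML standard for describing log-based detections that can be converted into queries for various SIEM platforms. A minimal illustrative rule, a generic sketch with a hypothetical process name rather than the rule Ozarslan generated, might look like this:

```yaml
title: Suspicious Process Indicative of Sandbox Evasion
status: experimental
description: Illustrative example of the Sigma rule format only; the process name below is hypothetical.
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\evasive_sample.exe'
  condition: selection
falsepositives:
  - Unknown
level: medium
```

The `logsource` block names the telemetry to search, the `detection` block defines matching criteria, and the `condition` ties them together; tooling such as converters then translates this into a backend-specific query.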

“I have no doubts that ChatGPT and other tools like this will democratize cybercrime,” he concluded.

“For OpenAI, there is a clear need to reconsider how these tools can be abused. Warnings are not enough. OpenAI must get better at detecting and preventing prompts that generate malware and phishing campaigns.”

Separately, ExtraHop senior technical manager, Jamie Moles, found equally concerning results when he asked the bot for help in crafting an attack similar to the notorious WannaCry ransomware worm.

“I asked it how to use Metasploit to use the EternalBlue exploit and its answer was basically perfect,” he explained.

“Of course, Metasploit itself isn’t the problem – no tool or software is inherently bad until misused. However, teaching people with little technical knowledge how to use a tool that can be misused via such a devastating exploit could lead to an increase in threats – particularly from those some call ‘script kiddies.’”
