Cybercriminals Hesitant About Using Generative AI

Cybercriminals are so far reluctant to use generative AI to launch attacks, according to new research by Sophos.

Examining four prominent dark-web forums for discussions related to large language models (LLMs), the firm found that threat actors showed little interest in using these tools, and even expressed concerns about the wider risks they pose.

In two of the forums included in the research, the team found just 100 posts on AI, compared with 1000 posts related to cryptocurrency over the same period.

The researchers revealed that the majority of LLM-related posts concerned compromised ChatGPT accounts for sale and ways to circumvent the protections built into LLMs, known as 'jailbreaks.'

Additionally, they observed 10 ChatGPT derivatives whose creators claimed they could be used to launch cyber-attacks and develop malware. However, Sophos X-Ops said that cybercriminals had mixed reactions to these derivatives, with many expressing concerns that the creators of the ChatGPT imitators were trying to scam them.

The researchers added that many of the attempts to create malware or attack tools using LLMs were “rudimentary” and often met with skepticism by other users. For example, one threat actor inadvertently revealed information about their real identity while showcasing the potential of ChatGPT. Many users had cybercrime-specific concerns about LLM-generated code, including operational security worries and AV/EDR detection.

There were even numerous ‘thought pieces’ posted on the forums about the negative effects of AI on society.

Christopher Budd, director of X-Ops research at Sophos, noted: “At least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us.”

He added: “While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more skeptical than enthused.”

Preparing for the Proliferation of AI-Based Threats

Despite cybercriminals' current reluctance to use AI tools, Sophos published separate research demonstrating that LLMs can be used to conduct fraud on a "massive scale" with minimal technical skills.

Using LLM tools like GPT-4, the team built a fully functioning e-commerce website with AI-generated images, audio and product descriptions. It also contained a fake Facebook login page and checkout page designed to steal users' login credentials and credit card details.

Sophos X-Ops said it was able to create hundreds of similar websites in seconds at the push of a button.

The firm explained the research was conducted to help prepare for AI-based threats of this nature before they proliferate.

“If an AI technology exists that can create complete, automated threats, people will eventually use it. We have already seen the integration of generative AI elements in classic scams, such as AI-generated text or photographs to lure victims,” explained Ben Gelman, senior data scientist at Sophos.
