Attackers Using Social Engineering to Capitalize on the ChatGPT Buzz

According to the latest Netskope Cloud and Threat Report, social engineering remained a dominant malware infiltration technique during Q1 2023, with attackers abusing search engines, email, collaboration apps and chat apps to trick victims into downloading malware. These campaigns exploit popular topics in the zeitgeist or major events, allowing attackers to disguise malicious content as legitimate files or web pages so that victims fail to recognize its nefarious intent.

With all the buzz around ChatGPT, it was just a matter of time before threat actors started to capitalize on the hype around the artificial intelligence chatbot, for example by launching campaigns that delivered malware disguised as ChatGPT clients, or phishing pages promising improbable free access to the service and other AI tools.

This opportunity has fueled attackers' creativity, resulting in several malicious OpenAI chatbot-themed campaigns so far in 2023.

Social Media is a Perfect Launchpad

ChatGPT was first released at the end of November 2022, and three months later, starting in late February 2023, multiple ChatGPT-themed malicious campaigns were discovered. The campaigns used different distribution channels to deliver malicious content, such as fake social media pages containing links to typosquatted or deceptive domains mimicking the real OpenAI website. There were also phishing pages related to payments for fake ChatGPT subscriptions, designed to steal credit card information, and the inevitable plethora of malicious mobile apps that used the ChatGPT icon and claimed AI functionality to lure victims into downloading them.
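
As an illustration of how defenders can hunt for such lookalike domains, the minimal sketch below compares candidate domains against legitimate ones using a fuzzy string match. The domain lists and the 0.75 cutoff are illustrative assumptions, not values drawn from any of the campaigns described here.

```python
from difflib import SequenceMatcher

# Legitimate domains that attackers commonly imitate (illustrative list).
LEGITIMATE = ["openai.com", "chat.openai.com"]

# Hypothetical candidates, e.g. taken from a newly-registered-domain feed.
CANDIDATES = ["openai-chat.com", "opemai.com", "chatgpt-free-access.net", "example.org"]

def similarity(a: str, b: str) -> float:
    """Fuzzy match ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

for domain in CANDIDATES:
    for legit in LEGITIMATE:
        score = similarity(domain, legit)
        # Flag close-but-not-identical lookalikes. The 0.75 cutoff is an
        # arbitrary starting point and would need tuning on real traffic.
        if domain != legit and score > 0.75:
            print(f"possible lookalike: {domain} ~ {legit} (score {score:.2f})")
```

A production deployment would feed this from a newly-registered-domain feed rather than a hardcoded list, but the principle is the same: lookalike domains sit close to the legitimate name in edit distance.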

For the record, the first attempts to exploit ChatGPT for criminal purposes were unearthed in January 2023. They mostly aimed to weaponize the AI tool itself rather than to launch malicious campaigns with ChatGPT as the theme. Reportedly, threat actors initially focused on bypassing the chatbot's restrictions to create new malicious tools and polymorphic malware.

Hijacking Social Media Accounts

Attackers have not limited themselves to fake social media pages but have also exploited browser extensions in ChatGPT-themed attacks. For example, in March 2023, a campaign was discovered that was carried out via a malicious fork of an open-source extension, “ChatGPT for Google,” containing code designed to steal Facebook session cookies. Another notable aspect of this campaign was that the malicious extension was available on the official Chrome Web Store (downloaded more than 9,000 times before being removed) and promoted through malicious sponsored Google search results. This is a further example of how SEO poisoning is regaining popularity among threat actors, as we outlined in our Cloud and Threat Report, and it was not the only ChatGPT-themed campaign using Google Search Ads.
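
Because a cookie-stealing extension must request broad permissions, one practical defensive check is to audit which installed extensions can read cookies at all. The sketch below scans a Chrome profile's extension manifests; the profile path is an assumption (it varies by OS and profile) and the check is generic rather than tied to the specific extension described above.

```python
import json
from pathlib import Path

# Location of a Chrome profile's installed extensions. This path is an
# assumption for Linux; on Windows it is typically under
# %LOCALAPPDATA%\Google\Chrome\User Data\Default\Extensions.
EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

# Permissions that enable cookie theft or broad traffic interception.
RISKY = {"cookies", "webRequest", "<all_urls>"}

# Extensions are laid out as <extension-id>/<version>/manifest.json.
for manifest_path in EXT_DIR.glob("*/*/manifest.json"):
    try:
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
    except (OSError, json.JSONDecodeError):
        continue  # skip unreadable or malformed manifests
    perms = {p for p in manifest.get("permissions", [])
             + manifest.get("host_permissions", []) if isinstance(p, str)}
    if "cookies" in perms:
        ext_id = manifest_path.parent.parent.name
        name = manifest.get("name", "unknown")  # may be an i18n placeholder
        print(f"{ext_id}: '{name}' requests {sorted(perms & RISKY)}")
```

Many legitimate extensions also request the cookies permission, so a hit here is a prompt for review, not proof of compromise.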

Delivering ChatGPT-Themed Malware

One of the most effective ways to exploit a hijacked Facebook account is to publish apparently legitimate ads promoting free downloads of malware disguised as legitimate software. That was the modus operandi of an additional ChatGPT campaign discovered in mid-April, in which the threat actors exploited compromised business or community Facebook accounts to advertise and deliver the malware-as-a-service RedLine stealer, disguised as clients for ChatGPT and its companion Google Bard. Coming full circle, the buzz around ChatGPT has been exploited to hijack Facebook accounts, and those hijacked accounts have then been used to promote malware downloads.

Image credit: Rokas Tenys / Shutterstock.com

Trojanized Installers (including ChatGPT) Exploited to Deliver the Bumblebee Malware

Injecting malware into the installers of legitimate software is another common technique adopted by attackers, and with all the interest around ChatGPT, it did not take long before threat actors started to deliver trojanized installers of the OpenAI chatbot. One such campaign was discovered later in April, with an infection chain that again relied on malicious Google Ads sending users to fake download pages for trojanized versions of popular applications such as Zoom, Cisco AnyConnect, Citrix Workspace and ChatGPT. The fake pages delivered the Bumblebee payload, a malware loader normally used to gain initial network access and conduct ransomware attacks.

Stealing Saved Credentials from Google Chrome

To give an idea of the scale of the growth in ChatGPT-themed attacks, consider that the number of newly registered and squatting domains related to the AI chatbot grew by 910% per month between November 2022 and early April 2023. Moreover, since March 2023 alone, researchers from Meta have discovered and thwarted around 10 malware families using ChatGPT and other AI-related themes. This trend is continuing and is inevitably leading to the discovery of new themed attacks on a regular basis. One such campaign, discovered in late April, delivered yet another infostealer that mimicked a ChatGPT Windows desktop client and was capable of copying saved credentials from the Google Chrome login data folder.

Google Ads Constantly Poisoned to Deliver ChatGPT-Themed Malware

As we have seen, the abuse of Google Ads is proving to be one of the most effective ways to deliver ChatGPT-themed malware attacks, as well as attacks themed around other AI tools such as Midjourney. It was probably also one of the first techniques to use ChatGPT as a lure to entice users into installing malware: by February 2023, a campaign with this modus operandi had already been discovered, carried out by a financially motivated threat actor dubbed Void Rabisu and aimed at distributing the RomCom implant, a backdoor used to deliver ransomware that has also been deployed against Ukraine.

In May 2023, several security researchers observed a malicious advertising campaign in Google's search engine with themes related to AI tools such as ChatGPT, Midjourney and DALL-E. The malicious advertisements tricked users into downloading a fake installer that ultimately dropped the RedLine infostealer (again). Interestingly, this campaign deployed several evasion techniques; for example, if the connection did not come from the Google Ads redirector, a non-malicious version of the domain was served. The campaign also abused Telegram's API for command-and-control (C&C) communication, an evasion technique that blends the malicious communication with normal traffic and increases the chances of avoiding detection.
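
Because api.telegram.org is a legitimate service, this kind of C&C traffic cannot simply be blocked outright, but it can be hunted for. The sketch below scans a hypothetical CSV proxy log for connections to the Telegram Bot API from clients that do not look like browsers; the file name and column names are assumptions, since real proxy log schemas differ.

```python
import csv

# Hypothetical proxy log exported as CSV; real log schemas (Squid, Zscaler,
# Netskope, etc.) differ, so the file name and columns are assumptions.
LOG_FILE = "proxy_log.csv"  # columns: timestamp, src_host, dest_host, user_agent

# Telegram's Bot API endpoint: legitimate for chat clients, but suspicious
# when contacted by arbitrary desktop processes.
C2_INDICATOR = "api.telegram.org"

with open(LOG_FILE, newline="") as f:
    for row in csv.DictReader(f):
        if C2_INDICATOR in row.get("dest_host", ""):
            ua = row.get("user_agent", "")
            # Browsers normally send a Mozilla-style user agent; anything
            # else hitting the Bot API deserves a closer look.
            if "Mozilla" not in ua:
                print(f"{row.get('timestamp')} {row.get('src_host')} -> "
                      f"{row.get('dest_host')} (UA: {ua or 'none'})")
```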

Not Only Malware-Based Attacks

The ChatGPT hook has also proven compelling for financial fraudsters, who quickly adapted their techniques to jump on the chatbot bandwagon and orchestrated sophisticated investment scams targeting those looking for an AI-powered financial advisor to generate an additional form of passive income.

One such campaign was discovered in early March 2023, targeting users in several European countries and combining a classic phishing leitmotif, the exploitation of a human weakness (in this case, the promise of relatively easy money), with the excitement around ChatGPT's capabilities. The attacks started with unsolicited emails containing a link to a copycat OpenAI site where, after a quick assessment by a phoney chatbot, the user was redirected to a call center promising improbable earnings in exchange for an entry fee of at least €250, which opened the victims up to further attacks by the fraudsters.

Recommendations

As social engineering attacks dominate the current threat landscape, attackers are constantly looking for new and disruptive events to use as lures for their campaigns. The advent of ChatGPT (and other AI tools) was a juicy opportunity, and threat actors did not waste time capitalizing on it. This poses a risk to individuals and organizations, which must learn to recognize these new lures and block the resulting attacks. Some of the steps that can be taken to reduce the risk include:

  • Educate users about likely social engineering techniques aimed at them and the organization. At a corporate level, set up a clear process and channel for users to easily report, and receive feedback on, anything they find suspicious.
  • Inspect all HTTP and HTTPS downloads, including all web and cloud traffic, to prevent malware from infiltrating the network directly or via a compromised endpoint.
  • Configure policies to block downloads from apps not used in the organization, reducing the risk surface to only necessary apps and sanctioned instances (company vs. personal).
  • Block downloads of all risky file types from newly registered domains, newly observed domains and other risky categories to reduce the overall risk surface (a minimal domain-age check is sketched after this list).
  • Ensure that all security defenses share intelligence and work together to streamline security operations.
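
As an illustration of the newly-registered-domain check mentioned above, the sketch below estimates a domain's age from WHOIS data. It assumes the third-party python-whois package and a 30-day cutoff, both of which are illustrative choices rather than a prescribed policy; production deployments would more likely rely on a commercial domain-intelligence feed.

```python
from datetime import datetime, timezone

import whois  # third-party "python-whois" package: pip install python-whois

MAX_AGE_DAYS = 30  # illustrative cutoff for "newly registered"

def is_newly_registered(domain: str) -> bool:
    """Best-effort age check; WHOIS data varies widely across registrars."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return several dates
        created = min(created)
    if created is None:            # no creation date at all: err on caution
        return True
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days < MAX_AGE_DAYS

if is_newly_registered("example.com"):
    print("block or sandbox downloads from this domain")
```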

It is clear that, much like fake utility and banking scams, ChatGPT-themed attacks have grown in popularity among attackers and will remain in their rolodex of tools for years to come. However, these campaigns have recognizable hallmarks, and the implementation of sound security policies will help enterprises defend against them.
