ChatGPT: A New Wave of Cybersecurity Concerns?


As 2022 drew to a close, OpenAI released ChatGPT to the world: an AI-powered chatbot that interacts through text with realistic, human-like responses. Its deep learning techniques can generate conversations convincing enough to make anyone believe they are talking to a real human.

Like letting the genie out of the bottle, its impact is still largely unknown, but it has been met with great intrigue and curiosity. How will it be used? How does it work? Is it for good or evil? No, this is not the next Terminator sequel…

It was certainly built with positive intentions, and its articulate responses have led many to hail it as the best chatbot yet released. However, in a short period, ChatGPT has already been linked to cyber threats as cyber-criminals leverage its advanced capabilities for nefarious means.

How is this possible, you ask? Well, for starters, it is entirely possible to use an AI chatbot to build a complete infection chain, one that starts with a spear-phishing email and uses convincing human language to dupe victims into infecting their own systems.

Security vendors have even explored this by generating phishing emails with ChatGPT, and the results were worryingly convincing. For instance, Check Point used ChatGPT to craft a phishing email carrying an Excel attachment with malicious code that downloads a reverse shell to the victim's system.

This is deeply concerning, as AI has removed the skill and knowledge barriers that once stood between would-be attackers and such threats. Of course, phishing-as-a-service (PhaaS) and ransomware-as-a-service (RaaS) already provide tool kits, for a fee, that enable threat actors to carry out such attacks. However, we are seeing another evolution of cyber-criminal activity, because many dangers can sprout from this ingenious creation, which is free and open to the public.

Some of the most obvious threats that come to mind involving ChatGPT include the following:

  1. Mass social engineering: As mentioned, ChatGPT’s state-of-the-art language model can be used to create highly realistic phishing emails that threat actors can deploy to dupe individuals into downloading malware or handing over sensitive information. 
  2. Scam content creation: As with phishing messages, threat actors can ask ChatGPT to create fake ads, listings and other scam material. 
  3. Imposters and imitation: Labelled the best software program yet for impersonating humans, ChatGPT could be used by hackers to generate a convincing digital copy of a specific person’s writing style, enabling criminals to imitate an individual or organization via email or text message.
  4. Automated attacks: Large-scale attacks, such as the distribution of phishing emails or malicious messages, can be automated and deployed more efficiently and effectively with ChatGPT.
  5. Spam: ChatGPT’s content-generation capabilities can be fine-tuned to mass-produce material, of high or low quality, that threat actors can distribute, for example as spam comments on social media or spam email campaigns.

These are just some of the ways cyber-criminals can leverage ChatGPT, and as the technology advances, more will likely emerge. Therefore, organizations and the wider workforce must remain vigilant and become aware of these risks.

Unfortunately, the phishing messages created by ChatGPT are so convincing that it is often significantly better at writing them than the criminals who traditionally craft them. The language and presentation are of higher quality, especially when you consider that many phishing campaigns are run by actors who are not proficient in American or British English.

Yes, it will likely become harder to spot these threats, but that doesn’t mean we can’t do it. We absolutely can, and tools are already being tested that can detect text written by ChatGPT.
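To give a flavor of how such detection can work: real detectors rely on far more sophisticated signals (such as perplexity scores from language models), but one simple statistical cue they can draw on is "burstiness" — human writing tends to mix short and long sentences, while machine-generated text is often more uniform. The following toy sketch (not any vendor's actual tool, and far too crude for production use) illustrates that single signal:

```python
import statistics

def burstiness(text: str) -> float:
    """Toy heuristic: variability of sentence lengths.

    Returns the coefficient of variation (stdev / mean) of
    sentence lengths in words. Higher values suggest the uneven,
    'bursty' rhythm typical of human writing; values near zero
    suggest uniform, machine-like sentences. This is only an
    illustration of one weak signal, not a reliable detector.
    """
    # Crude sentence splitting on terminal punctuation.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)
```

A text made of identical-length sentences scores 0.0, while a passage mixing one-word and long sentences scores well above 1.0. Production detectors combine many such features, and even then, detection remains an arms race.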

Cybersecurity defenses will meet this test head-on, as they always have.

From a human security perspective, organizations can take mitigating steps by providing the workforce with new-school cyber-awareness training, arming people with the knowledge to identify a social engineering attack. We can then look to security technology to effectively remediate the threat.

AI will continue to present a host of new opportunities and possibilities as it is explored and made increasingly available to the masses. However, with reward also comes risk, and the cybersecurity industry must remain alert to the threats that will likely manifest from the wide adoption of technologies like ChatGPT.
