Padding Users’ Defenses Against ChatGPT

The ChatGPT craze has swept the world. Within just a few months of its release, the AI chatbot reached 100 million active users in February 2023, and its website logged an estimated 1.6 billion visits in April, making it the fastest-growing consumer application ever launched – surpassing even social media platforms TikTok and Instagram. What’s more, Google, Microsoft and Baidu have announced plans for AI-enhanced search, taking the AI space race into a new phase.

While many have sung the praises of ChatGPT (including its ability to parse through data noise to find sophisticated attack signals), cybersecurity experts have also warned that the increasing use of such AI-powered technology comes with risks – and could facilitate the work of scammers and cybercrime syndicates.

For instance, reports suggest a 135% increase in novel social engineering attack emails in the first two months of 2023 – an increase that, perhaps not coincidentally, tracks the adoption rate of ChatGPT.

Gearing up for the AI Arms Race 

Technology is a double-edged sword. When used for good, ChatGPT has the potential to save businesses valuable time and money thanks to its rapid content creation and language processing abilities. 

However, when misused, this application of AI can become a weapon for criminals to unleash greater harm on the public. These chatbots, which use AI language models to generate content, make phishing scams harder to detect. Phishing refers to a type of online scam in which criminals send emails or text messages impersonating government representatives, bank officials or the authorities to trick people into revealing sensitive information such as usernames, passwords or other personal data.

In the past, a phishing email or text message was fairly easy to recognize by its poor spelling and/or grammar. Now, ChatGPT produces content free of grammatical errors, while awkward phrasing can quickly be edited out and proofed to make phishing messages seem convincing, even in languages other than English.

With these advancements, users now carry a heavier burden: accurately distinguishing legitimate messages from scams. Fortunately, authentication technology has advanced to relieve users of a burden they should never have had to carry in the first place.

Padding Users’ Defenses with Passwordless Authentication

It’s clear that phishing is not going away, and technology like ChatGPT is only making it more effective. Phishing attacks rose by a whopping 47.2% in 2022 compared to the previous year. To overcome the threat of phishing, it’s necessary to take away the most valuable piece of information criminals are seeking in these attacks: passwords.

By eliminating passwords from the authentication process, we remove the highly prized credentials that bad actors want to ‘phish.’ Instead, technology is now available for users to authenticate themselves through simpler yet stronger methods that do not depend on passwords or other human-readable ‘secrets.’ These methods rely on advanced cryptography coupled with on-device biometrics, readily available on most devices in very user-friendly formats. With just a touch of a finger or a quick facial scan, users can log into their accounts safely and seamlessly – without fear of unknowingly handing over their credentials to scammers or spoofed websites.

Preventing the Misuse and Abuse of AI Chatbots  

With any new technology, governance and ethical frameworks need to be established to clearly outline the boundaries of acceptable and unacceptable uses of such tools. While ChatGPT claims to have safeguards built into its programming to prevent bad actors from misusing the platform, cybersecurity experts, such as those from Check Point, have demonstrated that these guardrails can be circumvented by instructing the AI chatbot to draft a convincing phishing email.

ChatGPT is just the latest technological advancement to be leveraged (or co-opted) as a tool to help cyber-criminals capture personal data and sign-in credentials more effectively. Our takeaway should not be to stop advancing but to understand that cyber-criminals will continually evolve and phishing will always be a threat. Our priority should not be to try to limit the use of such tools in the name of cybersecurity, but rather to combat phishing by giving individuals less authentication data to give away, starting with passwords.

Celia Ong
