The Real Threats and Opportunities of ChatGPT

I recently experimented with ChatGPT and used it to write code to pull data from a website, a task I had previously spent months on. It wrote the script in three seconds and then told me about the library it was calling, so I learned something in the process. Not only can ChatGPT write code much faster than a human, it can also act as an educational tool that improves our own abilities. It can write secure code and even debug your existing code. You can also simply describe what you are trying to achieve in plain language and have it turned into code, which makes it highly interactive.
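By way of illustration, the kind of script it produces looks something like this minimal sketch (the URL and CSS selector below are placeholders, not the site I actually used):

```python
# Minimal sketch of the kind of scraping script ChatGPT can generate in seconds.
# The URL and the ".price" selector are placeholders for illustration only.
import requests
from bs4 import BeautifulSoup

def fetch_prices(url: str) -> list[str]:
    """Download a page and return the text of every element matching a selector."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # fail loudly on HTTP errors
    soup = BeautifulSoup(response.text, "html.parser")
    return [el.get_text(strip=True) for el in soup.select(".price")]

if __name__ == "__main__":
    for price in fetch_prices("https://example.com/products"):
        print(price)
```

The point is less the code itself than the conversation around it: ask what BeautifulSoup is doing and the model will explain the library as it goes.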

Thus far, the security community has been primarily focused on how threat actors might harness ChatGPT. We’ve seen researchers bypass ChatGPT’s usage restrictions to write phishing emails and malware, improve existing strains, create backdoors and automate the running of scripts. A report in New Scientist estimates it could reduce the cost of malicious campaigns by 96%. Fear, uncertainty and doubt (FUD) aside, the reality is that it will speed up both attack and defense, effectively levelling the playing field.

Should we be looking to nail down those restrictions further? No, because that goes against the tenets and freedom of the internet. There will genuinely be times when we want to know what polymorphic malware looks like or want AI to simulate an attack, and it can also tell us how to defeat that specific attack.

Security Superpowers 

From a security perspective, there has not been much focus on the astonishing things ChatGPT can do. For starters, it tends to make code secure by default: when the code it generates accepts input from an end user, for instance, it knows that issues such as SQL injection won’t be tolerated, so it automatically sanitizes that input while suggesting appropriate libraries and security settings. Developers can use it to query their code and suggest improvements, which marks a huge step forward for business cybersecurity hygiene.
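A simple illustration of what “secure by default” looks like in practice: when asked for database code that handles user input, the model will typically reach for parameterized queries rather than string concatenation, along the lines of this sketch (using Python’s built-in sqlite3 and a hypothetical users table for brevity):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern to avoid:
    #   conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
    # A crafted username such as "' OR '1'='1" would return every row.
    #
    # Parameterized query: the driver treats the input purely as data,
    # so injection via the username field is not possible.
    cursor = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cursor.fetchone()
```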

Take APIs, for example. Pasting application code into ChatGPT allows it to be debugged, but the AI can also analyze the API for operational issues, hunting out security flaws in the code. Developers often combine third-party libraries, and there are millions of poorly maintained or even malicious libraries out there; analyzing those dependencies can help make the code secure. Or why not just use ChatGPT to write the API? Creating application security tools for the different build phases, or eliminating the hours spent working out why a particular third-party tool won’t behave, will provide huge efficiency gains. Either scenario will result in more secure production and testing.
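Dependency checking of this kind can also be automated outside the chat window. As a rough sketch (the pinned package list here is hypothetical), a few lines of Python can query the public OSV vulnerability database for each dependency:

```python
# Rough sketch: check pinned dependencies against the OSV database (https://osv.dev).
# The package list below is hypothetical.
import requests

DEPENDENCIES = {"requests": "2.19.0", "flask": "2.0.1"}  # name -> pinned version

def known_vulnerabilities(name: str, version: str) -> list[str]:
    """Return the IDs of any OSV advisories affecting this package version."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": "PyPI"}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return [vuln["id"] for vuln in resp.json().get("vulns", [])]

if __name__ == "__main__":
    for pkg, ver in DEPENDENCIES.items():
        ids = known_vulnerabilities(pkg, ver)
        print(f"{pkg}=={ver}: {', '.join(ids) if ids else 'no known advisories'}")
```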

We’re just at the start of what large language models (LLMs) can do. OpenAI has since launched GPT-4, and ChatGPT also has competition in the form of Google Bard, Microsoft Bing and GitHub Copilot. But there are concerns that the technology is outpacing our ability to integrate it with how we govern and protect our data.

Who Watches the Watchmen? 

AI is currently self-governing and there are no regulations, leading to calls to hit the brakes. Italy has even banned ChatGPT altogether on the grounds that it contravenes GDPR. Additionally, Microsoft’s GitHub Copilot has been accused of software piracy on an unprecedented scale because these models learn by scraping public code repositories, many of which are copyright protected. The counterargument is that using this information for training constitutes fair use, but it remains to be seen whether the courts agree.

The problem from a commercial perspective is how companies can use ChatGPT without compromising their information. Once you have pasted data into the interface, it can be retained and used to train the model, meaning that if that data is intellectual property, it may effectively become publicly accessible. To resolve this, there will need to be some way of creating a protected version for proprietary use, perhaps hosted on-premises.

But there is also the risk of placing too much trust in its abilities. As the Google Bard gaffe showed, when it incorrectly stated in its own promotional video that the James Webb Space Telescope took the first photo of an exoplanet, AI isn’t always correct. We therefore need to be not just prompters but also editors, vetting the output of these systems.

From copyright to data protection to fake information, there are real issues to resolve when it comes to LLMs, but we can’t stop progress, and why would we want to? We live in exciting times in which humans are becoming more empowered by machines and, for the first time, vice versa. Where this will take us remains to be seen, but this technology will undoubtedly transform how we work and how we approach security.