Do We Need to Have a Security Conversation About GPT-3?

There has been plenty of marketing chatter about AI being the future of cybersecurity, much of which involves talking up the benefits that machine learning offers in a defensive context. But what about the potential for evolving AI cyber-threats? Sure, information security professionals have plenty of current problems to think about, what with ransomware, supply chain attacks and endless vulnerabilities to deal with.

On the other hand, theoretical dangers from as yet unproven technologies are hardly a top priority, so maybe it’s not surprising that the chatter isn’t loud. But talk there is, and talk there needs to be. For this article, Infosecurity has been tapping into the security and privacy conversation surrounding one area of AI that is already setting a marker in the tech world: Generative Pre-trained Transformer 3, or GPT-3 for short. But what, exactly, is GPT-3, what are those security considerations and how should the infosec community be reacting?

What is GPT-3?

GPT-3 is the third generation of an unsupervised language model developed by OpenAI, a technology that has only recently passed its first birthday. Yet, as Professor Lisa Short, chief research officer at the Global Foundation for Cyber Studies and Research (Washington DC), tells Infosecurity, GPT-3 would be considered for entry to the Prometheus Society as a genius if it were human. “When born, it initially had some 175 billion parameters scraped from the pervasive digital world we live in embedded into its psyche,” Short explains. “And given that every two days ‘it’ has absorbed an extra five exabytes of data – or the equivalent of the entire written works of humanity – suffice to say it is a powerfully artificially intelligent phenomenon.”

As an unsupervised language model, GPT-3 uses its neural network to produce human-like text responses to the text a user enters. “In other words, if a user asks GPT-3 a question, it provides an appropriate response,” Boris Cipot, a senior security engineer at Synopsys, says. “It’s still not even close to the capabilities of the human brain,” Cipot continues, but “GPT-3 is already capable of offering responses that are so convincing that they’re able to trick humans.”

"GPT-3 is already capable of offering responses that are so convincing that they're able to trick humans"

GPT-3 offers the potential for what Short describes as extraordinary benefits, such as being used as a diagnostic and delivery tool for health knowledge, an enabler for people with disabilities or even ‘just’ as a Babel Fish tool to accurately and quickly translate complex texts between languages. The opportunities to do good are numerous, but Short warns that it also introduces the opportunity for “profound increases in the threat landscape for nefarious use and malicious outcomes.” As Jennifer Fernick, the global head of research at NCC Group, points out, “GPT-3 has been shown to not only be able to generate novel images and prose when given instructions in plain English, but has also been demonstrated to generate code in a variety of programming languages. The security implications of this are wildly understudied.”

What Are the Security Considerations Around GPT-3?

Etay Maor, an adjunct professor at Boston College and senior director of security strategy at Cato Networks, gets straight to the point in telling Infosecurity that he thinks this is an area that is currently not discussed enough. “Security practitioners have a lot on their hands, so ‘theoretical’ threats like AI-based attacks utilizing GPT are left untouched except perhaps by academia or futurists,” he says.

We all know how quickly technology moves and how quickly both cyber-criminals and nation-states can weaponize new technology. “Five years ago, we played around with face swapping; today we have deepfakes,” Maor says, adding, “which is a scary thought if you think of what can happen when you combine a high-quality deepfake with a GPT-3 engine!” Maor is referring to the potential for business email compromise attacks on steroids, where automated AI-driven bots might be able to better fool the recipient into thinking they are conversing with the actual CEO, for example. Indeed, when you consider just how many cybersecurity breaches involve a human error or the tricking of someone into doing something they shouldn’t, this becomes a scary proposition.

“Internal information security incidents have grown 47% in the past year, with 85% originating directly from negligence or social engineering fraud,” according to Short. “GPT-3 has been trialed by comparing completely fictitious news it generated against material specialized journalists had written,” she continues, “and more than 52% of people could not distinguish the difference.”

GPT-3 can be used as a source code generator to help programmers build a system similar to 'autocomplete' in other areas but this time with project code

Then there’s the potential to exploit known security vulnerabilities by “writing scripts to manipulate them,” which Cipot suggests isn’t out of the realm of possibility either. We already know that GPT-3 can be used as a source code generator, giving programmers something akin to ‘autocomplete’ but for project code – GitHub’s Copilot tool does just that. Programming languages are, after all, languages, are they not? As GPT-3 becomes more proficient with such languages, Cipot notes, “writing malware could very well become an AI-guided task.”
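
As a rough illustration of what ‘autocomplete’ for project code can look like, the same completion-style API can be handed a partial function and asked to finish it. This is again only a sketch under the assumptions noted earlier; Copilot itself runs on OpenAI’s Codex models behind GitHub’s own interface rather than through calls like this.

```python
# Illustrative only: code completion framed as a text-completion prompt.
# Uses the same assumed `openai` client and API key as the earlier sketch.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

partial_code = (
    "def is_valid_ipv4(address: str) -> bool:\n"
    '    """Return True if `address` is a well-formed IPv4 address."""\n'
)

response = openai.Completion.create(
    engine="davinci",      # placeholder engine name; code-tuned models exist but vary by access
    prompt=partial_code,   # the model continues the function body from this stub
    max_tokens=120,        # room for a short function body
    temperature=0.2,       # low temperature keeps completions close to conventional code
)

print(partial_code + response.choices[0].text)
```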

The issue of exploitation is something that Fernick also flags in our conversation. “In addition to all of the typical security risks that may be present in a machine learning system, we must pay attention to how large language models such as GPT-3, in particular, can be potentially weaponized for offensive use to generate exploit code against arbitrary systems,” she says.

“Sufficiently large language models’ ability to be both generic and generative,” Fernick points out, “indicates to me that there is substantial risk that we do not yet fully understand.” This has to be of concern, given that such models can be trained on everything from Stack Overflow Q&As to Wikipedia articles on vulnerability classes and code repositories showing the changes between vulnerable and patched code.

All that’s before we turn our attention to how a GPT-3 program could become a digital influencer, including the ability to radicalize. “The danger is that it is not human, doesn’t choose its data to learn from and has no ability to discern or think beyond what it knows,” Short argues. “Humans can seek out new learning and solutions and engage in the unknown frontier of innovation.” It doesn’t take too much of a speculative leap to land upon the notion that GPT-3 could make it “increasingly difficult to ascertain actual trusted sources of information,” Short says. In times of conflict, this could ultimately mean that “cyber diplomacy would require far greater analysis than has ever previously been required to decipher false information.”

"The danger is that it is not human, doesn't choose its data to learn from and has no ability to discern or think beyond what it knows"

Then, of course, there’s privacy. “Data privacy concerns could arise as organizations will have to store the training data for the language model to use,” Kevin Curran, professor of cybersecurity at Ulster University, tells Infosecurity. “There is some worry too, as the actual platform is a black box. If company X uploaded a corpus, for instance, of law cases where individuals are named, would it be possible that those names appear in ‘later searches’ or results in an unforeseen manner? Data leaked can never be undone.”

How Should the Cybersecurity Industry Be Reacting?

According to Short, the digital security industry is primarily focused on systems and networks and the ‘attacker – responder’ pattern of thinking. “Strategic planning in the design of systems will be difficult, while levels of knowledge of emergent technology by leaders, boards and decision-makers sit at a staggeringly low level,” she says. As for how to counter potential risks from GPT-3, like most areas of security, Short thinks we should start with people and education. “There is an inadequacy of high-quality focus or prioritization on this,” she warns, continuing, “there’s an almost invidious approach to scaremongering about blockchain and perceived financial loss, yet the same doctrine does not apply to a technology which has the power to socially engineer and misinform thinking and fundamentally erode digital security and trust.”

Boris Cipot, meanwhile, considers GPT-3 threats to be in the same category as deepfakes. “If we start thinking about countermeasures for potential threats once they begin happening,” he explains, “then we’re far too late.” Instead, Cipot advises that security researchers need to start investigating the misuse potential of GPT-3 as early as possible in order to build effective defense mechanisms against it. Yet, there are significant barriers to entering this field of research, as Jennifer Fernick points out. “GPT-3 and others are largely available only in a SaaS API-based model,” she says, which “results in both ethical and legal limitations on the types of offensive security research that researchers may do.” What’s more, Fernick concludes, “it’s hard to even craft our own versions of these systems to test for ourselves, given that the cost of training models of this size can be millions of dollars, not to mention the massive technical challenge of actually developing anything of remotely similar capability to GPT-3 and its peers.” The worrying thing, therefore, is that security researchers will fall further and further behind as such systems’ capabilities continue to grow.


Infosecurity reached out to OpenAI for a statement but had not heard back at the time of writing.
