OpenAI Unveils GPT-5.4-Cyber for Improving Cyber Defense With AI


OpenAI has launched a new large language model (LLM) focused on cybersecurity use cases and expanded its Trusted Access for Cyber (TAC) program, as the AI company behind ChatGPT looks to enhance how its models can be deployed for cyber defense.

In a blog post announcing the expanded TAC program, published April 14, OpenAI revealed GPT‑5.4‑Cyber, a variant of GPT‑5.4 which has been trained to be “cyber-permissive” and “fine-tuned for cybersecurity use cases.”

Initially revealed in February, the OpenAI Trusted Access for Cyber program was designed to partner with a limited set of organizations and automate identity verification, helping to reduce the friction of safeguards on cybersecurity-related tasks.

This has since been followed by Anthropic’s launch of Claude Mythos Preview and Project Glasswing, an initiative designed to discover and fix cybersecurity vulnerabilities in software with the aid of LLMs.

Now, OpenAI has opted to publicly announce the expansion of its own program, following what the company described as “many months of iterative improvement.”

The company said that it has chosen a staggered release for GPT‑5.4‑Cyber so that it can “learn the most by putting these systems into the world carefully” to help understand the potential benefits and risks.


The expansion of TAC sees the introduction of additional tiers to the program, with the highest tiers reserved exclusively for “users willing to work with OpenAI to authenticate themselves as cybersecurity defenders.”

New Capabilities for Cyber Defenders

In return, users will gain access to a frontier model: “This is a version of GPT‑5.4 which lowers the refusal boundary for legitimate cybersecurity work and enables new capabilities for advanced defensive workflows.”

While the expanded tools are currently only available to vetted security vendors, organizations and researchers, OpenAI said it wants to “make these tools as widely available as possible while preventing misuse.”

That is why the company has announced a requirement for stronger verification processes to ensure that the cyber defense capabilities of the model can’t be abused.

“Cyber capabilities are inherently dual use, so risk isn’t defined by the model alone,” the company said, in reference to how malicious cyber-attackers have also looked for ways to enhance their capabilities with AI.

The new model is also a reaction to what OpenAI described as “steady improvements in agentic coding” and the “direct implications for cybersecurity” these have.

The company has also called for software development itself to become more secure and believes GPT‑5.4‑Cyber and TAC can help improve this.

“The strongest ecosystem is one that continuously identifies, validates and fixes security issues as software is written,” said the blog post.

“By integrating advanced coding models and agentic capabilities into developer workflows, we can give developers immediate, actionable feedback while they are building, shifting security from episodic audits and static bug inventories to ongoing, tangible risk reduction.”

Image credit: Samuel Boivin / Shutterstock.com
