FIRST CEO Calls for Global CVE Collaboration amid AI Vulnerability Tsunami


The cybersecurity community is facing a rapidly evolving software vulnerability landscape, with the mean time to exploit plummeting from weeks to mere hours.

As artificial intelligence fundamentally alters how security flaws are both discovered and weaponized, industry leaders are grappling with how to manage an unprecedented explosion of vulnerabilities.

Chris Gibson, CEO of global incident response alliance FIRST, believes the answer lies in close global cooperation rather than fragmented, regional efforts.

Pointing to the European Union Agency for Cybersecurity’s (ENISA) decision to integrate with CISA and MITRE as a Top-Level Root CNA, Gibson says it is crucial to maintain a unified, federated vulnerability database instead of siloing vital threat intelligence.

In conversation with Infosecurity following VulnCon26, the global conference focused on improving vulnerability management and disclosure, Gibson spoke about the disruption caused by frontier AI models from companies like Anthropic and OpenAI.

While these autonomous tools present a major challenge to traditional coordinated disclosure timeframes, he argued that the cybersecurity community must adapt by bringing these AI firms inside the tent, ideally integrating them as Common Vulnerabilities and Exposures (CVE) Numbering Authorities (CNAs) to help stabilize the ecosystem.

Infosecurity Magazine: FIRST hosts the annual VulnCon summit and the 2026 event has now finished. What were the main highlights of this edition?

Chris Gibson: To me, the one that really stood out was the announcement by ENISA that they’re going to be working with the US Cybersecurity and Infrastructure Security Agency (CISA) and MITRE on the CVE program.

One thing that had me worried last year was when ENISA announced their EU Vulnerability Database (EUVD). It felt like they would go off and do their own thing.

The fact that they're coming together now and ENISA is going to become a Top-Level Root CNA for the CVE program is really positive.

It demonstrates that the cybersecurity community is going to work together to fix the vulnerability explosion problem, and no one is going to go off and build their own system with their own data.

IM: How would you evaluate the magnitude of the ‘vulnerability explosion’?

CG: We've seen a steady increase in the number of vulnerabilities reported over the years. That's logical: there's more software out there, both older software still in use and new software being written.

More recently, of course, I've heard about the ‘tsunami’ coming towards us of all these vulnerabilities being found very quickly through AI, and questions about our ability to react to them. I looked at a zero-day vulnerability tracking website recently and saw that the mean time to exploit is now measured in hours. It's no longer the months, weeks or days it used to be. It's down to hours.

The ability of the bad guys to come in, find vulnerabilities and then exploit them within hours is more than concerning, and how we manage our way out of that is a real challenge.

Vulnerability management is not something many companies do properly. Most don't have enough staff; they have enough trouble just hiring information security people. Having a whole team working on vulnerabilities is a real challenge.

The fact that companies are developing ways and systems to automate vulnerability management, especially using AI, is a good thing. We need AI to fight AI. But then, as a company, you're handing your controls to a machine, and that's slightly concerning as well.

I also worry that AI fighting AI means the system can be gamed, and that we lose our understanding of how the AI on the defending side is working.

"I would be surprised if Anthropic and OpenAI weren't CVE Numbering Authorities by the end of 2026."

IM: Do you always need AI to fight AI?

CG: If we step back to the good old cyber hygiene that we banged on about for years – if we segmented networks properly, had decent passwords and patched our systems in a timely manner – exploits wouldn't be quite so bad. But we don't.

I would posit that very few companies have a good idea of what their information systems actually look like. There are old machines, edge devices, personal devices and more.

My worry is that, if AI becomes as ubiquitous as some people think it will, and as powerful as Anthropic’s Mythos announcement seems to claim, the sheer speed of being attacked is going to be so quick. Are humans going to be responsive enough to deal with that?

IM: What do you make of Anthropic’s announcement of Claude Mythos and OpenAI’s GPT-5.4-Cyber, two frontier models that promise to autonomously find and fix cybersecurity vulnerabilities at scale, but launched only to a limited user group?

CG: I guess that's them being responsible. Yet I think the genie is out of the bottle. At some point, such a model is going to leak out. Whenever humans have tried to keep things to a small, trusted community over time, that's where we have failed.

At some point, these models, or others with the same capabilities, are going to hit the market or be available for people to use. Are we not better off facing that now, rather than later?

IM: How should vulnerability disclosures be handled when considering traditional coordinated vulnerability disclosure models versus sharing information with a limited set of organizations?

CG: The old way of finding vulnerabilities, the responsible disclosure process, proved useful in the past for individual vulnerabilities, giving people X weeks to deal with a vulnerability you have found.

On the one hand, yes, I'd love for Anthropic and OpenAI to have done that. On the other hand, I don’t know how it would work for the scale and speed we’re talking about.

Also, at the end of the day, Anthropic and OpenAI are looking for market share. These companies are not in it for the good of their health, so to speak, so showing what they can do and how good they are at it is clearly a benefit.

Most of us use AI for free, because some of the models are free, but they've got to make their money somewhere, and this is presumably how they're going to do it.

IM: Do you think AI companies should be better integrated into the vulnerability disclosure ecosystem?

CG: I would say so. At FIRST, the organization hosting VulnCon, we believe in the community. We want to get people together and talk about things. I'd rather have AI companies inside that tent talking about it than outside, producing information that is going to make our lives challenging. Let's have them inside and talk about how we do that.

I would be surprised if Anthropic and OpenAI weren't CVE Numbering Authorities by the end of 2026. But I guess that's a decision they're going to have to make.

"We've got some absolute geniuses at VulnCon, but we need more of them."

IM: Is traditional vulnerability research “cooked” because of AI, as some people suggest?

CG: Clearly, it's another tool in the armory for finding vulnerabilities, and probably a game changer in many ways.

Will it, over time, calm down as we move forward? Possibly, but it's clearly going to change the way we work and the way we have to respond to vulnerabilities and exploits.

IM: Do we have the vulnerability disclosure infrastructure to deal with this new technology?

CG: When I look around the world, I clearly worry we're not ready for it. We see government programs facing funding challenges. We see the sheer complexity of analyzing vulnerabilities using a common language, and a shortage of good vulnerability researchers to fill those roles. We’ve got some absolute geniuses here, but we need more of them, and it's a very niche topic for most people.

As an organization, FIRST aims to represent incident responders, and they want as simple a landscape as possible. That’s why the C in CVE stands for ‘common.’ We need a common language, and end users need to deal with coordinated organizations.

I'd like to see more announcements like ENISA joining the CVE program as opposed to specific countries launching their own initiatives.

If the NIST National Vulnerability Database (NVD) and the ENISA EU Vulnerability Database (EUVD) stay as two separate tracks, not really talking to each other, that will be unhelpful, because vulnerability consumers will have to track multiple systems and multiple identifiers. No one wants that.

However, if they can work together so that we've got a federated global system while providing what their stakeholders need, then it could be a working system. We bring them all together at VulnCon to nudge them in the right direction.

IM: With both the CVE and NVD being US-funded programs, people fear a possible lack of funding. Are you worried about this?

CG: We're clearly concerned that there could be a cut in funding, but when I talk to CISA people here, they are adamant that it is not going to happen. They are absolutely categorical: the CVE program is well funded, and we will continue to fund it.

Nevertheless, I personally think that diversifying that funding would be a good thing, because it would take that uncertainty away – especially after last year, when a memo was made public suggesting the CVE program’s funding was about to end.

If such a CVE program shutdown were to happen – a worst-case scenario – I’m sure our industry would rebuild it in some way, because it cannot live without it. There would be a massive pull to reproduce it, as ENISA did in the EU. So, part of me thinks that even if it did crash, there would be a period of challenge, but we would rebuild it in some form.

However, I remember that at that point in 2025, we saw various initiatives spin up, and that started to fragment the ecosystem. It is not the way forward from our point of view.

Now, if the Europeans come and take part in the program, it could bring enough diversification that we can start putting things back together.
