Hackers Observed Using AI to Develop Zero-Day for the First Time


Cybercriminals have successfully used AI to identify and exploit a zero-day vulnerability for the first time, Google Threat Intelligence Group (GTIG) has warned.

Published on May 11, the GTIG AI Threat Tracker report said that “prominent” cybercrime threat actors partnered to plan a mass vulnerability exploitation operation.  

An AI model was likely used to identify a zero-day vulnerability and weaponize it to bypass two-factor authentication (2FA) protections on a popular open-source, web-based system administration tool.

GTIG worked with the system admin tool vendor to close the vulnerability and disrupt the campaign before the new zero-day could be exploited.

Google said this is the first evidence it has seen of a threat actor successfully using AI to support the discovery and weaponization of a zero-day vulnerability.

Neither Google's Gemini nor Anthropic's Claude models were used by the attacker, said the report.

Analysis of the code, which was implemented in Python, revealed hallmarks of AI generation. These included the highly structured use of educational docstrings and idiomatic Pythonic formatting, both characteristic of the training data used by large language models (LLMs).

The script also contained a hallucinated CVSS score, another indicator that it was developed by an AI rather than by a human.
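To illustrate what these hallmarks look like in practice, the snippet below is a purely hypothetical, benign sketch, not the attacker's code: it mimics the over-explanatory docstrings, textbook-Pythonic constructs and fabricated severity annotation that GTIG describes. The function name, log phrasing and CVE/CVSS annotation are all invented for illustration.

```python
# Hypothetical illustration of stylistic hallmarks GTIG associates with
# LLM-generated scripts. This is NOT the code from the campaign.

# Hallmark: a confidently stated but unverifiable (hallucinated)
# vulnerability reference -- the identifier and score below are invented:
# "Targets CVE-XXXX-XXXXX (CVSS 9.8, Critical)"


def count_failed_logins(log_lines):
    """Count failed login attempts in a collection of log lines.

    Args:
        log_lines: An iterable of log line strings to inspect.

    Returns:
        The number of lines containing the phrase 'login failed',
        matched case-insensitively.
    """
    # Hallmark: an "educational" docstring in full Google style, as if
    # explaining the code to a student.
    # Hallmark: textbook-idiomatic Python (a generator expression fed
    # to sum) of the kind that dominates LLM training data.
    return sum(1 for line in log_lines if "login failed" in line.lower())
```

None of these traits proves machine authorship on its own; GTIG's assessment rested on the combination of such stylistic signals with the fabricated CVSS score.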

While this campaign was disrupted before it was deployed, the discovery of this AI-developed zero-day is an indication of how rapidly the AI threat landscape is evolving.

“There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun. For every zero-day we can trace back to AI, there are probably many more out there,” said John Hultquist, chief analyst at GTIG.

AI as an Enabler for Hackers

The GTIG report detailed several examples of threat actors that have recently employed AI as part of their campaigns.

This included nation-state hacking and espionage groups. Google said that threat actors linked to the People's Republic of China (PRC) and the Democratic People's Republic of Korea (DPRK) have demonstrated "significant interest" in capitalizing on AI for vulnerability discovery.

Cybercriminal groups have also continued to deploy AI in their hacking campaigns, with threat actors using AI models to help develop malware. GTIG also noted that AI is being used to build operational support tools that are more difficult for antivirus software and other cybersecurity protections to detect.

While attackers are using AI for more sophisticated activities, including developing zero-days and obfuscating malware, the most common use of AI by threat actors, much as with regular users, is research and troubleshooting via LLMs.

By automating intelligence gathering and task support, cybercriminals free up time and resources to manage complex, multi-stage operations and run more effective campaigns.

“Threat actors are using AI to boost the speed, scale, and sophistication of their attacks. It enables them to test their operations, persist against targets, build better malware, and make many other improvements,” said Hultquist.

“State actors are taking advantage of this technology, but the criminal threat shouldn’t be underestimated, especially given their history of broad, aggressive attacks,” he added.
