AI is powering a “fifth wave” in the evolution of cybercrime, offering attackers inexpensive, ready-made malicious tools that enable sophisticated attacks, according to Group-IB.
In its latest report, published on January 20, the Singapore-based cybersecurity firm divided the history of cybercrime into four phases, from the opportunistic malware and viruses of the 1990s and early 2000s to the “ecosystem and supply chain attacks” wave of the 2010s and 2020s.
Since 2022, the firm argued, cybercrime has entered a fifth wave, which it called “weaponized AI.”
This new era is marked by attackers’ rapid adoption of AI and generative AI (GenAI) tools that “turn human skills into scalable services” and make cybercrime “cheaper, faster and more scalable,” Dmitry Volkov, Group-IB’s CEO, said in the report’s foreword.
Black Market Deepfake Kits Fuel Cybercrime for as Little as $5
One of the most striking misuses of GenAI, Group-IB argued, is the creation of synthetic content impersonating real people.
This content can be used to lure unsuspecting people into executing tasks, or to bypass authentication processes and know your customer (KYC) systems in order to gain access to devices, steal money or steal data.
For instance, Group-IB analysts found “synthetic identity kits” offering AI video actors, cloned voices and even biometric datasets for as little as $5, and deepfake-as-a-service offerings with subscriptions starting at $10 per month.
Additionally, the analysts recorded a spike in discussions of such AI-powered tools for criminal purposes on dark web forums over the past three years, from an average of fewer than 50,000 messages per year between 2020 and 2022 to approximately 300,000 messages per year since 2023.
During the report’s launch event in London, Anton Ushakov, head of Group-IB’s cybercrime investigation unit, said these ready-made kits have become “a commodity” on dark web marketplaces.
“What is really interesting is that not only pre-recorded deepfakes are popular, but also cheap tools enabling live deepfake schemes,” he added.
“Of course, these will not convince 90% of people, but if it works in 5% to 10% of cases, it can be lucrative enough at this stage,” he noted.
Read more: World Economic Forum: Deepfake Face-Swapping Tools Are Creating Critical Security Risks
Phishing Kits Enter the Agentic AI Era
Another major criminal use of AI highlighted in the Group-IB report is phishing.
Phishing kits are now listed at prices ranging “from as little as a Netflix subscription to $200 per month, making them accessible and affordable to groups big and small,” said the report.
Ushakov’s team found that these new malicious AI capabilities are now used for more than simply helping attackers produce believable phishing emails.
“AI is not only changing how phishing is generated, handled, hosted and run, but the way it’s distributed,” Ushakov said.
He explained that, previously, criminals using phishing-as-a-service (PhaaS) kits would still need to configure everything themselves, including SMTP servers and victim lists, and run the campaigns.
“Now, with the help of AI, and especially the open-weight models that are accessible, criminals are building the tools to automate these tasks,” Ushakov said.
“They embed the models into tools that help to scale and automate phishing campaigns in terms of delivery. The models provide them with the list of victims and the sort of narrative they want to use for the lures,” he continued.
Group-IB found one service that “agentizes the phishing campaigns.” This tool uses AI agents to develop lures, send phishing emails to victims and return feedback to the criminals, allowing them to adapt the campaign over time.
“On the victim’s side, all the malicious emails feel personal and new ones keep being sent out by the phishing kit’s agent,” said Ushakov, who noted that the ‘agentized’ phishing kit appears to still be in a testing and development phase.
Dark LLMs Grow in Sophistication
Finally, Group-IB analysts also found that threat actors are moving beyond misusing mainstream chatbots and are creating proprietary “dark large language models” (LLMs) that are more stable and capable, and that carry no ethical restrictions.
From early experiments with rudimentary, open-access dark LLMs like WormGPT, these tools have evolved into custom-built, self-hosted AI models optimized for generating harmful content, including malware, scams and disinformation.
These models are often fine-tuned on scam language or on malicious code datasets.
The dark LLMs assist in various cybercriminal activities, including:
- Generating fraud and scam content for romance, investment and impersonation scams
- Crafting phishing kits, fake websites and social engineering scripts
- Supporting malware and exploit development, including code snippets and obfuscation
- Assisting initial access through vulnerability reconnaissance and exploit chains
The analysts identified at least three active vendors offering dark LLMs with subscriptions ranging from $30 to $200 per month, and a customer base exceeding 1,000 users.
One example, called Nytheon AI, is an unrestricted AI chatbot promoted on dark web forums as a fully offline, self-hosted, 80-billion-parameter hybrid LLM, accessible over Tor and blending open-source models such as DeepSeek-v3, Mistral and Llama v3 Vision, among others.
In April 2025, Group-IB investigations confirmed the sale of Nytheon AI on Telegram channels through a subscription-based model. Designed to provide uncensored chatbot responses, the tool is advertised for use cases including malware development, penetration testing, vulnerability research, fraud schemes and unfiltered information queries.
The cybersecurity firm validated Nytheon AI’s functionality, technical capabilities and lack of ethical restrictions.
Craig Jones, former Interpol director of cybercrime and independent strategic advisor for Group-IB, argued that, while “AI hasn’t created new motives for cybercriminals,” it has industrialized cybercrime by “dramatically increasing the speed, scale and sophistication with which those motives are pursued.”
“What once required skilled operators and time can now be bought, automated and scaled globally. That shift marks a new era, where speed, volume, and sophisticated impersonation fundamentally change how crime is committed and how hard it is to stop,” he concluded.
