Easy access to large language models (LLMs) and other AI tools has significantly lowered the barrier to entry for cybercriminals to conduct effective cyber-attacks rapidly and at scale, a new threat intelligence report by Cloudflare has warned.
The 2026 Cloudflare Threat Report draws on research and analysis by the company’s Cloudforce One threat research team and details how AI has become a “force multiplier” for cybercriminals, lowering the effort required to carry out campaigns, while also making those campaigns more impactful.
“An actor who previously lacked the skills to craft a convincing phishing email or write custom malware can now leverage an LLM to generate them rapidly and at scale, significantly lowering the barrier to entry for highly effective operations,” said Cloudflare.
According to the report, LLMs and AI have been adopted by a wide range of threat actors, including state-sponsored hacking groups, financially motivated cybercriminal gangs and hacktivist collectives.
Malicious hackers are exploiting these tools in several ways, including using LLMs to write more convincing phishing emails, particularly when attackers are not writing in their native language.
Attackers are also using AI tools to help write malware and run campaigns, lowering the technical barrier to entry for launching attacks. For example, according to the report, attackers are using LLMs to map networks in real time.
“Cloudforce One tracked a threat actor who leveraged AI to help identify the location of high-value data. This allowed the actor to compromise hundreds of corporate tenants… in one of the most impactful supply chain attacks seen,” said researchers.
AI Deepfakes: The New Insider Threat
Corporate identities have become a prime focus of cyber-attacks, with user accounts highly coveted by attackers seeking to leverage access to cloud architecture and covertly conduct campaigns while remaining under the radar.
But sometimes, a stolen account identity isn't enough. Researchers warn that AI-generated deepfakes and fraudulent IDs are being used to bypass hiring filters and embed threat actors directly inside target organizations as employees. North Korea in particular is known to exploit this attack vector.
“This infiltration turns the remote workforce into an attack vector, placing malicious insiders within the organization’s most trusted administrative and financial systems,” said the report.
Cloudflare has warned that the proliferation of AI-based tools lowering the barrier to entry for technically sophisticated campaigns amounts to the "total industrialization of cyber threats" – and that organizations must be prepared for the rapid evolution of cyber-attacks.
“Threat actors are constantly changing tactics, finding new vulnerabilities to exploit and ways to overwhelm their victims. To avoid being caught off guard, organizations must shift from a reactive posture to one fueled by real-time actionable intelligence,” said Blake Darché, head of threat intelligence, Cloudforce One at Cloudflare.
