Cyber criminals have embraced AI as a core component of campaigns, allowing even low-level hackers to conduct far more sophisticated and prolific attacks, PwC has warned.
A new report from the global consultancy firm’s incident response team reveals that the rise of AI-driven threats is the concern most frequently raised by clients.
However, PwC’s Annual Threat Dynamics 2026 report, published on March 25, also suggested that AI will act as an enabler for cyber defenders.
“AI supercharges both defenders and adversaries and is the No. 1 cyber investment priority for security leaders,” said PwC.
According to the report, AI has become a force multiplier for cyber threat actors, who have weaponized the technology to improve their capabilities.
This includes using AI to accelerate malware development and automate reconnaissance. Dark web LLMs have also emerged, helping cybercriminals generate convincing phishing lures and scale social engineering across languages and platforms.
Improved AI Capabilities Mean Improved Attacks
As AI capabilities improve and major AI companies publicly release new products, threat actors are quickly weaponizing these tools for malicious ends. This has particularly been the case with agentic AI.
PwC detailed how, following the release of ReaperAI, a proof-of-concept AI agent designed to function as a penetration tester, a China-based threat actor reportedly launched a hacking campaign against organizations using a tool with very similar capabilities.
As part of its pen testing capabilities, ReaperAI could autonomously conduct reconnaissance activities and execute exploits without human intervention.
“We assess continued AI adoption by adversaries will highly likely fuel a sustained increase in the volume and sophistication of threats originating from a much wider pool of threat actors,” warned PwC’s report.
“Organizations should anticipate malware that natively incorporates AI to evade detection and target high-value data, alongside a widening pool of less skilled threat actors leveraging AI to punch above their weight.”
However, PwC stressed that AI does not have to be an existential threat to organizations, and that AI tools can and should be used as a means of defending against cyber threats.
“AI also represents the single greatest opportunity for defenders to match the pace, enabling faster detection, automated containment, and intelligence-led decision-making at scale,” said PwC.
“Investing in AI-enhanced defense, embedding frameworks into threat modelling, and becoming post-quantum ready will be essential to keeping pace.”
