Low-Skilled Cybercriminals Use AI to Perform "Vibe Extortion" Attacks

Unsophisticated cyber threat actors have started delegating key steps of extortion campaigns to large language model (LLM)-powered AI assistants.

In a report published on February 17, Unit 42, Palo Alto Networks’ research team, shared findings about a low-skilled actor who used an LLM to script a professional extortion strategy, complete with deadlines and pressure tactics.

The researchers have dubbed this technique “vibe extortion.”

In one incident investigated by Unit 42, the cybercriminal recorded a threat video from their bed while visibly intoxicated, reading the AI-generated script word-for-word from a screen.

While the threat lacked technical depth and seriousness, Unit 42 researchers argued that the LLM “supplied the coherence” and could open the door for low-skilled actors to use AI in more serious ways.

“AI didn’t make the attacker smarter; it just made them look professional enough to be dangerous,” they added.

AI, A “Force Multiplier” For Cybercriminals

This case was just one of many examples Unit 42 researchers identified of cyber threat actors using AI in novel ways to achieve their cyber extortion goals.

In the report, the researchers argued that AI, and especially generative AI (GenAI), has now become a “force multiplier for attackers.”

“In 2025, threat actors moved from experimentation to routine operational use. AI is not an attacker’s ‘easy button,’ but it is a massive friction reducer,” they wrote.

Unit 42’s observations showed that the cybercrime ecosystem is well beyond the “phishing with better grammar” phase. Attackers are now using GenAI in novel ways to scale and speed up the attack lifecycle, iterate more frequently, operate with fewer human constraints and lower the barrier to entry for cybercriminals.

These include:

  • Scanning for vulnerabilities faster to enable rapid exploitation: Unit 42 researchers found that attackers start scanning for newly discovered vulnerabilities within 15 minutes of a CVE being announced, with some exploitation attempts beginning before many security teams have even finished reading the vulnerability advisory
  • Parallelizing targeting: launching reconnaissance and initial access attempts across hundreds of targets at once
  • Delegating and automating key ransomware tasks, such as script generation, templating and extortion
  • Crafting hyper-personalized social engineering (e.g. automating open-source intelligence collection, including professional and organizational context to craft lures that match the target’s role and relationships)
  • Creating synthetic identities: threat actors like Scattered Spider (tracked by Palo Alto as Muddled Libra) and North Korean IT workers increasingly use deepfake techniques to steal credentials and pass remote hiring workflows
  • Developing malware: in the Shai-Hulud campaign, Unit 42 assessed that attackers used an LLM to generate malicious scripts
  • Turning an AI platform into a weapon: threat actors use valid credentials to misuse enterprise AI platforms. For example, recent Unit 42 research on Google Vertex AI demonstrated how attackers could misuse custom job permissions to escalate privileges and use a malicious model as a Trojan horse to exfiltrate proprietary data

Speaking to Infosecurity, Chris George, managing director at Unit 42, said he is especially impressed by how AI can help scale and speed up reconnaissance.

“Now that threat actors are fully using AI to fix phishing emails’ grammar and make them more compelling, throwing product names or system names in the mix that were collected through reconnaissance and will sound familiar and specific to the victim adds a level of realism that makes phishing more efficient,” he explained.

“We even use AI for reconnaissance internally, within Unit 42, to support our assessments,” he said.

Haider Pasha, VP and CSO for EMEA at Palo Alto Networks, told Infosecurity that he is particularly concerned by how AI has helped shrink the time to infiltrate networks and exfiltrate data.

“What used to take on average three to four weeks has now dropped down, in certain cases, to under 25 minutes. This is a record time that we didn't anticipate and that would have been impossible without AI,” he added.

Recommendations to Mitigate AI Threats and Threats to AI

In their report, Unit 42 researchers provided recommendations to mitigate AI threats in three domains: countering the AI-accelerated attack speed, defending against improved tradecraft and protecting the AI attack surface.

Recommendations for countering the AI-accelerated attack speed include:

  • Automated external patching: Mandate automated patching for critical CVEs on internet-facing assets to close the 24-hour exploitation window (a minimal sketch of the detection side follows this list)
  • Autonomous containment: Deploy AI-driven response to drive down mean time to detect/respond (MTTD/MTTR) and isolate threats before they can automate lateral movement
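
To make the first recommendation concrete, here is a minimal Python sketch of the detection side of automated external patching: it polls the public NVD API for critical CVEs published in the last hour and flags any whose description mentions software from an internet-facing asset inventory. The INTERNET_FACING keyword list and the patch hand-off are illustrative assumptions, not tooling described in the report.

```python
# Minimal sketch: poll NVD for critical CVEs published in the last hour and
# flag any that mention software from a (hypothetical) internet-facing inventory.
import datetime
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
INTERNET_FACING = ["apache", "openssl", "fortinet"]  # hypothetical asset keywords


def recent_critical_cves(window_minutes: int = 60) -> list:
    """Fetch critical-severity CVEs published within the given window."""
    end = datetime.datetime.now(datetime.timezone.utc)
    start = end - datetime.timedelta(minutes=window_minutes)
    params = {
        "cvssV3Severity": "CRITICAL",
        # NVD expects ISO-8601 timestamps; UTC is assumed when no offset is given
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    # Note: unauthenticated NVD requests are rate-limited
    resp = requests.get(NVD_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])


for item in recent_critical_cves():
    cve = item["cve"]
    # The first description entry is typically the English text
    summary = cve["descriptions"][0]["value"].lower()
    if any(keyword in summary for keyword in INTERNET_FACING):
        # A real pipeline would enqueue an automated patch job here;
        # this sketch just prints the candidate.
        print(f"PATCH CANDIDATE: {cve['id']}")
```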

Recommendations for defending against improved tradecraft include:

  • Behavioral email security: Transition from signature-based filters to engines that identify anomalies in communication patterns (illustrated in the sketch after this list)
  • Intent-based awareness: Move beyond simply training employees to spot typos and shift to out-of-band (OOB) verification for all sensitive requests (e.g., wire transfers, credential resets or remote hiring)
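
As a rough illustration of the behavioral approach, the Python sketch below flags a message when a first-seen sender makes a sensitive request. The sender history and phrase list are illustrative assumptions; a production engine would model far richer communication features than this single rule.

```python
# Minimal behavioral check, not a production filter: flag messages from
# first-seen senders that contain sensitive-request phrases.
SENSITIVE_PHRASES = ["wire transfer", "reset my credentials", "gift card"]


def flag_message(sender: str, body: str, seen_senders: set[str]) -> bool:
    """Return True when a previously unseen sender makes a sensitive request."""
    first_contact = sender not in seen_senders
    sensitive = any(phrase in body.lower() for phrase in SENSITIVE_PHRASES)
    seen_senders.add(sender)  # update the baseline for future messages
    return first_contact and sensitive


history: set[str] = {"colleague@example.com"}  # hypothetical sender baseline
print(flag_message("new-vendor@example.net",
                   "Please process this wire transfer today", history))  # True
```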

Recommendations for protecting the AI attack surface include:

  • Monitoring model telemetry: Correlate unusual AI API calls or scripts sourced from model outputs with known evasion techniques
  • Improving prompt visibility: Alert on sensitive queries to internal LLMs (e.g. ‘find all passwords’) and enforce strict permission boundaries for tokens and service accounts (see the sketch after this list)
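
The prompt-visibility recommendation can be prototyped as simple pattern rules over logged prompts. The Python sketch below assumes prompts to an internal LLM are already captured in an audit log; the audit_prompt helper and its patterns are illustrative assumptions, not a product feature from the report.

```python
# Minimal sketch of prompt monitoring: match logged prompts against simple
# sensitive-query patterns. The ruleset is illustrative, not exhaustive.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\bfind all passwords\b", re.IGNORECASE),
    re.compile(r"\b(api[_ ]?key|secret|credential)s?\b", re.IGNORECASE),
]


def audit_prompt(user: str, prompt: str) -> None:
    """Emit an alert line when a logged prompt matches a sensitive pattern."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            print(f"ALERT user={user} pattern={pattern.pattern!r}")
            return


audit_prompt("svc-report", "find all passwords in the finance share")  # fires an alert
```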
