AI-Enabled Voice and Virtual Meeting Fraud Surges 1000%+

Fraudsters significantly ramped up their use of AI to enhance campaigns across voice and virtual meeting channels last year, boosting speed and volume, according to Pindrop.

The voice authentication and deepfake detection specialist said its new report, Inside the 2025 AI Fraud Spike, is based on its own data collected between January and December 2025.

The firm pointed to a 1210% increase in AI-enabled fraud during this time, versus a 195% surge in traditional fraud.

The reason for the growing popularity of deepfakes, voice bots and “AI-generated interactions” is simple: they are cheaper, faster, harder to detect and incredibly scalable, said Pindrop.

“We saw it first in contact centers, where synthetic voices and automated social engineering bypassed controls in seconds,” the report noted.

“Now, the same tactics are rippling across real-time interactions that rely on trust: from remote job interviews to financial transactions and beyond. For CISOs and CTOs, this isn’t just another trend – it’s a fundamental shift in how fraud operates and how trust breaks at enterprise scale.”

Read more on AI fraud: AI and Deepfake-Powered Fraud Skyrockets Amid Identity Fraud Stagnation.

Pindrop explained that voice fraud usually begins with automated bots calling enterprise or call center Interactive Voice Response (IVR) systems to perform reconnaissance.

That means mapping menu options, testing workflows and identifying which prompts trigger security checks.

“Later, those same bots – now smarter – come back armed with knowledge about processes, workflows, and weak points, setting up a much more effective fraud attempt,” the report explained.
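This reconnaissance traffic has a recognizable behavioral shape, which suggests one possible countermeasure. Below is a minimal detection sketch, assuming access to per-caller IVR event logs; the CallEvent schema, thresholds and heuristic are illustrative assumptions, not Pindrop's actual detection logic.

```python
# Hypothetical defender-side sketch: flag IVR callers whose behavior looks
# like automated reconnaissance (rapid, exhaustive menu mapping) rather than
# normal navigation. The CallEvent schema, thresholds, and heuristic are
# illustrative assumptions, not Pindrop's actual detection logic.
from dataclasses import dataclass

@dataclass
class CallEvent:
    caller_id: str
    menu_option: str           # DTMF digit or spoken intent selected
    seconds_since_prev: float  # gap since the caller's previous input

def looks_like_recon(events: list[CallEvent],
                     max_gap: float = 1.5,
                     min_unique_options: int = 6) -> bool:
    """Heuristic: bots mapping a menu hit many distinct options with
    machine-fast gaps between selections; humans rarely do either."""
    if not events:
        return False
    fast = sum(1 for e in events if e.seconds_since_prev < max_gap)
    unique = len({e.menu_option for e in events})
    # Mostly machine-speed inputs across an unusually wide slice of the menu.
    return fast / len(events) > 0.8 and unique >= min_unique_options
```

A production system would combine behavioral signals like this with voice-liveness and device analysis; the point is simply that reconnaissance traffic leaves a measurable fingerprint in IVR logs.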

In enterprise settings, AI is also used to deploy deepfakes of C-suite executives in virtual meetings, designed to trick victims into wiring funds to the fraudster.

As deepfakes become more convincing, with “baked-in” empathy, natural conversation and small talk, employees remain the weakest link in the battle against AI-enabled fraud, said Pindrop.

Retail and Healthcare Under Fire

The report claimed healthcare and retail are particularly exposed to this type of fraud. Bots apparently use intel collected from IVR probing to socially engineer live agents into enabling account takeover.

This gives fraudsters access to Health Savings Accounts (HSAs), Flexible Spending Accounts (FSAs) and other employer-funded savings accounts, the report claimed.

Bot attacks accounted for over half of all fraud attempts at one US healthcare provider and Pindrop customer.

In retail, AI-powered return fraud is a growing threat to the bottom line.

“Fraudsters deploy bots equipped with common scripts that initiate return requests across retail sites and mobile apps,” the report continued.

“The strategy is intentional: target low-dollar refunds that stay below review thresholds. Each refund is small. Thousands of them are not. What looks like background noise quickly becomes a material loss when bots run continuously.”
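One way to counter this pattern is to detect on aggregates rather than individual transactions. The minimal sketch below, assuming a simple refund log, sums sub-threshold refunds per account over a review window and surfaces accounts whose small refunds add up to a material total; the field names and dollar thresholds are hypothetical.

```python
# Hypothetical sketch: surface refund abuse that hides below per-transaction
# review thresholds by aggregating small refunds per account over a review
# window. Field names and dollar amounts are illustrative assumptions.
from collections import defaultdict

REVIEW_THRESHOLD = 50.00     # assumed: refunds under this skip manual review
WINDOW_TOTAL_LIMIT = 500.00  # assumed: aggregate limit per account per window

def flag_refund_abuse(refunds: list[dict]) -> set[str]:
    """Each refund is {'account': str, 'amount': float} within one window.
    Returns accounts whose sub-threshold refunds sum past the limit."""
    totals = defaultdict(float)
    for r in refunds:
        if r["amount"] < REVIEW_THRESHOLD:  # only the "background noise"
            totals[r["account"]] += r["amount"]
    return {acct for acct, total in totals.items()
            if total > WINDOW_TOTAL_LIMIT}
```

The design mirrors the report's point: each refund looks like noise in isolation, so detection has to operate on aggregates rather than on individual transactions.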

Pindrop said it detected a 56% month-on-month increase in “non-live” fraud in the sector in November, while non-AI fraud dropped by 69% over the same period.
