Fraudsters are adopting AI technology in ever-growing numbers to commit new account fraud (NAF) and circumvent even biometric-based checks, according to a new report from Entrust.
The security vendor analyzed data from over one billion identity verifications across more than 30 sectors and 195 countries between September 2024 and September 2025 to compile its 2026 Identity Fraud Report.
It revealed that, while physical counterfeits accounted for almost half (47%) of document fraud attempts, digital forgeries now comprise over a third (35%). The rise in digital forgeries has been driven by “the accessibility and scalability of modern editing tools” and generative AI (GenAI), which enables the creation of “hyper-realistic replicas” of identity documents, it said.
Fraudsters typically use these techniques to open new accounts.
“What once required specialized software and design skills can now be achieved with an open-source model and a few prompts,” the report claimed.
Scammers are also turning to AI-powered deepfakes to help them open new online accounts.
Entrust claimed deepfakes now account for a fifth of biometric fraud attempts. They’re especially prevalent in financial services, particularly for crypto (60%), digital-first banks (22%) and payments & merchants (13%).
Deepfake methods include:
- Synthetic identities: AI-generated faces that don’t correspond to real people
- Face swaps: Replacing one person’s face with another in a recorded or live video
- Animated selfies: Taking a static photo and using AI to add movement
Injection Attacks on the Rise
The report warned that deepfakes are most likely to be used in injection attacks, where fake images/videos are fed straight into an identity verification system, bypassing the camera and live capture process.
The frequency of such attacks has risen by 40% annually, Entrust claimed.
Virtual camera injections are the most common method, often paired with device emulation techniques to trick the verification software into treating the injected feed as a legitimate live capture from a real user’s device.
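For illustration only (this sketch is not from the Entrust report), one coarse signal a verification service might combine with others is checking the reported capture-device name against a list of known virtual-camera software. The device names, function and list below are hypothetical examples, not a real product’s detection logic:

```python
# Hypothetical sketch: flag capture-device names that match known
# virtual-camera software. Real systems would combine many signals
# (hardware attestation, liveness challenges, stream forensics) rather
# than rely on a name check, which is trivially spoofable.
KNOWN_VIRTUAL_CAMERAS = [
    "obs virtual camera",
    "manycam",
    "xsplit vcam",
    "snap camera",
]

def looks_like_virtual_camera(device_name: str) -> bool:
    """Return True if the device name matches known virtual-camera software."""
    name = device_name.lower()
    return any(vc in name for vc in KNOWN_VIRTUAL_CAMERAS)
```

For example, `looks_like_virtual_camera("OBS Virtual Camera")` would flag the device, while a typical built-in webcam name would pass; this is why injection attackers also emulate device metadata, as the report notes.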
“As detection improves, fraud rings evolve, becoming faster, more organized and commercially driven,” said Simon Horswell, senior fraud specialist manager at Entrust.
“Generative AI and shared tactics fuel volumes and sophistication, targeting people, credentials and systems. Identity is now the front line, and protecting it with trusted, verified identity across the customer lifecycle is essential to staying ahead of adaptive threats.”
