World Economic Forum: Deepfake Face-Swapping Tools Are Creating Critical Security Risks

The rapid advancement of deepfakes is becoming a major challenge for sustaining trust in digital identity systems, the World Economic Forum (WEF) has warned.

Deepfake-generating technologies, especially face-swapping tools, are enabling malicious actors to bypass know-your-customer (KYC) and remote verification processes, creating financial, operational and systemic risks for any institution that relies on digital trust.

A new report for the World Economic Forum’s Cybercrime Atlas, published on January 8, noted that this advancement coincided with other worrying trends, such as threat actors increasingly targeting financial services and cryptocurrency – sectors particularly prone to KYC bypass attacks.

“Criminals are now combining AI-generated or stolen identity documents, advanced face swaps and camera injection to bypass live verification,” reads the report. 

A typical KYC bypass attack using face-swapping. Source: “Unmasking Cybercrime: Strengthening Digital Identity Verification against Deepfakes,” Cybercrime Atlas, World Economic Forum, January 2026

Current Commercial Face-Swapping Tools Bypass KYC Protections 

The team of researchers, including Natalia Umansky and Seán Doyle, project specialist and lead of the Cybercrime Atlas respectively, as well as research leads at Banco Santander and Group-IB, analyzed 17 face-swapping tools and eight camera injection tools to assess whether they effectively enable KYC bypass and to characterize the current deepfake landscape.

KYC protections are used across many industries to authenticate the identity of new customers and assess potential risks associated with them. Typical KYC processes combine document verification – the collection and automated validation of government-issued identity documents (passport, ID card, driver’s licence) – and biometric verification – comparison of a live biometric sample (e.g. facial image or short video) against the identity document. 
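To make that flow concrete, the sketch below shows how such a pipeline might combine the two checks into a single decision. It is a minimal illustration in Python; the data structures, threshold values and routing logic are assumptions made for the example, not part of the WEF report or any vendor’s product.

```python
from dataclasses import dataclass

# Illustrative thresholds; real systems tune these to their own risk appetite.
DOC_CONFIDENCE_MIN = 0.90   # minimum document-authenticity score
FACE_MATCH_MIN = 0.85       # minimum face-similarity score
LIVENESS_MIN = 0.80         # minimum liveness score for the live sample

@dataclass
class DocumentCheck:
    authenticity_score: float   # output of automated document validation
    fields_consistent: bool     # e.g. MRZ/chip data matches printed fields

@dataclass
class BiometricCheck:
    face_match_score: float     # similarity between live sample and ID photo
    liveness_score: float       # confidence the sample comes from a live person

def kyc_decision(doc: DocumentCheck, bio: BiometricCheck) -> str:
    """Combine document and biometric verification into a single outcome."""
    if doc.authenticity_score < DOC_CONFIDENCE_MIN or not doc.fields_consistent:
        return "reject: document verification failed"
    if bio.liveness_score < LIVENESS_MIN:
        return "review: possible presentation or injection attack"
    if bio.face_match_score < FACE_MATCH_MIN:
        return "reject: biometric mismatch"
    return "approve"

# Example: a borderline liveness score is routed to manual review.
print(kyc_decision(
    DocumentCheck(authenticity_score=0.97, fields_consistent=True),
    BiometricCheck(face_match_score=0.91, liveness_score=0.62),
))
```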

The tools’ identities, vendors and step-by-step exploitation techniques have been redacted from the report to prevent potential misuse. Most of the tools were intended for creative or entertainment use, and none explicitly mentioned anti-KYC functionality in their publicly available documentation or websites.

However, the researchers concluded that some tools do include capabilities that can defeat traditional digital KYC protections.

“Overall, the greatest KYC risk was found where low-latency, high-fidelity, real-time swaps were deliverable directly into a verification pipeline,” the researchers wrote. 

Additionally, the analysis showed that even moderate-quality face-swapping models, when integrated with camera injection techniques, can deceive certain biometric systems under specific environmental or technical conditions.

“Most attacks, however, still exhibit detectable inconsistencies, particularly in temporal synchronization, lighting and compression artefacts. These weaknesses provide actionable focus points for advanced detection models and forensic countermeasures,” the researchers added. 
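As a rough illustration of one such focus point, the sketch below flags abrupt frame-to-frame changes in a sequence of aligned face crops, the kind of temporal inconsistency a real-time swap can leave behind. It is a simplified heuristic on assumed inputs (pre-cropped grayscale frames), not a detection method described in the report.

```python
import numpy as np

def temporal_inconsistency_flags(face_frames: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
    """Flag transitions whose change relative to the previous frame is anomalously large.

    face_frames: array of shape (num_frames, height, width) holding grayscale face
                 crops that have already been detected and aligned upstream.
    Returns a boolean array of shape (num_frames - 1,) marking suspicious transitions.
    """
    frames = face_frames.astype(np.float32)
    # Mean absolute difference between consecutive frames.
    diffs = np.abs(frames[1:] - frames[:-1]).mean(axis=(1, 2))
    # Z-score each transition against the clip's own statistics; real-time swaps
    # can produce isolated spikes when the generator lags or re-locks onto the face.
    mu, sigma = diffs.mean(), diffs.std() + 1e-8
    return (diffs - mu) / sigma > z_thresh

# Example with synthetic data: an abrupt appearance change at frame 50 gets flagged.
rng = np.random.default_rng(0)
clip = rng.normal(128, 2, size=(100, 64, 64))
clip[50:] += 40  # simulate a sudden jump in the face region
print(np.where(temporal_inconsistency_flags(clip))[0])  # -> [49]
```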

Read more: AI and Deepfake-Powered Fraud Skyrockets Amid Identity Fraud Stagnation 

Forecasting Future Deepfake-Powered Threats to KYC Protections 

Beyond their technical analysis of deepfake tools, the researchers forecast five trends and trajectories likely to shape the domain over the next year:

  • Democratization of AI tools lowering entry barriers and increasing attack complexity 

  • Persistence of finance and cryptocurrency as prime targets, with expansion into other KYC-dependent sectors 

  • Rising fidelity of face-swap technology enhancing realism and undermining verification 

  • Persistence of presentation attacks in the near term, with injection attacks escalating as active liveness adoption grows 

  • Fragmented regulation constraining defences in the short term, but regulatory convergence likely improving resilience in the medium term 

The WEF report also outlined 27 recommendations for KYC solution providers, such as liveness and anti-spoofing vendors, for fraud teams within organizations relying on KYC protections (e.g. risk engines, monitoring units), and for national and international institutions, aimed at mitigating the growing threat of AI- and deepfake-enabled KYC bypass attacks.

“The study also reveals that the defensive landscape must evolve in tandem with GenAI advancements. Detection models must not only recognize known patterns but anticipate future ones through continual learning, feedback integration and cross-platform signal correlation,” the researchers noted. 

“As adversaries harness open-source AI models and low-cost hardware, the barriers to executing real-time identity spoofing will continue to decline, demanding equally agile defences.” 

The WEF’s Cybercrime Atlas report, titled “Unmasking Cybercrime: Strengthening Digital Identity Verification against Deepfakes,” was produced in collaboration with Lemon, Mastercard and its subsidiary Recorded Future, SpyCloud and Trend Micro.

Read now: Rebuilding Digital Trust in the Age of Deepfakes 
