HowTo: Challenge Deepfake Fraud

For many, deepfakes are a faraway threat that keeps headline writers busy but poses little immediate risk to businesses. In reality, the technology is already conning companies out of millions. And as it becomes cheaper and more accessible, budding fraudsters will increasingly tap AI tools to hijack existing accounts and open new ones.

What does this mean today? Banks and other businesses on the front line against deepfakes must start planning now to avoid a surge in fraud losses. To do so, they would be well advised to look beyond biometrics to tools that analyze user behavior over time.

A New Wave of Fraud

Deepfakes have been with us for years. But the deep learning mimicry that lets users impersonate other individuals through fabricated audio and video is becoming increasingly convincing. As with any new technology trend, the criminal community is an early adopter. A couple of years ago, a UAE bank manager was tricked by deepfake audio impersonating the director of a client business, and ended up wiring a $35m transfer of the client's funds to scammers.

Eye-catching stories like this grab much of the attention. But what businesses should really fear is the democratization of deepfakes to the point where even those with limited technical know-how or resources can circumvent ordinary customers' account security. Years ago, researchers created AI-generated "skeleton keys" to fool fingerprint scanners. More recently, they have shown how banks' voice identification systems can be tricked with free or low-cost deepfake audio tools available online. It is only a matter of time before facial recognition follows; isolated cases have already shown it is technically possible.

This could open the door to a new wave of fraud. Scammers may use deepfake voice and video to impersonate legitimate customers, hijacking their accounts or opening new ones to run up debt or launder money. They could also create synthetic images and voiceprints for new-account fraud, or even impersonate recently deceased individuals to collect welfare checks and other funds.

We are even reaching a point where fraudsters can combine deepfakes with social engineering to lend extra legitimacy to "hey mom" text scams that impersonate loved ones, or to romance fraud built on synthetic identities.


Getting Smarter About Data Analysis

Two trends are working in the fraudsters’ favor. First, as mentioned, the technology is increasingly affordable. And second, the training data needed to create fake voices and videos of regular consumers is growing daily, thanks to the rising volume of clips posted to social media. Organizations need to rethink their fraud and risk strategy.

What should this entail? First, don't treat simple voice and video authentication measures as a gold standard for identity verification. Organizations should urgently review both the range of actions users can perform via these channels and the information those channels reveal, to minimize the opportunity for verification systems to be exploited.
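One practical way to act on this is to stop letting a biometric match alone unlock high-risk actions. The sketch below is purely illustrative, assuming a hypothetical policy where action names, factor labels and the step-up rule are invented for the example rather than taken from any particular product.

```python
# Illustrative sketch only: limit what a voice- or video-verified session may do,
# and require an additional, independent factor before anything high-risk.
# Action names and factor labels are assumptions made for this example.

# Actions that may proceed when a biometric match is the only proof of identity.
LOW_RISK_ACTIONS = {"check_balance", "view_statements"}

# Actions that should never rely on voice or facial biometrics alone.
HIGH_RISK_ACTIONS = {"wire_transfer", "add_beneficiary", "change_contact_details"}

def is_action_allowed(action: str, verified_by: set[str]) -> bool:
    """Permit low-risk actions on a biometric-only session; demand at least one
    non-biometric factor (e.g. a one-time passcode) for high-risk actions."""
    if action in LOW_RISK_ACTIONS:
        return True
    if action in HIGH_RISK_ACTIONS:
        # Strip out the biometric factors and check something else remains.
        return len(verified_by - {"voice_biometric", "face_biometric"}) >= 1
    return False

# A caller who passed voice ID but offered nothing else cannot move money.
print(is_action_allowed("wire_transfer", {"voice_biometric"}))         # False
print(is_action_allowed("wire_transfer", {"voice_biometric", "otp"}))  # True
```

The point of the design is simply that a deepfake which beats the biometric check still hits a wall when it tries to do anything of consequence.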

If facial and voice biometrics aren’t the savior of identity and access management, what approach can mitigate the deepfake threat? It all comes down to data and how you use it. Looking beyond fixed biometric identifiers to analyzing behavior across time offers a possible way forward.

Is a user's location pattern consistent with previous login attempts? Is their behavior different from previous journeys? What content are they engaging with? Are they trying to change personal details on the account or make a payment to a new beneficiary? Answering these and other questions with confidence can help uncover the fakers.
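To make this concrete, here is a minimal, hypothetical sketch of how those questions might feed a behavioral risk score. The signal names, weights and threshold are assumptions for illustration, not a reference implementation of any vendor's scoring model.

```python
# Toy behavioral risk score built from the signals discussed above.
# All field names, weights and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    location_matches_history: bool    # consistent with previous login locations?
    journey_matches_history: bool     # does this session resemble past journeys?
    changing_personal_details: bool   # editing email, phone or address?
    paying_new_beneficiary: bool      # first payment to an unknown recipient?

def risk_score(s: SessionSignals) -> float:
    """Accumulate risk from behavioral anomalies; higher means more suspicious."""
    score = 0.0
    score += 0.0 if s.location_matches_history else 0.3
    score += 0.0 if s.journey_matches_history else 0.2
    score += 0.3 if s.changing_personal_details else 0.0
    score += 0.2 if s.paying_new_beneficiary else 0.0
    return score

def decide(s: SessionSignals, step_up_threshold: float = 0.4) -> str:
    """Let familiar behavior pass quietly; ask for extra verification otherwise."""
    return "step_up_verification" if risk_score(s) >= step_up_threshold else "allow"

# A session that looks nothing like the customer's history triggers step-up,
# even if the voice or face on the call was a perfect biometric match.
print(decide(SessionSignals(False, False, True, True)))  # step_up_verification
print(decide(SessionSignals(True, True, False, False)))  # allow
```

In practice such signals would be fed into far richer models, but the principle is the same: the decision rests on behavior over time, not on a single audio or video match.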

By looking beyond a biometric audio or video match and analyzing behavior across time and previous user journeys, businesses can piece together a far more accurate view of their customers. And with that, they can make trust decisions that even deepfakes will struggle to influence.
