#HowTo: Fight the Use of Deepfakes in ID Theft

Deepfake technology will become a serious threat to businesses as our world adopts remote work and online ID verification as standard. 

When a video recently circulated online showing US President Biden appearing to break into song while talking about inflation, stunned viewers questioned his state of mind. Before the creator of the skit, Ali Al-Rikabi, could explain how the video was made, hundreds of thousands of people had viewed the clip – and not everyone realized it was fake.

The video was produced with deepfake technology, an umbrella term for synthetic media created with artificial intelligence (AI). In this case, the creator combined genuine footage, lip-synching software and synthesized voice audio.

While the video might be amusing at first glance, it highlights an emerging problem: distinguishing fake content from reality, and what that means for our online identities – especially in a world that is becoming increasingly remote.

What is Deepfake Technology?

Deepfakes are falsified images, videos or audio clips created through the application of AI technologies. Deepfake algorithms can generate lifelike people and animals, or manipulate footage of real people so they appear to do or say things they never did.

A quick deepfake can be created with a suitable mobile app; more convincing fakes require a pair of AI algorithms trained to compare, replace and synthesize content. As millions of videos, photographs and audio clips are already on the web, there is plenty of training data for these models to learn from.
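
For readers unfamiliar with how that pair of algorithms interacts, the sketch below is a minimal, hypothetical toy in Python/PyTorch illustrating the adversarial idea behind many deepfake tools: a generator learns to produce fake images while a discriminator learns to flag them, and each improves by competing with the other. The tiny networks and dimensions here are illustrative assumptions, not the architecture of any real deepfake application.

```python
# Minimal sketch of the adversarial (GAN) idea behind many deepfake tools.
# Hypothetical toy example: tiny fully connected networks on flattened images,
# not the code of any real deepfake application.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100

generator = nn.Sequential(            # maps random noise to a fake "image"
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(        # scores how "real" an image looks
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated images.
    noise = torch.randn(batch, NOISE_DIM)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch, NOISE_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Repeating this loop with enough real footage pushes the generator toward
# output the discriminator can no longer distinguish from genuine material.
train_step(torch.rand(16, IMG_DIM) * 2 - 1)  # stand-in batch of "real" images
```

Real deepfake pipelines swap these toy networks for large models trained on hours of footage of the target, which is why publicly available video of a person is such useful raw material.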

In the past five years, the possibilities of deepfakes have expanded far beyond the boundaries of entertainment or fake pornography. Cyber-criminals are also adopting the technology for elaborate scams, including fraud and identity theft.

The Role of Deepfakes in Identity Theft

Online and in the public eye, deepfakes of celebrities are being used in advertising content for everything from real estate to cryptocurrencies. 

The law is yet to catch up with this latest technological leap. While some deepfake creations are defended as “satire,” the technology carries real legal and ethical implications. It is only a small step from the consensual use of someone’s image to non-consensual reproductions, and deepfakes don’t have to be technically perfect to be believed or to cause substantial harm.

Social Engineering 

Criminals use social engineering to impersonate others, assuming a victim’s identity to conduct theft or commit fraud.

Deepfakes take impersonation an unprecedented step further. With enough footage and the right tools, attackers can develop synthetic identity markers or create fake videos and voice clips. We observed this in 2019 in the case of a UK firm’s CEO, who believed he was talking to his boss and transferred $243,000 to a fraudster.

Conducting deepfake scams in real time is still a challenging prospect but, considering the speed of technological innovation, it may not be long before this is our reality. However, effective biometric identity verification can bridge this gap in our defenses before identifying deepfakes becomes a real business crisis.

Financial Crime: Bypassing Know Your Customer

Another concern is that deepfake technology could be used to abuse know-your-customer (KYC) onboarding processes.

To perform a KYC check, businesses require customers to provide physical ID documents, proof of address and biometric identification, among other evidence. 

Deepfakes could be used to satisfy the biometric part of a KYC check. Unfortunately, the rest can be gathered via social engineering or harvested from the vast amount of data leaked online through third-party breaches, which expose terabytes of stolen information every year.
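
To make the exposure concrete, the biometric step in many remote KYC flows reduces to comparing a face embedding from a submitted selfie or video frame against the photo on the ID document. The Python sketch below is a hypothetical, simplified version of that comparison; extract_face_embedding() is a stand-in for whatever face-recognition model a provider actually uses. The point is that a similarity threshold alone says nothing about whether the frame came from a live person or a deepfake.

```python
# Hypothetical sketch of a naive remote-KYC face match. The embedding function
# is a stand-in for a real face-recognition model; the point is that a
# similarity threshold alone says nothing about liveness, so a sufficiently
# good deepfake frame could satisfy it.
import numpy as np

def extract_face_embedding(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-recognition model returning a unit vector."""
    vec = image.astype(np.float64).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)

def naive_kyc_face_match(id_photo: np.ndarray,
                         selfie_frame: np.ndarray,
                         threshold: float = 0.8) -> bool:
    """Return True when the selfie 'matches' the ID photo.

    A deepfake frame that resembles the document photo closely enough
    clears this check just as well as a genuine live capture would.
    """
    doc_vec = extract_face_embedding(id_photo)
    live_vec = extract_face_embedding(selfie_frame)
    similarity = float(np.dot(doc_vec, live_vec))  # cosine similarity
    return similarity >= threshold

# Example: the same image presented twice trivially passes the check.
frame = np.random.rand(64, 64)
print(naive_kyc_face_match(frame, frame))  # True
```

This is why providers increasingly add liveness detection on top of the match itself, a point we return to below.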

Deeply Fake Documentation

Separately, a sophisticated criminal may be able to provide counterfeit documents as proof of identity. It is already possible to create fake passports, driver’s licenses and more, and deepfake AI could make it even easier to fool authentication checks when ID documents are submitted online.
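
Part of the reason forged documents slip through online checks is that some automated validations only test internal consistency rather than authenticity. As an illustration, the machine-readable zone (MRZ) of a passport carries check digits computed with the ICAO 9303 weighting scheme, which a careful forger can reproduce just as easily as an issuing authority; the Python sketch below shows that calculation.

```python
# Illustrative sketch of the ICAO 9303 check-digit calculation used in the
# machine-readable zone (MRZ) of passports. A forged document can carry a
# perfectly valid check digit, so this kind of consistency test alone is
# not proof of authenticity.

def mrz_check_digit(field: str) -> int:
    """Compute the check digit for an MRZ field (weights 7, 3, 1, mod 10)."""
    def char_value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch.isalpha():
            return ord(ch.upper()) - ord('A') + 10
        return 0  # the filler character '<' counts as zero
    weights = (7, 3, 1)
    total = sum(char_value(ch) * weights[i % 3] for i, ch in enumerate(field))
    return total % 10

# ICAO 9303 specimen document number "L898902C3" has check digit 6.
print(mrz_check_digit("L898902C3"))  # 6
```

A valid check digit therefore proves only that the MRZ is internally consistent, not that the document is genuine, which is why remote onboarding has to lean on stronger signals as well.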

Synthetic ID fraud is estimated to cost banks $6 billion annually, and as a Finextra journalist found, deepfake technology is already sophisticated enough to pass bank verification checks.

How Identity Verification Can Combat the Deepfake Identity Challenge

Deepfake AI is a promising technology when put to ethical use. However, with every innovation, cyber-criminals will find a way to exploit it.

We can’t stop the abuse of new software, but we can fight deepfake criminal schemes and ID theft with the proper forms of defensive technology – and one that stands out is facial recognition software. 
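
In practice, facial recognition defenses are most effective against deepfakes when the face match is paired with a challenge-response liveness step: the user is asked, at random, to blink, smile or turn their head, and the system checks that the requested action actually appears in freshly captured frames. The Python outline below is a hypothetical sketch of that flow; capture_frames() and detect_action() stand in for a vendor’s real camera capture and liveness model.

```python
# Hypothetical outline of a challenge-response liveness check that could sit
# in front of a face match. detect_action() stands in for a real liveness /
# gesture-detection model; the structure, not the model, is the point: a
# pre-recorded or replayed deepfake cannot know the random challenge in advance.
import secrets
from typing import Callable, Sequence

CHALLENGES = ["turn_head_left", "turn_head_right", "blink_twice", "smile"]

def run_liveness_check(
    capture_frames: Callable[[str], Sequence[bytes]],
    detect_action: Callable[[Sequence[bytes], str], bool],
    rounds: int = 3,
) -> bool:
    """Issue random challenges and verify each one appears in fresh frames."""
    for _ in range(rounds):
        challenge = secrets.choice(CHALLENGES)    # unpredictable per session
        frames = capture_frames(challenge)        # prompt the user, record video
        if not detect_action(frames, challenge):  # hypothetical model call
            return False                          # requested action missing
    return True
```

Because the challenge is chosen unpredictably for each session, a pre-recorded or replayed deepfake clip cannot anticipate it, which raises the bar considerably for the real-time attacks described earlier.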
