The Rise of Dangerous Deepfakes

Written by Danny Bradbury

Experts are concerned about the growing prevalence of deepfake technology and its potential to cause serious harm. Danny Bradbury explores just how bad things could get

Deepfake videos use AI to manipulate a person’s speech or movements, or to superimpose their likeness onto someone else’s body. The results are often entertaining, as comedian Bill Hader morphs effortlessly into Tom Cruise and Seth Rogen, or more unnervingly, as Jennifer Lawrence gives a TV interview wearing Steve Buscemi’s face.

However, deepfakes have a darker side. According to a new report from deepfake detection company Deeptrace, 96% of all deepfake videos online today are content of an adult nature, misappropriating someone’s image for sexual purposes without their consent. That’s bad enough, but experts believe it’s about to get far worse, possibly disrupting the political landscape and even threatening democracy.

So where did it all start? Foreshadowing of deepfake technology first emerged in 1994, when filmmakers manually doctored footage of Presidents Kennedy and Nixon for the movie Forrest Gump. In 1997, researchers took things up a notch with Video Rewrite, a program that broke apart existing footage and reassembled it to make its targets appear to say new phrases. Few could have imagined how far things would progress over the next two decades.

After computer scientists began using graphics processing units (GPUs) for AI training in 2007, their ability to build complex neural networks increased. Then, in 2014, researcher Ian Goodfellow produced the first working version of a generative adversarial network (GAN). The concept pits two neural networks against each other.

One network, known as the generator, tries to produce a lifelike image or video of a person. The other, the discriminator, tests the result against real images and tries to find mistakes. It keeps throwing its findings back to the generator, like a teacher grading a paper, so that the generator can try again. Eventually, the generator produces an image that passes for real.
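To make that adversarial loop concrete, here is a minimal sketch of a GAN training step in PyTorch. Everything in it is illustrative (the layer sizes, the flattened 28x28 image shape, the learning rates are assumptions, not any production system); a real face-generating GAN is far larger and convolutional, but the alternating forger-versus-grader structure is the same.

    import torch
    import torch.nn as nn

    LATENT, IMG = 64, 784   # noise vector size; flattened 28x28 image

    # The "forger": turns random noise into a fake image.
    generator = nn.Sequential(
        nn.Linear(LATENT, 256), nn.ReLU(),
        nn.Linear(256, IMG), nn.Tanh())

    # The "grader": scores an image's probability of being real.
    discriminator = nn.Sequential(
        nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid())

    loss = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def train_step(real_images):
        batch = real_images.size(0)
        ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

        # 1. The discriminator learns to separate real images from fakes.
        fakes = generator(torch.randn(batch, LATENT))
        d_loss = (loss(discriminator(real_images), ones) +
                  loss(discriminator(fakes.detach()), zeros))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # 2. The generator learns to fool the discriminator: the grader's
        #    verdicts, thrown back as gradients, are what teach it.
        fakes = generator(torch.randn(batch, LATENT))
        g_loss = loss(discriminator(fakes), ones)
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

Repeated over many rounds, the two networks ratchet each other upward: a sharper grader forces a better forger, which is exactly why the output eventually becomes lifelike.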

Academics began using GANs to create fake videos around 2016, when the Face2Face program, developed by the visual computing group at the Technical University of Munich, mapped facial expressions from a new video source onto existing footage.

“We don’t have the one-touch, deepfake pornography generator yet, but we’re almost there”

The Dark Side of Deepfakes
Adam R Dodge, founder of Endtab.org, has seen the dark side of deepfakes close up. His company trains universities, law enforcement agencies and social workers, among others, in how to spot and tackle technology-related abuse. Deepfake sex videos are a growing problem, he says.
 
Deepfake porn initially targeted celebrities, but Dodge warns that it is now victimizing others too. Deepfakes changed the game for purveyors of revenge porn, because they no longer need compromising images of their targets. “Now you can simply scrape the internet, manufacture these things with a deepfake generation platform, and basically make anybody a target of revenge porn.”

These videos are getting easier to make as the technology – and the data – required to produce them becomes easier to obtain. A strong secondary market is already emerging on specialist sites that will superimpose a person of your choice onto a video of your choice for as little as 80 cents, he says.

Early deepfake producers needed large amounts of training data to make their images. Today, they can create passable results from just a few shots, or even a single image.

“We live in an age where a 15-year-old girl is pumping out the same amount of content on social media as an A-list celebrity,” Dodge points out.

He worries that mobile-happy teens who do not fully grasp the consequences of their actions will be among the first adopters. Apps such as DeepNude, which generates fake nude images of women, and Zao, which superimposes users onto non-pornographic film clips, are just a taste of things to come, he warns. “We don’t have the one-touch, deepfake pornography generator yet, but we’re almost there,” he says, adding that school principals are woefully unprepared to cope with that looming threat.

As women increasingly fall victim to these attacks, concern is already mounting over an emerging threat: the use of deepfake technology for political manipulation. In its 2019 Worldwide Threat Assessment, the Office of the Director of National Intelligence predicted that Russia and other countries would use deepfake technologies to misdirect voters. 

Sam Gregory, program director at Witness.org, a nonprofit that helps citizens film human rights abuses, says that deepfake attacks could further disrupt the pulse of public discourse. “Often, quite purposefully, people will pump out lots of contradictory, fake accounts in order to destroy trust in a public space, or to alienate people from participation,” he says. If no one can trust what they see with their own eyes, how can they begin to engage in public debate?

Fixing the Deepfake Problem
As the technology behind deepfakes matures and its potential for misuse grows, experts are turning their attention to solutions. Many of these focus on technology to detect deepfake videos. In September 2019, Facebook teamed up with Microsoft and academic partners on a $10m contest to create deepfake detection technology. Researchers at the University of Southern California’s Information Sciences Institute have developed software that looks for strange facial movements, while the University at Albany’s approach relies on the fact that in many deepfake videos, the ‘people’ don’t blink.
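The blink-based approach exploits a simple statistic: real people blink every few seconds, while many early deepfake generators, trained largely on open-eyed photos, rarely reproduce it. The sketch below illustrates the general idea using the eye aspect ratio measure that is standard in the blink-detection literature; the thresholds and the looks_like_deepfake heuristic are illustrative assumptions, not the Albany team’s actual code, and the per-frame eye landmarks are assumed to come from an off-the-shelf facial landmark detector.

    import numpy as np

    def eye_aspect_ratio(eye):
        # eye: six (x, y) landmarks around one eye, ordered
        # corner, upper lid (x2), corner, lower lid (x2).
        # The ratio collapses toward zero when the eye closes.
        v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
        v2 = np.linalg.norm(eye[2] - eye[4])
        h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
        return (v1 + v2) / (2.0 * h)

    def looks_like_deepfake(ear_per_frame, fps,
                            closed_thresh=0.2, min_blinks_per_min=5):
        # Flag a clip whose subject blinks implausibly rarely.
        ears = np.asarray(ear_per_frame)
        closed = ears < closed_thresh
        # Count open-to-closed transitions as blinks.
        blinks = np.count_nonzero(closed[1:] & ~closed[:-1])
        minutes = len(ears) / fps / 60.0
        return blinks / max(minutes, 1e-9) < min_blinks_per_min

Heuristics like this are fragile by design: once deepfake producers learned of the blink tell, they began training on footage that includes blinking, which is why detection research keeps moving to new signals.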
 
ZeroFOX, which scans online platforms looking for threats against its clients, developed a deepfake detection system called Deepstar and open-sourced it at Black Hat in August. 

ZeroFOX open-sourced not only the code to help find deepfakes but also the training data – hundreds of videos from YouTube, Vimeo and notorious underground site 4chan.

"Researchers are still considering how to develop and collaborate on anti-deepfake projects without having bad actors co-opt their work"

Facebook and ZeroFOX differ in their approach to releasing that training data. Whereas ZeroFOX sources all of its training data publicly and packages it for anyone to use, Facebook generated it all from scratch using hired actors and only releases it to certain individuals. The difference in approach highlights a larger problem in anti-deepfake research. 

Deepfake authors could use the training data and code released by anti-deepfake projects to refine their own techniques, enabling their GANs to create even better deepfakes. It’s a problem that concerns Aviv Ovadya, founder of the Thoughtful Technology Project, a nonprofit organization that hopes to prevent new technologies from irreversibly harming our information ecosystem. “You may be creating defenses in ways that directly enable better exploits,” he says. “So the ways in which you share information and the ways in which the field moves forward might need to be different.”

Researchers are still considering how to develop and collaborate on anti-deepfake projects without having bad actors co-opt their work, he says, likening Facebook’s approach to “asked-for source” instead of open source. Others release incomplete versions of their data or make deployment hard for anyone but dedicated researchers. 

Dodge suggests another approach: building detection into research ethics guidelines. Publication, and perhaps even funding, of new deepfake production algorithms would depend on the researchers providing an antidote in the form of technology to detect them, or at least an acknowledgement of the potential harms they could do.

Gregory suggests a range of other solutions, all of which complement and overlap with the technical ones. One approach, at least for combating misinformation, is to promote media literacy. “We need to teach people to think about where something comes from, whether they trust the source, whether they can corroborate it,” he says.

Another approach is to work with big platforms such as Facebook and Twitter on policies to stop the spread of deepfakes, he explains. This applies to both disinformation and pornographic deepfakes, although moves by online adult platforms to stamp out the videos have met with mixed results. Several porn sites banned them but rely on users to flag offending content, and journalists found a flood of new deepfakes hitting these sites after the bans, showing just how difficult they are to police. People have also created specialist sites dedicated to distributing these videos: Deeptrace says that 94% of deepfake porn videos reside on such dedicated sites.

“We need to teach people to think about where something comes from, whether they trust the source, whether they can corroborate it”

When in Doubt, Legislate?
That paves the way for a more punitive final layer of defense: legislation. In early October, California enacted two new laws: AB-730 stops people sharing deepfakes of political figures without accompanying warnings within 60 days of an election, while AB-602 bans deepfake porn. Virginia also amended an existing law to cover deepfaked porn videos. At the federal level, bill H.R. 3230 would require a watermark stating that a video is a fake.

None of these measures is infallible. Ethical constraints on research will only apply to academics, not to amateur or commercial deepfake producers, and a law passed in California will be meaningless to a Chinese deepfake jockey producing synthetic revenge porn to order.

Gregory argues that measures like these at least buy some time though. “I tend to come at it from a kind of harm reduction approach or triage approach, which is that we’re just trying to reduce the ones we have to spend more time on.”

That time may be more valuable than we think. Ovadya worries about “disinformation ratchets”, in which bad actors use deepfakes to increase their power so much that they become difficult to dislodge. “You can have one point in time where a politician can abuse this to hurt their political opponents and gain power, and then use that real-world power to further deceive,” he muses, describing a doomsday scenario. We must prepare society to cope with the new disinformation landscape that would emerge should bad actors pull ahead in the deepfake arms race, he concludes. “That may be a mix of technical and non-technical measures that ensure that your societal system is robust against these attacks.”
