When Seeing Isn’t Believing: Deepfakes in the Digital Age


Forget GIFs and memes: deepfakes are the new visual content taking the social media sphere by storm. From viral, entertaining videos of Bill Hader morphing into Arnold Schwarzenegger to politically driven videos of Barack Obama giving speeches that never happened, deepfakes and other artificial intelligence (AI)-based techniques for the low-cost creation of fake videos represent a new era of digital threats.

These videos have become cheaper and easier to make, and with algorithms that favor visual, dynamic content, you can no longer believe everything you see. 

A deepfake is altered video content that shows something that never actually happened. By definition, deepfakes are produced using deep learning, an AI-based technology. Of late, the term deepfake has been used to describe nearly any type of edited video online, from Nancy Pelosi’s slowed speech to a mash-up of Steve Buscemi and Jennifer Lawrence. Given the technical definition, however, the Nancy Pelosi video does not qualify as a deepfake but is simply an altered video, sometimes referred to as a “shallow fake.”
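To make the distinction concrete, a shallow fake of the slowed-speech variety requires no machine learning at all. The following is a minimal sketch, assuming the ffmpeg command-line tool is installed; the file names are placeholders:

```python
# A minimal illustration of a "shallow fake": slowing a clip to half speed,
# the same class of edit used in the altered Nancy Pelosi video.
# Assumes the ffmpeg binary is installed; file names are placeholders.
import subprocess

def make_shallow_fake(src: str, dst: str) -> None:
    """Slow both the video and audio tracks to 50% speed."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-filter:v", "setpts=2.0*PTS",  # double each frame's timestamp -> half speed
            "-filter:a", "atempo=0.5",      # slow the audio without changing its pitch
            dst,
        ],
        check=True,
    )

make_shallow_fake("speech.mp4", "speech_slowed.mp4")
```

Producing a true deepfake, by contrast, involves training deep neural networks on footage of the target, a markedly higher, though rapidly falling, bar.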

Although technically different, shallow fakes can cause the same level of damage as deepfakes, and the number one risk is disinformation.

The Implications of Disinformation
In an era of fake news and misinformation, these videos deepen people’s doubts about what they see online. One area where experts fear the power of deepfakes most is in the government and national security sectors.

For example, in the upcoming 2020 elections, deepfakes could serve as a new method for distributing misinformation, exerting false influence and targeting individual candidates and parties.

It’s no secret that deepfakes have real implications for political discourse globally, and addressing these videos needs to be a bipartisan effort for the sake of safety and democracy. As deepfakes become cheaper to produce, they pose impersonation risks that nations and companies of every industry, geography and size will have to contend with.

As the volume of videos and the sophistication of the technology increase, social and digital platforms will no longer be able to delay or waver in their response. Beyond simple falsehoods, the implications of deepfakes go much further because of what they can represent, and especially misrepresent.

The Deepfake Dilemma for Big Tech
One recent manipulated video took on another layer of complexity because of the person depicted: Mark Zuckerberg, CEO of Facebook, which owns Instagram, the very platforms on which the video went viral. In the doctored clip, Zuckerberg appears to boast about his power over billions of people’s data. The voiceover is clearly not Zuckerberg’s actual voice and his motions are stiff, but the clip shows what is possible, and what that means for tech companies and leaders all over the world.

The Zuckerberg video has since spread across social and digital platforms, leaving sites, including Facebook and Instagram, to grapple with what action to take, adding to the ongoing conversation about addressing disinformation online. YouTube, for example, was quick to remove the doctored Nancy Pelosi video, but it remained on Instagram, Facebook and Twitter for much longer.

Congress’s Call to Action
As with other fake news and disinformation, questions of attribution and responsibility arise. Who should be responsible for regulating the content? The video’s creator? Those who share it? Or the platforms on which it is shared?

To help answer these questions, the House of Representatives held its first hearing in June focused specifically on the national security threats posed by deepfake technology. One proposal would amend Section 230 of the Communications Decency Act to hold social and digital platforms responsible for the content posted on their sites.

Some states aren’t waiting, and are already taking action against deceptive videos and disinformation at large. On June 14, Texas signed Senate Bill 751 into law, focused specifically on “creating a criminal offense for fabricating a deceptive video with the intent to influence the outcome of an election.” In New York, Bill A08155, introduced on May 31, would criminalize the knowing creation of digital videos, photos and audio of others without their consent.

Definitive action from Congress would represent a major shift for the social networks that have previously taken a more passive stance, placing responsibility on the individual poster and sharer to determine whether content is fraudulent or malicious.

As of now, Facebook has decided to remain consistent in applying its own disinformation policy, leaving the fake Zuckerberg video live on Instagram.

Companies looking to defend against deepfakes cannot wait for regulators to catch up. It’s important to start monitoring your brand’s digital presence for any form of impersonation, from early warnings of account takeover attempts to detection of spoofed sites that abuse your trademarks and brand.
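As one concrete starting point, spoofed-site detection can begin with something as simple as enumerating look-alike domains and checking which of them resolve. The sketch below uses only the Python standard library; “example.com” stands in for your brand’s domain, and a real monitoring program would also watch certificate-transparency logs, social handles and app stores:

```python
# A minimal sketch of one piece of brand monitoring: generate common
# look-alike (typosquat) variants of a domain and report any that
# currently resolve in DNS. "example.com" is a placeholder.
import socket
import string

def typosquat_candidates(domain: str) -> set[str]:
    """Generate simple one-character substitution variants of a domain name."""
    name, _, tld = domain.partition(".")
    variants = set()
    for i in range(len(name)):
        for ch in string.ascii_lowercase:
            if ch != name[i]:
                variants.add(f"{name[:i]}{ch}{name[i+1:]}.{tld}")
    return variants

def live_lookalikes(domain: str) -> list[str]:
    """Return candidate spoof domains that resolve to an IP address."""
    live = []
    for candidate in sorted(typosquat_candidates(domain)):
        try:
            socket.gethostbyname(candidate)
            live.append(candidate)
        except socket.gaierror:
            continue  # does not resolve; nothing registered there (yet)
    return live

print(live_lookalikes("example.com"))
```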

Computer vision techniques, including object detection, that leverage machine learning to achieve processing scale and accuracy will be crucial to that defense.
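As a deliberately simplified illustration of what machine-assisted screening can look like, the sketch below samples frames from a suspect clip and averages the scores of a convolutional classifier. The checkpoint path deepfake_resnet18.pt and the two-class head are assumptions for the example; production detectors typically crop faces, inspect artifacts across frames and combine many signals.

```python
# A simplified sketch of frame-level deepfake screening: sample frames
# from a video and average the scores of a CNN classifier. The checkpoint
# "deepfake_resnet18.pt" is hypothetical; in practice the network would be
# fine-tuned on labelled real/fake frames (usually cropped faces).
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real / fake
model.load_state_dict(torch.load("deepfake_resnet18.pt"))  # hypothetical weights
model.eval()

def fake_probability(video_path: str, every_nth: int = 30) -> float:
    """Average the 'fake' probability over every Nth frame of the video."""
    cap = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(batch), dim=1)
            scores.append(probs[0, 1].item())  # index 1 = 'fake' class
        index += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

print(fake_probability("suspect_clip.mp4"))
```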
