
Deepfakes: Malicious AI is Here, and Now

Malicious AI isn’t just on its way; it’s already here. McAfee executives gave us a taste at RSA Conference 2019 this week.

Steve Grobman, CTO at McAfee, joined the company’s chief data scientist Celeste Fralick on stage to discuss the darker implications of AI. Needless to say, the concept of deepfakes came up pretty quickly.

This concept first emerged in the form of fake porn videos on Reddit in 2017. Deepfakes use generative adversarial networks (GANs), in which two neural networks, a generator and a discriminator, compete with each other to produce and spot increasingly convincing fake assets. One application is the combination of real-world video and audio created by an imposter to produce a hybrid video depicting a person saying or doing things they never did in real life.
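The adversarial setup described above can be sketched as a pair of loss functions: the discriminator is rewarded for scoring real samples high and fakes low, while the generator is rewarded when the discriminator scores its fakes as real. This is a minimal illustrative sketch with hypothetical helper names, not code from any deepfake tool:

```python
import numpy as np

def discriminator_loss(real_scores, fake_scores):
    """Binary cross-entropy: low when D rates real samples near 1
    and fake samples near 0."""
    return -np.mean(np.log(real_scores)) - np.mean(np.log(1.0 - fake_scores))

def generator_loss(fake_scores):
    """Low when D is fooled into rating the generator's fakes near 1."""
    return -np.mean(np.log(fake_scores))

# Toy example: D currently scores a real sample 0.9 and a fake 0.1,
# so D is doing well (low d_loss) and G is losing (high g_loss).
d_loss = discriminator_loss(np.array([0.9]), np.array([0.1]))
g_loss = generator_loss(np.array([0.1]))
```

Training alternates gradient steps that lower each loss in turn; as the generator improves, the discriminator's scores for fakes drift toward 0.5, meaning it can no longer tell them apart, which is exactly what makes the resulting assets convincing.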

Fralick showed a rudimentary version that she created, in which a manipulated video of Grobman spoke her words. 

The deepfake wouldn’t have been that convincing in a real-life situation, because her voice had not been altered. However, altered voices are even easier to produce. Baidu's Deep Voice machine learning software can clone a person's voice from just 3.7 seconds of audio, then use it to generate new speech, accents and tones.

AI’s potential to create real-looking fakes is expanding. In February, software engineer Philip Wang used a GAN algorithm produced by Nvidia to create deepfake faces from a large facial training data set. While there are some mistakes in the images on his website, such as eyes that look in different directions or glasses that don’t extend all the way around a face, for the most part they are incredibly convincing.

Then, there are the text-based fakes. Elon Musk's AI think tank, OpenAI, recently demonstrated an AI program that could create convincing text from a small sample, mimicking its style. The style of the text is convincing, but its facts are not; they are entirely made up. This is fake news, AI style.

What implications do deepfakes bring as they become more sophisticated? Consider what might happen if an attacker hacked an official channel, such as a Twitter account owned by the Associated Press or a sitting politician. Using that to distribute fake information about an emerging event or disaster could mobilize large parts of the population, causing chaos. Imagine marrying that with a cyber-attack on the electrical or water system to get a picture of how much havoc a determined attacker could cause.

Society isn’t yet ready to cope with these threats, but we had better prepare. The question is, how?

