AI Puts Voice Impersonation on Steroids – Why and How Organizations can Minimize the Risks

Earlier this year, fraudsters used Artificial Intelligence to mimic the voice of the CEO of a German energy company in order to dupe the boss of its UK subsidiary into sending USD 243,000 to a Hungarian supplier. According to the company’s insurer Euler Hermes, the money ended up disappearing into dubious bank accounts in Mexico and elsewhere.

Voice impersonation (aka ‘vishing’ or voice phishing) fraud has been a real concern for banks and other types of organizations for years. Whilst vishing currently accounts for an estimated 1% of phishing attacks, its use for cybercrime has reportedly increased by over 350% since 2013.

AI puts voice impersonation on steroids
AI increases the risks of voice impersonation. Not only is AI voice software freely available, but convincing impersonations can also be hatched in very little time. A recent Israeli National Cyber Directorate study found that software now exists that can accurately mimic someone’s voice after listening to it for just 20 minutes. Artificial voice company Lyrebird promises that anyone can create a digital voice that sounds like themselves, or anyone else, in only a few minutes.

The more convincing the impersonation, the greater the sums those duped may be induced to hand over. Cybersecurity firm Symantec says it knows of at least three cases of executives’ voices being impersonated for fraudulent ends, with losses in one case totaling millions of dollars.

Reputation is as big a threat as fraud 
Voice impersonation has many uses beyond fraud, and with AI voice software now freely available online, convincing fake news stories, hoaxes and reputational attacks are eminently possible.

Canadian psychology professor Jordan Peterson recently found himself at the mercy of a website where anyone could generate clips of themselves saying whatever they wanted in his voice. Most deepfakes poke fun at their subjects or, as in the case of Mark Zuckerberg, seek to expose hypocrisy. But much of the content generated by the Peterson website was vulgar and abusive, forcing him to threaten legal action.

Fortunately, the relatively limited nature of current AI audio technologies has meant that the number of incidents has so far been small and the damage limited. But these technologies are improving fast. Euler Hermes notes that the AI software used to defraud the German energy company was able to mimic not only the CEO’s voice but also his tonality, punctuation and German accent.

Potential AI voice impersonation reputational threats
It is surely only a matter of time before we see more regular instances of voice impersonation hitting the reputations of companies, governments and other organizations, directly or indirectly. Scenarios might include:
  • A fake CEO audio message to employees regarding the new company strategy is ‘leaked’ to the outside world, allegedly by a short seller
  • The voice of a well-known national politician is used to manipulate a senior director into discussing allegations of corporate fraud
  • A fake voice recording of two executive board directors exchanging sexual innuendos about a colleague is used to blackmail the company
  • An outsider gains entry to a secure office by impersonating the voice of a company employee.

How to mitigate the reputational risks of AI deepfakes
Incidents with a reputational dimension can be difficult to anticipate, and even harder to manage. AI complicates matters considerably. Whilst the risk of AI-fueled voice attacks may not be a high priority, here are five things cyber and security professionals can do to mitigate the problem:

  • Work with your risk management, communications, corporate/public affairs and other relevant teams to identify and assess actual and potential security, financial, reputational and other relevant vulnerabilities.
  • Educate your people, especially those in the public eye, to watch out for and recognize deepfake videos and voice impersonations, and make sure they understand what to do when they see or experience something unusual.
  • Scan regularly for suspicious video and audio files and sites across the internet, social media and other relevant third-party platforms and channels (see the sketch after this list for one way to automate a first pass).
  • Be prepared to respond quickly and appropriately to any incident which might impact your reputation. Specifically, make sure your cyber and communications plans are relevant and up to date.
  • Keep abreast of government and technology industry initiatives to combat the scourge of deepfakes, especially those aiming to improve detection and verification.
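
To make the scanning step above concrete, here is a minimal, hypothetical sketch of what an automated first pass might look like in Python. The feed URL, the watchlist names and the feed field names are all placeholder assumptions rather than a real monitoring service; the point is the workflow (poll, de-duplicate, flag for human review), not the specific endpoint.

```python
"""
Hypothetical sketch: poll a media-monitoring feed for new audio/video
items that mention named executives and flag them for human review.
The feed URL, watchlist and field names are placeholders -- substitute
whichever monitoring service or search API your team actually uses.
"""
import time
import urllib.request
import xml.etree.ElementTree as ET

# Names to watch for; in practice these would come from your risk register.
WATCHLIST = ["Jane Doe", "John Smith"]

# Placeholder feed -- any RSS search feed for audio/video mentions would do.
FEED_URL = "https://example.com/media-monitoring/rss?q=acme+corp"

seen_ids = set()  # GUIDs of items already triaged, so each is flagged once

def poll_once():
    """Fetch the feed once and report unseen items mentioning a watched name."""
    with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        guid = (item.findtext("guid") or item.findtext("link") or "").strip()
        title = item.findtext("title") or ""
        if not guid or guid in seen_ids:
            continue
        seen_ids.add(guid)
        if any(name.lower() in title.lower() for name in WATCHLIST):
            # In a real workflow this would open a ticket for analysts,
            # not just print to stdout.
            print(f"[REVIEW] {title} -> {item.findtext('link')}")

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(6 * 60 * 60)  # re-scan every six hours
```

A script like this only surfaces candidates; deciding whether a flagged clip is genuinely a deepfake still requires analysts, and ideally the detection and verification tooling mentioned in the final point above.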

Artificial Intelligence may have been with us for decades, but the risks of malevolent voice impersonation and other types of deepfakes are only starting to become apparent. Every organization would be wise to consider now what these may mean for its name and image, before today’s trickle turns into a flood.
