Self-Driving Cars, in Mordor Where the Shadows Lie

Self-driving cars are edging closer to commercial reality, with Google, Tesla and Uber jockeying for pole position in the race for dominance. But consider the cybersecurity implications of a world where cars are controlled entirely over a network that’s open to the internet. It can get pretty dark.

“Hacking self-driving cars” is a concept that not only draws all the white-hat boys to the cyber-yard for its coolness appeal, but also conjures up darker thoughts: kidnapping, mayhem, terrorist attacks, murder.

And worse, in the self-driving case, it’s a hack-one, hack-’em-all proposition: compromise the fleet, and you hold the fate of its helpless passengers in your hands. With a properly executed exploit, a hacker can play a cyber-age Sauron:

 “One script to rule them all, one script to find them…”

And where’s Gandalf when you need him?

Risks are already being demonstrated. Researchers at software security company Security Innovation say they are able to trick Google vehicles’ lidar sensors into thinking that there are objects in their paths, prompting the cars to take automatic evasive maneuvers.

Mounted on top of the vehicle, the lidar laser-ranging unit spins constantly to build a picture of the car’s surroundings, which the car then relies on to navigate safely. Security Innovation principal scientist Jonathan Petit said he can use a homemade laser device to disrupt the system and create false echoes of objects, in turn forcing a stop, a turn or wild evasive moves that could result in an accident.
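To see why a well-timed pulse is all it takes, recall the time-of-flight math a lidar relies on: it measures the round-trip delay of its own laser pulse and converts that delay into a range. The sketch below is a hypothetical, simplified model (the function names and example distances are illustrative, not details of Petit’s actual rig); it shows how an attacker who replies after a chosen delay can place a phantom object at any distance.

```python
# Hypothetical, simplified model of lidar time-of-flight ranging and pulse spoofing.
# Illustrative only -- not a reconstruction of Petit's actual equipment.

C = 299_792_458.0  # speed of light, m/s

def range_from_echo(round_trip_s: float) -> float:
    """The lidar infers distance from the round-trip time of a pulse it fired."""
    return C * round_trip_s / 2.0

def spoof_delay_for(phantom_range_m: float) -> float:
    """Delay an attacker's reply pulse needs in order to fake an object at this range."""
    return 2.0 * phantom_range_m / C

# A genuine echo from a wall 60 m away:
real_delay = 2.0 * 60.0 / C
print(f"real return reported at {range_from_echo(real_delay):.1f} m")

# An attacker fires a pulse back after a chosen delay, and the sensor 'sees'
# a phantom obstacle 5 m ahead -- close enough to trigger evasive action:
fake_delay = spoof_delay_for(5.0)
print(f"spoofed return reported at {range_from_echo(fake_delay):.1f} m")
```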

“I can spoof thousands of objects and basically carry out a denial-of-service attack on the tracking system so it’s not able to track real objects,” Petit told IEEE Spectrum. “The only tricky part was to be synchronized, to fire the signal back at the Lidar at the right time – then the Lidar thought that there was clearly an object there.”
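Scaled up, the same trick becomes the denial of service Petit describes: because the sensor has no way to authenticate a light pulse, a flood of phantom returns leaves the tracker unable to separate real obstacles from noise. A rough, purely illustrative sketch of the imbalance (the counts and ranges below are made up for the example):

```python
import random

# Purely illustrative: a handful of genuine returns buried under thousands
# of spoofed ones, as in the denial-of-service scenario Petit describes.
random.seed(0)

real_ranges_m = [12.0, 35.5, 60.0]                                    # genuine obstacles
spoofed_ranges_m = [random.uniform(1.0, 100.0) for _ in range(5000)]  # phantom echoes

all_returns = real_ranges_m + spoofed_ranges_m
print(f"{len(real_ranges_m)} real echoes hidden among {len(all_returns)} total returns")
# With no way to tell a spoofed pulse from a real one, the tracker either
# chases ghosts (triggering stops and swerves) or raises its detection
# thresholds and risks ignoring genuine obstacles.
```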

We’ve heard a lot about hacking connected cars lately, what with the massive Jeep recall and all. Do self-driving cars have fewer fail-safes than the hunks of metal piloted around by imperfect human brains? For now, the answer would appear to be “yes.” There’s definitely more work to be done here.
