HoneyBot Lures in Would-Be Factory Hackers

We’ve all heard of honeypots – but what about a HoneyBot?

The Georgia Institute of Technology has pioneered just such a thing: a shoebox-sized robot meant to act as a decoy for hackers looking to infiltrate factory environments.

HoneyBot, which was partially funded with a grant from the National Science Foundation, has one mission: to be attractive enough to the bad guys to get them to give up valuable information about themselves and their tactics, so that defenders can better harden high-stakes environments.

“Robots do more now than they ever have, and some companies are moving forward with, not just the assembly line robots, but free-standing robots that can actually drive around factory floors,” said Raheem Beyah, the Motorola Foundation Professor and interim Steve W. Chaddick School Chair in Georgia Tech’s School of Electrical and Computer Engineering. “In that type of setting, you can imagine how dangerous this could be if a hacker gains access to those machines. At a minimum, they could cause harm to whatever products are being produced. If it’s a large enough robot, it could destroy parts or the assembly line. In a worst-case scenario, it could injure or cause death to the humans in the vicinity.”

HoneyBot can sit motionless in a corner, springing to life when a hacker gains access – a visual indicator that a malicious actor is targeting the facility. The gadget then mimics an unprotected device that hackers would want to compromise, sending back fake sensor information. It would also allow attackers to control it to a limited extent – for example, following commands to meander around or pick up objects.

“The idea behind a honeypot is that you don’t want the attackers to know they’re in a honeypot,” Beyah said. “If the attacker is smart and is looking out for the potential of a honeypot, maybe they’d look at different sensors on the robot, like an accelerometer or speedometer, to verify the robot is doing what it had been instructed. That’s where we would be spoofing that information as well. The hacker would see from looking at the sensors that acceleration occurred from point A to point B.”
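The consistency check Beyah describes – making spoofed accelerometer readings agree with spoofed position – can be illustrated with a short sketch. This is a hypothetical example, not the actual HoneyBot code; the function name and parameters are invented for illustration:

```python
import math

def spoof_telemetry(distance_m, accel_mps2=0.5, dt=0.1):
    """Generate fake sensor readings for a commanded straight-line move.

    Simulates accelerating to the midpoint and decelerating to the target,
    so the spoofed accelerometer, speedometer, and position values remain
    mutually consistent -- the property a wary attacker would verify.
    (Hypothetical sketch; not part of the real HoneyBot implementation.)
    """
    half = distance_m / 2.0
    t_half = math.sqrt(2.0 * half / accel_mps2)  # time to reach midpoint
    total_t = 2.0 * t_half
    steps = int(round(total_t / dt))
    readings = []
    for i in range(steps + 1):
        t = i * dt
        if t <= t_half:
            # acceleration phase: constant positive acceleration
            a = accel_mps2
            v = a * t
            x = 0.5 * a * t * t
        else:
            # deceleration phase: mirror the acceleration phase
            td = t - t_half
            v_peak = accel_mps2 * t_half
            a = -accel_mps2
            v = v_peak - accel_mps2 * td
            x = half + v_peak * td - 0.5 * accel_mps2 * td * td
        readings.append(
            {"t": round(t, 2), "accel": a,
             "speed": round(v, 3), "pos": round(x, 3)}
        )
    return readings
```

Because every reading is derived from the same kinematic model, an attacker cross-checking acceleration against reported position would see values that agree from point A to point B.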

Trials testing how convincing the spoofed data is to bad actors have returned positive results so far. In experiments, participants who actually controlled the device the whole time and those who were unwittingly fed simulated data both rated the data believable at similar rates.

“We wanted to make sure they felt that this robot was doing this real thing,” Beyah said.
