This robot tricks hackers into giving away their information

The HoneyBot can help protect workplaces from hackers through deception. Image: REUTERS/Pedro Nunes

Josh Brown
Professor, Georgia Institute of Technology
It’s small enough to fit inside a shoebox, yet this robot on four wheels has a big mission: protecting factories and other large facilities from hackers. It’s the HoneyBot.

The diminutive device lures in digital troublemakers who have set their sights on industrial facilities and then tricks them into giving up valuable information to cybersecurity professionals.

The decoy robot arrives as more and more devices—never designed to operate on the internet—are showing up online in homes and factories alike, opening up a new range of possibilities for hackers hoping to wreak havoc in both the digital and physical world.

Attack the attackers

“Robots do more now than they ever have, and some companies are moving forward with, not just the assembly line robots, but free-standing robots that can actually drive around factory floors,” says Raheem Beyah, professor and interim chair in Georgia Tech’s School of Electrical and Computer Engineering.

“In that type of setting, you can imagine how dangerous this could be if a hacker gains access to those machines. At a minimum, they could cause harm to whatever products are being produced. If it’s a large enough robot, it could destroy parts or the assembly line. In a worst-case scenario, it could injure or cause death to the humans in the vicinity.”

Internet security professionals have long employed decoy computer systems known as “honeypots” as a way to throw cyber attackers off the trail. Researchers applied the same concept to the HoneyBot. Once hackers gain access to the decoy, they leave behind valuable information that can help companies further secure their networks.

“A lot of cyber attacks go unanswered or unpunished because there’s this level of anonymity afforded to malicious actors on the internet, and it’s hard for companies to say who is responsible,” says graduate student Celine Irvene, who worked with Beyah to devise the new robot.

“Honeypots give security professionals the ability to study the attackers, determine what methods they are using, and figure out where they are or potentially even who they are.”

Tricking hackers

Operators can monitor and control the gadget through the internet. But unlike other remote-controlled robots, the HoneyBot’s special ability is tricking its operators into thinking it is performing one task, when in reality it’s doing something completely different.

The HoneyBot is protecting factories from hackers. Image: Georgia Tech

“The idea behind a honeypot is that you don’t want the attackers to know they’re in a honeypot,” Beyah says. “If the attacker is smart and is looking out for the potential of a honeypot, maybe they’d look at different sensors on the robot, like an accelerometer or speedometer, to verify the robot is doing what it had been instructed. That’s where we would be spoofing that information as well. The hacker would see from looking at the sensors that acceleration occurred from point A to point B.”
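The spoofing Beyah describes can be sketched in a few lines. The snippet below is an illustrative assumption, not the HoneyBot's actual code: it fabricates an accelerometer trace for a move the robot never makes, shaped so that integrating it twice yields exactly the commanded distance.

```python
def spoofed_accel_profile(distance_m, accel=0.5, cruise_v=1.0, dt=0.01):
    """Fabricate accelerometer samples (m/s^2) for a move that never
    happens: ramp up, cruise, ramp down, sized so the trace integrates
    to the commanded distance. All parameter values are illustrative."""
    t_ramp = cruise_v / accel                     # time spent in each ramp
    d_ramps = cruise_v ** 2 / accel               # distance covered by both ramps
    t_cruise = (distance_m - d_ramps) / cruise_v  # assumes distance_m > d_ramps
    n_ramp, n_cruise = round(t_ramp / dt), round(t_cruise / dt)
    return [accel] * n_ramp + [0.0] * n_cruise + [-accel] * n_ramp

def integrate_twice(samples, dt=0.01):
    """What a suspicious attacker might do: recover the distance
    travelled from the accelerometer trace by numerical integration."""
    v = x = 0.0
    for a in samples:
        v += a * dt
        x += v * dt
    return x
```

An attacker cross-checking the "accelerometer" against the commanded move would see a consistent trip from point A to point B, even though the wheels never turned.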

In a factory setting, such a HoneyBot robot could sit motionless in a corner, springing to life when a hacker gains access—a visual indicator that a malicious actor is targeting the facility.

Rather than allowing the hacker to then run amok in the physical world, researchers could design the robot to follow certain commands deemed harmless, such as meandering slowly about or picking up objects, while stopping short of actually doing anything dangerous.
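That allow-then-pretend policy amounts to a small command filter. The sketch below is a hypothetical design, with made-up command names rather than the HoneyBot's real protocol: harmless commands execute physically, while anything else receives only simulated feedback.

```python
# Hypothetical command filter: the command names and the split between
# "execute" and "simulate" callbacks are illustrative assumptions,
# not the HoneyBot's actual API.
SAFE_COMMANDS = {"wander", "stop", "pick_up"}

def handle_command(cmd, execute, simulate):
    """Physically run commands deemed harmless; for anything dangerous,
    return spoofed feedback without moving at all."""
    if cmd in SAFE_COMMANDS:
        return execute(cmd)   # robot really acts
    return simulate(cmd)      # robot stays still, reports fake success
```

A call like `handle_command("ram_conveyor", ...)` would return fabricated telemetry while the robot sits motionless, which is itself a signal that the operator is hostile.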

So far, their technique seems to be working.

In experiments in December 2017 designed to test how convincing the false sensor data would be, volunteers used a virtual interface to guide the robot through a maze, with no view of what was happening in real life.

To entice the volunteers to break the rules, the researchers placed forbidden "shortcuts" at specific spots within the maze that would allow them to finish faster.

In the real maze back in the lab, no shortcut existed; if participants opted to take one, the robot simply remained still. Meanwhile, researchers fed the volunteers, who had now unwittingly become hackers for the purposes of the experiment, simulated sensor data indicating that they had passed through the shortcut and continued along.

“We wanted to make sure they felt that this robot was doing this real thing,” Beyah says.

In surveys after the experiment, participants who had genuinely controlled the device the whole time and those whom researchers had fed simulated data about the fake shortcut rated the data as believable at similar rates.

“This is a good sign because it indicates that we’re on the right track,” Irvene says.

The National Science Foundation supported the work. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
