Screenshot from Star Wars: The Force Awakens by Lucasfilm. Source: Wikimedia (public domain)
Roboticists want people to trust their creations, but a recent experiment shows you can have too much of a good thing.
Aviva Rutkin

New Scientist
1 Mar 2016 - 12:28 PM

A university student is holed up in a small office with a robot, completing an academic survey. Suddenly, an alarm rings and smoke fills the hall outside the door.

The student is forced to make a quick choice: escape via the clearly marked exit that they entered through, or head in the direction the robot is pointing, along an unknown path and through an obscure door.

That was the real choice posed to 30 subjects in a recent experiment at the Georgia Institute of Technology in Atlanta. The results surprised researchers: almost everyone elected to follow the robot – even though it was taking them away from the real exit.

“We were surprised,” says Paul Robinette, the graduate student who led the study. “We thought that there wouldn’t be enough trust, and that we’d have to do something to prove the robot was trustworthy.”


The unexpected result is another piece of a puzzle that roboticists are struggling to solve. If people don’t trust robots enough, then the bots probably won’t be successful in helping us escape disasters or otherwise navigate the real world. But we also don’t want people to follow the instructions of a malicious or buggy machine. To researchers, the nature of that human-robot relationship is still elusive.

In the emergency study, Robinette’s team used a modified Pioneer P3-AT, a robot that looks like a small bin on wheels, with illuminated foam arms for pointing. Each participant individually followed the robot along a hallway until it pointed to the room they were to enter. There, they filled in a survey rating the robot’s navigation skills and read a magazine article. The emergency was simulated with artificial smoke and a First Alert smoke detector.

A total of 26 of the 30 participants chose to follow the robot during the emergency. Of the remaining four, two were thrown out of the study for unrelated reasons, and the other two never left the room.

Misplaced trust?

The results suggest that if people are told a robot is designed to do a particular task – as was the case in this experiment – they will probably trust it to do that task automatically, say the researchers. Indeed, in a survey given after the fake emergency was over, many of the participants explained that they followed the robot specifically because it wore a sign reading “EMERGENCY GUIDE ROBOT”.

The work will be presented in March at the ACM/IEEE International Conference on Human-Robot Interaction in Christchurch, New Zealand.

Robinette likens the relationship to the way in which drivers sometimes follow the odd routes mapped by their GPS devices. “As long as a robot can communicate its intentions in some way, people will probably trust it in most situations,” he says.

“It amazes me that everyone followed that robot,” says Holly Yanco, who studies human-robot interaction at the University of Massachusetts Lowell. She wonders whether the fact that it was an emergency situation rather than an ordinary laboratory task pushed people to trust the robot in that split second. “It might just be that they thought the robot had more information than they did,” she says.

How far would that blind trust go? In a series of follow-up experiments, Robinette and his colleagues put small groups of people through the same experience, but with added twists. Sometimes the robot would “break down” or freeze in place during the initial walk along the hallway, prompting a researcher to come out and apologise for its poor performance. Even so, almost everyone still followed the robot during the emergency.

In another follow-up test, the robot would point to a darkened room, with the doorway partially blocked by a piece of furniture. Two of six participants tried to squeeze past the obstruction rather than taking the clear path out.

Too much trust in a robot can be a serious problem, says Kerstin Dautenhahn at the University of Hertfordshire, UK. “Any piece of software will always have some bugs in it,” she says. “It is certainly a very important point to consider what that actually means for designers of robots, and how we can maybe design robots in a way where the limitations are clear.”


This article was originally published in New Scientist. © All rights reserved. Distributed by Tribune Content Agency.