Insults, provocations and slander hurt, even if they come from a robot
New York, December 9, 2019.
Words are sometimes like arrows. Can discouraging words or insinuations from a robot produce the same reactions in humans as those from a person? This was the question researchers at Carnegie Mellon University set out to answer in their human-robot interaction studies.
Until now, Pepper, a well-known humanoid robot, was associated with a sensitive and mild use of words. But it can also do otherwise: “I have to say, you’re a terrible player,” it pointed out, and immediately added: “Over the course of the game, your play has become confused.” The result of this confrontation is surprising: people who played a game with the robot did worse when it discouraged them, and better when it encouraged them. Lead author Aaron M. Roth noted that some of the 40 study participants were technically sophisticated and fully understood that a machine was the source of their discomfort.
“This is one of the first studies of human-robot interaction in an environment where they do not cooperate,” explained co-author Fei Fang, assistant professor at the Institute for Software Research. The result could have an enormous impact, especially as we move towards a world where the number of robots and Internet of Things (IoT) devices with artificial intelligence is expected to grow exponentially. “We can expect domestic assistants to be cooperative,” she said, “but in some situations, such as online shopping, they may not share our goals.”
In the game, called “Guards and Treasures,” each participant played 35 rounds against the robot while hearing either encouraging words or dismissive remarks from it. Although participants’ rational play improved over the course of the games regardless of the robot’s comments, those criticized by the robot achieved worse results than those it praised.
Psychologists have long known that an individual’s performance is influenced by what other people say. But this study revealed that people react to what machines say in much the same way, summarized Afsaneh Doryab, systems scientist at the Human-Computer Interaction Institute (HCII) at CMU. What’s more, the ability of these machines to respond in real time could have implications for automated learning, mental health treatment, and even the use of robots as companions, she said.
Fang said the researchers now want to learn more about whether and how different types of machines – for example, a humanoid robot compared to a computer box – might provoke different reactions in humans.