Robots are more convincing when they pretend to be human

13 November 2019 | By Horst Buchwald

New York, 14.11.2019

Recent breakthroughs in artificial intelligence have made it possible for machines, or bots, to pass as human. A research team led by Talal Rahwan, Associate Professor of Computer Science at NYU Abu Dhabi, conducted an experiment to examine how people interact with bots they believe to be human, and how such interactions change when the bots reveal their identity.

The researchers found that bots are more efficient than humans in certain human-machine interactions, but only if they are allowed to hide their non-human nature.

In their article "Behavioral Evidence for a Transparency-Efficiency Tradeoff in Human-Machine Cooperation," published in Nature Machine Intelligence, the researchers describe their experiment, in which participants were asked to play a cooperative game with either a human or a bot partner. The game, the Iterated Prisoner's Dilemma, is designed to capture situations in which each of the interacting parties can either act selfishly, exploiting the other, or cooperatively, achieving a mutually beneficial outcome.
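
The game's incentive structure can be made concrete with a small payoff table. The Python sketch below uses the canonical Prisoner's Dilemma payoffs for illustration; the exact values and round structure used in the study are not given in this article:

```python
# A minimal sketch of the Prisoner's Dilemma payoff structure described
# above. Payoff values are the textbook defaults, not the study's own.

# Payoffs (my_payoff, their_payoff) indexed by (my_move, their_move),
# where "C" = cooperate and "D" = defect (act selfishly).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both do reasonably well
    ("C", "D"): (0, 5),  # the cooperator is exploited by the defector
    ("D", "C"): (5, 0),  # the defector exploits the cooperator
    ("D", "D"): (1, 1),  # mutual defection: both do poorly
}

def play_round(move_a, move_b):
    """Return the payoffs for one round of the game."""
    return PAYOFFS[(move_a, move_b)]

# In the *iterated* version, the same two partners play repeated rounds,
# so each can condition their next move on the other's past behavior.
if __name__ == "__main__":
    print(play_round("C", "D"))  # -> (0, 5)
```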

Crucially, the researchers gave some participants false information about their partner's identity: some participants who interacted with a human were told they were interacting with a bot, and vice versa. This design allowed the researchers to determine whether people are prejudiced against partners they believe to be bots, and to assess how far such prejudice reduces the effectiveness of bots that are transparent about their non-human nature.
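
Crossing the partner's true nature with what participants were told yields a 2x2 design. The following sketch enumerates those conditions; the field names are illustrative, not the labels used in the paper:

```python
from itertools import product

# Each participant has a true partner type and a *told* partner type,
# which may or may not match (the mismatched cells are the deceptive ones).
PARTNER_TYPES = ("human", "bot")

conditions = [
    {"actual_partner": actual, "told_partner": told, "deceptive": actual != told}
    for actual, told in product(PARTNER_TYPES, repeat=2)
]

for c in conditions:
    print(c)

# Comparing cooperation rates across these four cells separates the effect
# of the partner's true nature from the effect of its perceived nature.
```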

The results showed that bots posing as humans were more effective than humans at persuading their partner to cooperate in the game. Once their true nature was revealed, however, cooperation rates dropped and the bots' advantage disappeared. "Although there is broad consensus that machines should be transparent about how they make decisions, it is less clear whether they should be transparent about who they are," Rahwan said.

"Consider, for example, Google Duplex, an automated voice assistant capable of generating human-like speech to make phone calls and book appointments on behalf of its user. Google Duplex's speech is so realistic that the person on the other end of the line may not realize they are talking to a bot. Is it ethically correct to develop such a system? Should we prohibit bots from posing as human beings, and force them to disclose that they are a machine?"