What helps people to cope with artificial intelligence?
New York, April 1, 2020
Learning how humans interact with AI machines – and using that knowledge to improve people’s trust in AI – can help us live in harmony with the ever-increasing number of robots, chatbots and other intelligent machines in our midst, according to a researcher at Penn State University.
In an article published in the current issue of the Journal of Computer-Mediated Communication, S. Shyam Sundar, James P. Jimirro Professor of Media Effects at the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory, proposed a framework for studying artificial intelligence that could help researchers better understand how humans interact with AI – a field known as human-AI interaction (HAII).
“This is an attempt to systematically explore all the ways in which artificial intelligence could affect users psychologically, particularly in terms of trust,” said Sundar, who is also a member of Penn State’s Institute for Computational and Data Sciences (ICDS). “Hopefully, the theoretical model presented in this paper will provide researchers with a framework and vocabulary for studying the socio-psychological effects of AI”.
The framework identifies two routes – cues and actions – that AI developers can focus on to gain users’ trust and improve the user experience. “The cue route is based on superficial indicators of what the AI looks like or seems to be doing,” he explained.
This route encompasses several cues that affect whether users trust the AI, such as human-like features – for example, the human face some robots have, or the human-like voices used by virtual assistants like Siri and Alexa.
Other cues can be more subtle, such as an explanation of how the device works, or the way Netflix explains why it recommends a particular movie to viewers. Each of these cues can trigger a different mental shortcut, or heuristic, Sundar says.
“When an AI is identified to the user as a machine rather than a human being, as is often the case with modern chatbots, it triggers the ‘machine heuristic’ – a mental shortcut that causes us to automatically apply any stereotypes we have about machines,” Sundar said. “We may think that machines are accurate and precise, but we could also think of computers and machines as cold and unyielding”. These stereotypes in turn determine how much we trust the AI system.
Sundar suggested that autopilot systems in aircraft are an example of how excessive trust in AI can lead to negative effects. Pilots can trust the autopilot system so implicitly that they relax their vigilance and are not prepared for sudden changes in performance or aircraft malfunctions that would require their intervention.
There are also people with an aversion to algorithms, Sundar said. They may have been deceived by ‘deepfakes’, received the wrong product recommendation from an e-commerce site, or felt their privacy was violated by an AI snooping on their purchases.
Sundar advised developers to pay close attention to the cues they offer users. He calls the second way of influencing the user experience – through the AI’s behavior – the “action route”.
“The action route is really about collaboration,” said Sundar. “AIs should really engage with us and work with us. Most of the new AI tools – smart speakers, robots and chatbots – are highly interactive. In this case, it’s not just about visible cues of what they look like and what they say, but also about how they interact with you”.
“If your smart speaker asks you too many questions or interacts with you too much, that could also be a problem,” Sundar said. “People want collaboration. But they also want to minimize their costs. If the AI keeps asking you questions, the whole point of the AI – the convenience – is gone”.
Sundar said he expects the framework of cues and actions to guide researchers in exploring these two routes to trust in AI. That, in turn, will yield evidence to inform how developers and designers create AI-based tools and technologies for people.
AI technology is developing so rapidly that many critics are pushing to ban certain applications altogether. Sundar said that giving researchers the time to thoroughly investigate and understand how people interact with the technology is a necessary step toward helping society reap the benefits of these devices while minimizing their potential negative effects.
“We will make mistakes,” said Sundar. “From the printing press to the Internet, new media technologies have led to negative consequences, but they have also produced many more benefits. There is no question that certain manifestations of AI will frustrate us, but at some point we will have to coexist with AI and bring it into our lives”.