Oskar Lindwall, a professor of communication at the University of Gothenburg, worked with Jonas Ivarsson, a professor of informatics, on an article called "Suspicious Minds: The Problem of Trust and Conversational Agents." The article examines how people react when communicating with an AI agent and how suspicion affects relationships, warning that being overly suspicious of others can harm those relationships.
Ivarsson illustrates how suspicion and mistrust can damage relationships with the example of romantic partners: when one partner does not trust the other, jealousy grows, along with an increased urge to find evidence of deception. The authors argue that the same kind of suspicion can arise when interacting with conversational agents, even when there is no reason for it.
Their study discovered that during interactions between two humans, some behaviors were interpreted as signs that one of them was actually a robot.
The researchers observe that AI is being designed to look and sound increasingly human, which becomes problematic when people cannot tell whether they are interacting with a machine or a person. Ivarsson argues that giving AI a human-like voice can create a false sense of intimacy and lead people to form impressions based on the voice alone.
Lindwall and Ivarsson suggest that a natural-sounding human voice makes it difficult to recognize that we are interacting with a computer. Hearing a voice, we infer attributes such as age, gender, and social background, which makes it even harder to identify the speaker as an artificial system. They explain that this can lead to scenarios like the one in their study involving a suspected fraudster, who assumes he is talking to an elderly man when he is in fact connected to a computer system.
The researchers propose developing AI voices that are well-functioning and eloquent but still clearly synthetic, which would increase transparency.
Uncertainty about whether we are talking to a human or a computer also affects relationship-building and the joint meaning-making that communication depends on. This could have a negative impact on forms of therapy that require a strong human connection, although it might matter less in, for example, cognitive-behavioral therapy.
Study Information
Jonas Ivarsson and Oskar Lindwall analyzed data made publicly available on YouTube. They studied three types of conversations, along with audience reactions and comments on them. In the first type, a robot calls a person to book a hair appointment, unbeknownst to the person on the other end. In the second, a person calls another person for the same purpose. In the third, telemarketers are transferred to a computer system that uses pre-recorded speech.
The study is open access and published in Computer Supported Cooperative Work: https://link.springer.com/article/10.1007/s10606-023-09465-8#Abs1
Source: https://www.gu.se/en/news/the-influence-of-ai-on-trust-in-human-interaction