People can now have a spoken conversation with OpenAI’s chatbot ChatGPT. The company says it can generate human-like audio from just text and a few seconds of sample speech.
Mor Naaman, professor at Cornell Tech, studies AI-mediated communication and its impact on society. His previous research found that biases baked into AI writing tools – whether intentional or unintentional – could have concerning repercussions for culture and politics.
Naaman says:
“One hopes that the companies deploying increasingly human-like interaction techniques have a good understanding of how they impact trust and well-being – but I am worried that these companies did not do that legwork.
“We know that these language-model-based agents are not always the most accurate. We do not yet understand how people interacting with these models evaluate the models’ output. Leveling up the interaction may exacerbate the tendency of humans to accept answers from these chatbots as accurate or trustworthy. Regarding well-being, we know people treat technology like chatbots as they do other humans.
“Now that we all have a 21st century Clippy to interact with, how will that impact our social interactions beyond the bot? I am afraid we are about to find out, whether we want to or not.”
-30-