‘Silly’ to expect AI mimicking human-written text could be easily detected

OpenAI, the creator of the chatbot ChatGPT, has released a software tool to identify text generated by artificial intelligence. However, the company warned that the tool correctly identified AI-generated text as “likely AI-written” only about a quarter of the time.

Mor Naaman, professor of information science at Cornell University and associate dean at Cornell Tech, researches the trustworthiness of our information ecosystem. He says the technology is – as usual – racing ahead with no guardrails.

Naaman says:

“It would be silly to expect AI that is designed to mimic human-written text to be easily distinguishable from such text. In an arms race of adversarial AI techniques, where new AI can improve its performance along dimensions such as detectability by existing classifiers, we are not likely to have technologies that robustly detect AI-generated text.

“Even more disconcerting, humans are also bad at this task: recent research shows that humans are unable to distinguish AI- from human-written text – whether it is in online reviews, news articles, poetry, Shakespearean prose or online profile self-descriptions.

“To protect from a future flood of AI-written manipulation, we would need to develop new techniques, new publication norms, new mechanisms to mark human-created (or human-approved!) text, and perhaps new laws and regulations. The technology is, as usual, racing ahead with no guardrails – but there is already research (including from my group) that is trying to map out and address these challenges.”

For interviews contact: Becka Bowyer, cell: (607) 220-4185

[email protected]

Cornell University has dedicated television and audio studios available for media interviews.

-30-
