All three authors are neuroscientists, and they argue that although the responses of systems like ChatGPT seem conscious, they most likely are not. First, the inputs to language models lack the embodied, embedded information content that characterizes our sensory contact with the world around us. Second, the architectures of present-day AI algorithms lack key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions, and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes that culminate in agency and consciousness.
Thus, while it is tempting to assume that ChatGPT and similar systems might be conscious, this assumption would severely underestimate the complexity of the neural mechanisms that generate consciousness in our brains. Researchers have not reached a consensus on how consciousness arises in our brains. What we do know, and what this new paper points out, is that these mechanisms are likely far more complex than the mechanisms underlying current language models. For instance, as the authors note, real neurons are nothing like the neurons in artificial neural networks. Biological neurons are physical entities that can grow and change shape, whereas the neurons in large language models are just meaningless pieces of code. We still have a long way to go before we understand consciousness, and hence a long way to go before we build conscious machines.