Hinton says, “Suppose I take one neuron in your brain—one brain cell—and replace it with a little piece of nanotechnology that behaves exactly [sic] the same way. It’s receiving pings from other neurons and responding by sending out pings in exactly the same way as the original brain cell. I’ve just replaced one brain cell.

Are you still conscious?

I think you’d say absolutely yes.”
Hinton’s thought experiment is actually David Chalmers’ argument.[1] As Hinton ages, he seems less willing to credit others. In any case, Chalmers’ argument is a defense of functionalism.
The idea is that consciousness supervenes on functional organization rather than on any specific physical substrate. Chalmers’ goal was to show that functional organization preserves subjective experience (i.e., qualia) even when it is implemented in a non-biological substrate like silicon (i.e., computers).
Functionalism focuses on the causal roles of individuated functions, not on the brain or what it is made of. It treats intelligence (i.e., mental phenomena) as the brain’s functional organization, where individuated functions like language and vision are understood by their causal roles. Functionalism is not interested in whether two systems work the same way internally or are made of the same material. It doesn’t care if the thing that thinks is a brain, or if that brain has a body. If it functions like intelligence, it is intelligent, just as anything that tells time is a clock. It doesn’t matter what the clock is made of as long as it keeps time. This is why I push back when someone says that artificial neural networks are inspired by the brain. They are not, and no one gives an F anyway. Functionalism sees intelligence emerging from a collection of individuated functions within some administrative structure. In the end, functionalism is uninterested in intelligence itself; it seeks an administrative theory.
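The clock analogy is the functionalist idea of multiple realizability, and it can be made concrete in a few lines of code. The sketch below (class and function names are my own, purely illustrative) builds two “clocks” from different substrates; a test that probes only input-output behavior, which is all functionalism looks at, cannot tell them apart:

```python
# A minimal sketch of multiple realizability: two clocks with different
# internal substrates expose identical input-output behavior, so a purely
# functional test cannot distinguish them. All names here are illustrative.

class PendulumClock:
    """Keeps time by counting pendulum swings (mechanical substrate)."""
    def __init__(self):
        self.swings = 0

    def tick(self):
        self.swings += 1  # one swing per second

    def time_elapsed(self):
        return self.swings


class QuartzClock:
    """Keeps time by counting crystal oscillations (electronic substrate)."""
    OSCILLATIONS_PER_SECOND = 32_768

    def __init__(self):
        self.oscillations = 0

    def tick(self):
        self.oscillations += self.OSCILLATIONS_PER_SECOND

    def time_elapsed(self):
        return self.oscillations // self.OSCILLATIONS_PER_SECOND


def functional_test(clock, seconds=5):
    """Probe only input-output behavior: tick N times, then read the time."""
    for _ in range(seconds):
        clock.tick()
    return clock.time_elapsed()


# Different substrates, identical functional profile.
print(functional_test(PendulumClock()))  # 5
print(functional_test(QuartzClock()))    # 5
```

To the functionalist, that indistinguishability is the whole point; the essay’s objection is that intelligence may not be the kind of thing such a test can capture.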
The hope for the functionalist view of intelligence is that researchers can replicate enough parts of intelligence that the machine can figure out how to do the rest. Unfortunately, functions are not intelligent. They are aspects of thinking. The issue with functionalism, aside from the reductionism that results from treating thinking as a collection of functions, is that it ignores intelligence. While the brain has localized functions with input-output pairs that can be represented as a physical system inside a computer, intelligence is not a loose collection of localized functions. Intelligence is not a neuron or a collection of neurons. Intelligence is not part of the brain or even in the skull. Even if we had a detailed map enumerating the countless functions, or a neuronal map of every neuron and its connections, it would still be hard to believe we could understand intelligence, let alone the hard problem of consciousness.
If functionalism were as unassailable as Hinton suggests, why don’t computers perform or fail like humans? Said differently, if functionalism implies there is no meaningful difference between brains and computers, why are they so different?
If functionalism is not a theory of intelligence, it is unlikely to be a framework that supports consciousness. The flaw in Chalmers’ argument—and, by extension, Hinton’s—is that it ignores the possibility that consciousness arises from the system-wide properties of biological cognition rather than from isolated computational functions.
Swapping a neuron for a functionally equivalent microchip assumes that neurons are just computational units rather than active participants in a larger, complex biological process that may not be reducible to computation. If functionalism were correct, AI should develop intelligence through functional replication alone; instead, it imitates intelligence. It doesn’t generate it.
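It is worth spelling out what the swap argument assumes a neuron is. The sketch below uses a leaky integrate-and-fire model, a standard textbook abstraction rather than a claim about real neurons; the function name and parameters are illustrative. It casts the neuron as a pure input-output device: it receives “pings,” integrates them, and emits a ping when a threshold is crossed. Any chip reproducing this mapping would count as “functionally equivalent,” which is exactly the reduction the paragraph above questions.

```python
# A leaky integrate-and-fire neuron as a pure input-output function --
# the computational-unit picture of a neuron that the swap argument assumes.
# Parameters (threshold, leak, weight) are arbitrary illustrative values.

def lif_neuron(incoming_pings, threshold=1.0, leak=0.9, weight=0.4):
    """Map a binary input spike train to a binary output spike train."""
    potential = 0.0
    out = []
    for ping in incoming_pings:
        potential = potential * leak + weight * ping  # leak, then integrate
        if potential >= threshold:  # threshold crossed:
            out.append(1)           # send a ping onward
            potential = 0.0         # reset membrane potential
        else:
            out.append(0)
    return out


# Three rapid pings accumulate enough potential to trigger an output ping.
print(lif_neuron([1, 1, 1, 0, 1, 1, 1]))  # [0, 0, 1, 0, 0, 0, 1]
```

Everything a biological neuron does beyond this mapping—metabolism, neuromodulation, plasticity, its embedding in a living body—is invisible to the functional description, and that is precisely what the swap quietly discards.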
Ultimately, AI does not train, perform, or fail like humans because AI is not intelligent. It is a collection of functional simulations, often exceptional ones. But functionalism is not a cognitive theory. AI systems do not “think” as humans do, and the differences between artificial and biological cognition are not trivial but fundamental. If AI were conscious simply because it replicated functional roles, simulated fire would burn, simulated digestion would nourish, and a mirror would become the person it reflects.
[1] David Chalmers, “Absent Qualia, Fading Qualia, Dancing Qualia” (1996)