Turing-machine functionalism
Explanation
Turing-machine functionalism is one of the first rigorous formulations of functionalism in the philosophy of mind. Hilary Putnam articulated it in the 1960s: mental states are identified not with specific physical states of the brain (as the identity theory holds), but with functional states characterized by their computational role, analogous to the states of a Turing machine. What matters is not what the substrate is made of but what it does: the causal relations it maintains with inputs, outputs and other states.
The Turing machine is an abstract device that computes functions according to a state table. Each state is defined by the transitions it produces in response to different inputs, not by its intrinsic properties. Putnam applied this logic to the mind: a mental state such as "believing it is raining" is defined by its role in the network of beliefs, desires and behaviours, not by a particular neuronal configuration. Two systems with the same functional structure would have the same mental states, whatever their physical substrate.
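The state-table idea can be made concrete with a minimal sketch (the interpreter and the sample machine below are illustrative constructions, not from any standard source). Note that each state is exhausted by its row in the transition table: given the symbol under the head, the table says what to write, which way to move, and which state comes next. Nothing intrinsic to a state matters, only its transitions.

```python
# Minimal Turing-machine interpreter. A state has no intrinsic
# properties: it is fully characterized by its transition entries.
def run_turing_machine(table, tape, state="q0", head=0, halt="halt"):
    cells = dict(enumerate(tape))  # sparse tape; blank symbol is "_"
    while state != halt:
        symbol = cells.get(head, "_")
        # Look up this (state, symbol) pair: what to write,
        # where to move the head, and the successor state.
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: flip every bit, halt on the first blank.
FLIP = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(FLIP, "1011"))  # prints "0100"
```

On Putnam's analogy, a mental state plays the role of a row-set like "q0" here: it is individuated by what it does with inputs and which states it leads to.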
This thesis has revolutionary implications. It is the theoretical basis of computationalism in AI: if the mind is a functional pattern, in principle it can be realized in any substrate (silicon, artificial neural networks, hydraulic systems, even a population of humans passing messages). The mind would be substrate-independent, opening the door to the possibility of artificial minds with genuine consciousness, not mere simulation.
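The multiple-realizability claim can be sketched in miniature: two implementations with very different internal machinery but the same input-output causal profile count, functionally, as the same system. The toy "parity detector" below is my own illustration of the point, not an argument about consciousness.

```python
# Two "substrates" realizing one and the same functional organization:
# a detector for whether a bit string contains an even or odd number
# of 1s. Functionalism says the shared transition structure, not the
# implementation, is what individuates the system's states.

def parity_table(bits):
    # Substrate 1: an explicit state-transition table.
    table = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}
    state = "even"
    for b in bits:
        state = table[(state, b)]
    return state

def parity_arith(bits):
    # Substrate 2: arithmetic, with no table in sight.
    return "odd" if sum(map(int, bits)) % 2 else "even"

# Functionally identical: same output for every input.
for s in ["", "1", "1011", "0000"]:
    assert parity_table(s) == parity_arith(s)
```

By functionalist lights, "being in the odd state" is the same state in both realizations, just as (on the thesis) a belief could be the same state in neurons or in silicon.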
For consciousness, functionalism is both attractive and problematic. Attractive, because it explains how physical processes can have mental properties without invoking separate substances. Problematic, because it is unclear whether functional organization suffices for subjective experience, or whether something remains (qualia, the "what it is like") that functional organization fails to capture. This is the classic objection from the hard problem of consciousness (Chalmers): we can conceive of functional zombies, systems functionally identical to us yet without conscious experience.
Putnam himself abandoned computational functionalism toward the end of his career, judging that it failed to capture essential aspects of meaning and mental experience. Other functionalists (Fodor, and Dennett in a way) have defended modified versions. The discussion has generated variants: psychofunctionalism (based on actual psychological structures, not abstract ones), homuncular functionalism (hierarchies of subsystems), causal-role functionalism (based on specific causal relations). All share the basic thesis: the mental is what it does, not what it is made of.
Additional critiques include Searle's famous "Chinese Room" argument: a person who follows rules to manipulate Chinese symbols without understanding them could produce the correct outputs without having consciousness or understanding. This suggests that functionalism conflates syntactic symbol manipulation with genuine understanding. Defenders offer the "systems reply": the person in the room is only a component of a wider system, and it is the system as a whole that could have understanding. The debate, still alive, is central to the philosophy of mind and to contemporary AI developments.
Strengths
- Compatible with naturalism and multiple realizability.
- Philosophical foundation of cognitive AI.
- Overcomes dualism without requiring identification of mental states with specific physical types.
- Flexible framework for theorizing about artificial and animal minds.
Main critiques
- Zombie, inverted-spectrum and China-brain (Block) arguments question whether functional organization suffices for experience.
- Difficulty specifying the right function without circularity.
- Does not address the hard problem: function ≠ experience.