SOMA: self-organising metarepresentational account (Cleeremans)
Explanation
The Belgian psychologist and neuroscientist Axel Cleeremans has defended a provocative thesis for over a decade: consciousness is not an innate property of the brain; it is something the brain learns to do. The SOMA framework (Self-Organising Metarepresentational Account) formulates that intuition technically: consciousness emerges when a nervous system develops, through self-organising and socially mediated learning, representations of its own representations.
The SOMA architecture has three levels. At the first, the brain processes sensorimotor information non-consciously, like any well-trained deep neural network. At the second, parts of the brain learn to model how the first level processes information, generating metarepresentations: internal descriptions of the system's own processing. At the third, those metarepresentations are refined through social interaction, linguistic feedback and autobiographical narrative. Conscious experience would correspond to the integrated metarepresentations of the second and third levels.
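The division of labour between the first two levels can be illustrated with a toy simulation in the spirit of Cleeremans's metacognitive-network models. Everything concrete here is an illustrative assumption, not his actual simulations: a small first-order network learns a noisy classification task, and a second-order network, which sees only the first-order network's hidden state (never the stimulus or the true label), learns to predict whether that first-order processing will succeed, i.e. it learns a representation of the system's own representations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, epochs=600, lr=0.5):
    """Tiny one-hidden-layer classifier, batch gradient descent on cross-entropy."""
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    w2 = rng.normal(scale=0.5, size=hidden)
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        p = sigmoid(H @ w2 + b2)
        err = (p - y) / len(X)            # cross-entropy gradient at the output
        dH = np.outer(err, w2) * (1 - H ** 2)
        W1 -= lr * X.T @ dH
        b1 -= lr * dH.sum(0)
        w2 -= lr * H.T @ err
        b2 -= lr * err.sum()
    return W1, b1, w2, b2

def forward(params, X):
    W1, b1, w2, b2 = params
    H = np.tanh(X @ W1 + b1)
    return H, sigmoid(H @ w2 + b2)

# Level 1: first-order, non-conscious processing of a noisy task.
X = rng.normal(size=(3000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # true category
X = X + rng.normal(scale=0.8, size=X.shape)  # sensory noise

first = train_mlp(X, y)
H1, p1 = forward(first, X)
correct = ((p1 > 0.5) == (y > 0.5)).astype(float)

# Level 2: metarepresentation. The second-order net observes only the
# first-order hidden state H1 and learns to predict its success.
meta = train_mlp(H1, correct)
_, conf = forward(meta, H1)

print("first-order accuracy:", correct.mean())
print("learned confidence on correct vs error trials:",
      conf[correct == 1].mean(), conf[correct == 0].mean())
```

On this toy setup the learned "confidence" ends up higher on trials the first-order network actually gets right, without the second-order network ever accessing the world directly, which is the structural point SOMA makes about metarepresentation.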
The hypothesis is a sophisticated hybrid of ideas from GWT (global access), HOT (higher-order thought), social theories of mind (Mead, Vygotsky) and predictive processing (self-supervised learning as the engine of cognitive development), without dissolving into any of them. The self-organising dimension is crucial: SOMA posits no internal homunculus or dedicated modules; consciousness emerges from the dynamics of the system's own learning, as a stable pattern the brain reaches by successively refining its self-models.
An interesting implication: if consciousness is learned, there are degrees, trajectories and contingencies. Babies, animals, subjects with brain damage and psychiatric patients may each have partial or different forms of consciousness, depending on the degree and type of metarepresentation developed. Altered states (sleep, psychedelics, deep meditation) would correspond to temporary reorganisations of the metarepresentational hierarchy. And, controversially, artificial systems could develop consciousness if they meet the dynamic conditions for recursive self-modelling.
SOMA has received decisive institutional recognition: the ARC project has expressly included it, together with HOT (higher-order thought), HOSS (higher-order state space) and PRM (perceptual reality monitoring), in an adversarial collaboration dedicated to higher-order theories. This places it in the current institutional landscape as one of four variants to be contrasted empirically. The signal is organisational as well as bibliometric, not mere popularity: SOMA has entered the formal debate as an interlocutor with an identity of its own.
The criticisms point in two directions. First, SOMA sometimes seems more like a broad integrative framework than a theory with hard contours: what specifically distinguishes it from other higher-order theories? Cleeremans has responded by emphasising its dynamic, learning-based dimension, against more static versions of HOT. Second, its distinctive predictions still need to be separated more sharply from those of other higher-order variants. Even so, its attempt to combine learning, metarepresentation, self-organisation and the social dimension makes it a rich proposal that deserves consideration in its own right in any contemporary catalogue.
Strengths
- Sophisticated hybrid of GWT, HOT, social theories of mind and learning.
- Explains development and degrees of consciousness.
- Dynamic, learning-based dimension, not static.
- Recognised in ARC as a differentiated higher-order theory.
- Applicable to animals, pathologies and artificial systems.
Main critiques
- Sometimes seems an integrative framework rather than a theory with hard contours.
- Distinctive predictions vis-à-vis HOT, HOSS, PRM still to refine.
- Risk of infinite regress of metarepresentations.
- Direct evidence still limited.