
SOAR architecture

Allen Newell, John Laird, Paul Rosenbloom
Era: Second half of the 20th century · 1983
Region: North America · United States
Discipline: Computing / AI

Explanation

SOAR (State, Operator, And Result) is a general cognitive architecture developed since the 1980s by John Laird, Allen Newell and Paul Rosenbloom. Newell (1927-1992), one of the pioneers of AI alongside Herbert Simon, dedicated the last years of his life to formulating a unified theory of cognition, set out in his book Unified Theories of Cognition (1990), with SOAR as its computational realisation.

SOAR rests on a central tenet of classical cognitivism: all cognition can be modelled as problem-solving search in state spaces. At each moment, the agent is in a particular state, applies an operator to transform it, and obtains a result, which is a new state. Intelligence consists in skilfully selecting appropriate operators to navigate the state space toward desirable goals.
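The state-operator-result loop can be illustrated with a minimal breadth-first search over a toy state space. This is a sketch of the general idea only, not SOAR itself: the states (integers), the operators ("double", "increment") and the goal are illustrative assumptions.

```python
from collections import deque

# Toy state space: states are integers; operators transform one state
# into another. These operator names are illustrative, not from SOAR.
OPERATORS = {
    "double": lambda s: s * 2,
    "increment": lambda s: s + 1,
}

def search(start, goal):
    """Breadth-first search: repeatedly apply operators to the current
    state (state -> operator -> result) until the goal is reached.
    Returns the sequence of operator names, or None if unreachable."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, op in OPERATORS.items():
            result = op(state)            # result = new state
            if result not in visited and result <= goal * 2:
                visited.add(result)
                frontier.append((result, path + [name]))
    return None

print(search(2, 9))  # → ['double', 'double', 'increment']
```

Here "intelligence" reduces to choosing, at each state, the operator that moves the search closer to the goal, which is exactly the framing SOAR generalises.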

Main components:

  • Production memory: a set of condition-action rules.
  • Working memory: the current state, goals and expectations.
  • Decision architecture: the decision cycle that selects the next operator.
  • Learning by chunking: when a sub-problem is solved, SOAR compresses the solution into a new rule, acquiring operative knowledge.

This allows SOAR to improve with experience.
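The interplay of these components can be sketched in a few lines. The class below is a highly simplified illustration, assuming a toy rule format (a predicate plus an operator name); it is not the real SOAR implementation.

```python
class Agent:
    """Toy sketch of SOAR's components: production memory, working
    memory, a decision cycle, and chunking. Illustrative only."""

    def __init__(self):
        self.working_memory = {}      # current state, goals, expectations
        self.production_memory = []   # (condition, operator) rules
        self.chunks = {}              # learned state -> operator shortcuts

    def add_rule(self, condition, operator):
        self.production_memory.append((condition, operator))

    def decide(self, state):
        """Decision cycle: prefer a learned chunk, otherwise fire the
        first production rule whose condition matches the state."""
        if state in self.chunks:
            return self.chunks[state]
        for condition, operator in self.production_memory:
            if condition(state):
                return operator
        return None                   # impasse: no operator proposed

    def chunk(self, state, operator):
        """Chunking: compress a solved sub-problem into a new rule so
        the same situation is handled in one step next time."""
        self.chunks[state] = operator

agent = Agent()
agent.add_rule(lambda s: s < 10, "increment")
print(agent.decide(5))   # → increment
```

After `agent.chunk(...)` records a solution, `decide` answers from the learned shortcut without re-searching, which is the sense in which chunking lets the system improve with experience.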

SOAR has been applied in diverse domains: games (chess, draughts), control tasks, military simulations (tactical agents), psychological cognitive modelling (reproducing human patterns in experimental tasks), and intelligent tutors. Over time it has been extended with additional components: episodic memory, semantic memory, reinforcement learning, spatial and visual processing, and emotion modelling.

Recently, SOAR has also been explored to endow artificial agents with some functional consciousness capacities: self-attention modelling, monitoring of one's own goals, detection of impasses (situations where there is no progress and meta-cognitive reflection is required), generation of explanations about one's own behaviour. These capacities resemble some functional aspects of human consciousness (directed attention, metacognition, self-observation), although there is no consensus on whether they produce real phenomenal consciousness.
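The metacognitive capacities just described, monitoring progress, detecting impasses, and explaining one's own behaviour, can be gestured at with a small monitoring loop. Everything here (the function, the trace format, the toy operators) is an illustrative assumption, not SOAR's actual mechanism.

```python
def run_with_monitoring(select_operator, state, max_steps=10):
    """Apply operators while monitoring one's own progress. On an
    impasse (no operator proposed), record a meta-level explanation
    instead of failing silently. Returns the final state and a trace
    of self-generated explanations."""
    trace = []
    for _ in range(max_steps):
        op = select_operator(state)       # (name, function) or None
        if op is None:                    # impasse detected
            trace.append(f"impasse at state {state}: no operator proposed")
            break
        trace.append(f"applied {op[0]} at state {state}")
        state = op[1](state)
    return state, trace

# Toy policy: increment while below 3, then no operator applies.
select = lambda s: ("increment", lambda x: x + 1) if s < 3 else None
final, trace = run_with_monitoring(select, 0)
print(trace[-1])  # → impasse at state 3: no operator proposed
```

The trace plays the role of self-observation: the agent can report what it did and where it got stuck, a functional analogue of the metacognitive capacities discussed above.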

For the theory of consciousness, SOAR is a paradigmatic example of a cognitive architecture that attempts to capture general intelligence in computational terms. It is relevant for debating which functional aspects of the human mind a machine can reproduce, and also for identifying which aspects seem to escape the classical paradigm (phenomenal consciousness, lived emotions, genuine intentionality). SOAR and similar architectures (ACT-R, CLARION, LIDA, which we will see later) are useful platforms for testing concrete hypotheses about how human cognition works and what conscious AI might require. Although the field has evolved towards deep learning (very different from classical cognitivism in philosophy and methods), cognitive architectures remain relevant as possible syntheses between neuroscience, cognitive psychology and AI.

Strengths

  • Empirically productive unified architecture for decades.
  • Quantitative predictions about learning curves.
  • Increasing integration of emotional and episodic modules.
  • Computational analogue of the global workspace.
  • Foundation for research on functional consciousness in AI.

Main critiques

  • Excessively symbolic; difficulty with rich perception.
  • Chunking as the sole learning mechanism is limited.
  • Does not address the hard problem of consciousness.
  • Assumption of rational optimality in human decisions is questionable.

Connections with other theories