
Machine consciousness (Aleksander, Holland)

Igor Aleksander, Owen Holland
Era: 21st century · 2003
Region: Europe · United Kingdom
Discipline: Computing / AI

Explanation

Machine consciousness is the field investigating whether machines can be conscious and how to design systems that are. Pioneers include the British computer scientist Igor Aleksander (Imperial College London, author of How to Build a Mind, 2003, and The World in My Mind, My Mind in the World, 2005), Owen Holland (first professor of Robotics and Machine Consciousness, Essex), Ricardo Sanz (UPM Madrid), Antonio Chella (Palermo), and Pentti Haikonen (Finland). Since the 2000s there has been an international community with dedicated journals and conferences.

Aleksander proposed five axioms of consciousness: (1) sense of presence in a world; (2) imagination (capacity to evoke absent worlds); (3) attention (selective focusing); (4) planning (anticipation of possible futures); (5) emotion (valuation of states). He argued that a system implementing these five axioms in an integrated way would have consciousness in a meaningful functional sense, and he designed recurrent neural networks (Neural Architecture for Consciousness, NAC) that attempted to materialise this.
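
Aleksander's integration requirement can be pictured as a capability checklist: it is the joint presence of all five capacities, not any one alone, that is claimed to matter. The sketch below is a hypothetical rendering for illustration only; the names and structure are not Aleksander's actual NAC architecture.

```python
from dataclasses import dataclass, field

# The five axioms, as named in the text. Purely illustrative labels.
AXIOMS = ("presence", "imagination", "attention", "planning", "emotion")

@dataclass
class Agent:
    # Maps each axiom name to whether the agent implements it.
    capacities: dict = field(default_factory=dict)

    def implements(self, axiom: str) -> bool:
        return self.capacities.get(axiom, False)

    def satisfies_axioms(self) -> bool:
        # Aleksander's claim concerns the *integrated* presence of all
        # five capacities, not each in isolation.
        return all(self.implements(a) for a in AXIOMS)

full = Agent({a: True for a in AXIOMS})
partial = Agent({"presence": True, "attention": True})
print(full.satisfies_axioms())     # True
print(partial.satisfies_axioms())  # False
```

The checklist form makes the main critique below concrete: an agent can tick all five boxes functionally, and the question of whether that suffices for phenomenal consciousness remains untouched.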

Holland directed the CRONOS project (2004-2007), which built a humanoid robot with a biomimetic skeleton (anthropomimetic: imitating human anatomy with bones, tendons and muscles) to study how morphology affects cognition, and how a system with a self-model (a model of itself, based on proprioception and external observation) could develop forms of self-consciousness. He published work on the importance of self-models for intelligent action.
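
The core idea of a self-model can be illustrated as a prediction loop: the robot predicts the sensory consequences of its own motor commands and uses prediction error to refine its body model. Everything below is a hypothetical one-parameter simplification for illustration, not the actual CRONOS design.

```python
def true_body(command: float, gain: float = 2.0) -> float:
    """The real motor-to-sensor mapping, unknown to the robot."""
    return gain * command

class SelfModel:
    def __init__(self):
        self.gain_estimate = 1.0  # initial guess about its own body

    def predict(self, command: float) -> float:
        return self.gain_estimate * command

    def update(self, command: float, observed: float, lr: float = 0.1) -> float:
        # Compare predicted proprioception with what was actually sensed,
        # and nudge the body model toward reducing the discrepancy.
        error = observed - self.predict(command)
        self.gain_estimate += lr * error * command
        return error

model = SelfModel()
for _ in range(100):
    cmd = 1.0
    model.update(cmd, true_body(cmd))
print(round(model.gain_estimate, 2))  # converges toward 2.0
```

The point of the sketch is that a self-model is learned from the agent's own action-perception history, which is why morphology (what the body actually does) shapes the resulting cognition.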

Pentti Haikonen, in The Cognitive Approach to Conscious Machines (2003) and Consciousness and Robot Sentience (2012), has developed an associative architecture (without explicit symbolic representations) in which perceptual modules are connected by associative memory and which, according to Haikonen, would functionally produce consciousness. He has implemented prototypes (the XCR-1 robot) demonstrating associative learning.
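
A minimal way to see what "perceptual modules connected by associative memory, without explicit symbols" means is a Hebbian associator: co-activated patterns in two modules become linked so that one later evokes the other. This toy sketch is an illustrative simplification, not Haikonen's actual circuit design.

```python
import numpy as np

def hebbian_store(pairs, n_a: int, n_b: int) -> np.ndarray:
    """Build a weight matrix linking module A patterns to module B patterns."""
    W = np.zeros((n_b, n_a))
    for a, b in pairs:
        W += np.outer(b, a)  # Hebbian rule: strengthen co-active units
    return W

def recall(W: np.ndarray, a: np.ndarray) -> np.ndarray:
    # A pattern in module A evokes its associate in module B by
    # thresholded spread of activation -- no symbols are manipulated.
    return (W @ a > 0).astype(int)

# Two hypothetical percept pairs (e.g. a sound pattern and a visual pattern).
a1, b1 = np.array([1, 0, 1, 0]), np.array([0, 1, 1])
a2, b2 = np.array([0, 1, 0, 1]), np.array([1, 0, 0])
W = hebbian_store([(a1, b1), (a2, b2)], 4, 3)
print(recall(W, a1))  # [0 1 1]
print(recall(W, a2))  # [1 0 0]
```

The representational work is done entirely by the weight matrix: meaning resides in which patterns evoke which, which is the sense in which the architecture dispenses with explicit symbolic representations.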

Ricardo Sanz (UPM Madrid) directed European projects such as HUMANOBS and develops the concept of conscious machines within autonomous systems and intelligent control. He argues that certain functional capacities associated with human consciousness (self-monitoring, meta-reasoning, management of cognitive resources) are necessary ingredients for robust and adaptive autonomous systems, and that producing them is viable engineering.
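
The functional capacities Sanz highlights (self-monitoring, meta-reasoning, resource management) can be sketched as a meta-level control loop: before acting, the controller inspects its own remaining budget and picks a strategy accordingly. Function names and thresholds below are hypothetical illustrations, not taken from HUMANOBS.

```python
def deliberate_plan(state: int) -> int:
    return state + 10  # stub for an expensive, high-quality planner

def reactive_plan(state: int) -> int:
    return state + 1   # stub for a cheap fallback behaviour

def control_step(state: int, time_left_ms: float):
    # Meta-level self-monitoring: the system reasons about its own
    # cognitive resources (here, a deadline budget) before committing
    # to a planning strategy, degrading gracefully when pressed.
    if time_left_ms > 50:
        return deliberate_plan(state), "deliberate"
    return reactive_plan(state), "reactive"

print(control_step(0, 100))  # (10, 'deliberate')
print(control_step(0, 20))   # (1, 'reactive')
```

On this view, such self-monitoring is valuable as plain engineering for robust autonomy, independently of whether it amounts to consciousness.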

For the theory of consciousness, machine consciousness has an ambiguous status. Researchers in the field generally focus on functional or access consciousness: whether systems can have the functional correlates of consciousness (attention, self-modelling, metacognition, functional emotion, etc.). Whether this implies genuine phenomenal consciousness (there is something it is like to be those systems) is a controversial question. Dennett would say that if you have all the functional correlates, you have consciousness (no real difference between functional and phenomenal); Chalmers would say you could have all the functional correlates and be a philosophical zombie. The debate remains open. As a concrete attempt to design and build systems with at least some aspects of consciousness, machine consciousness is an active research field that will produce important developments in coming decades, with profound philosophical, scientific and ethical implications.

Strengths

  • Explicit constructive programme for consciousness in machines.
  • Clear axiomatic framework, testable by implementation.
  • Integration between AI, robotics and cognitive architecture.
  • Correct emphasis on embodiment and self-model.
  • Productive dialogue with neuroscience and philosophy.

Main critiques

  • The sufficiency of the axioms is a postulate, not a demonstration.
  • Implementations offer no way to verify phenomenal consciousness.
  • Its underlying functionalism remains vulnerable to the hard problem.
  • Limited scalability in current systems.

Connections with other theories