On Creative Machines and the Physical Origins of Freedom

(Source: for use at the Milan conference, June 4-6, 2013; "On creative machines and the physical origins of freedom", H. J. Briegel, Nature Scientific Reports 2, 522 (2012).)

We discuss the possibility of free behavior in embodied systems that are, with no exception and at all scales of their body, subject to physical law. We relate the discussion to a model of an artificial agent that exhibits a primitive notion of creativity and freedom in dealing with its environment, which is part of a recently introduced scheme of information processing called projective simulation. This provides an explicit proposal on how we can reconcile our understanding of universal physical law with the idea that higher biological entities can acquire a notion of freedom that allows them to increasingly detach themselves from a strict dependence on the surrounding world.


Introduction

Are we free in our decisions and actions? Or is freedom an illusion, and is what we think and how we act entirely determined by the laws of Nature? Recent developments in brain research have revived and stirred up a centuries-old discussion, claiming that free will is essentially an illusion1, 2, 3, 4, 5. The discussion is not only academic in nature; it has, for example, been suggested that the experimental findings of the neurosciences, together with their theoretical interpretations, should be reflected in future jurisdiction6. These developments have led to a controversial debate between brain researchers, philosophers, lawmakers, behavioral scientists, and others (see e.g.7, 8).

Considering what seems to be at stake, these reactions are not surprising. At the same time, they also emphasize the deep impact of the concepts and findings of modern science, in particular physics, neurobiology, and computer science, on the idea of human existence and responsibility.

The problem of free will has a long history in philosophy and science. We shall not try to give a full account of the various philosophical arguments that have been brought up against or in favor of free will. It seems safe to say, however, that to date this has remained a deeply puzzling problem that many consider unsolved:

So it really does look as if everything we know about physics forces us to some form of denial of human freedom.

— John Searle, Minds, Brains and Science, p. 87 (1984).

This quote, from a famous lecture by John Searle9, dates back more than 25 years, and it addresses a question of principle. It seems to us that this problem should be solved before any interpretation of experimental findings in the neurosciences (concerning the existence or non-existence of free will) can be reached.

Indeed, how can we accept the very possibility of free will if we assume, at the same time, that we are, with no exception and at all scales of our body, subject to physical law? Do we have to assume that the laws of physics are incomplete and that there are new kinds of laws waiting to be discovered – maybe on the level of more complex biological entities – that will ultimately free us from the strict rule of physics?

We will argue in this note that we do not need new laws to resolve this puzzle. We may still discover new laws in the future, which will hopefully help us to better understand the workings of the human mind and all that comes with it. But we claim that we shall not need such laws to resolve the conundrum of freedom. We can show, on the basis of physical laws as we understand them today, that entities with a certain degree of physical or biological organization, capable of evolving a specific type of memory, can indeed develop an original notion of creativity and freedom in their dealings with the environment. Our argument will be based on the concept of projective simulation, which is a physical model of information processing for artificial agents that was recently introduced in Ref. 11.

Many philosophers and scientists have addressed the problem of free will in the past, and have argued for the possibility of free will. This includes, in particular, a number of theories and ideas that have been referred to as “two-stage models” for free will12. At the same time, the idea of freedom seems to be under strong attack from the empirical sciences. Are the experimental findings of modern brain research indeed so compelling that they could falsify all theories supporting free will?

In this paper, we would like to add a new perspective to this discussion. Rather than discussing the existence of free will in the context of current brain research, which we prefer to leave to the experts, we shall present a model of an artificial agent that exhibits a notion of freedom in dealing with its environment, which is part of a physically well-defined scheme of information processing and learning11. This model could in principle be realized, with present-day technology, in artificial agents such as robots. This demonstrates, first, that a notion of freedom can indeed exist for entities that operate, without exception and at all scales, under the laws of physics. It also shows that free behavior can be understood as an emergent property of biological systems of sufficient complexity that have evolved a specific form of memory.

Formally, our proposal might be listed under the heading of the two-stage models, but it differs from previous work in several essential respects.

  • We take an explicit perspective from physics and information processing. We introduce projective simulation as a fundamental information theoretic concept that gives room for a notion of freedom compatible with the laws of physics.
  • Together with the model of episodic and compositional memory, projective simulation may allow us to analyze and propose behavior experiments with simple animals.
  • Our scheme could be realized, in principle, with present-day technology in the form of artificial (learning) agents or robots.

We want to emphasize that our model is not meant to be an “explanation of consciousness”13, 14, nor a theory of “how the brain works”. We leave this to the experts and to the brain researchers, and we are looking forward to the many new experimental findings and insights that we may expect to learn about in the years to come. Similarly, we are not claiming that we can explain the nature of human freedom and conscious choice.

What we can provide, however, is an explicit proposal on how we can reconcile our understanding of universal physical law with the idea that higher biological entities can exhibit a notion of freedom. It allows them to detach themselves from a strict dependence on the surrounding world and, at the same time, to truly create behavior on their own that is both spontaneous and meaningful in response to their environment.

Results

Machine intelligence and creativity

If we accept that free will is compatible with physical law, we also have to accept that it must be possible, in principle, to build a machine that would exhibit similar forms of freedom as the one we usually ascribe to humans and certain animals. It is likely to turn out that the task of building such a machine will be far too complex to be realizable in any practical terms, or that it will be at least as complex as the task (and pleasure) of raising and educating a human child within society. This observation, if true, may be disappointing to some people, but for many of us it has a positive and reconciling aspect. On the other hand, it may still be feasible to build more primitive forms of machines (or agents) that exhibit some rudimentary forms of freedom and creativity in their behavior.

Computers are special sorts of machines which play an increasingly important role in our modern society. They have not only transformed our practical daily life, but they are also beginning to change the perception of ourselves from “human subjects” to “information processing systems”. This will ultimately challenge our ideas of human existence and freedom, and all that comes with them (e.g. social responsibility, the ethics of action, and so on). So-called intelligent agents and robots are variants of computers. They are often viewed (not quite correctly, though15) as computers equipped with some periphery, including sensors, with which they can perceive signals from the environment, and actuators, with which they can act on the environment. Intelligent agents are designed to operate autonomously in complex and changing environments, examples of which are traffic, remote space, or the internet. The design of intelligent agents, specifically for tasks such as learning, has become a unifying agenda of various branches of artificial intelligence16.

Even if we are willing to accept that artificial agents, and computers in general, may exhibit some form of intelligence (which is usually defined as the capability of the agent to perceive and act on its environment in a way that maximizes its chances of success16), we would hardly ascribe free will to them. Conversely, we would not like to be identified with such an agent ourselves. What is the reason for this disapproval? The main reason seems to be that the agent has a program which determines, for a given input (or sequence of inputs), its next step of action. Its action is the result of an algorithm: it is predictable and can, for example, be computed by some other machine.

The situation does not change fundamentally if the algorithm or program itself is not deterministic, as it is sometimes considered in computer science, invoking the notion of probabilistic Turing machines17. Even if randomized programs can sometimes increase the efficiency of certain computations, it is not clear what one should gain from such randomization in the present context. If, before, the agent was the slave of a deterministic program, it is then the slave of a random program. But random action is not the same as freedom.

The disturbing point with both of the described variants is the idea and existence of the program itself. Whereas physics looks for the laws of Nature, e.g. for the laws describing the way things move and change in space and time and how they respond to our experimental inquiry, a more computer-science-oriented approach looks for the program behind things, including living beings. Both notions appear to be in fundamental conflict with our basic idea of freedom.

In this paper we will show, however, that the idea of being subject to physical law does not contradict the possibility of freedom. We will base our argument on the explicit description of an information processing scheme, which we call projective simulation11, which could be part of the design of an artificial agent, a robot, or conceivably some biological entity. It combines the concepts of simulation, episodic memory, and randomness into a common framework.

Memory

A crucial element for the possibility of freedom of any agent (biological or artificial) is the existence of memory. By memory we mean any kind of organ, or physical device, that allows the agent to store and recall information about past experience. Generally speaking, memory allows the agent to relate its actions to its past. Memory per se is, however, not sufficient for the existence of freedom. Elementary forms of memory exist already in simple animals (reflex-type agents), such as the roundworm Caenorhabditis elegans, the well-studied sea slug Aplysia18, or the fruit fly Drosophila19, and learning consists in the modification and shaping of the molecular details of their neural circuits due to experience. Nevertheless, we hesitate to ascribe a notion of freedom to invertebrates such as C. elegans or Aplysia, whose actions remain simple reflexes to environmental stimuli.

The brain of humans and higher primates is of course much more complex and much less understood. As we consider the brains of different species, moving from invertebrates to vertebrates, including mammals, primates, and humans, the structure of their brains becomes increasingly sophisticated and complex. But it is always described by a network of neurons and synapses, and the basic principles of signal transmission and processing seem to be the same. The question then arises: How can the increasing complexity of a neural network lead to the emergence of a radically new feature and endow humans or higher primates, and arguably also simpler animals, with “freedom” in their behavior?

The answer, it seems, must be sought in the increasingly complex organization of memory. One difference between the simple memory of Aplysia and the complex memory of higher vertebrates is the appearance of different functions of memory. Unlike in simple animals, a call of memory in humans and primates does not automatically lead to motor action. This means that there exists a platform on which memory content can be reinvoked, which is decoupled from immediate motor action. The evolutionary emergence of such a platform means that an agent with more complex memory can become increasingly detached from immediate response to environmental stimuli.

However, the actions of the agent still remain determined by the memory content, which itself was formed by the agent's percept history. In other words, the actions of the agent remain determined by its past, and there is no real notion of freedom. What is still missing is an element of spontaneity in the agent's response to a given environmental situation. If C. elegans is enslaved by its present stimuli, a more complex agent is still enslaved by its past, i.e., the history of its stimuli. How could Nature get rid of such time-delayed enslavement?

One possibility for breaking determinism is to introduce indeterminism (i.e. genuine randomness). But, as we have discussed earlier, it is not clear what the effect of randomization should be. If we adopt a computational or algorithmic view of the brain, we will not change anything. However, the effect of indeterminism depends on the nature of the processing and memory where it occurs. We will show that it can indeed have a positive effect on the agent, not in the sense of making some “computations” more efficient, but in the sense of introducing an element of creative variation into its memory-driven interactions with the environment. Here it will be expedient to abandon the picture of the brain as a computer and, instead, propose a dynamic model of memory which is fully embedded in the agent's architecture and which grows as the agent interacts with the world.

In the next section, we will discuss an abstract scheme of memory processing which we call projective simulation. It operates entirely under the principles of physics but nevertheless exhibits an element of freedom in an agent's interaction with the environment. It is not clear whether this scheme is at all implemented in a real brain, but we claim that it could be realized, in principle, in artificial agents.

Projective simulation

In Ref.11, we considered a standard model of an artificial agent that is equipped with sensors and actuators, through which it can perceive its environment and act upon it, respectively. Internally, the agent has access to some kind of memory, which we shall describe below. Perceptual input can either lead to direct motor action (reflex-type scenario) or it first undergoes some processing (projective simulation) in the course of which it is related to memory.

The memory itself is of a specific type, which we call episodic & compositional memory (ECM). Its primary function is to store past experience of the agent in the form of episodes, which are (evaluated) sequences of remembered percepts and actions. Physically, ECM can be described as a stochastic network of clips, where clips are the basic units of episodic memory, corresponding to very short episodes (or patches of “space-time memory”)11.

The process of projective simulation can be described as follows. Triggered by perceptual input, some specific clip in memory, which relates to the input, is excited (or “activated”), as indicated in Figure 1. This active clip will then, with a certain probability, excite some neighboring clip, leading to a transition within the clip network. As the process continues, it will generate a random sequence of excited clips, which can be regarded as a recall and random reassembly of episodic fragments from the agent's past. This process stops once an excited clip couples out of memory and triggers motor action. The last step could be realized by a mechanism where the excited clips are screened for the presence of certain features. When a specific feature is detected in a clip (or it is above a certain “intensity” level) it will, with a certain probability, lead to motor action.

Figure 1: Model of episodic memory as a network of clips.

Triggered by perceptual input, the process of projective simulation starts a random walk through episodic memory, invoking patchwork-like sequences of virtual experience. Once a certain feature is detected, the random walk stops and is translated into motor action (see also Ref. 11).
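
To make the structure of this process concrete, the following sketch gives a minimal, purely illustrative Python implementation of a clip network and the percept-triggered random walk described above. The clip contents, transition probabilities, and the feature-based stopping rule are invented for this example and are not the definitions of Ref. 11.

import random

# Minimal illustrative sketch (not the model of Ref. 11): episodic memory
# as a stochastic network of clips, traversed by a random walk that
# couples out into motor action when an action-carrying clip is screened.

class Clip:
    def __init__(self, content, action=None):
        self.content = content      # a short patch of remembered experience
        self.action = action        # motor action this clip can trigger, if any
        self.transitions = {}       # neighbouring clip -> hopping probability

def step(clip):
    # Hop to a neighbouring clip according to the transition probabilities.
    neighbours = list(clip.transitions.keys())
    weights = list(clip.transitions.values())
    return random.choices(neighbours, weights=weights, k=1)[0]

def projective_walk(start_clip, couple_out_prob=0.3, max_steps=50):
    # Random walk through memory; stops stochastically once a clip carrying
    # an action feature is detected, and then returns that action.
    clip = start_clip
    for _ in range(max_steps):
        if clip.action is not None and random.random() < couple_out_prob:
            return clip.action
        clip = step(clip)
    return None

# Tiny example network: a percept clip that can wander to two action clips.
c_percept = Clip("saw light")
c_left = Clip("turned left", action="move_left")
c_right = Clip("turned right", action="move_right")
c_percept.transitions = {c_left: 0.6, c_right: 0.4}
c_left.transitions = {c_percept: 1.0}
c_right.transitions = {c_percept: 1.0}

print(projective_walk(c_percept))

In this toy network, repeated calls of projective_walk return "move_left" more often than "move_right", simply because the stored transition probabilities are biased that way.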

The described process is the basic version of episodic memory, but it is not the only one. In a more refined version, which we call episodic and compositional memory, we consider not only transitions between existing clips; clips may themselves be randomly created (and varied) as part of the simulation process itself. Random clip sequences that are generated this way may introduce entirely fictitious episodes that never happened in the agent's past.

The random walk in memory space, as described, constitutes part of what we call projective simulation. In another part, the agent's actions that come out of the simulation are evaluated. The result of this evaluation then feeds back into the details of the network structure of episodic memory, leading to an update of transition probabilities and of “emotion tags” associated with certain clip transitions11. In a simple reinforcement setting, one assumes, for example, that certain actions or percept-action pairs are rewarded. Learning then takes place by modifying the network of clips (ECM) according to the given rewards. This modification of memory occurs in different ways: by Bayesian updating of transition probabilities between existing clips; by adding new clips to the network via new perceptual input; by creating new clips from existing ones under certain compositional and variational principles; and by updating emotion tags associated with certain clip transitions. Details of this scheme are presented in Ref. 11.
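
As a rough illustration of how such memory updates might look in code, the sketch below rewards one clip transition and composes a new fictitious clip from fragments of two existing ones. The update rule, damping factor, and composition operator are stand-ins chosen for this example only; the actual ECM rules are those specified in Ref. 11.

# Illustrative sketch only; the concrete ECM update and composition rules
# are defined in Ref. 11, not by the ad-hoc choices below.

# Transition weights between clips, stored as nested dictionaries.
weights = {"saw light": {"turned left": 1.0, "turned right": 1.0}}

def update_on_reward(weights, source, target, reward, damping=0.99):
    # Strengthen a rewarded transition and mildly damp the others, so that
    # transition probabilities drift towards successful behaviour.
    for tgt in weights[source]:
        weights[source][tgt] *= damping
    weights[source][target] += reward

def transition_probs(weights, source):
    # Normalise the stored weights into transition probabilities.
    total = sum(weights[source].values())
    return {tgt: w / total for tgt, w in weights[source].items()}

def compose_clips(clip_a, clip_b):
    # Create a fictitious clip by recombining fragments of two real ones.
    return clip_a.split()[0] + " " + clip_b.split()[-1]

# The agent tries "turned left" after "saw light" and is rewarded.
update_on_reward(weights, "saw light", "turned left", reward=0.5)
print(transition_probs(weights, "saw light"))

# A fictitious episode, never experienced, enters the network with a small weight.
weights["saw light"][compose_clips("saw light", "turned right")] = 0.2
print(transition_probs(weights, "saw light"))

The essential point of the sketch is only that rewarded transitions become more probable, while composed clips enter the same network and compete for excitation on equal terms with remembered ones.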

In the following, coming back to the main topic of this paper, we want to relate the projective structure of the agent's behavior to the emergence of a primitive notion of creativity and freedom. The basic idea is that the episodic memory provides a platform for the agent to “play with” previous experience, before concrete action is taken (see Figure 2). A call of episodic memory initiates a random walk through memory space, invoking patchwork-like sequences of past experience. This can be understood as a simulation of plausible future experience on the basis of past experience. It is a simulation because it takes place only in the agent's memory; it simulates plausible future experience because sequences of episodes that occurred frequently in the past will also occur frequently in the simulation. Furthermore, the possibility of clip composition allows the agent to explore, as part of the simulation, new fictitious episodic sequences that it has never encountered before, but which are within a range of “conceivability” (as defined by the rules of clip composition). It is important to realize that, just like clips representing “real” experience, clips representing “fictitious” experience will trigger factual action through the same mechanism. This means that fictitious experience, created within the memory of the agent, may de facto change and guide the real actions of the agent. One could also say that the agent acts under the influence of “ideas” that are generated by the agent itself.

Figure 2: Sequences of percepts and actions are simulated stochastically by variations and compositions of episodic memory (ECM), before real action is taken.

Through the process of projective simulation, the agent is, in a sense, constantly ahead of itself.

In summary, through the process of projective simulation the agent projects itself into conceivable future situations and takes its actions under the influence of these projections, as illustrated in Figure 2. In this sense, the agent is no longer enslaved by its past but plays with it, liberated by variations and spontaneous compositions of episodic fragments. These fragments may come from the past, but they are transformed, by random processes, into new patterns for future action. The agent is, in this sense, always ahead of itself (see Discussion).

Connection with psychology

The notion of episodic memory was first introduced in psychology in the 1970s by Tulving20 and Ingvar21, and it has gained increasing attention in the cognitive neurosciences and in other fields. Recent developments have been reviewed e.g. by Schacter et al.22 and by Hasselmo23 who also discusses specific brain mechanisms for episodic memory.

The “network of clips” which we have described in the previous section can be regarded as a rudimentary form of episodic memory within a physical toy model. It should be emphasized that our model of episodic memory is much more primitive and does not, for example, assume any encoding of time or the ability to date experiences. On the other hand, we go beyond the notion of memory as a mere “storage device” and introduce dynamic rules for how episodes are processed and become part of an information processing scheme which we call projective simulation11.

It should also be noted that the main focus of our paper is not on learning. Instead, we have used the model of projective simulation as the conceptual framework to discuss the possibility of creativity and freedom of artificial agents. The advantage of this approach is that the model is sufficiently abstract in its constitutive concepts, while at the same time based on clear physical principles.

Discussion

In this section, we shall put the model of projective simulation into a broader context and discuss its relation to the problem of free will.

The problem of free will is often discussed, e.g. by Searle10, in the context of conscious human experience, for example when we experience the freedom of choice between different options, say, of choosing between different meals in a restaurant. The problem then consists in an apparent inconsistency between such conscious experience of freedom, on the one hand, and the assumption that all of our conscious experiences are ultimately determined by neurobiological processes in the brain, and as such subject to the laws of physics and biochemistry, on the other.

Other scientists, including the neurobiologist Martin Heisenberg, see the problem of freedom arise already at the level of creatures that may not be conscious, but to which we would nevertheless ascribe a measure of initiative and self-determination in their behavior7.

Whatever definition one chooses, both notions of freedom, be it in the sense of conscious free choice or in the sense of self-generated action, have to be reconciled with the basic assumption that biological agents - conscious or unconscious - are, without exception and at all scales of their bodies, subject to physical law. The fundamental problem is, in both cases, how freedom can emerge from lawful processes. Both the freedom of self-generated action and the freedom of conscious choice require, at a certain level (e.g. in the brain), some notion of room to manoeuvre10, which is consistent with physical law. Where does this room come from? And how can it be realized within an explicit physical model?

In this paper, we have discussed a model of an artificial agent, where such room for manoeuvre is provided by a specific notion of memory (ECM) and the way this memory is used via projective simulation of future actions. Room, and ultimately freedom, arises in two ways: first, by the existence of a simulation platform, which enables the agent to detach itself from an immediate (stimulus-reflex type) embedding into its environment and, second, by the constitutive processes of the simulation, which generate a space of possibilities for responding to environmental stimuli. The mechanisms that allow the agent to explore this space of possibilities are based on (irreducible) random processes. The concept of projective simulation thus combines the basic notions of memory, randomness, and simulation in a unique way. In the remainder of this paper, we would like to come back to these ingredients of our scheme and comment on their specific role regarding the origin of freedom.

Memory. The existence of memory is required in a trivial sense, as a physical notion of experience. But memory also provides the first step toward liberating a system from its environment, i.e. from an immediate stimulus-reflex-type embedding into the world. By connecting perceptual input with memory content, the agent is able to relate it to past experience, on the basis of which it finds its next step. This endows the system with a more comprehensive way of responding to environmental input, but its responses are still fully determined by past experience. In the specific context of episodic memory this means that, as long as episodes are simply recalled without further modification, the agent remains caught by its past and will simply repeat old patterns of behavior. What is still missing is the notion of the new.

The seed of the new is provided by introducing elements of variation and composition into the simulation process. The first kind of variation is provided by a random reshuffling of past episodes, realized by a random walk in clip space. While this will already lead to new patterns of behavior, the space of possibilities is still defined by past experience alone. The second kind of variation is based on clip composition, which enables the creation of new fictitious episodes. It is important to realize that these variations are truly created by the agent itself. The connection of the agent with its own past is thereby loosened and the agent becomes further “emancipated” from the environment. However, the agent's connection with the past is not simply blurred or erased, as would be the case for an arbitrary randomization of memory. That would be a silly form of emancipation, depriving the system of what it may have learnt before. Instead, the agent still makes use of past experience, but it is no longer caught or enslaved by it. It rather “plays” with its experience in a constructive sense, creating fictitious sequences of action to guide its future actions. This type of simulation process is conservative in the sense that only variations around real (and proven) experience are considered. It is the range of those variations that defines the conceivable. The probability of variations is determined by certain rules of clip composition, i.e. how memory content can mutate or, more generally, transform during the simulation process. It is a stochastic process that originates and operates entirely within the memory system. In this sense, the liberation of the agent is truly self-generated and, as such, represents a step of emancipation from its surrounding world.

Randomness. The notions of indeterminism and randomness play an important role in our discussion. Random processes have been assumed as part of projective simulation, both in basic memory recall (a random walk through a space of episodes) and in the mutation or compositional processes of individual clips. (Note that, when we speak of a random process with different outcomes, we mean that the different outcomes are not determined, but they occur only with certain probabilities, such as 0.1/0.9 or 0.5/0.5. We do not imply that these probabilities are all equal. Some people would instead speak of a stochastic process.) The reader may wonder how we can postulate random processes as part of our physical model. However, this is in fact a very natural assumption, which is in agreement with the fundamental laws of nature. Truly random processes are implemented and used routinely in modern quantum physics laboratories, e.g. for quantum information processing purposes. But also in biological systems, random processes are omnipresent, a fact which has recently been reemphasized in Ref. 7. Although, for practical considerations, the origin of noise is usually not important, it is here a matter of principle. We may not need quantum mechanics to understand the principles of projective simulation, but we have it. And it is our safeguard that ensures true indeterminism on a molecular level, which is amplified to random noise on a higher level. Quantum randomness is truly irreducible and here it provides the seed for genuine spontaneity.
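
The point that the required randomness is an interchangeable physical resource can be illustrated with a small sketch: both functions below realize the same biased choice (e.g. 0.1/0.9), one drawing from a pseudo-random generator and one from the operating system's entropy pool, which here merely stands in for a genuinely indeterministic (e.g. quantum) source. The option names and probabilities are invented for the example.

import random
import secrets

# Sketch of the argument above: the simulation only needs a source of
# (possibly biased) random outcomes; whether the draws come from a
# pseudo-random algorithm or a physical device is a question of principle,
# not of interface. The "physical" source below is only a stand-in using
# OS entropy, not a real quantum random number generator.

def pseudo_random_choice(options, probs):
    return random.choices(options, weights=probs, k=1)[0]

def physical_random_choice(options, probs):
    # Draw a number in [0, 1) from OS entropy and invert the cumulative
    # distribution by hand.
    r = secrets.randbelow(10**9) / 10**9
    cumulative = 0.0
    for option, p in zip(options, probs):
        cumulative += p
        if r < cumulative:
            return option
    return options[-1]

# Both implement the same biased 0.1/0.9 choice between two clip transitions.
print(pseudo_random_choice(["compose new clip", "follow old clip"], [0.1, 0.9]))
print(physical_random_choice(["compose new clip", "follow old clip"], [0.1, 0.9]))

The interface is identical in both cases; what differs is only whether the underlying draw is, in principle, reproducible.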

One should, however, also realize that the question of principle, namely whether free will is possible on the basis of natural law, does not depend on the specifics of neurobiology. Even if people doubt the relevance of quantum indeterminacy in biological agents, they must face the possibility that sooner or later mankind may build artificial intelligent agents that will use quantum elements as part of their design. To put it provocatively: even if human freedom were an illusion, humans would still be able, in principle, to build free robots. Amusing.

Finally, one might ask why randomness in our model of projective simulation is different from randomness in any other computational model, e.g. a Boolean circuit. Why is it any better to be the slave of a random “mutation of clips” than of some “randomized algorithm”? This is an important question which goes back to the heart of the problem. Part of the answer is that, in the model of projective simulation, randomness has a clear functional role: it introduces variations around established patterns of behavior. It is only against the background of previous experience that variations make proper sense and allow the agent to explore new possibilities via simulation, i.e. before actually trying them out. This is not a notion of slavery but of self-generated options. Furthermore, it is crucial to understand that indeterminism, both in the form of the random walk in memory space and in the form of clip composition and variation, is an inherent feature of the agent's memory. There is no deterministic version of projective simulation which could then be “randomized”. In this sense, one cannot separate “the agent” from “the randomness” (say, in the form of an implanted random number generator) by which “it” could be enslaved. Instead, randomness plays a constitutive role in the very definition of the agent; it is, so to speak, part of its identity.

Simulation. The physical process of simulation, combining randomness and episodic memory to generate “virtual experience”, results in a projective structure of the agent's behavior in its interaction with the world, as illustrated in Figure 2. The agent takes actions under the influence of its own projections and is, in this sense, constantly ahead of itself. It is worth pointing out that this resembles a fundamental structure in philosophical phenomenology24, which plays a central role in the notion of human understanding and being-in-the-world. Clearly, in the present discussion we are not talking about conscious agents nor about any deeper aspect of human existence. What is remarkable, however, is that one of the key concepts of phenomenology can be connected to basic notions of modern physics and information processing. It thus seems to us that a careful analysis of human (and animal) behavior, both from the perspective of phenomenology and of developmental psychology25, may offer many new ideas towards a better understanding of artificial intelligence and the ultimate possibilities of “information processing” in biological agents.

References

  1. Soon, Chun Siong, Brass, Marcel, Heinze, Hans-Jochen & Haynes, John-Dylan. Unconscious determinants of free decisions in the human brain. Nature Neurosci. 11, 543 (2008).
  2. Haggard, Patrick. Human volition: towards a neuroscience of will. Nature Reviews Neuroscience 9, 934 (2008).
  3. Haynes, John-Dylan & Rees, Geraint. Decoding mental states from brain activity in humans. Nature Reviews Neuroscience 7, 523 (2006).
  4. Libet, Benjamin. Unconscious cerebral initiative and the role of conscious will in voluntary action. Behav. Brain Sci. 8, 529 (1985).
  5. Wegner, Daniel M. The illusion of conscious will, (MIT Press, Cambridge, MA, 2002).
  6. Singer, Wolf. Verschaltungen legen uns fest: Wir sollten aufhören von Freiheit zu sprechen. In: Hirnforschung und Willensfreiheit. Zur Deutung der neuesten Experimente (in German), ed. Ch. Geyer, (Suhrkamp, Frankfurt/Main, 2004), pp. 30-65.
  7. Heisenberg, Martin. Is free will an illusion? Nature 459, 164 (2009).
  8. Geyer, Christian. (Ed.) Hirnforschung und Willensfreiheit. Zur Deutung der neuesten Experimente (in German), (Suhrkamp, Frankfurt/Main, 2004).
  9. Searle, John. Minds, Brains and Science, (Harvard University Press, Cambridge, MA, 1984).
  10. Searle, John. Freedom and Neurobiology: Reflections on Free Will, Language, and Political Power, (Columbia University Press, New York, 2007).
  11. Briegel, Hans J. & De las Cuevas, Gemma. Projective simulation for artificial intelligence. Scientific Reports 2, 400 (2012).
  12. See http://www.informationphilosopher.com/freedom/two-stage_models.html (May 9, 2011).
  13. Dennett, Daniel C. Consciousness explained, First paperback edition (Back Bay Books, Boston, 1991).
  14. Chalmers, David J. The conscious mind: in search of a fundamental theory, First paperback edition (Oxford University Press, 1996).
  15. Pfeifer, Rolf & Scheier, Christian. Understanding intelligence, First edition (MIT Press, Cambridge, Massachusetts, 1999).
  16. Russell, Stuart J. & Norvig, Peter. Artificial intelligence - A modern approach, Second edition (Prentice Hall, New Jersey, 2003).
  17. Papadimitriou, Christos H. Computational Complexity (Addison-Wesley, Reading, 1994).
  18. Kandel, Eric. The molecular biology of memory storage: A dialog between genes and synapses. In “Nobel Lectures, Physiology or Medicine 1996-2000,” ed. Hans Jörnvall (World Scientific Publishing Co., Singapore, 2003).
  19. Sareen, Preeti S., Wolf, Reinhard & Heisenberg, Martin. Attracting the attention of a fly. PNAS 108, 7230-7235 (2011).
  20. Tulving, Endel. Episodic and semantic memory. In: Organization of Memory, ed. Tulving, E. & Donaldson, W. (Academic Press, New York, 1972), pp. 381-403.
  21. Ingvar, D. H. “Memory of the future”: An essay on the temporal organization of conscious awareness. Human Neurobiology 4, 127-136 (1985).
  22. Schacter, Daniel L., Addis, Donna Rose & Buckner, Randy L. Episodic Simulation of Future Events: Concepts, Data, and Applications. Ann. N.Y. Acad. Sci. 1124, 39-60 (2008).
  23. Hasselmo, Michael E. How we remember: Brain mechanisms of episodic memory, First edition (MIT Press, Cambridge, Massachusetts, 2011).
  24. Heidegger, Martin. Sein und Zeit, Sixteenth edition (Max Niemeyer Verlag, Tübingen, 1986). English translation: Being and Time, translated by John Macquarrie and Edward Robinson, First edition (Blackwell Publishing, 1962).
  25. Tomasello, Michael. The Cultural Origins of Human Cognition (Harvard University Press, Cambridge, 1999).