Rolf Landauer

Rolf Landauer extended the ideas of John von Neumann and Leo Szilard, who, along with many other physicists, had connected a physical measurement with thermodynamic irreversibility, that is, a dissipation of energy and an increase in entropy.

The increase in entropy (or the decrease in available negentropy, as Leon Brillouin put it) must equal or exceed the increase in information acquired in the measurement, in order to satisfy the second law of thermodynamics.

Landauer studied the special case of digital computers, which read and write information as part of their calculations but can have extremely small, or in principle even zero, energy dissipation, especially in computations that are logically reversible. Such calculations must preserve their input values along with their outputs, in order to allow the computer to step backward through the calculation and restore the original state.

Some of Landauer's thinking assumes completely deterministic classical mechanics, in which trajectories are a known function of the forces and initial conditions. This is of course an idealization not realizable in the physical world, but one that can be approximated by large classical objects such as billiard balls (cf. the digital physics of Ed Fredkin).

Since the introduction of quantum mechanics and the realization that we live in a universe with irreducible background noise (the cosmic microwave background radiation, with a temperature of about 3 K), noise-free and entropy-free deterministic systems have been the idealizations of mathematicians, philosophers, and computer scientists.

Indeed, a major difference between Bell Labs and IBM can perhaps be seen in the observation that Bell Labs learned to communicate signals in the presence of noise and discovered the ultimate cosmic source of entropic noise, whereas IBM has excelled at eliminating the effects of noise from our best computers. At Bell Labs, Claude Shannon developed information theory, with its fundamental connection to Ludwig Boltzmann's entropy, and there Arno Penzias and Robert Wilson discovered the cosmic background radiation. At IBM, Landauer and his colleague Charles Bennett are famous for logically and thermodynamically reversible computing, which ignores the effects of noise and entropy until bits of information must be erased (Landauer's Principle).

Note that Landauer's work is mostly logical and does not discuss the underlying physics of irreversibility.
Logically irreversible devices do not remember their inputs. They are thus one-way processes that lose information. Logically irreversible devices are necessary to computing, says Landauer, and logical irreversibility in turn implies physical irreversibility.
We shall call a device logically irreversible if the output of a device does not uniquely define the inputs. We believe that devices exhibiting logical irreversibility are essential to computing. Logical irreversibility, we believe, in turn implies physical irreversibility, and the latter is accompanied by dissipative effects.
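
This definition is easy to check mechanically. The sketch below (an illustration of ours, not code from Landauer's paper) counts the input pairs a two-input gate maps to each output value; AND is logically irreversible because an output of ZERO has three distinct preimages, so the inputs cannot be recovered from the output.

```python
# A minimal illustrative sketch (not from Landauer's paper): a gate is
# logically irreversible when its output does not uniquely define its inputs.
from itertools import product

def preimages(gate, output):
    """All input pairs (p, q) that the gate maps to the given output."""
    return [pq for pq in product([0, 1], repeat=2) if gate(*pq) == output]

AND = lambda p, q: p & q

print(preimages(AND, 0))  # [(0, 0), (0, 1), (1, 0)] -- three preimages: information is lost
print(preimages(AND, 1))  # [(1, 1)] -- only this output defines its inputs uniquely
```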
Landauer then goes on to describe classes of computers that can be considered logically reversible. They must save not only their inputs, but also the results of all intermediate logical steps, to provide the information needed to perform all the steps backwards and restore the original conditions. In particular, he says, no information can be erased. That the entropy must go up on erasure is known as Landauer's Principle. Landauer's colleague at IBM, Charles Bennett, carries on the investigation of logically reversible computing.

Landauer describes two examples of logically reversible machines.

[The first is] a particular class of computers, namely those using logical functions of only one or two variables. After a machine cycle each of our N binary elements is a function of the state of at most two of the binary elements before the machine cycle. Now assume that the computer is logically reversible. Then the machine cycle maps the 2^N possible initial states of the machine onto the same space of 2^N states, rather than just a subspace thereof. In the 2^N possible states each bit has a ONE and a ZERO appearing with equal frequency. Hence the reversible computer can utilize only those truth functions whose truth table exhibits equal numbers of ONES and ZEROS. The admissible truth functions then are the identity and negation, the EXCLUSIVE OR and its negation. These, however, are not a complete set and do not permit a synthesis of all other truth functions.
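
Landauer's count can be verified by brute force. The sketch below (our illustration, assuming Python) enumerates all sixteen truth functions of two Boolean variables and keeps only those with balanced truth tables; exactly the six functions he names survive.

```python
# A minimal sketch verifying the quoted claim: of the 16 truth functions of
# two variables, only 6 have truth tables with equal numbers of ONES and
# ZEROS -- identity, negation, EXCLUSIVE OR, and its negation.
from itertools import product

# Truth tables list outputs for inputs (p, q) in the order (0,0),(0,1),(1,0),(1,1).
names = {
    (0, 0, 1, 1): "p",        (1, 1, 0, 0): "NOT p",
    (0, 1, 0, 1): "q",        (1, 0, 1, 0): "NOT q",
    (0, 1, 1, 0): "p XOR q",  (1, 0, 0, 1): "NOT (p XOR q)",
}
balanced = [t for t in product([0, 1], repeat=4) if t.count(1) == 2]
assert len(balanced) == 6 and all(t in names for t in balanced)
for table in balanced:
    print(table, "->", names[table])
```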

[Landauer also describes] more general devices. Consider, for example, a particular three-input, three-output device, i.e., a small special purpose computer with three bit positions. Let p, q, and r be the variables before the machine cycle. The particular truth function under consideration is the one which replaces r by p • q if r = 0, and replaces r by NOT p • q if r = 1. The variables p and q are left unchanged during the machine cycle. We can consider r as giving us a choice of program, and p, q as the variables on which the selected program operates. This is a logically reversible device; its output always defines its input uniquely. Nevertheless it is capable of performing an operation such as AND which is not, in itself, reversible. The computer, however, saves enough of the input information so that it supplements the desired result to allow reversibility. It is interesting to note, however, that we did not "save" the program; we can only deduce what it was.
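
This device is small enough to simulate exhaustively. The sketch below (our illustration; the variable names follow the quote) implements the truth function, checks that it permutes all eight three-bit states, and shows that presetting the program bit r to ZERO embeds the irreversible AND inside a reversible machine cycle.

```python
# A minimal sketch of Landauer's three-input, three-output device:
# p and q pass through unchanged; r is replaced by (p AND q) when r = 0,
# and by NOT (p AND q) when r = 1 -- equivalently, r' = r XOR (p AND q).
from itertools import product

def device(p, q, r):
    return (p, q, r ^ (p & q))

states = list(product([0, 1], repeat=3))
assert sorted(device(*s) for s in states) == states  # a bijection: logically reversible

# With the "program bit" r preset to 0, the device computes AND while
# carrying p and q along, so the step can always be run backwards:
for p, q in product([0, 1], repeat=2):
    print((p, q), "->", device(p, q, 0))
```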

Now consider a more general purpose computer, which usually has to go through many machine cycles to carry out a program. At first sight it may seem that logical reversibility is simply obtained by saving the input in some corner of the machine. We shall, however, label a machine as being logically reversible, if and only if all its individual steps are logically reversible. This means that every single time a truth function of two variables is evaluated we must save some additional information about the quantities being operated on, whether we need it or not. Erasure, which is equivalent to RESTORE TO ONE, discussed in the Introduction, is not permitted. We will, therefore, in a long program clutter up our machine bit positions with unnecessary information about intermediate results. Furthermore if we wish to use the reversible function of three variables, which was just discussed, as an AND, then we must supply in the initial programming a separate ZERO for every AND operation which is subsequently required, since the "bias" which programs the device is not saved when the AND is performed. The machine must therefore have a great deal of extra capacity to store both the extra "bias" bits and the extra outputs. Can it be given adequate capacity to make all intermediate steps reversible? If our machine is capable, as machines are generally understood to be, of a non-terminating program, then it is clear that the capacity for preserving all the information about all the intermediate steps cannot be there.

Let us, however, not take such an easy way out. Perhaps it is just possible to devise a machine, useful in the normal sense, but not capable of embarking on a nonterminating program. Let us take such a machine as it normally comes, involving logically irreversible truth functions. An irreversible truth function can be made into a reversible one, as we have illustrated, by "embedding" it in a truth function of a large number of variables. The larger truth function, however, requires extra inputs to bias it, and extra outputs to hold the information which provides the reversibility. What we now contend is that this larger machine, while it is reversible, is not a useful computing machine in the normally accepted sense of the word.

First of all, in order to provide space for the extra inputs and outputs, the embedding requires knowledge of the number of times each of the operations of the original (irreversible) machine will be required. The usefulness of a computer stems, however, from the fact that it is more than just a table look-up device; it can do many programs which were not anticipated in full detail by the designer. Our enlarged machine must have a number of bit positions, for every embedded device, of the order of the number of program steps, and requires a number of switching events during program loading comparable to the number that occur during the program itself. The setting of bias during program loading, which would typically consist of restoring a long row of bits to, say, ZERO, is just the type of nonreversible logical operation we are trying to avoid. Our unwieldy machine has therefore avoided the irreversible operations during the running of the program, only at the expense of added comparable irreversibility during the loading of the program.

4. Logical irreversibility and entropy generation
The detailed connection between logical irreversibility and entropy changes remains to be made. Consider again, as an example, the operation RESTORE TO ONE. The generalization to more complicated logical operations will be trivial. Imagine first a situation in which the RESTORE operation has already been carried out on each member of an assembly of such bits. This is somewhat equivalent to an assembly of spins, all aligned with the positive z-axis. In thermal equilibrium the bits (or spins) have two equally favored positions. Our specially prepared collections show much more order, and therefore a lower temperature and entropy than is characteristic of the equilibrium state. In the adiabatic demagnetization method we use such a prepared spin state, and as the spins become disoriented they take up entropy from the surroundings and thereby cool off the lattice in which the spins are embedded. An assembly of ordered bits would act similarly. As the assembly thermalizes and forgets its initial state the environment would be cooled off. Note that the important point here is not that all bits in the assembly initially agree with each other, but only that there is a single, well-defined initial state for the collection of bits. The well-defined initial state corresponds, by the usual statistical mechanical definition of entropy, S = k log_e W, to zero entropy. The degrees of freedom associated with the information can, through thermal relaxation, go to any one of 2^N states (for N bits in the assembly) and therefore the entropy can increase by kN log_e 2 as the initial information becomes thermalized.

Note that our argument here does not necessarily depend upon connections, frequently made in other writings, between entropy and information. We simply think of each bit as being located in a physical system, with perhaps a great many degrees of freedom, in addition to the relevant one. However, for each possible physical state which will be interpreted as a ZERO, there is a very similar possible physical state in which the physical system represents a ONE. Hence a system which is in a ONE state has only half as many physical states available to it as a system which can be in a ONE or ZERO state. (We shall ignore in this Section and in the subsequent considerations the case in which the ONE and ZERO are represented by states with different entropy. This case requires arguments of considerably greater complexity but leads to similar physical conclusions.)

In carrying out the RESTORE TO ONE operation we are doing the opposite of the thermalization. We start with each bit in one of two states and end up with a well-defined state. Let us view this operation in some detail.

Consider a statistical ensemble of bits in thermal equilibrium. If these are all reset to ONE, the number of states covered in the ensemble has been cut in half. The entropy therefore has been reduced by k log_e 2 = 0.6931 k per bit. The entropy of a closed system, e.g., a computer with its own batteries, cannot decrease; hence this entropy must appear elsewhere as a heating effect, supplying 0.6931 kT per restored bit to the surroundings. This is, of course, a minimum heating effect, and our method of reasoning gives no guarantee that this minimum is in fact achievable.
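
Putting numbers to this minimum (our illustration; the 300 K operating temperature is an assumption for concreteness, not a figure from the paper):

```python
# A quick numeric check of the minimum heating effect quoted above:
# erasing (RESTORE TO ONE) a bit reduces the ensemble entropy by k log_e 2,
# so at least kT log_e 2 of heat must be delivered to the surroundings.
import math

k = 1.380649e-23               # Boltzmann constant, J/K
print(math.log(2))             # 0.6931..., the entropy drop per bit in units of k

T = 300.0                      # an assumed room temperature, K
print(k * T * math.log(2))     # ~2.87e-21 J of heat per erased bit
```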

Our reset operation, in the preceding discussion, was applied to a thermal equilibrium ensemble. In actuality we would like to know what happens in a particular computing circuit which will work on information which has not yet been thermalized, but at any one time consists of a well-defined ZERO or a well-defined ONE. Take first the case where, as time goes on, the reset operation is applied to a random chain of ONES and ZEROS. We can, in the usual fashion, take the statistical ensemble equivalent to a time average and therefore conclude that the dissipation per reset operation is the same for the time-wise succession as for the thermalized ensemble.

A computer, however, is seldom likely to operate on random data. One of the two bit possibilities may occur more often than the other, or even if the frequencies are equal, there may be a correlation between successive bits. In other words the digits which are reset may not carry the maximum possible information. Consider the extreme case, where the inputs are all ONE, and there is no need to carry out any operation. Clearly then no entropy changes occur and no heat dissipation is involved. Alternatively if the initial states are all ZERO they also carry no information, and no entropy change is involved in resetting them all to ONE. Note, however, that the reset operation which sufficed when the inputs were all ONE (doing nothing) will not suffice when the inputs are all ZERO. When the initial states are ZERO, and we wish to go to ONE, this is analogous to a phase transformation between two phases in equilibrium, and can, presumably, be done reversibly and without an entropy increase in the universe, but only by a procedure specifically designed for that task. We thus see that when the initial states do not have their fullest possible diversity, the necessary entropy increase in the RESET operation can be reduced, but only by taking advantage of our knowledge about the inputs, and tailoring the reset operation accordingly...
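
The quantitative version of this remark is standard in later treatments, though Landauer does not spell it out here: if ONES appear with probability p, each digit carries only the Shannon entropy H(p) ≤ 1 bit of information, and a reset tailored to that knowledge need dissipate only kT log_e 2 × H(p) per bit. A hedged sketch:

```python
# A hedged sketch of the reduced dissipation for biased inputs (a standard
# later generalization; the formula is not stated explicitly in Landauer's text).
import math

def shannon_entropy(prob_one):
    """Information per digit, in bits, of a biased binary source."""
    if prob_one in (0.0, 1.0):
        return 0.0  # all-ONES or all-ZEROS inputs carry no information: no cost
    p, q = prob_one, 1.0 - prob_one
    return -(p * math.log2(p) + q * math.log2(q))

k, T = 1.380649e-23, 300.0      # Boltzmann constant (J/K) and an assumed temperature (K)
for p in (0.5, 0.9, 1.0):
    bound = k * T * math.log(2) * shannon_entropy(p)
    print(f"P(ONE) = {p}: minimum dissipation ~ {bound:.2e} J per reset bit")
```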

The question arises whether the entropy is really reduced by the logically irreversible operation. If we really map the possible initial ZERO states and the possible initial ONE states into the same space, i.e., the space of ONE states, there can be no question involved. But, perhaps, after we have performed the operation there can be some small remaining difference between the systems which were originally in the ONE state already and those that had to be switched into it. There is no harm in such differences persisting for some time, but as we saw in the discussion of the dissipationless subharmonic oscillator, we cannot tolerate a cumulative process, in which differences between various possible ONE states become larger and larger according to their detailed past histories. Hence the physical "many into one" mapping, which is the source of the entropy change, need not happen in full detail during the machine cycle which performed the logical function. But it must eventually take place, and this is all that is relevant for the heat generation argument.

Summary

The information-bearing degrees of freedom of a computer interact with the thermal reservoir represented by the remaining degrees of freedom. This interaction plays two roles. First of all, it acts as a sink for the energy dissipation involved in the computation. This energy dissipation has an unavoidable minimum arising from the fact that the computer performs irreversible operations. Secondly, the interaction acts as a source of noise causing errors. In particular, thermal fluctuations give a supposedly switched element a small probability of remaining in its initial state, even after the switching force has been applied for a long time. It is shown, in terms of two simple models, that this source of error is dominated by one of two other error sources:

1) Incomplete switching due to inadequate time allowed for switching.

2) Decay of stored information due to thermal fluctuations.

It is, of course, apparent that both the thermal noise and the requirements for energy dissipation are on a scale which is entirely negligible in present-day computer components. The dissipation as calculated, however, is an absolute minimum. Actual devices which are far from minimal in size and operate at high speeds will be likely to require a much larger energy dissipation to serve the purpose of erasing the unnecessary details of the computer's past history.
