Claude Shannon

Claude Shannon is often described as "the father of information theory," although he described his own work as "communication theory." It was Shannon who put the communication of signals in the presence of noise on a sound mathematical basis. His co-workers at Bell Laboratories called what was being transmitted "intelligence" (Harry Nyquist) and "information" (Ralph Hartley). Shannon used "intelligence" in his earliest reports and proposals, and "information" later.

Hartley had a deep insight into the nature of information: the natural measure of the information in a choice from a set of possible messages is the logarithm of the number of possibilities, an idea Shannon would later generalize.

In 1871, James Clerk Maxwell showed how an intelligent being could in principle sort out the disorder in a gas of randomly moving molecules, by gathering knowledge-intelligence-information about their speeds and sorting them into hot and cold gases, in apparent violation of the second law of thermodynamics. William Thomson (Lord Kelvin) called this being "Maxwell's intelligent demon."

Ludwig Boltzmann, who established the statistical physics foundation of thermodynamics, chose the logarithm of the number W of equiprobable microstates as the measure for his entropy, because he wanted entropy to be an extensive additive quantity.

S = k log W

where k is Boltzmann's constant. If one system can be in one thousand possible states and another system can also be in one thousand possible states, the combined system has a million possible states. In base-10 logarithms, log10 1000 = 3, and 3 + 3 = 6 = log10 1,000,000.
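That additivity can be checked directly, since independent state counts multiply while their logarithms add. Here is a minimal Python sketch of the thousand-state example above:

```python
import math

# A minimal sketch of why the logarithm makes entropy additive:
# independent state counts multiply, while their logarithms add.
W1, W2 = 1000, 1000            # microstates of two independent systems
W_combined = W1 * W2           # 1,000,000 microstates for the combined system

print(math.log10(W1), math.log10(W2))   # 3.0 and 3.0 (up to rounding)
print(math.log10(W_combined))           # 6.0 = 3.0 + 3.0
```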

In 1929, Leo Szilard imagined a gas with but a single molecule in a container. He then devised a mechanism that could behave like Maxwell's demon. It would insert a partition into the middle of the container, then gather the information about which of the two sides of the partition the molecule was in. This was a binary decision and it allowed Szilard to develop the mathematical form for the amount of entropy S produced by a one-bit measurement, which Szilard identified as the acquisition of information and storage in the "memory" of a physical device or of a human observer.

S = k log 2

The base-2 logarithm reflects the binary decision.
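Plugging in the modern SI value of Boltzmann's constant (an assumption of this sketch, not something in Szilard's paper) gives the size of this one-bit entropy in conventional units:

```python
import math

# A sketch of the size of Szilard's one-bit entropy, S = k ln 2, assuming
# the modern SI value of Boltzmann's constant (not part of Szilard's paper).
k = 1.380649e-23               # Boltzmann's constant in J/K (exact SI value)
S_one_bit = k * math.log(2)    # natural logarithm, as in S = k log W

print(S_one_bit)               # about 9.57e-24 J/K per one-bit measurement
```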

The entropy generated by the measurement may, of course, be greater than this fundamental amount of negative entropy (information) created, but never smaller, or the second law (that overall entropy must increase) would be violated.

The earlier work of Maxwell, Boltzmann, and Szilard did not figure directly in Shannon's work. Shannon studied the design of early analog computers (specifically Vannevar Bush's differential analyzer at MIT, which was used by Coolidge and James to calculate the first wave functions of the hydrogen molecule in 1936). Then, with John von Neumann and Alan Turing, Shannon helped design the first digital computers, based on the Boolean logic of 1's and 0's and binary arithmetic.

Shannon analyzed telephone switching circuits built from electromagnetic relay switches, and realized that networks of such switches could implement, and solve problems in, Boolean algebra.

During World War II, Shannon worked at Bell Labs on cryptography and sending "intelligence" signals in the presence of noise. Alan Turing visited the labs for a couple of months and showed Shannon his 1936 ideas for a universal computer (the "Turing Machine").

Shannon's work on communications, control systems, and cryptography was initially classified, but it contained almost all of the mathematics that eventually appeared in his landmark 1948 article "A Mathematical Theory of Communication," which is considered the basis for modern information theory.

Norbert Wiener's work on probability theory in Cybernetics had an important influence on Shannon. There can be no new information in a world of certainty. Probability and statistics are at the heart of both information theory and quantum theory.

At the suggestion of John von Neumann, Shannon called his measure of information "entropy," since his expression has the same mathematical form as the thermodynamic (Boltzmann) entropy. He wrote:

Suppose we have a set of possible events whose probabilities of occurrence are p1, p2, • • • , pn. These probabilities are known but that is all we know concerning which event will occur. Can we find a measure of how much "choice" is involved in the selection of the event or of how uncertain we are of the outcome?

If there is such a measure, say H(p1, p2, • • • , pn), it is reasonable to require of it the following properties:

1. H should be continuous in the pi.

2. If all the pi are equal, pi = 1/n, then H should be a monotonic increasing function of n. With equally likely events there is more choice, or uncertainty, when there are more possible events.

3. If a choice be broken down into two successive choices, the original H should be the weighted sum of the individual values of H. The meaning of this is illustrated in Fig. 6.

Fig. 6.— Decomposition of a choice from three possibilities.

At the left we have three possibilities p1 = 1/2, p2 = 1/3, p3 = 1/6. On the right we first choose between two possibilities each with probability 1/2, and if the second occurs make another choice with probabilities 2/3, 1/3. The final results have the same probabilities as before. We require, in this special case, that

H(1/2, 1/3, 1/6) = H(1/2, 1/2) + 1/2 H(2/3, 1/3)

The coefficient 1/2 is the weighting factor introduced because this second choice only occurs half the time.

The only H satisfying the three above assumptions is of the form:

H = - K Σ pi log pi

where K is a positive constant.

Quantities of the form H = - Σ pi log pi (the constant K merely amounts to a choice of a unit of measure) play a central role in information theory as measures of information, choice and uncertainty. The form of H will be recognized as that of entropy as defined in certain formulations of statistical mechanics, where pi is the probability of a system being in cell i of its phase space.

H is then, for example, the H in Boltzmann's famous H theorem. We shall call H = - Σ pi log pi the entropy of the set of probabilities p1, p2, • • • , pn.
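Shannon's decomposition requirement is easy to verify numerically. Here is a minimal Python sketch (the helper function H is ours, written with base-2 logarithms, not Shannon's notation):

```python
import math

def H(*probs):
    # Shannon entropy in bits: H = -sum(p_i * log2 p_i), skipping p = 0.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Shannon's Fig. 6 example: a three-way choice equals a two-way choice
# followed, half the time, by a second two-way choice.
left = H(1/2, 1/3, 1/6)
right = H(1/2, 1/2) + 0.5 * H(2/3, 1/3)

print(left, right)             # both are about 1.459 bits
```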

Shannon Entropy and Boltzmann Entropy

Boltzmann entropy: S = - k Σ pi ln pi.        Shannon information: I = - Σ pi log2 pi.

Shannon entropy is the average (expected) value of the information contained in a received message. If there are many possible messages, we get much more information than when there are only two possibilities (one bit of information). For equally likely messages, the entropy is the base-2 logarithm of the number of possibilities. Shannon entropy thus characterizes our uncertainty about an incoming message, and it increases as the number of possibilities and the randomness increase. The less likely an event is, the more information it provides when it occurs; Shannon defined the information of an individual message as the negative of the logarithm of its probability. He briefly considered calling his measure "uncertainty" (after the Heisenberg uncertainty principle), but decided on "entropy," following the suggestion by von Neumann. One bit of information is also known as one "shannon."
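To make "less likely means more informative" concrete, here is a brief Python sketch; the name surprisal_bits is ours, introduced only for illustration:

```python
import math

def surprisal_bits(p):
    # Information gained when an event of probability p actually occurs.
    return -math.log2(p)

print(surprisal_bits(1/2))     # 1 bit: a fair coin flip is mildly informative
print(surprisal_bits(1/1000))  # ~9.97 bits: a rare message is very informative

# Shannon entropy is the probability-weighted average of these surprisals.
probs = [1/2, 1/4, 1/4]
H = sum(p * surprisal_bits(p) for p in probs)
print(H)                       # 1.5 bits expected per message
```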

Boltzmann entropy is maximized when the particle distribution is maximally random among positions in phase space, that is, when the number of microstates W corresponding to a given macrostate is as large as possible. An improbable macrostate might be one in which every particle occupies the same small region of phase space. Finding all the particles in one corner of the available volume is information, in the same sense as receiving one particular message out of many possible messages. In an equilibrium macrostate the particles are as randomly distributed as possible, and any such information is gone.

Counterintuitively, maximum Boltzmann entropy means no information about the microstate, while maximum Shannon entropy means maximal uncertainty before a message arrives, and therefore maximal information gained once it is received. The two entropies describe different situations, before and after the message, which makes them hard to compare directly.

Historical Background
Information in physical systems was connected to a measure of the structural order in a system as early as the nineteenth century by William Thomson (Lord Kelvin) and Ludwig Boltzmann, who described an increase in the thermodynamic entropy as “lost information.”

In 1872, Boltzmann proved his “H-Theorem,” which implies that the entropy or disorder in the universe always increases. In 1877 he defined the entropy S in terms of the logarithm of the number W of possible states of a physical system, an equation now known as Boltzmann’s Principle,

S = k log W.

In 1929, Leo Szilard showed the mean value of the quantity of information produced by a 1-bit, two-possibility (“yes/no”) measurement as S = k log 2, where k is Boltzmann’s constant, connecting information directly to entropy.

Following Szilard, Ludwig von Bertalanffy, Erwin Schrödinger, Norbert Wiener, Claude Shannon, Warren Weaver, John von Neumann, and Leon Brillouin all expressed similar views on the connection between physical entropy and abstract “bits” of information.

Schrödinger said the information in a living organism is the result of “feeding on negative entropy” from the sun. Wiener said “The quantity we define as amount of information is the negative of the quantity usually defined as entropy in similar situations.”

Brillouin created the term “negentropy” because, he said, “One of the most interesting parts in Wiener’s Cybernetics is the discussion on ‘Time series, information, and communication,’ in which he specifies that a certain ‘amount of information is the negative of the quantity usually defined as entropy in similar situations.’”

Shannon, with a nudge from von Neumann, used the term entropy to describe his estimate of the amount of information that can be communicated over a channel, because his mathematical theory of the communication of information produced a formula with the same mathematical form as Boltzmann’s equation for entropy.

Shannon considered a set of n possible messages, each occurring with probability pi. He then defined a quantity H,

H = - K Σ pi log pi,

where K is a positive constant. Since H resembles the H in Boltzmann’s H-Theorem, Shannon called it the entropy of the set of probabilities p1, p2, . . . , pn.

To see the connection between the two entropies, we can note that Boltzmann assumed that all his probabilities were equal. For n equal states, the probability of each state is p = 1/n.

The sum over the n states, Σ pi log pi, is then n × (1/n) × log(1/n) = log(1/n) = - log n, so that H = -K(- log n) = K log n.

If we set Shannon's number of possible messages n equal to Boltzmann's number of possible microstates W, we get

H = K log W,

exactly the form of Boltzmann’s entropy S = k log W. For equiprobable messages, Shannon’s entropy and Boltzmann’s entropy are the same quantity apart from the constant, which only fixes the units. (It is Boltzmann’s H-function, which decreases as the entropy S increases, that differs from S by a minus sign.)

Shannon showed that a communication that is certain to tell you something you already know (one of the messages has probability unity) contains no new information.
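Both observations, that n equally likely messages give H = log n (the same form as Boltzmann's S = k log W) and that a certain message carries no new information, can be checked in a few lines. A minimal Python sketch (the helper H is our own, using base-2 logarithms):

```python
import math

def H(probs):
    # Shannon entropy in bits, H = sum(p_i * log2(1/p_i)), skipping p = 0.
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# n equally likely messages: H = log2(n), the same form as S = k log W.
n = 8
print(H([1 / n] * n), math.log2(n))    # 3.0 and 3.0

# A message that is certain to arrive carries no new information.
print(H([1.0, 0.0, 0.0]))              # 0.0 bits
```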

The Mathematical Theory of Communication (excerpts)
Introduction
The recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A basis for such a theory is contained in the important papers of Nyquist and Hartley on this subject. In the present paper we will extend the theory to include a number of new factors, in particular the effect of noise in the channel, and the savings possible due to the statistical structure of the original message and due to the nature of the final destination of the information.
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.

If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley [and Szilard and Boltzmann] the most natural choice is the logarithmic function. Although this definition must be generalized considerably when we consider the influence of the statistics of the message and when we have a continuous range of messages, we will in all cases use an essentially logarithmic measure.

The logarithmic measure is more convenient for various reasons:

1. It is practically more useful. Parameters of engineering importance such as time, bandwidth, number of relays, etc., tend to vary linearly with the logarithm of the number of possibilities. For example, adding one relay to a group doubles the number of possible states of the relays. It adds 1 to the base 2 logarithm of this number. Doubling the time roughly squares the number of possible messages, or doubles the logarithm, etc.

2. It is nearer to our intuitive feeling as to the proper measure. This is closely related to (1) since we intuitively measure entities by linear comparison with common standards. One feels, for example, that two punched cards should have twice the capacity of one for information storage, and two identical channels twice the capacity of one for transmitting information.

3. It is mathematically more suitable. Many of the limiting operations are simple in terms of the logarithm but would require clumsy restatement in terms of the number of possibilities.

The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information. N such devices can store N bits, since the total number of possible states is 2^N and log2 2^N = N. If the base 10 is used the units may be called decimal digits. Since

log2 M = log10 M / log10 2 = 3.32 log10 M,

a decimal digit is about 3 1/3 bits. A digit wheel on a desk computing machine has ten stable positions and therefore has a storage capacity of one decimal digit. In analytical work where integration and differentiation are involved the base e is sometimes useful. The resulting units of information will be called natural units. Change from the base a to base b merely requires multiplication by logb a.
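As a quick editorial check of these unit conversions (not part of Shannon's text, and assuming nothing beyond Python's standard math library):

```python
import math

# An editorial check of the unit conversions above (not part of Shannon's
# text): N two-state relays store N bits, one decimal digit is log2(10)
# bits, and log2 M = log10 M / log10 2.
N = 5
print(math.log2(2 ** N))                             # 5.0 bits in 5 relays

print(math.log2(10))                                 # 3.3219... bits per decimal digit

M = 1000
print(math.log2(M), math.log10(M) / math.log10(2))   # both about 9.966
```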

By a communication system we will mean a system of the type indicated schematically in Fig. 1. It consists of essentially five parts:

Fig. 1 Schematic diagram of a general communication system.

1. An information source which produces a message or sequence of messages to be communicated to the receiving terminal. The message may be of various types: (a) A sequence of letters as in a telegraph or teletype system; (b) A single function of time f(t) as in radio or telephony; (c) A function of time and other variables as in black and white television — here the message may be thought of as a function f(x, y, t) of two space coordinates and time, the light intensity at point (x, y) and time t on a pickup tube plate; (d) Two or more functions of time, say f(t), g(t), h(t) — this is the case in "three-dimensional" sound transmission or if the system is intended to service several individual channels in multiplex; (e) Several functions of several variables — in color television the message consists of three functions f(x, y, t), g(x, y, t), h(x, y, t) defined in a three-dimensional continuum — we may also think of these three functions as components of a vector field defined in the region — similarly, several black and white television sources would produce "messages" consisting of a number of functions of three variables; (f) Various combinations also occur, for example in television with an associated audio channel.

2. A transmitter which operates on the message in some way to produce a signal suitable for transmission over the channel. In telephony this operation consists merely of changing sound pressure into a proportional electrical current. In telegraphy we have an encoding operation which produces a sequence of dots, dashes and spaces on the channel corresponding to the message. In a multiplex PCM system the different speech functions must be sampled, compressed, quantized and encoded, and finally interleaved properly to construct the signal. Vocoder systems, television and frequency modulation are other examples of complex operations applied to the message to obtain the signal.

3. The channel is merely the medium used to transmit the signal from transmitter to receiver. It may be a pair of wires, a coaxial cable, a band of radio frequencies, a beam of light, etc. During transmission, or at one of the terminals, the signal may be perturbed by noise. This is indicated schematically in Fig. 1 by the noise source acting on the transmitted signal to produce the received signal.

4. The receiver ordinarily performs the inverse operation of that done by the transmitter, reconstructing the message from the signal.

5. The destination is the person (or thing) for whom the message is intended.

We wish to consider certain general problems involving communication systems. To do this it is first necessary to represent the various elements involved as mathematical entities, suitably idealized from their physical counterparts. We may roughly classify communication systems into three main categories: discrete, continuous and mixed. By a discrete system we will mean one in which both the message and the signal are a sequence of discrete symbols. A typical case is telegraphy where the message is a sequence of letters and the signal a sequence of dots, dashes and spaces. A continuous system is one in which the message and signal are both treated as continuous functions, e.g., radio or television. A mixed system is one in which both discrete and continuous variables appear, e.g., PCM transmission of speech.

We first consider the discrete case. This case has applications not only in communication theory, but also in the theory of computing machines, the design of telephone exchanges and other fields. In addition the discrete case forms a foundation for the continuous and mixed cases which will be treated in the second half of the paper.

6. Choice, Uncertainty and Entropy
We have represented a discrete information source as a Markoff process. Can we define a quantity which will measure, in some sense, how much information is "produced" by such a process, or better, at what rate information is produced?

Suppose we have a set of possible events whose probabilities of occurrence are p1, p2, • • • , pn. These probabilities are known but that is all we know concerning which event will occur. Can we find a measure of how much "choice" is involved in the selection of the event or of how uncertain we are of the outcome?

If there is such a measure, say H(p1, p2, • • • , pn), it is reasonable to require of it the following properties:

1. H should be continuous in the pi.

2. If all the pi are equal, pi = 1/n, then H should be a monotonic increasing function of n. With equally likely events there is more choice, or uncertainty, when there are more possible events.

3. If a choice be broken down into two successive choices, the original H should be the weighted sum of the individual values of H. The meaning of this is illustrated in Fig. 6.

Fig. 6.— Decomposition of a choice from three possibilities.

At the left we have three possibilities p1 = 1/2, p2 = 1/3, p3 = 1/6. On the right we first choose between two possibilities each with probability 1/2, and if the second occurs make another choice with probabilities 2/3, 1/3. The final results have the same probabilities as before. We require, in this special case, that

H(1/2, 1/3, 1/6) = H(1/2, 1/2) + 1/2 H(2/3, 1/3)

The coefficient 1/2 is the weighting factor introduced because this second choice only occurs half the time.

In Appendix 2, the following result is established:

Theorem 2: The only H satisfying the three above assumptions is of the form:

H = - K Σ pi log pi

where K is a positive constant. This theorem, and the assumptions required for its proof, are in no way necessary for the present theory. It is given chiefly to lend a certain plausibility to some of our later definitions. The real justification of these definitions, however, will reside in their implications.

Quantities of the form H = - Σ pi log pi (the constant K merely amounts to a choice of a unit of measure) play a central role in information theory as measures of information, choice and uncertainty. The form of H will be recognized as that of entropy as defined in certain formulations of statistical mechanics, where pi is the probability of a system being in cell i of its phase space.

H is then, for example, the H in Boltzmann's famous H theorem. We shall call H = - Σ pi log pi the entropy of the set of probabilities p1, p2, • • • , pn. If x is a chance variable we will write H(x) for its entropy; thus x is not an argument of a function but a label for a number, to differentiate it from H(y) say, the entropy of the chance variable y.

The quantity H has a number of interesting properties which further substantiate it as a reasonable measure of choice or information.

1. H = 0 if and only if all the pi but one are zero, this one having the value unity. Thus only when we are certain of the outcome does H vanish. Otherwise H is positive.

2. For a given n, H is a maximum and equal to log n when all the pi are equal, i.e., 1/n. This is also intuitively the most uncertain situation.

