The Measurement Problem
The "Problem of Measurement" in quantum mechanics has been defined in various ways, originally by physicists, and more recently by philosophers of physics who question the "foundations of quantum mechanics."

Many physicists define the "problem" of measurement simply as the logical contradiction between two "laws" that describe the motion or "evolution" in space and time of a quantum system.

The first motion "law" is the irreversible, non-unitary, discrete, discontinuous, and indeterministic or "random" collapse of the wave function. P.A.M. Dirac called it his projection postulate. A few years later, John von Neumann called it Process 1. At the moment of this "collapse," new information appears in the universe. It is this information that is the "outcome" of a measurement. Werner Heisenberg saw this law as "acausal" and "statistical."

The second motion "law" is the time reversible, unitary, continuous, and deterministic evolution of the Schrödinger equation (von Neumann's Process 2). Nothing observable happens during this motion. No new information appears that might be observed.

John von Neumann was perhaps the first to see a logical problem with these two distinct (indeed, opposing) processes. Later physicists saw no mechanism that could explain the transition from the continuous evolution to the discontinuous state change. The standard formalism of quantum mechanics says that the deterministic, continuous evolution of the second "law" describes only the probability of the indeterministic "collapse" of the first.

Max Born summarized this conflict as a paradox: "The motion of the particle follows the laws of probability, but the probability itself propagates in accord with causal laws."

The mathematical formalism of quantum mechanics simply provides no way to predict when the wave function stops evolving in a predictable, deterministic fashion and collapses randomly and unpredictably.

Starting with Von Neumann, physicists have claimed that the collapse must occur when a microscopic quantum system interacts with a macroscopic (approximately classical) measuring apparatus. The apparatus "measures" the quantum system, producing the irreversible information that can be seen by an observer.

But we must note that this classical measuring apparatus has been only an ad hoc assumption; no model of its inner workings has ever been produced.

Some theorists have added extra non-linear terms to the Schrödinger equation to force the collapse. But these ad hoc extra terms still do not predict the time of the collapse exactly, nor do they describe what is happening during the collapse process.

So ultimately, the collapse happens at a random time and at that time macroscopically observable new information appears irreversibly, as first claimed by von Neumann.

To describe the problem of measurement more fully, we need several concepts from quantum physics.

The original problem, said to be a consequence of Niels Bohr's "Copenhagen interpretation" of quantum mechanics, was to explain how our measuring instruments, which are usually macroscopic objects and treatable with classical physics, can give us information about the microscopic world of atoms and subatomic particles like electrons and photons.

Bohr's idea of "complementarity" insisted that a specific experiment could reveal only partial information - for example, either a particle's position or the wavelength of the particle's "dual" complementary wave nature. "Exhaustive" information requires complementary experiments, for example to determine a particle's position and also its momentum (within the limits of Werner Heisenberg's indeterminacy principle).

In general, a quantum system with internal structure consists of a number of internal quantum states that can be arranged in a "ground state" with a minimal energy and a number of "excited" states with increasing "energy levels." As Albert Einstein showed in 1916, the "population" of each energy level (the probable number of systems in that state) decreases with the energy E of the level according to the "Boltzmann factor," e^(−E/kT).
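As an illustration of these level populations, here is a minimal sketch in Python; the three-level system and its energies are hypothetical assumptions, and level degeneracies are ignored:

```python
import numpy as np

k_B = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_populations(energies_eV, T):
    """Relative populations of energy levels at temperature T (kelvin),
    weighted by the Boltzmann factor e^(-E/kT) and normalized to sum to 1."""
    factors = np.exp(-np.asarray(energies_eV) / (k_B * T))
    return factors / factors.sum()

# Hypothetical three-level system: a ground state and two excited states.
levels = [0.0, 0.5, 1.0]  # energies in eV above the ground state
print(boltzmann_populations(levels, T=300.0))    # room temperature: ground state dominates
print(boltzmann_populations(levels, T=10000.0))  # hot gas: excited levels become populated
```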

P.A.M. Dirac's transformation theory of quantum mechanics describes quantum states as vectors in an abstract space called a "Hilbert space."

Others define the measurement problem as the failure to observe macroscopic superpositions - for example, the paradoxical idea of Schrödinger's cat being in a superposition of "dead" and "alive" states.

Decoherence theorists, e.g., H. Dieter Zeh and Wojciech Zurek, use various non-standard interpretations of quantum mechanics that deny the projection postulate, quantum jumps, and even the existence of particles. They define the measurement problem as the failure to observe macroscopic superpositions. The deterministic, linear, and unitary time evolution of the wave function according to the Schrödinger wave equation should produce such macroscopic superpositions, they claim.

Information physics treats a measuring apparatus quantum mechanically by describing parts of it as in a metastable state like the excited states of an atom, the critically poised electrical potential energy in the discharge tube of a Geiger counter, or the supersaturated water and alcohol molecules of a Wilson cloud chamber. (The pi-bond orbital rotation from cis- to trans- in the light-sensitive retinal molecule is an example of a critically poised apparatus).

Excited (metastable) states are poised to collapse when an electron (or photon) collides with the sensitive detector elements in the apparatus. This collapse is macroscopic and irreversible, generally a cascade of quantum events that release large amounts of energy, increasing the (Boltzmann) entropy. But in a "measurement" there is also a local decrease in the entropy (negative entropy or information). The global entropy increase is normally orders of magnitude more than the small local decrease in entropy (an increase in stable information or Shannon entropy) that constitutes the "measured" experimental data available to human observers.
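To make this entropy bookkeeping concrete, here is a minimal sketch, assuming Landauer's bound of k ln 2 per bit for the local entropy decrease; the one-bit record and the ~1 MeV detector cascade are illustrative assumptions, not numbers from the text:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy_balance(bits_recorded, heat_dissipated_J, T):
    """Compare the local entropy decrease of recording new information
    (k ln 2 per bit, Landauer's bound) with the positive entropy carried
    away by the heat the apparatus dissipates at temperature T."""
    local_decrease = bits_recorded * k_B * math.log(2)  # J/K, the new stable information
    global_increase = heat_dissipated_J / T             # J/K, entropy transported away
    return local_decrease, global_increase

# One bit recorded while a Geiger-style cascade dissipates ~1 MeV at 300 K.
dec, inc = entropy_balance(1, 1.602e-13, 300.0)
print(f"local decrease: {dec:.2e} J/K, global increase: {inc:.2e} J/K")
# The global increase exceeds the local decrease by roughly eight orders of
# magnitude, satisfying the second law as described above.
```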

The creation of new information in a measurement thus follows the same two core processes of all information creation - quantum cooperative phenomena and thermodynamics. These two are involved in the formation of microscopic objects like atoms and molecules, as well as macroscopic objects like galaxies, stars, and planets.

According to the correspondence principle, all the laws of quantum physics asymptotically approach the laws of classical physics in the limit of large quantum numbers and large numbers of particles. Quantum mechanics can be used to describe large macroscopic systems.

Does this mean that the positions and momenta of macroscopic objects are uncertain? Yes, it does; although the uncertainty becomes vanishingly small for large objects, it is not zero. Niels Bohr used the uncertainty of macroscopic objects to defeat Albert Einstein's several objections to quantum mechanics at the 1927 Solvay conference.

But Bohr and Heisenberg also insisted that a measuring apparatus must be regarded as a purely classical system. They can't have it both ways. Can the macroscopic apparatus also be treated by quantum physics or not? Can it be described by the Schrödinger equation? Can it be regarded as in a superposition of states?

The most famous examples of macroscopic superposition are perhaps Schrödinger's Cat, which is claimed to be in a superposition of live and dead cats, and the Einstein-Podolsky-Rosen experiment, in which entangled electrons or photons are in a superposition of two-particle states that collapse over macroscopic distances to exhibit properties "nonlocally" suggesting "actions-at-a-distance" at speeds faster than the speed of light.

These treatments of macroscopic systems with quantum mechanics were intended to expose inconsistencies and incompleteness in quantum theory. Some of the critics hoped to restore determinism and "local reality" to physics. They resulted in some strange and extremely popular "mysteries" about "quantum reality," such as the "many-worlds" interpretation, "hidden variables," and signaling faster than the speed of light.

We develop a quantum-mechanical treatment of macroscopic systems, especially a measuring apparatus, to show how it can create new information. If the apparatus were describable only by classical deterministic laws, no new information could come into existence. In a deterministic universe, information is a constant at all times, like the total of matter and energy. The apparatus need only be adequately determined, that is to say, "classical" to a sufficient degree of accuracy.

How Classical Is a Macroscopic Measuring Apparatus?
As Landau and Lifshitz described it in their 1958 textbook Quantum Mechanics:
The possibility of a quantitative description of the motion of an electron requires the presence also of physical objects which obey classical mechanics to a sufficient degree of accuracy. If an electron interacts with such a "classical object", the state of the latter is, generally speaking, altered. The nature and magnitude of this change depend on the state of the electron, and therefore may serve to characterise it quantitatively...

We have defined "apparatus" as a physical object which is governed, with sufficient accuracy, by classical mechanics. Such, for instance, is a body of large enough mass. However, it must not be supposed that apparatus is necessarily macroscopic. Under certain conditions, the part of apparatus may also be taken by an object which is microscopic, since the idea of "with sufficient accuracy" depends on the actual problem proposed.

Thus quantum mechanics occupies a very unusual place among physical theories: it contains classical mechanics as a limiting case [correspondence principle], yet at the same time it requires this limiting case for its own formulation.

The measurement problem was analyzed mathematically in 1932 by John von Neumann. Following the work of Niels Bohr and Werner Heisenberg, von Neumann divided the world into a microscopic (atomic-level) quantum system and a macroscopic (classical) measuring apparatus.

Von Neumann explained that two fundamentally different processes are going on in quantum mechanics.

  1. A non-causal process 1, in which the measured electron winds up randomly in one of the possible physical states (eigenstates) of the measuring apparatus plus electron.

    This process came to be called the collapse of the wave function or the reduction of the wave packet.

    The probability for finding the electron in a specific eigenstate is given by the absolute square |c_n|² of the coefficients c_n of the expansion of the original system state (wave function ψ) in an infinite set of wave functions φ_n that represent the eigenfunctions of the measuring apparatus plus electron (see the sketch after this list).

    This is as close as we get to a description of the motion of the particle aspect of a quantum system. According to von Neumann, the particle simply shows up somewhere as a result of a measurement.

    Information physics says that the particle "shows up" only when a new stable information structure is created, information that subsequently can be observed.

    Process 1b. The information created in von Neumann's Process 1 will only be stable if an amount of positive entropy greater than the negative entropy in the new information structure is transported away, in order to satisfy the second law of thermodynamics.

  2. A causal process 2, in which the electron wave function ψ evolves deterministically according to Schrödinger's equation of motion for the wavelike aspect.

    (ih/2π) ∂ψ/∂t = Hψ.

    This evolution describes the motion of the probability amplitude wave ψ between measurements. The wave function exhibits interference effects. But interference is destroyed if the particle has a definite position or momentum.

    The particle path cannot be observed.
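As a sketch of the statistics of Process 1 referred to above, here is a minimal Python example; the two-term truncation of the infinite expansion and the coefficient values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Expansion of the system state ψ in eigenfunctions φ_n of the measuring
# apparatus plus electron, truncated to two terms for illustration.
# The coefficients c_n = <φ_n|ψ> are generally complex.
c = np.array([0.6 + 0.0j, 0.0 + 0.8j])  # illustrative values, |c_1|² + |c_2|² = 1
probs = np.abs(c) ** 2                  # P(n) = |c_n|²  ->  [0.36, 0.64]

# Process 1: the electron simply "shows up" in eigenstate n with probability |c_n|².
outcome = rng.choice(len(c), p=probs)
print(f"predicted probabilities: {probs}, measured eigenstate: {outcome}")
```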

Von Neumann claimed there is another major difference between these two processes. Process 1 is thermodynamically irreversible. Process 2 is reversible. This confirms the fundamental connection between quantum mechanics and thermodynamics that information physics finds at the heart of all information creation.

Information physics can show quantum mechanically how process 1 creates information. Indeed, something like process 1 is always involved when any information is created, whether or not the new information is ever "observed" by a human being.

Process 2 is deterministic and information preserving.

Just as the new information recorded in the measurement apparatus cannot persist unless a compensating amount of entropy is transferred away from the new information, something similar to Process 1b must happen in the mind of an observer if the new information is to constitute an "observation."

It is only in cases where information persists long enough for a human being to observe it that we can properly describe the observation as a "measurement" and the human being as an "observer." So, following von Neumann's "process" terminology, we can complete his theory of the measuring process by adding an anthropomorphic

Process 3 - a conscious observer recording new information in a mind. This is only possible if there are two local reductions in the entropy (the first in the measurement apparatus, the second in the mind), both balanced by even greater increases in positive entropy that must be transported away from the apparatus and the mind, so the overall increase in entropy can satisfy the second law of thermodynamics.

For some physicists, it is the wave-function collapse that gives rise to the problem of measurement because its randomness prevents us from including it in the mathematical formalism of the deterministic Schrödinger equation in process 2.

The randomness that is irreducibly involved in all information creation lies at the heart of human freedom. It is the "free" in "free will." The "will" part is as adequately and statistically determined as any macroscopic object.

Designing a Quantum Measurement Apparatus

The first step is to build an apparatus that allows different components of the wave function to evolve along distinguishable paths into different regions of space, where the different regions correspond to (are correlated with) the physical properties we want to measure. We then can locate a detector in these different regions of space to catch particles travelling a particular path.

We do not say that the system is on a particular path in this first step. That would cause the probability amplitude wave function to collapse. This first step is reversible, at least in principle. It is deterministic and an example of von Neumann process 2.

Let's consider the separation of a beam of photons into horizontally and vertically polarized photons by a birefringent crystal.

We need a beam of photons (and the ability to reduce the intensity to a single photon at a time). Vertically polarized photons pass straight through the crystal. They are called the ordinary ray, shown in red. Horizontally polarized photons, however, are deflected at an angle up through the crystal, then exit the crystal back at the original angle. They are called the extraordinary ray, shown in blue.

Note that this first part of our apparatus accomplishes the separation of our two states into distinct physical regions.

We have not actually measured yet, so a single photon passing through our measurement apparatus is described as in a linear combination (a superposition) of horizontal and vertical polarization states,

| ψ > = ( 1/√2) | h > + ( 1/√2) | v >          (1)

See the Dirac Three Polarizers experiment for more details on polarized photons.
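A minimal sketch of what equation (1) predicts, with the state written as a vector in the {|h>, |v>} basis (an assumption of this illustration, not part of the text):

```python
import numpy as np

# |ψ> = (1/√2)|h> + (1/√2)|v>, written as a vector in the {|h>, |v>} basis.
psi = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)], dtype=complex)

p_h = abs(psi[0]) ** 2  # probability of finding horizontal polarization
p_v = abs(psi[1]) ** 2  # probability of finding vertical polarization
print(p_h, p_v)         # 0.5 0.5 - these are only probabilities; no
                        # measurement has happened yet, so no collapse
```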

An Information-Preserving, Reversible Example of Process 2

To show that process 2 is reversible, we can add a second birefringent crystal, upside down from the first but inline with the superposition of physically separated states.

Since we have not made a measurement and do not know the path of the photon, the phase information in the (generally complex) coefficients of equation (1) has been preserved, so when they combine in the second crystal, they emerge in a state identical to that before entering the first crystal (black arrow).

Note that the two crystals can be treated classically, according to standard optics.
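Here is a minimal sketch of this reversibility, modeling the first crystal as a unitary operator U and the inverted crystal as its inverse U†; the internal phase θ is an arbitrary stand-in for whatever phases the separated paths pick up:

```python
import numpy as np

theta = 0.3  # arbitrary phase accumulated along the extraordinary ray (assumption)
U = np.array([[np.exp(1j * theta), 0],
              [0, 1]], dtype=complex)   # first crystal as a unitary on {|h>, |v>}

psi_in = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equation (1)
psi_mid = U @ psi_in             # paths separated; phase information preserved
psi_out = U.conj().T @ psi_mid   # inverted crystal applies U†, and U†U = I

print(np.allclose(psi_out, psi_in))  # True - process 2 is reversible
```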

An Information-Creating, Irreversible Example of Process 1

But now suppose we insert something between the two crystals that is capable of a measurement, producing observable information. This creates an information-creating, irreversible example of process 1. We need detectors, for example two charge-coupled devices (CCDs), that locate the photon in one of the two rays.

We can write a quantum description of the CCDs, one measuring horizontal photons, | Ah > (shown as the blue spot), and the other measuring vertical photons, | Av > (shown as the red spot).

We treat the detection systems quantum mechanically, and say that each detector has two eigenstates, e.g., | Ah0 >, corresponding to its initial state and correlated with no photons, and the final state | Ah1 >, in which it has detected a horizontal photon.

When we actually detect the photon, say in a horizontal polarization state with statistical probability 1/2, two "collapses" or "jumps" occur.

The first is the jump of the probability amplitude wave function | ψ > of the photon in equation (1) into the horizontally polarized state | h >.

The second is the quantum jump of the horizontal detector from | Ah0 > to | Ah1 >.

These two happen together, as the quantum states have become correlated with the states of the sensitive detectors in the classical apparatus.

One can say that the photon has become entangled with the sensitive horizontal detector area, so that the wave function describing their interaction is a superposition of photon and apparatus states that cannot be observed independently.

| ψ > | Ah0 >      =>      | ψ, Ah0 >      =>      | h, Ah1 >

These jumps destroy (unobservable) phase information, raise the (Boltzmann) entropy of the apparatus, and increase visible information (Shannon entropy) in the form of the visible spot. The entropy increase takes the form of a large chemical energy release when the photographic spot is developed (or a cascade of electrons in a CCD).
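A minimal sketch of these two correlated jumps; the detector state labels follow the text, and the random seed is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # (|h> + |v>)/√2, equation (1)

def measure(psi):
    """Von Neumann's Process 1: photon and detector jump together."""
    probs = np.abs(psi) ** 2
    n = rng.choice(2, p=probs)          # chance decides the outcome
    photon = ['|h>', '|v>'][n]          # wave function collapses to one ray
    detector = ['|Ah1>', '|Av1>'][n]    # the correlated detector fires
    return photon, detector

print(measure(psi))  # e.g. ('|h>', '|Ah1>') - the two jumps happen together
```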

Note that the birefringent crystal and the parts of the macroscopic apparatus other than the sensitive detectors are treated classically.


We see that our example agrees with von Neumann. A measurement which finds the photon in a specific state n is thermodynamically irreversible, whereas the deterministic evolution described by Schrödinger's equation is reversible as long as the photon is in a superposition of possible states.

We thus establish a clear connection between a measurement, in which information increases by some number of bits (Shannon entropy) with a necessary compensating increase in the (Boltzmann) entropy of the macroscopic apparatus, and the cosmic creation process, in which new particles form, reducing the entropy locally, while the energy of formation is radiated or conducted away as Boltzmann entropy.

Note that the Boltzmann entropy can only be radiated away (ultimately into the night sky to the cosmic microwave background) because the expansion of the universe provides a sink for the entropy, as pointed out by David Layzer. Note also that this cosmic information-creating process requires no conscious observer. The universe is its own observer.

The Boundary between the Classical and Quantum Worlds
Some scientists (John von Neumann and Eugene Wigner, for example) have argued that in the absence of a conscious observer, or some "cut" between the microscopic and macroscopic world, the evolution of the quantum system and the macroscopic measuring apparatus would be described deterministically by Schrödinger's equation of motion for the wave function | ψ + A > with the Hamiltonian H energy operator,
(ih/2π) ∂/∂t | ψ + A > = H | ψ + A >.
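A minimal sketch of this deterministic evolution for a two-level system, using the propagator ψ(t) = exp(−iHt/ħ)ψ(0) in units where ħ = 1; the Hamiltonian matrix is an arbitrary Hermitian example chosen for illustration, not one from the text:

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary Hermitian Hamiltonian for a two-level system (illustrative).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]], dtype=complex)

psi0 = np.array([1, 0], dtype=complex)
U = expm(-1j * H * 0.7)  # unitary propagator for time t = 0.7 (ħ = 1)
psi_t = U @ psi0

print(np.linalg.norm(psi_t))                   # 1.0 - probability is conserved
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True - evolution is unitary, hence reversible
```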

Our quantum mechanical analysis of the measurement apparatus in the above case allows us to locate the "cut" or "Schnitt" between the microscopic and macroscopic world at those components of the "adequately classical and deterministic" apparatus that put the apparatus in an irreversible stable state providing new information to the observer.

John Bell drew a diagram to show the various possible locations for what he called the "shifty split." Information physics shows us that the correct location for the boundary is the first of Bell's possibilities.

The Role of a Conscious Observer
In 1941, Carl von Weizsäcker described the measurement problem as an interaction between a Subject and an Object, a view shared by the philosopher of science Ernst Cassirer.

Fritz London and Edmond Bauer made the strongest case for the critical role of a conscious observer in 1939:

So far we have only coupled one apparatus with one object. But a coupling, even with a measuring device, is not yet a measurement. A measurement is achieved only when the position of the pointer has been observed. It is precisely this increase of knowledge, acquired by observation, that gives the observer the right to choose among the different components of the mixture predicted by theory, to reject those which are not observed, and to attribute thenceforth to the object a new wave function, that of the pure case which he has found.

We note the essential role played by the consciousness of the observer in this transition from the mixture to the pure case. Without his effective intervention, one would never obtain a new ψ function.

In 1961, Eugene Wigner made quantum physics even more subjective, claiming that a quantum measurement requires a conscious observer, without which nothing ever happens in the universe.

When the province of physical theory was extended to encompass microscopic phenomena, through the creation of quantum mechanics, the concept of consciousness came to the fore again: it was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to the consciousness. All that quantum mechanics purports to provide are probability connections between subsequent impressions (also called "apperceptions") of the consciousness, and even though the dividing line between the observer, whose consciousness is being affected, and the observed physical object can be shifted towards the one or the other to a considerable degree [cf., von Neumann] it cannot be eliminated. It may be premature to believe that the present philosophy of quantum mechanics will remain a permanent feature of future physical theories; it will remain remarkable, in whatever way our future concepts may develop, that the very study of the external world led to the conclusion that the content of the consciousness is an ultimate reality.

Other physicists were more circumspect. Niels Bohr contrasted Paul Dirac's view with that of Heisenberg:

These problems were instructively commented upon from different sides at the Solvay meeting, in the same session where Einstein raised his general objections. On that occasion an interesting discussion arose also about how to speak of the appearance of phenomena for which only predictions of statistical character can be made. The question was whether, as to the occurrence of individual effects, we should adopt a terminology proposed by Dirac, that we were concerned with a choice on the part of "nature," or, as suggested by Heisenberg, we should say that we have to do with a choice on the part of the "observer" constructing the measuring instruments and reading their recording. Any such terminology would, however, appear dubious since, on the one hand, it is hardly reasonable to endow nature with volition in the ordinary sense, while, on the other hand, it is certainly not possible for the observer to influence the events which may appear under the conditions he has arranged. To my mind, there is no other alternative than to admit that, in this field of experience, we are dealing with individual phenomena and that our possibilities of handling the measuring instruments allow us only to make a choice between the different complementary types of phenomena we want to study.
Landau and Lifshitz said clearly that quantum physics was independent of any observer:
In this connection the "classical object" is usually called apparatus, and its interaction with the electron is spoken of as measurement. However, it must be most decidedly emphasised that we are here not discussing a process of measurement in which the physicist-observer takes part. By measurement, in quantum mechanics, we understand any process of interaction between classical and quantum objects, occurring apart from and independently of any observer.

David Bohm agreed that what is observed is distinct from the observer:

If it were necessary to give all parts of the world a completely quantum-mechanical description, a person trying to apply quantum theory to the process of observation would be faced with an insoluble paradox. This would be so because he would then have to regard himself as something connected inseparably with the rest of the world. On the other hand, the very idea of making an observation implies that what is observed is totally distinct from the person observing it.
And John Bell said:
It would seem that the [quantum] theory is exclusively concerned about 'results of measurement', and has nothing to say about anything else. What exactly qualifies some physical systems to play the role of 'measurer'? Was the wavefunction of the world waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer, for some better qualified system...with a Ph.D.? If the theory is to apply to anything but highly idealised laboratory operations, are we not obliged to admit that more or less 'measurement-like' processes are going on more or less all the time, more or less everywhere? Do we not have jumping then all the time?
Three Essential Steps in a "Measurement" and "Observation"
We can distinguish three required elements in a measurement that can clarify the ongoing debate about the role of a conscious observer.
  1. In standard quantum theory, the first required element is the collapse of the wave-function. This is the Dirac projection postulate and von Neumann Process 1.

    However, the collapse might not leave a determinate record. If nothing in the environment is macroscopically affected so as to leave an indelible record of the collapse, we can say that no information about the collapse is created. The overwhelming fraction of collapses are of this kind. Moreover, information might actually be destroyed. For example, collisions between atoms or molecules in a gas erase past information about their paths.

  2. If the collapse occurs when the quantum system is entangled with a macroscopic measurement apparatus, a well-designed apparatus will also "collapse" into a correlated "pointer" state.

    As we showed above for photons, the detector in the upper half of a Stern-Gerlach apparatus will fire, indicating detection of an electron with spin up. As with photons, if the probability amplitude | ↑ > in the upper half does not collapse as the electron is detected, it can still be recombined with the probability amplitude | ↓ > in the lower half to reconstruct the unseparated beam.

    When the apparatus detects a particle, the second required element is that it produce a determinate record of the event. But this is impossible without an irreversible thermodynamic process that involves: a) the creation of at least one bit of new information (negative entropy) and b) the transfer away from the measuring apparatus of an amount of positive entropy generally much, much greater than the information created.

    Notice that no conscious observer need be involved. We can generalize this second step to an event in the physical world that was not designed as a measurement apparatus by a physical scientist, but nevertheless leaves an indelible record of the collapse of a quantum state. This might be a highly specific single event, or the macroscopic consequence of billions of atomic- and molecular-level events.

  3. Finally, the third required element is an indelible determinate record that can be looked at by an observer (presumably conscious, although the consciousness itself has nothing to do with the measurement).

When we have all three of these essential elements, we have what we normally mean by a measurement and an observation, both involving a human being.

When we have only the first two, we can say metaphorically that the "universe is measuring itself," creating an information record of quantum collapse events. For example, every hydrogen atom formed in the early recombination era is a record of the time period when macroscopic bodies could begin to form. A certain pattern of photons records the explosion of a supernova billions of light years away. When detected by the CCD in a telescope, it becomes a potential observation. Craters on the back side of the moon recorded collisions with solar system debris that could become observations only when the first NASA mission circled the moon.


John Bell on Measurement
It would seem that the [quantum] theory is exclusively concerned about 'results of measurement', and has nothing to say about anything else. What exactly qualifies some physical systems to play the role of 'measurer'? Was the wavefunction of the world waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer, for some better qualified system...with a Ph.D.? If the theory is to apply to anything but highly idealised laboratory operations, are we not obliged to admit that more or less 'measurement-like' processes are going on more or less all the time, more or less everywhere? Do we not have jumping then all the time?

Does [the 'collapse of the wavefunction'] happen sometimes outside laboratories? Or only in some authorized 'measuring apparatus'? And whereabouts in that apparatus? In the Einstein-Podolsky-Rosen-Bohm experiment, does 'measurement' occur already in the polarizers, or only in the counters? Or does it occur still later, in the computer collecting the data, or only in the eye, or even perhaps only in the brain, or at the brain-mind interface of the experimenter?


David Bohm on Measurement
In his 1951 textbook Quantum Theory, Bohm discusses measurement in chapter 22, section 12.
12. Irreversibility of Process of Measurement and Its Fundamental Role in Quantum Theory.
From the previous work it follows that a measurement process is irreversible in the sense that, after it has occurred, re-establishment of definite phase relations between the eigenfunctions of the measured variable is overwhelmingly unlikely. This irreversibility greatly resembles that which appears in thermodynamic processes, where a decrease of entropy is also an overwhelmingly unlikely possibility.*

* There is, in fact, a close connection between entropy and the process of measurement. See L. Szilard, Zeitschrift für Physik, 53, 840, 1929. The necessity for such a connection can be seen by considering a box divided by a partition into two equal parts, containing an equal number of gas molecules in each part. Suppose that in this box is placed a device that can provide a rough measurement of the position of each atom as it approaches the partition. This device is coupled automatically to a gate in the partition in such a way that the gate will be opened if a molecule approaches the gate from the right, but closed if it approaches from the left. Thus, in time, all the molecules can be made to accumulate on the left-hand side. In this way, the entropy of the gas decreases. If there were no compensating increase of entropy of the mechanism, then the second law of thermodynamics would be violated. We have seen, however, that in practice, every process which can provide a definite measurement disclosing in which side of the box the molecule actually is, must also be attended by irreversible changes in the measuring apparatus. In fact, it can be shown that these changes must be at least large enough to compensate for the decrease in entropy of the gas. Thus, the second law of thermodynamics cannot actually be violated in this way. This means, of course, that Maxwell's famous "sorting demon" cannot operate, if he is made of matter obeying all of the laws of physics. (See L. Brillouin, American Scientist, 38, 594, 1950.)

Because the irreversible behavior of the measuring apparatus is essential for the destruction of definite phase relations and because, in turn, the destruction of definite phase relations is essential for the consistency of the quantum theory as a whole, it follows that thermodynamic irreversibility enters into the quantum theory in an integral way. This is in remarkable contrast to classical theory, where the concept of thermodynamic irreversibility plays no fundamental role in the basic sciences of mechanics and electrodynamics. Thus, whereas in classical theory fundamental variables (such as position or momentum of an elementary particle) are regarded as having definite values independently of whether the measuring apparatus is reversible or not, in quantum theory we find that such a quantity can take on a well defined value only when the system is coupled indivisibly to a classically describable system undergoing irreversible processes. The very definition of the state of any one system at the microscopic level therefore requires that matter in the large shall undergo irreversible processes. There is a strong analogy here to the behavior of biological systems, where, likewise, the very existence of the fundamental elements (for example, the cells) depends on the maintenance of irreversible processes involving the oxidation of food throughout an organism as a whole. (A stoppage of these processes would result in the dissolution of the cell.)


John von Neumann on Measurement
In his 1932 Mathematical Foundations of Quantum Mechanics (in German, English edition 1955) John von Neumann explained that two fundamentally different processes are going on in quantum mechanics.
  1. A non-causal process, in which the measured electron winds up randomly in one of the possible physical states (eigenstates) of the measuring apparatus plus electron.

    The probability for each eigenstate is given by the absolute square |c_n|² of the coefficients c_n of the expansion of the original system state (wave function ψ) in an infinite set of wave functions φ_n that represent the eigenfunctions of the measuring apparatus plus electron.

    c_n = < φ_n | ψ >

    This is as close as we get to a description of the motion of the particle aspect of a quantum system. According to von Neumann, the particle simply shows up somewhere as a result of a measurement. Information physics says it shows up whenever a new stable information structure is created.

  2. A causal process, in which the electron wave function ψ evolves deterministically according to Schrödinger's equation of motion for the wavelike aspect. This evolution describes the motion of the probability amplitude wave ψ between measurements.

    (ih/2π) ∂ψ/∂t = Hψ

Von Neumann claimed there is another major difference between these two processes. Process 1 is thermodynamically irreversible. Process 2 is reversible. This confirms the fundamental connection between quantum mechanics and thermodynamics that is explainable by information physics.

Information physics establishes that process 1 may create information. It is always involved when information is created.

Process 2 is deterministic and information preserving.

The first of these processes has come to be called the collapse of the wave function.

It gave rise to the so-called problem of measurement because its randomness prevents it from being a part of the deterministic mathematics of process 2.

Information physics has solved the problem of measurement by identifying the moment and place of the collapse of the wave function with the creation of an observable information structure.

The presence of a conscious observer is not necessary. It is enough that the new information created is observable, should a human observer try to look at it in the future. Information physics is thus subtly involved in the question of what humans can know (epistemology).

The Schnitt

von Neumann described the collapse of the wave function as requiring a "cut" (Schnitt in German) between the microscopic quantum system and the observer. He said it did not matter where this cut was placed, because the mathematics would produce the same experimental results.

There has been a lot of controversy and confusion about this cut. Some have placed it outside a room which includes the measuring apparatus and an observer A, and just before observer B makes a measurement of the physical state of the room, which is imagined to evolve deterministically according to process 2 and the Schrödinger equation.

The case of Schrödinger's Cat is thought to present a similar paradoxical problem.

von Neumann contributed a lot to this confusion in his discussion of subjective perceptions and "psycho-physical parallelism," which was encouraged by Niels Bohr. Bohr interpreted his "complementarity principle" as explaining the difference between subjectivity and objectivity (as well as several other dualisms). von Neumann wrote:

The difference between these two processes is a very fundamental one: aside from the different behaviors in regard to the principle of causality, they are also different in that the former is (thermodynamically) reversible, while the latter is not.

Let us now compare these circumstances with those which actually exist in nature or in its observation. First, it is inherently entirely correct that the measurement or the related process of the subjective perception is a new entity relative to the physical environment and is not reducible to the latter. Indeed, subjective perception leads us into the intellectual inner life of the individual, which is extra-observational by its very nature (since it must be taken for granted by any conceivable observation or experiment).

Nevertheless, it is a fundamental requirement of the scientific viewpoint -- the so-called principle of the psycho-physical parallelism -- that it must be possible so to describe the extra-physical process of the subjective perception as if it were in reality in the physical world -- i.e., to assign to its parts equivalent physical processes in the objective environment, in ordinary space. (Of course, in this correlating procedure there arises the frequent necessity of localizing some of these processes at points which lie within the portion of space occupied by our own bodies. But this does not alter the fact of their belonging to the "world about us," the objective environment referred to above.)

In a simple example, these concepts might be applied about as follows: We wish to measure a temperature. If we want, we can pursue this process numerically until we have the temperature of the environment of the mercury container of the thermometer, and then say: this temperature is measured by the thermometer. But we can carry the calculation further, and from the properties of the mercury, which can be explained in kinetic and molecular terms, we can calculate its heating, expansion, and the resultant length of the mercury column, and then say: this length is seen by the observer.

Going still further, and taking the light source into consideration, we could find out the reflection of the light quanta on the opaque mercury column, and the path of the remaining light quanta into the eye of the observer, their refraction in the eye lens, and the formation of an image on the retina, and then we would say: this image is registered by the retina of the observer.

And were our physiological knowledge more precise than it is today, we could go still further, tracing the chemical reactions which produce the impression of this image on the retina, in the optic nerve tract and in the brain, and then in the end say: these chemical changes of his brain cells are perceived by the observer. But in any case, no matter how far we calculate -- to the mercury vessel, to the scale of the thermometer, to the retina, or into the brain, at some time we must say: and this is perceived by the observer. That is, we must always divide the world into two parts, the one being the observed system, the other the observer. In the former, we can follow up all physical processes (in principle at least) arbitrarily precisely. In the latter, this is meaningless.

the Schnitt
The boundary between the two is arbitrary to a very large extent. In particular we saw in the four different possibilities in the example above, that the observer in this sense needs not to become identified with the body of the actual observer: In one instance in the above example, we included even the thermometer in it, while in another instance, even the eyes and optic nerve tract were not included. That this boundary can be pushed arbitrarily deeply into the interior of the body of the actual observer is the content of the principle of the psycho-physical parallelism -- but this does not change the fact that in each method of description the boundary must be put somewhere, if the method is not to proceed vacuously, i.e., if a comparison with experiment is to be possible. Indeed experience only makes statements of this type: an observer has made a certain (subjective) observation; and never any like this: a physical quantity has a certain value.

Now quantum mechanics describes the events which occur in the observed portions of the world, so long as they do not interact with the observing portion, with the aid of the process 2, but as soon as such an interaction occurs, i.e., a measurement, it requires the application of process 1. The dual form is therefore justified.* However, the danger lies in the fact that the principle of the psycho-physical parallelism is violated, so long as it is not shown that the boundary between the observed system and the observer can be displaced arbitrarily in the sense given above.

(The Mathematical Foundations of Quantum Mechanics, pp.418-21)


Quantum Mechanics, by Albert Messiah, on Measurement
Messiah says a detailed study of the mechanism of measurement will not be made in his book, but he does say this.
The dynamical state of such a system is represented at a given instant of time by its wave function at that instant. The causal relationship between the wave function ψ(t0) at an initial time t0, and the wave function ψ(t) at any later time, is expressed through the Schrödinger equation. However, as soon as it is subjected to observation, the system experiences some reaction from the observing instrument. Moreover, the above reaction is to some extent unpredictable and uncontrollable since there is no sharp separation between the observed system and the observing instrument. They must be treated as an indivisible quantum system whose wave function Ψ(t) depends upon the coordinates of the measuring device as well as upon those of the observed system. During the process of observation, the measured system can no longer be considered separately and the very notion of a dynamical state defined by the simpler wave function ψ(t) loses its meaning. Thus the intervention of the observing instrument destroys all causal connection between the state of the system before and after the measurement; this explains why one cannot in general predict with certainty in what state the system will be found after the measurement; one can only make predictions of a statistical nature.1
1) The statistical predictions concerning the results of measurement are derived very naturally from the study of the mechanism of the measuring operation itself, a study in which the measuring instrument is treated as a quantized object and the complex (system + measuring instrument) evolves in causal fashion in accordance with the Schrödinger equation. A very concise and simple presentation of the measuring process in Quantum Mechanics is given in F. London and E. Bauer, La Théorie de l'Observation en Mécanique Quantique (Paris, Hermann, 1939). More detailed discussions of this problem may be found in J. von Neumann, Mathematical Foundations of Quantum Mechanics (Princeton, Princeton University Press, 1955), and in D. Bohm, Quantum Theory (New York, Prentice-Hall, 1951).
Quantum Mechanics, vol. I, p. 157.
Decoherence Theorists on Measurement

In general, decoherence theorists see the problem of measurement as the question of why we do not see macroscopic superpositions of states. Why do measurements always show a system and its measuring apparatus to be in a particular state - a "pointer state" - and not in a superposition?

Our answer is that we never see microscopic systems in a superposition of states either. Dirac's principle of superposition says only that the probability amplitudes for finding a system in different states can have non-zero values. Measurements always reveal a system to be in one state. Which state is found is a matter of chance. [Decoherence theorists do not like this indeterminism.] The statistics from large numbers of measurements of identically prepared systems verify the predicted probabilities for the different states. The accuracy of these quantum mechanical predictions (1 part in 10^15) shows quantum mechanics to be the most accurate theory ever known.
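A minimal sketch of this convergence of measured frequencies to the predicted probabilities; the coefficients and run count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Identically prepared two-state systems with illustrative coefficients.
c = np.array([np.sqrt(0.3), np.sqrt(0.7)], dtype=complex)
probs = np.abs(c) ** 2                     # predicted probabilities: [0.3, 0.7]

N = 100_000                                # number of repeated measurements
outcomes = rng.choice(2, size=N, p=probs)  # each run collapses to one state by chance
freqs = np.bincount(outcomes) / N
print(probs, freqs)                        # observed frequencies ≈ predicted probabilities
```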

Guido Bacciagaluppi summarized the view of decoherence theorists in an article for the Stanford Encyclopedia of Philosophy. He defines the measurement problem as the lack of macroscopic superpositions:

The measurement problem, in a nutshell, runs as follows. Quantum mechanical systems are described by wave-like mathematical objects (vectors) of which sums (superpositions) can be formed (see the entry on quantum mechanics). Time evolution (the Schrödinger equation) preserves such sums. Thus, if a quantum mechanical system (say, an electron) is described by a superposition of two given states, say, spin in x-direction equal +1/2 and spin in x-direction equal -1/2, and we let it interact with a measuring apparatus that couples to these states, the final quantum state of the composite will be a sum of two components [that is to say, a macroscopic superposition, which is of course never seen!], one in which the apparatus has coupled to (has registered) x-spin = +1/2, and one in which the apparatus has coupled to (has registered) x-spin = -1/2...

[D]ecoherence as such does not provide a solution to the measurement problem, at least not unless it is combined with an appropriate interpretation of the theory (whether this be one that attempts to solve the measurement problem, such as Bohm, Everett or GRW; or one that attempts to dissolve it, such as various versions of the Copenhagen interpretation). Some of the main workers in the field such as Zeh (2000) and (perhaps) Zurek (1998) suggest that decoherence is most naturally understood in terms of Everett-like interpretations.

Maximilian Schlosshauer situates the problem of measurement in the context of the so-called "quantum-to-classical transition," namely the question of exactly how deterministic classical behavior emerges from the indeterministic microscopic quantum world.
In this section, we shall describe the (in)famous measurement problem of quantum mechanics that we have already referred to in several places in the text. The choice of the term "measurement problem" has purely historical reasons: Certain foundational issues associated with the measurement problem were first illustrated in the context of a quantum-mechanical description of a measuring apparatus interacting with a system.

However, one may regard the term "measurement problem" as implying too narrow a scope, chiefly for the following two reasons. First, as we shall see below, the measurement problem is composed of three distinct issues, so it would make sense to rather speak of measurement problems. Second, quantum measurement and the arising foundational problems are but a special case of the more general problem of the quantum-to-classical transition, i.e., the question of how effectively classical systems and properties around us emerge from the underlying quantum domain.

On the one hand, then, the problem of the quantum-to-classical transition has a much broader scope than the issue of quantum measurement in the literal sense. On the other hand, however, many interactions between physical systems can be viewed as measurement-like interactions. For example, light scattering off an object carries away information about the position of the object, and it is in this sense that we thus may view these incident photons as a "measuring device." Such ubiquitous measurement-like interactions lie at the heart of the explanation of the quantum-to-classical transition by means of decoherence. Measurement, in the more general sense, thus retains its paramount importance also in the broader context of the quantum-to-classical transition, which in turn motivates us not to abandon the term "measurement problem" altogether in favor of the more general "problem of the quantum-to-classical transition."

As indicated above, the measurement problem (and the problem of the quantum-to-classical transition) is composed of three parts, all of which we shall describe in more detail in the following:

1. The problem of the preferred basis (Sect. 2.5.2). What singles out the preferred physical quantities in nature—e.g., why are physical systems usually observed to be in definite positions rather than in superpositions of positions?

2. The problem of the nonobservability of interference (Sect. 2.5.3). Why is it so difficult to observe quantum interference effects, especially on macroscopic scales?

3. The problem of outcomes (Sect. 2.5.4). Why do measurements have outcomes at all, and what selects a particular outcome among the different possibilities described by the quantum probability distribution? [The answer (since Einstein, 1916) is chance.]

Familiarity with these problems will turn out to be important for a proper understanding of the scope, achievements, and implications of decoherence. To anticipate, it is fair to conclude that decoherence has essentially resolved the first two problems. Since these problems and their resolution can be formulated in purely operational terms within the standard formalism of quantum mechanics, the role played by decoherence in addressing these two issues is rather undisputed.

By contrast, the success of decoherence in tackling the third issue — the problem of outcomes — remains a matter of debate, in particular, because this issue is almost inextricably linked to the choice of a specific interpretation of quantum mechanics (which mostly boils down to a matter of personal preference). In fact, most of the overly optimistic or pessimistic statements about the ability of decoherence to solve "the" measurement problem can be traced back to a misunderstanding of the scope that a standard quantum effect such as decoherence may have in resolving the more interpretive problem of outcomes.
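To see concretely what "resolving" the second problem amounts to, here is a minimal numerical sketch (our illustration, not Schlosshauer's; plain Python with numpy) of a standard toy model: a qubit in an equal superposition couples weakly to N environment qubits, and the interference (off-diagonal) term of its reduced density matrix is damped by the overlap of the two environment states, (cos θ)^N, which shrinks rapidly as the environment grows.

    import numpy as np

    # Toy decoherence model: the system qubit starts in (|0> + |1>)/sqrt(2);
    # each of N environment qubits starts in |0> and is rotated by a small
    # angle theta only if the system is in |1>.  The two environment
    # "records" then have overlap <E0|E1> = (cos theta)^N, which multiplies
    # the system's off-diagonal (interference) term once the environment
    # is traced out.

    def reduced_density_matrix(N, theta):
        overlap = np.cos(theta) ** N      # <E0|E1> for N environment qubits
        return 0.5 * np.array([[1.0, overlap],
                               [overlap, 1.0]])

    for N in (0, 10, 100, 1000):
        rho = reduced_density_matrix(N, theta=0.1)
        print(f"N = {N:4d}   interference term = {rho[0, 1]:.6f}")

Note that the diagonal (outcome) probabilities are untouched; only the interference term decays. This is precisely why decoherence addresses the first two problems but not, by itself, the problem of outcomes.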

The main concern of the decoherence theorists, then, is to recover a deterministic picture of quantum mechanics that would allow them to predict the outcome of a particular experiment. They have what William James called an "antipathy to chance."

Max Tegmark and John Wheeler made this clear in a 2001 article in Scientific American:

The discovery of decoherence, combined with the ever more elaborate experimental demonstrations of quantum weirdness, has caused a noticeable shift in the views of physicists. The main motivation for introducing the notion of wave-function collapse had been to explain why experiments produced specific outcomes and not strange superpositions of outcomes. Now much of that motivation is gone. Moreover, it is embarrassing that nobody has provided a testable deterministic equation specifying precisely when the mysterious collapse is supposed to occur.

H. Dieter Zeh, the founder of the "decoherence program," identifies the measurement problem with the appearance of a macroscopic entangled superposition of all possible measurement outcomes:

Because of the dynamical superposition principle, an initial superposition Σ cₙ|n⟩ does not lead to definite pointer positions (with their empirically observed frequencies). If decoherence is neglected, one obtains their entangled superposition Σ cₙ|n⟩|Ψₙ⟩, that is, a state that is different from all potential measurement outcomes |n⟩|Ψₙ⟩. This dilemma represents the "quantum measurement problem" to be discussed in Sect. 2.3. Von Neumann's interaction is nonetheless regarded as the first step of a measurement (a "pre-measurement"). Yet, a collapse seems still to be required - now in the measurement device rather than in the microscopic system. Because of the entanglement between system and apparatus, it would then affect the total system.
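Zeh's formal claim can be checked directly. The following sketch (our illustration, not from Zeh's text; plain Python with numpy) builds the entangled superposition Σ cₙ|n⟩|Ψₙ⟩ for a two-outcome toy model and confirms that it differs from every individual outcome |n⟩|Ψₙ⟩, and also, as a total state, from a statistical mixture of those outcomes, even though superposition and mixture assign identical statistics to the system alone.

    import numpy as np

    ket = np.eye(2)                            # ket[:, n] is the basis state |n>
    c = np.array([1.0, 1.0]) / np.sqrt(2.0)    # amplitudes c_n

    def outcome(n):
        # The product state |n>|Psi_n>; the pointer state |Psi_n> is modeled
        # here as |n> of a second qubit standing in for a macroscopic pointer.
        return np.kron(ket[:, n], ket[:, n])

    # Entangled superposition  sum_n c_n |n>|Psi_n>  -- a single pure state
    psi = sum(c[n] * outcome(n) for n in range(2))

    # It differs from every potential outcome: each overlap is only |c_n| < 1
    print([abs(outcome(n) @ psi) for n in range(2)])   # [0.707..., 0.707...]

    # Total density matrices: superposition vs. ignorance ("ensemble") mixture
    rho_sup = np.outer(psi, psi)
    rho_ens = sum(abs(c[n]) ** 2 * np.outer(outcome(n), outcome(n))
                  for n in range(2))
    print(np.allclose(rho_sup, rho_ens))       # False: they differ globally

    # Yet the system alone cannot tell them apart: partial traces coincide
    def reduced_system(rho):
        # Trace out the pointer; index order after reshape is (s, p, s', p')
        return np.einsum('spqp->sq', rho.reshape(2, 2, 2, 2))

    print(np.allclose(reduced_system(rho_sup), reduced_system(rho_ens)))  # True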
Zeh continues:
2.3 The Measurement Problem
The superposition of different measurement outcomes, resulting according to a Schrödinger equation when applied to the total system (as discussed above), demonstrates that a "naive ensemble interpretation" of quantum mechanics in terms of incomplete knowledge is ruled out.
It's not clear why the standard ensemble interpretation is "ruled out," but Zeh offers a solution, which is to deny the projection postulate of standard quantum mechanics and to adopt an unconventional interpretation on which wave-function collapses are only "apparent":
A way out of this dilemma within quantum mechanical concepts requires one of two possibilities: a modification of the Schrödinger equation that explicitly describes a collapse (also called "spontaneous localization" - see Chap. 8), or an Everett-type interpretation, in which all measurement outcomes are assumed to exist in one formal superposition, but to be perceived separately as a consequence of their dynamical autonomy resulting from decoherence.
It was John Bell who called Everett's Many-Worlds Interpretation "extravagant."
While this latter suggestion has been called "extravagant" (as it requires myriads of co-existing quasi-classical "worlds"), it is similar in principle to the conventional (though nontrivial) assumption, made tacitly in all classical descriptions of observation, that consciousness is localized in certain semi-stable and sufficiently complex subsystems (such as human brains or parts thereof) of a much larger external world.
Everett rejected the intuitively simple collapse of multiple possibilities to one actual situation. Instead he proposed the instantaneous creation of multiple universes, each with all the matter and energy of the observable universe. Surely his Many Worlds was the most absurd anti-Occam proposal ever made!
Occam's razor, often applied to the "other worlds", is a dangerous instrument: philosophers of the past used it to deny the existence of the interior of stars or of the back side of the moon, for example. So it appears worth mentioning at this point that environmental decoherence, derived by tracing out unobserved variables from a universal wave function, readily describes precisely the apparently observed "quantum jumps" or "collapse events" (as will be discussed in great detail in various parts of this book).
Jeffrey Bub worked with David Bohm to develop Bohm's theory of "hidden variables." They hoped their theory might provide a deterministic basis for quantum theory and support Albert Einstein's view of a physical world that exists independent of our observations of it. The standard theory of quantum mechanics is irreducibly statistical and indeterministic, a consequence of the collapse of the wave function when the many possibilities for the physical outcome of an experiment reduce to a single actual outcome. In Interpreting the Quantum World, Bub writes:

This is a book about the interpretation of quantum mechanics, and about the measurement problem. The conceptual entanglements of the measurement problem have their source in the orthodox interpretation of 'entangled' states that arise in quantum mechanical measurement processes...

All standard treatments of quantum mechanics take an observable as having a determinate value if the quantum state is an eigenstate of that observable. If the state is not an eigenstate of the observable, no determinate value is attributed to the observable. This principle - sometimes called the 'eigenvalue-eigenstate link' - is explicitly endorsed by Dirac (1958, pp. 46-7) and von Neumann (1955, p. 253), and clearly identified as the 'usual' view by Einstein, Podolsky, and Rosen (1935) in their classic argument for the incompleteness of quantum mechanics (see chapter 2). Since the dynamics of quantum mechanics described by Schrödinger's time-dependent equation of motion is linear, it follows immediately from this orthodox interpretation principle that, after an interaction between two quantum mechanical systems that can be interpreted as a measurement by one system on the other, the state of the composite system is not an eigenstate of the observable measured in the interaction, and not an eigenstate of the indicator observable functioning as a 'pointer.' So, on the orthodox interpretation, neither the measured observable nor the pointer reading have determinate values, after a suitable interaction that correlates pointer readings with values of the measured observable. This is the measurement problem of quantum mechanics.
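Bub's linearity argument can be verified in a few lines. The sketch below (our toy stand-in, not Bub's own formalism; plain Python with numpy) uses a CNOT gate as the measurement interaction, copying the system's z-basis value into a one-qubit "pointer," and then checks that the post-interaction composite state is not an eigenstate of the pointer observable; by the eigenvalue-eigenstate link, the pointer therefore has no determinate reading.

    import numpy as np

    # Measurement coupling: a CNOT gate, which flips the pointer qubit iff
    # the system qubit is |1>.  A cartoon of a measuring apparatus, enough
    # to test the eigenvalue-eigenstate link.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    ket0, ket1 = np.eye(2)
    alpha, beta = 0.6, 0.8                    # any superposition, a^2 + b^2 = 1

    initial = np.kron(alpha * ket0 + beta * ket1, ket0)  # (a|0> + b|1>) |ready>
    final = CNOT @ initial                               # = a|00> + b|11>

    # Pointer observable: sigma_z on the apparatus qubit
    pointer = np.kron(np.eye(2), np.diag([1.0, -1.0]))

    # Eigenstate test: pointer @ final should be a multiple of final
    v = pointer @ final
    lam = final @ v                           # best candidate eigenvalue, <Z_p>
    print(np.allclose(v, lam * final))        # False: no determinate reading

    # Each definite outcome, by contrast, IS an eigenstate of the pointer
    outcome0 = np.kron(ket0, ket0)
    print(np.allclose(pointer @ outcome0, outcome0))     # True (eigenvalue +1)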

References

Guido Bacciagaluppi, "The Role of Decoherence in Quantum Mechanics," Stanford Encyclopedia of Philosophy, first published November 3, 2003; substantive revision April 16, 2012.

Jeffrey Bub, Interpreting the Quantum World. Cambridge University Press, 1997, p. 2.

Adriana Daneri, A. Loinger, and G. M. Prosperi, Nuclear Physics, 33 (1962), pp. 297-319. (W&Z, p. 657)

Erich Joos, H. Dieter Zeh, et al., Decoherence and the Appearance of a Classical World in Quantum Theory. Springer, 2010.

Günter Ludwig, Zeitschrift für Physik, 135 (1953), p. 483.

Maximilian Schlosshauer, Decoherence and the Quantum-to-Classical Transition. Springer, 2007, pp. 49-50.

Leo Szilard, Behavioral Science, 9 (1964), pp. 301-10. (W&Z, p. 539)

Max Tegmark and John Wheeler, "100 Years of Quantum Mysteries," Scientific American, February 2001, pp. 68-75.

John von Neumann, Mathematical Foundations of Quantum Mechanics. Princeton, NJ: Princeton University Press, 1955, pp. 347-445. (W&Z, p. 549)

John Wheeler and Wojciech Zurek, eds., Quantum Theory and Measurement. Princeton, NJ: Princeton University Press, 1983. (= W&Z)

Eugene Wigner, "The Problem of Measurement," in Symmetries and Reflections. Bloomington, IN: Indiana University Press, 1967, pp. 153-70. (W&Z, p. 324)

