Philip Ball wrote a long aeon.co essay titled Quantum Common Sense which argues that quantum mechanics isn't really "weird" and everything that was mysterious was basically explained... by decoherence.
I sympathize with the text to some extent – and that extent could have approached 100% some 25 years ago. In particular, decoherence is a legitimate insight and quantum mechanics isn't weird when you look at it calmly. But there are lots of claims that Ball makes that I heavily disagree with, too.
OK, what is decoherence? And, equally importantly, what isn't it? Well, starting with the positive things, decoherence is a process that
- allows one to calculate at what "point" in the parameter space, classical physics (gradually) becomes a decent approximate theory for a given physical system
- puts severe constraints on the possible "basis of states" that may arise as "the states" after a measurement
- eliminates the physical visibility of the complex phases in the probability amplitudes, so that the probability amplitudes may effectively be replaced with their absolute values
Decoherence is an effective process – perhaps a gedanken process – which is irreversible and erases the information about the relative complex phases. You start with an observed physical system, like a cat. Decoherence will ultimately pick "dead" and "alive" as the preferred basis. The key observation is that the cat interacts with some "environmental" degrees of freedom, which I will take to be air in order to be would-be witty, so after some time, the basis vectors of the cat, "dead" and "alive", along with the state of the air evolve in specific ways.
What is the evolution of the dead-or-alive cat's basis vectors?\[
U: \,\, \ket{\rm alive} \otimes \ket{\rm odorless} \to \ket{\rm alive} \otimes \ket{\rm odorless}\\
U: \,\, \ket{\rm dead} \otimes \ket{\rm odorless} \to \ket{\rm dead} \otimes \ket{\rm stinky}
\] You see that air surrounding the cat (in Schrödinger's lethal box) was odorless to start with but it becomes stinky if the cat has died. You should appreciate that the evolution rules above don't have to be "postulated". In a microscopic theory including the quantum mechanics of biological systems, you can derive these rules. When a cat is dead, the circulation stops, the blood no longer takes some chemicals away from the skin, and bacteria find it easy to devour the dead cat.
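For readers who prefer code, the entangling map above can be written down explicitly. The following is my own toy sketch in NumPy – the two-state "cat" and "air" spaces and the CNOT-like matrix are illustrative assumptions, not a microscopic derivation:

```python
import numpy as np

# Toy bases: |alive> = (1,0), |dead> = (0,1); similarly for the air,
# |odorless> = (1,0), |stinky> = (0,1).  Combined ordering via np.kron:
# index 0 = |alive,odorless>, 1 = |alive,stinky>,
#       2 = |dead,odorless>,  3 = |dead,stinky>.
alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])
odorless, stinky = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# U flips the air state exactly when the cat is dead (a CNOT-like gate):
U = np.eye(4)
U[[2, 3]] = U[[3, 2]]                     # swap |dead,odorless> <-> |dead,stinky>
assert np.allclose(U @ U.T, np.eye(4))    # U is unitary (a permutation matrix)

# Act on a generic cat a|alive> + b|dead> with initially odorless air:
a, b = 0.6, 0.8j
psi_final = U @ np.kron(a * alive + b * dead, odorless)

# Linearity of U produces the entangled a|alive,odorless> + b|dead,stinky>:
expected = a * np.kron(alive, odorless) + b * np.kron(dead, stinky)
assert np.allclose(psi_final, expected)
```

Note that the generic superposition ends up entangled with the air even though the two rules above only mention the basis states – that is just the linearity of the unitary evolution.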
The number of stinky molecules in the air remains "low" when the cat is alive and it becomes "high" when the cat is dead. The corresponding states of the air – "low" i.e. odorless and "high" i.e. stinky – are orthogonal to each other. Imagine that we won't ever "smell" the environment again, so all predictions for observations of the cat itself may also be obtained from the density matrix that only describes the cat's own degrees of freedom, not the air.
It means that you may compute the reduced density matrix for the cat. And because the "odorless" and "stinky" states of the air are orthogonal to each other, this reduced density matrix will be diagonal. If the pure vector describing the initial state of the cat was\[
\ket{\psi}_{\rm initial} = a \ket{\rm alive} + b\ket{\rm dead},
\] the final state's density matrix – after you traced over the environment – will be\[
\rho_{\rm final,cat} = {\rm diag} ( |a|^2 , |b|^2 ).
\] Note that at least for \(|a|^2\neq |b|^2\), the density matrix is only diagonal in this particular basis, not in others. So the process of the interaction with the air – the environment – has picked a preferred basis. The diagonal entries of the density matrix may be interpreted in the very same way as the classical probabilities \(P_{\rm alive},P_{\rm dead}\) – probabilities that we use when we throw dice or when we calculate classical statistical physics of atoms. The phases of the complex numbers \(a,b\in\CC\) have become unobservable, even the relative phase.
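The partial trace that produces this diagonal matrix is a two-line computation. Here is a minimal NumPy sketch of it, continuing my toy example (the amplitudes a, b are arbitrary illustrative values):

```python
import numpy as np

a, b = 0.6, 0.8j            # amplitudes of |alive> and |dead>, |a|^2+|b|^2 = 1
# Entangled post-interaction state a|alive,odorless> + b|dead,stinky>
# in the basis {alive,dead} (x) {odorless,stinky}:
psi = np.array([a, 0, 0, b])

# Full density matrix of cat (x) air, reshaped to expose the two factors:
# rho_full[i_cat, i_air, j_cat, j_air]
rho_full = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

# Partial trace over the air: sum over the paired environment index.
rho_cat = np.einsum('ikjk->ij', rho_full)

# The off-diagonal terms a*conj(b) etc. have been killed by the
# orthogonality of |odorless> and |stinky>; only |a|^2, |b|^2 remain.
assert np.allclose(rho_cat, np.diag([abs(a)**2, abs(b)**2]))
```

If you replace `stinky` by a state that is not orthogonal to `odorless`, the off-diagonal elements survive with the magnitude of that overlap, which is exactly the point of the quantitative discussion below.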
This was a trivial sketch. When you actually calculate decoherence, you must consider how quickly the environment evolves to its characteristic final states (which know something about the state of the observed object), and you must realize that they're not exactly orthogonal to each other. In practice, the observed system – a cat or its generalization – imprints itself roughly into \(\exp(t/t_0)\) "degrees of freedom" in the environment, because their influence on others grows like an exponential avalanche. But the environment's states aren't exactly orthogonal to each other. Instead, the (squared absolute value of the) inner product is some \(\exp(-B)\) per degree of freedom. Raise this probability to the power equal to the number of degrees of freedom, \(\exp(t/t_0)\), and you will determine that the off-diagonal elements of the reduced density matrix go down like \[
\rho_{\rm alive,dead}(t)\sim \exp[-B \exp(t/t_0)].
\] For the sake of clarity, I have kept the notation from the dead-or-alive cat's example. I am sure that you can generalize the labels. OK: It's an exponential of an exponential so it decreases much faster than an exponential. In practice, the dimensionless number \(B\) won't be too different from "a number of order one" so the realistic calculation of the "speed of decoherence" will mostly depend on the timescale \(t_0\). The more "air" or environment surrounds the cat, or the more strongly this environment interacts with the cat, the shorter the timescale \(t_0\) will be. At times \(t\gg t_0\), you may assume that the off-diagonal elements of the density matrix are basically zero.
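To get a feeling for how violently this double exponential suppresses the off-diagonal element, one may tabulate it numerically; the order-one values \(B = t_0 = 1\) below are my own illustrative choices, not a realistic calculation:

```python
import math

B, t0 = 1.0, 1.0   # illustrative order-one constants

def off_diagonal(t):
    """|rho_{alive,dead}(t)| ~ exp[-B exp(t/t0)]."""
    return math.exp(-B * math.exp(t / t0))

# Compare the double exponential with a plain exponential decay:
for t in [0.0, 1.0, 2.0, 3.0]:
    print(f"t = {t:.0f} t0:  double exp {off_diagonal(t):.2e}"
          f"   vs plain exp {math.exp(-t / t0):.2e}")
```

At \(t = 3t_0\), a plain exponential has only decayed to about 5%, while the double exponential is already at the \(10^{-9}\) level – which is why "basically zero" is a safe assumption for \(t\gg t_0\).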
I sincerely hope that some people have just learned what decoherence is for the first time in their life by reading the text above!
OK, now the negative comments. Why doesn't decoherence actually erase all the things that some people consider mysterious, especially the dependence on the observers?
- Decoherence is just an emergent, approximate "process" that depends on the separation of the degrees of freedom into the observed ones and the environment. Well, only an observer may separate things into those that are observed and those that are not – the environment! In this sense, the very assumptions of the decoherence calculation require an observer – who knows what is and what isn't observed (what isn't observable by him in the future may be counted as the environment).
- The selection of the preferred basis is "mostly unique" or "basically unique" but it isn't quite unique and it isn't guaranteed to be unique, and one may design examples in which it demonstrably isn't unique.
- Most importantly, the final result of the decoherence calculation was a diagonal density matrix with nonzero entries \(|a|^2,|b|^2\) on the diagonal. The final result was not \({\rm diag}(1,0)\) or \({\rm diag}(0,1)\). So decoherence hasn't actually picked any "actual outcome" from the basis i.e. from the list of possible outcomes of the measurements.
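The second bullet point, the possible non-uniqueness of the preferred basis, is easiest to see in the extreme case \(|a|^2 = |b|^2 = 1/2\): the decohered density matrix is proportional to the identity and is therefore diagonal in every basis. A quick numerical check of my own:

```python
import numpy as np

# Decohered cat with equal probabilities: rho = diag(1/2, 1/2).
rho = np.diag([0.5, 0.5])

# Rotate to a completely different basis, e.g. |alive +- dead>/sqrt(2):
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rho_rotated = R @ rho @ R.T

# rho is proportional to the identity, so it is not just diagonal but
# literally unchanged in the rotated basis: no basis is preferred.
assert np.allclose(rho_rotated, rho)

# Contrast with unequal probabilities, where the basis IS picked out:
rho_unequal = np.diag([0.36, 0.64])
assert not np.allclose(R @ rho_unequal @ R.T, rho_unequal)
```

So whenever the diagonal entries degenerate, decoherence alone cannot tell you which set of "options" the measurement is choosing from.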
Also, the observer is still needed for the measurement to end. Decoherence doesn't replace the measurement – the pick of one of the options (either "dead" or "alive" in my example). And to a limited extent, the observer is still needed in between because even the basis in which the density matrix is diagonal for a given choice of the environment may refuse to be unique. In such cases or to this extent, an observer is still needed to determine what observable he wants to measure – what the basis of options should be.
If I have to pick the most incorrect representative phrase from Ball's article, it would be this sentence written in a large font:
We don’t need a conscious mind to measure or look. With or without us, the Universe is always looking.

Sorry, but if you read the description above, you should be able to understand that the Universe cannot be looking by itself. Nothing is collapsing without the observer. The collapse of the cat to "dead" or "alive" i.e. the setting of the reduced density matrix to \({\rm diag}(1,0)\) or \({\rm diag}(0,1)\) is the "philosophically irritating stage" that must be done after decoherence. And even the previous erasure of the off-diagonal matrix entries only happens "approximately" because in the strictly exact description, the information about the relative phases is never completely lost.
When you look rationally at decoherence, you will realize that it changes nothing about the observer dependence of quantum mechanics. It's really just a framework to figure out when the classical description becomes OK enough. But the quantum description is always exactly accurate, on both sides of this cut.
There is also one amusing historical fact that destroys the idea that "decoherence erases the need for minds in quantum mechanics". The first paper that really introduced the mathematical operations behind "decoherence" was written by (no, it wasn't Wojciech Zurek!) H. Dieter Zeh in 1970. Here is the actual damn paper. As you can see on the Wikipedia page about Zeh, he not only refused to eliminate minds from quantum mechanics. He has multiplied them because he's been a proponent of the many-minds interpretation of quantum mechanics. Funny. Maybe decoherence isn't such a staunch "killer of the minds", after all.
Incidentally, while I prefer when people avoid the "interpretation talk" altogether, "many-minds interpretation" is surely the kind of ideology that is being attacked by all the Marxist inkspillers in the media etc. When they see a "mind", they have a hissy fit. But this "interpretation" is nothing else than the "minimum fix" needed to be applied to Everett's "many worlds interpretation" so that the modified "interpretation" becomes at least morally correct. Needless to say, the "fix" is basically equivalent to going back to the Copenhagen interpretation.
I think that if we fairly look at the actual beliefs of the folks such as Niels Bohr, they knew the "qualitative message" of this blog post – i.e. of decoherence – even if they never coined the words and formalisms for it. They knew that when things become complicated, the relative phases just become impossible to predict or trace, and then the quantum mechanical predictions become qualitatively indistinguishable from predictions in classical statistical physics: They may predict probabilities for the elements of a preferred list of possible outcomes while the measurement of non-commuting observables effectively disappears, along with all the relevance of the quantum phases. Bohr surely thought about these procedures but didn't go far enough to explicitly discuss the environment. But he did offer the correct final answers. In particular, the "Bohr correspondence principle" revealed that when quantum numbers such as \(n\) become large, the atom etc. starts to behave as in classical physics.
From this broader viewpoint, and especially if you're not really interested in the calculation of the "gradual disappearance of coherence", decoherence may be said to be much ado about nothing. It changes nothing about the postulates of quantum mechanics. Instead, it is just an approximate way to organize certain calculations and to focus on certain questions, and to use a trick that enables a pseudo-classical procedure to calculate the relevant predictions. But there's always the same quantum mechanics – observer-dependent quantum mechanics – underlying all these calculations.
Decoherence doesn't make observers unnecessary
Reviewed by DAL on June 23, 2017