Empirical evidence directly implies that the uniqueness of perceptions has to be subjective, not due to some objective mechanism
Four years ago, I discussed the problems of the Ghirardi-Rimini-Weber (GRW 1986) "interpretation" or modification of quantum mechanics. In my opinion, it's the most well-defined realist "interpretation" of quantum mechanics out there. That is also why the proof that it contradicts the empirical evidence can be carried out most rigorously.
Here, I want to review those arguments of mine, rephrase them in terms of the "subjectivity or objectivity of unique human perceptions", place the brain cells at the center, and show that the same conclusion – the uniqueness of the perceptions cannot be objective – applies universally. No "interpretation" or modification of quantum mechanics in which the perceptions are objectively unique is compatible with the empirical evidence.
Quantum mechanics says that the wave function is evolving into complex superpositions of all conceivable states, including macroscopically distinct states. In proper, Copenhagen-like interpretations, the wave function is a template to calculate probability distributions and can't be understood as a "real" object, a variation of a classical field. Quantum mechanics says that no "real" state of affairs exists prior to the measurements.
Some people want to view the basic framework of classical physics as an eternally valid principle, a dogma, so they want to believe that "something" must be objectively well-defined even prior to the measurement. Because the wave function (or the density matrix) is apparently needed to make the right predictions, they generally assume that in one form or another, the information in the wave function (or density matrix) is a collection of classical degrees of freedom that should be matched to "what we see". Different realist "interpretations" may add some extra degrees of freedom, extra universes etc. but the wave function is a subset of the objective information at a given moment.
An immediate problem is that unlike the wave function, the reality doesn't "spread" indefinitely. There are no "dead and alive cats" – you know what I mean. Quantum mechanics with the proper, intrinsically probabilistic interpretation doesn't have any problem with this fact: the predictions are probabilities and if there's a probability \(P\) for the cat to be dead and \(1-P\) that it is alive, the prediction implicitly says that it cannot be "dead and alive" at the same moment. It is either dead or alive. These two options are mutually exclusive. We just don't know what the outcome is going to be.
Those who think that they may assume "realism" don't like it. So the wave function does evolve into ever more complex and diluted superpositions and because the wave function describes the "shape of objects" in some sense, they simply have to tame this evolution. Any realist "interpretation" needs some modification of the standard unitary evolution in order to make the underlying wave function "look" like the objects we see, objects that are not becoming ever more diluted.
The Bohmian-de Broglie pilot wave theory needs a mechanism to prepare the "pilot wave" in a quasi-localized form after each measurement – although this missing mechanism is almost never discussed (even though it's clearly necessary for the theory to claim that it has replaced the standard probabilistic "measurement theory" by something else). The many worlds "interpretation" has to objectively split the world at some moment, so that the "diluted" wave function is divided into pieces and each of them becomes "less diluted" and occupies a separate "classical universe".
Similarly and most explicitly, the GRW collapse theory introduces "collapse events" that make the wave function look more "classical" or less "diluted". In particular, at random moments – on average once per period \(T\), chosen to be \(T=10^{15}\,{\rm sec}\) – each particle "measures itself" and its coordinate becomes more classical. In particular, the density matrix is transformed according to their equation (2.10)\[
\bra{q}\rho_{\rm new}\ket{Q}= \bra{q}\rho_{\rm old}\ket{Q}\cdot e^{-(q-Q)^2/ 4R^2}
\] where \(R=\sqrt{1/\alpha}\approx 10^{-7}\,{\rm m}\) is a distance scale. You see that this operation suppresses off-diagonal elements of the density matrix which are too far from the "classical" \(q=Q\) diagonal. One may either do the transformation above which still allows very different values of \(q\approx Q\) to be represented after the collapse (in that case, one still needs a probabilistic interpretation of \(\rho\) and the whole addition is pretty much meaningless); or one may combine the step with an objective localization of \(q\approx Q\) around a random point (given by their distribution before the collapse), too.
Both versions of the theory lead to undesirable "kicks" that GRW apparently ignore. You know, if \(\rho\) is multiplied by something like a Gaussian wave packet in the \(x\)-representation, you may trace what happens with it in the \(p\)-representation, too. There, it gets convolved with some Gaussians. And the width of those Gaussians in the \(p\)-representation is inversely proportional to the width in the position space.
It really means that the "collapse" operation gives a kick to the momentum \(p\) of the particle with\[
\Delta p \approx \frac{\hbar}{R}
\] If \(R\) is \(0.1\) micron as we said previously, then the momentum kick \(\Delta p\) is comparable to 1 electronvolt over \(c\). This is a small momentum for a particle physicist but more than enough to create havoc that obviously contradicts the experimental evidence. If electrons were getting these far-from-negligible kicks, we would easily see it.
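A quick back-of-the-envelope check of this number, using nothing but standard constants and the GRW value of \(R\):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # electronvolt, J
R = 1e-7                 # GRW localization width, m

dp = hbar / R            # momentum kick from one collapse, kg*m/s
print(dp / (eV / c))     # about 2: comparable to 1 eV/c, as claimed
```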
For example, the Cooper pairs, electron pairs at the root of superconductivity, have a size (the "BCS coherence length") comparable to a micron, so they are somewhat larger than \(R\). This large size of the bound state is caused by the weakness of the force that holds the electrons together – the exchange of phonons. If the GRW localization existed, it would break Cooper pairs at a rate of roughly once per \(10^{15}\) seconds per electron, multiplied by the number of electrons in all the Cooper pairs. In a kilogram of matter (a superconductor), a billion Cooper pairs would be broken each second. You could easily see it. It would really destroy the superconductivity.
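The order of magnitude of the pair-breaking rate follows from a one-line estimate. The number of paired electrons per kilogram, \(\sim 10^{24}\), is my own rough assumption for the sake of illustration:

```python
N_paired = 1e24           # assumed electrons bound in Cooper pairs per kg (rough)
rate_per_particle = 1e-15 # GRW collapses per second per particle, i.e. 1/T

breaks_per_second = N_paired * rate_per_particle
print(breaks_per_second)  # 1e9: about a billion broken pairs each second
```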
In a similar way, these "kicks" would produce extra flashes constantly coming from crystals. Note that if you change \(p\) of a particle by \(\Delta p\) in a random direction, you change its kinetic energy \(p^2/2m\) by \((\Delta p)^2 / 2m\) on average if the initial momentum was very small, and by the much larger \(p\cdot \Delta p / m\) if \(p\gg \Delta p\). For electrons, whose \(m\) is rather small, this energy change is still potentially detectable.
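For an initially slow electron, the energy gained per kick is tiny but nonzero; a sketch with standard constants:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # electronvolt, J
R = 1e-7                 # GRW localization width, m

dp = hbar / R            # momentum kick from one collapse
dE = dp**2 / (2 * m_e)   # kinetic energy gained by an initially slow electron
print(dE / eV)           # about 4e-6: a few micro-electronvolts per kick
```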
If you look at several "clean types of materials" experimentally, the conclusion is clear: such "kicks" or modifications don't exist and can't exist.
In other words, the wave function clearly does spread, just like exact quantum mechanics says, and the "localized" results of measurements are due to the probabilistic interpretation of the wave function, not due to some modification of the evolution!
On the other hand, you may see that every realist theory demands modifications that change \(\Delta p\) by at least an amount comparable to the value chosen by GRW, and at least as often as they say – once per \(10^{15}\) seconds for one particle. Once you understand and accept this point, you will have verified the contradiction: realist theories simply contradict experiments. They predict effects that are observed not to exist.
Why do we need the localization to approximately \(R\approx 0.1\) microns (or shorter) each approximately \(T\approx 10^{15}\) seconds per particle (or more frequently)? This is where the brain enters the discussion.
You know, a defender of GRW – or another realist program – could argue that these unwanted effects may be made invisible either by sending the frequency of the "collapses" to zero, i.e. \(T\to \infty\), or by making the localization distance nearly infinite (which means no change of the wave function at all), \(R\to \infty\). In these two limits, or in some combination of the two, the evolution of the density matrix reduces to the exact quantum mechanical equations.
It's good because all the unwanted new effects almost disappear. But it's bad because the wanted effect – the objective localization – disappears, too. ;-) Perhaps you may allow some objects to evolve into linear superpositions but the strange Schrödinger-cat-like superpositions should be avoided in some contexts where the realist folks are "sure" that we can't have them. What are they?
Well, the most important ones where they "know" that the superpositions don't exist are the perceptions. You either feel that you have seen a dead cat; or you feel that you have seen an alive cat. So your brain's perceptions have to be well-defined and a collapse must occur in time. But the brain's perceptions depend on locations of electrons in neurons. Some currents are running through your brain and the exact Schrödinger's equation implies that the wave function evolves into a superposition of neurons that feel one thing "as well as" another thing.
Regions of the brain comparable to a micron already carry some information and they may distinguish different feelings (think about small organisms and admit that they may feel things just like we do). And the states of the brain corresponding to different feelings differ in electron positions by a micron, too. And we may be sure about a sharp feeling every second (or more frequently).
If you combine these things, you will see that you need at least one collapse per cubic micron each second; and the precision with which the electron becomes localized must be around a micron or better, too. If either of the two conditions is violated, the piece of the brain will evolve into superpositions of different feelings, so the apparently well-defined perceptions you experience cannot be explained by an objective form of the wave function!
Conclusion
That's why some "localization process" with parameters (frequency, size of the new packet) similar to those in the GRW paper has to exist if your well-defined, unambiguous perceptions each second are to be explained by an objective form of the wave function – or by a more general set of objective degrees of freedom that contains the wave function.
But as I said, the same "extra interventions" also imply unwanted (unobserved) effects such as the destruction of Cooper pairs in superconductors or additional flashes in crystals. So these extra interventions, strong enough to ban the superpositions of "different feelings" of groups of neurons, are falsified by experiments dealing with silent materials.
You just can't have both – the clean evolution of superconductors and crystals and the "unambiguous" or "localized" feelings in the brain. Experiments show that "interventions" such as the GRW collapses – deviations from the exact Schrödinger's equation – don't exist because we would have already observed many of their effects. Experiments imply that either they don't exist at all (exact quantum mechanics) or they're slow or weak enough that they can't prevent the relevant pieces of the brain from evolving into Schrödinger's-cat-like superpositions.
If that's so, the whole point of realist interpretations is a failure, anyway, because we would still need the quantum statistical interpretation of the wave function, even for the perceptions of our brains.
You may present my arguments above in terms of the Wigner's friend thought experiment. Even though Wigner's friend subjectively thinks that he only has unambiguous perceptions when he observes something in the box, Wigner himself must correctly describe his friend in terms of a wave function that does evolve into superpositions of different perceptions of Wigner's friend. If Wigner did something else and included some extra "localization/decision" mechanisms that make Wigner's friend's perceptions objectively unambiguous, the same sufficiently strong effects would also cripple the superconductivity of superconductors and spoil the silence of the crystals by introducing new "noise" which neither Wigner nor anyone else has ever observed.
The wave function must therefore describe the subjective knowledge and subjective probabilities – Wigner's friend uses a wave function that collapses when he feels or learns something, but Wigner only changes his wave function later, when Wigner himself feels or learns something – and Schrödinger's equation must be allowed to "dilute" and "spread" the wave function (and allow all the "counterintuitive" superpositions) without any strong enough effects that would matter for the interpretation of the theory.
And that's the memo.
Silence of matter rules out realist "interpretations"
Reviewed by DAL on May 30, 2015