There are no 't Hooft's ontological bases

Fred Singer informed me about a huge, 202-page-long quant-ph paper by Gerard 't Hooft (via Physics arXiv Blog)
The Cellular Automaton Interpretation of Quantum Mechanics. A View on the Quantum Nature of our Universe, Compulsory or Impossible?
which reviews the author's more than 15-year-long struggle to show that the foundations of quantum phenomena are classical (these are not just my words, he explicitly says so; at least, I appreciate that he is not trying to mask this basic goal behind thick layers of fog like other "interpreters" do). I also appreciate that 't Hooft doesn't cite any paper by himself newer than 1993, so the new long preprint is not the self-citation fest that some other papers are. The article was posted exactly one week before Steven Weinberg's paper on modified density matrices. One week seems to be the current timescale on which a (more prominent than average) physics Nobel prize winner produces another paper displaying his dissatisfaction with quantum mechanics. And in the case of 't Hooft, the dissatisfaction is really the primary point of the paper.

Should you read the paper? I don't think so. I won't pretend that I have read the whole paper or that I plan to do so, either.




Gerard 't Hooft has incredible patience and the paper looks formally fine – by its format and wording, it is surely no garden-variety amateur physicist's paper (although the author does praise himself as a heretic – it's no heresy to say the same thing about quantum mechanics that about 7 billion people on this planet freely think; his musings are just wrong, misconceptions equal to those of an average man, not a heresy).




If you read at least several pages that are sufficiently close to being equivalent to the pages I have read, you will see that there is lots of standard basic quantum mechanics and that his "cellular automaton interpretation" of quantum mechanics (he would previously use very different words like "hydrodynamics" to describe his thoughts) has converged much closer to the correct quantum mechanics, or at least to a version of it dogmatically confined to the Schrödinger picture (which the people who don't want to understand that quantum mechanics is different from classical physics usually prefer while the quantum gurus usually don't; the laymen prefer it for the same reasons as 't Hooft – they want to think that something, probably the wave function, is the "classical object" beneath everything).

Is there any difference between 't Hooft's "interpretation" and the correct theory of quantum mechanics? For years, I had been noticing that he was really converging toward "mimicking" most of the mathematical content of quantum mechanics, so the differences were increasingly difficult to find and the claim that his theory is ultimately supposed to be "classical" looked increasingly preposterous. The main problem was that he was trying to pick a preferred basis and claim that all other nontrivial linear superpositions of the basis vectors are "contrived".

Of course, this departure is exactly what makes everything that 't Hooft has written about the foundations of quantum mechanics childishly wrong. There are no preferred bases. If you want to be sure that the existence of the preferred basis – the one that he calls the "ontological basis" (the word "ontological" is extremely popular among "philosophers" and other anti-quantum zealots because while they reject the main basis of modern science and insist on the 90-years-dead "classical objective reality", they at least impress readers with a Greek word for the "classical objective reality"; the term "ontological basis" was previously used by Bohm et al. in 1987) – is really supposed to be the point of departure of Gerard 't Hooft from quantum mechanics, go e.g. to page 173. Yes, you have to read through 170+ pages of slightly twisted but not too original noise before you can see a clear formulation of the "new ideas" that the author is proposing.

On that page, 't Hooft says that he accepts the Born rule, and probably much of the Copenhagen interpretation, but:
The most important point where we depart from Copenhagen is that we make a fundamental assumption:
  1. We postulate the existence of an ontological basis. It is an orthonormal basis of Hilbert space that is truly superior to the basis choices that we are familiar with. In terms of an ontological basis, the evolution operator for a sufficiently fine mesh of time variables, does nothing more than permute the states.
How exactly to define the mesh of time variables we do not know at present, and may well become a subject of debate, particularly in view of the known fact that space and time has a general coordinate invariance built in. We do not claim to know how to fabricate this construction – it is too difficult.
So if you allow me a Pilsner interpretation of the words above, the point of departure of Utrecht from Copenhagen is the denial of a fundamental postulate of quantum mechanics, the superposition principle (which states that all complex linear superpositions of pure states are equally real and equally allowed – and therefore form a Hilbert space of possible states), and he thinks it's "difficult" to find the "right", "ontological" basis. Too bad that he hasn't considered the possibility that it's demonstrably impossible rather than merely difficult.



Monospinatically challenged horses may be cute, which doesn't mean that it's wise to spend 15 years looking for them.

OK, so is there an ontological basis? On page 173, we see two definitions of such a basis. The quote above defines the ontological basis by saying that the evolution operator is a permutation matrix acting on the elements of this basis. Equivalently, at the top of page 173, we read that the ontological basis is one such that the relative phases between the coefficients in combinations\[

\alpha\ket{\psi_1}+\beta\ket{\psi_2},\quad \alpha,\beta\in\CC

\] cannot be measured. However:

According to both definitions, an ontological basis doesn't exist and cannot exist.

Why hasn't Prof 't Hooft first asked the question whether the key new object whose existence he hypothesizes may exist? He could have avoided writing hundreds of pages of nonsense, not to mention the thousands of pages that he has already written about the "subject" in the previous 15 years.

First, let me say a few words about the "evolution operator as a permutation matrix". It may sound appealing to many ears. In the mid-1990s, I was intrigued by the special role of permutations, too. That's why I was also attracted by an essay about the event-symmetric spacetime by a writer I didn't know too well at that time, Phil Gibbs. (This URL in my directory has been there for 18 years and I couldn't even bring myself to erase it for 15 years. Since I have had a web page since 1994, you may call me a web fossil.) This "increased affinity" to permutation matrices may have driven me closer to Matrix Theory and Matrix String Theory.

(I can't resist saying that even though Matrix Theory and Utrecht share the discussion of the relationships between unitary and permutation operators, they are actually sending physics along very different paths. While 't Hooft tries to reduce the group of possible evolution operators on an \(N\)-dimensional Hilbert space from \(U(N)\) to \(S_N\), Matrix Theory and Matrix String Theory really push us in exactly the opposite direction. They teach us that the groups we thought to be just \(S_N\) – like the group of permutations of identical particles – should be enhanced to a larger group, \(U(N)\), the gauge group of the matrix model, that contains \(S_N\) as an important subgroup. The permutation group – well, its semidirect product with some \(U(1)\) factors – remains unbroken when the particles are far from each other.)



City 1

However, the evolution operator is a different thing. You might think that the elements of \(S_\infty\), the permutation group of infinitely many elements, may approximate or resemble any transformation you might like. And because it is such a bizarre hypothesis involving the infinity, you could fool yourself into thinking that it's plausible. But wake up. You should get back to your senses. \(S_\infty\) may be complicated but it still is a set without any real-number-like "continuity structure". I wanted to say that it is intrinsically a "discrete" set but I decided to avoid this claim because what I normally mean by this adjective is that the set of possible elements is countable. However, the set of elements of \(S_\infty\) – if you allow really anything to be permuted in any way and don't require any "asymptotic stability of the permutations" – is not really countable. It has \(\infty!\) elements which is too many – in fact, it's more than \(\exp(\infty)\). Maybe I should have written the cardinal numbers \(\aleph_0! \gt \aleph_0\) to convey the idea. (The weird letter is the Aleph, the first letter of the Hebrew alphabet, and \(\aleph_0\) is used to represent the infinite "cardinal" number counting the elements of the set of integers.)

However, it's still extremely probable (and easily provable) that you won't be able to find nonconstant continuous functions\[

g: t\mapsto g(t), \quad g(t)\in S_\infty

\] into a group of permutations, not even permutations of an infinite, countable number of elements. Any "manageable" subset of \(S_\infty\) will be pretty much countable and functions taking values in such a subset will be explicitly discontinuous. The evolution operators "waiting for time \(t\)" are continuous functions of time – recall that 't Hooft wants to "steal" the differential equations from proper quantum mechanics and elements of \(S_\infty\) just can't enter as unknown functions in differential equations – so this construction won't really work. There can't be any solution. It is totally self-evident. You may also speculate that the time may be discrete as well, and if it is discrete, you will avoid the need for continuous functions.
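If you prefer to see the clash numerically, here is a minimal sketch (my own toy illustration in Python/NumPy/SciPy, not anything taken from 't Hooft's paper): take a generic Hermitian \(H\) on a small Hilbert space and watch the continuous family \(U(t)=\exp(-iHt)\); except at \(t=0\), it never gets anywhere close to a permutation matrix, while it does depend on \(t\) continuously.

```python
# A minimal sketch (my own illustration, not from the paper): the continuous
# one-parameter family U(t) = exp(-iHt) is, for a generic Hermitian H,
# a permutation matrix only at isolated times (here only at t = 0).
from itertools import permutations

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 4                                    # toy Hilbert-space dimension
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2                 # generic Hermitian "Hamiltonian" (hbar = 1)

def distance_to_nearest_permutation(U):
    """Frobenius distance from U to the closest N x N permutation matrix."""
    perms = (np.eye(N)[list(p)] for p in permutations(range(N)))
    return min(np.linalg.norm(U - P) for P in perms)

for t in [0.0, 0.01, 0.5, 1.0]:
    U = expm(-1j * H * t)
    print(f"t = {t:4.2f}   distance to nearest permutation = "
          f"{distance_to_nearest_permutation(U):.3f}")
# The distance vanishes only at t = 0 (the identity permutation); for t > 0
# it is nonzero and varies continuously with t, just like U(t) itself.
```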

But be sure that such theories with a fundamentally discrete time will avoid any agreement with the basic observations, too. Such theories may resemble castling in chess – or other moves – but the real world significantly and conceptually differs from chess, and everyone who has made at least basic observations of the real world has already noticed the profound differences. If you "hope" for something else, you are just deluding yourself into thinking that it is fine to deny even the very basic experimental data. The time coordinate is manifestly continuous and even if it were not continuous in some sense, there would have to be an explanation why it looks "almost fully continuous". There isn't any such explanation here. The position is indefensible. If you really want to deny the basic properties of all observations, such as the continuous evolution of things (the state vectors) in time, and defend the existence of a loophole, you should have at least a feeble clue what such a loophole could look like and why such a complete coup in physics could still explain the observable data. You are starting physics from scratch and you should realize that until you have a "complete new theory", every single fact and explanation that children and adults are learning at schools and outside schools is against you.



City 2. Doesn't it look exactly like City 1? Too bad that the Copenhagen and Utrecht interpretations are so different, especially in their ability to agree with basic features of the observations of Nature.

The position is equally indefensible if we use 't Hooft's other definition of his ontological basis, one saying that the relative phases are unmeasurable. This may be true in approximate effective (classical) descriptions of a physical system; and it may be true for wave functions combining vectors from different superselection sectors (which you shouldn't combine, anyway).

However, small enough physical objects such as elementary particles and their small bound states are demonstrably elements of a single superselection sector and no classical approximation is good enough. All superpositions may really be prepared by an experimental procedure I may describe and all relative phases are completely physical. It is totally essential that the wave function – or the density matrix – carries more information than the squared absolute values of the amplitudes \(|c_i|^2\). All the phases, except for the overall phase (if you change all phases in the same way), matter. They have observable consequences.

It's easy to demonstrate it for every real-world "small" or "elementary" physical object and an undergraduate student of quantum mechanics should be able to figure these things out herself. But let me pick a trivial example, the spin of an electron. When the other observables of the electron (i.e. position and/or the complementary momenta) are known or ignored, the wave function for the electron (for its spin) is a column of two complex numbers.\[

s = \pmatrix{ \alpha \\ \beta}, \quad \alpha,\beta\in\CC

\] The upper component \(\alpha\) describes the probability amplitude for the spin to be up, the lower one \(\beta\) describes the probability amplitude that the spin is down. The usual normalization of the total probability (squared length of the vector) to 100 percent gives us\[

|\alpha|^2+|\beta|^2 = 1

\] Also, I have said that the overall phase change of the state vector by \(\exp(i\varphi)\)\[

(\alpha,\beta) \to \exp(i\varphi) (\alpha,\beta)

\] makes no impact on the observations – on predicted probabilities – because the phase factor \(\exp(i\varphi)\) cancels in the density matrix \(\rho = \ket{s}\bra{s}\) because \(\bra{s}\) comes with the opposite phase \(\exp(-i\varphi)\) and all the probabilities may be written in terms of the density matrix.

However, these two complex numbers \(\alpha,\beta\) contain "four real numbers" in them and they're only constrained by one real normalization condition. This leaves us with three real parameters. And states are further identified within "one-dimensional" equivalence classes under the redefinition of the overall phase. So the equivalence classes are still labeled by two real parameters: e.g. the ratio of the absolute values \(|\alpha|/|\beta|\) – which is the "distribution of probabilities" that has a classical interpretation and 't Hooft accepts it – and also the relative phase between \(\alpha,\beta\), i.e. the argument \({\rm arg}(\alpha/\beta)\), which he doesn't accept.
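A tiny numerical check (my own sketch, using the standard Pauli matrix \(\sigma_x\)) makes the point concrete: two spinors with identical \(|\alpha|,|\beta|\) but different relative phases predict different outcomes for a measurement of \(\sigma_x\), so the relative phase is perfectly observable.

```python
# My own sketch: states with the same |alpha|, |beta| but different relative
# phases give different expectation values of sigma_x.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def expectation_sx(alpha, beta):
    s = np.array([alpha, beta], dtype=complex)
    s /= np.linalg.norm(s)               # enforce |alpha|^2 + |beta|^2 = 1
    return (s.conj() @ sigma_x @ s).real

print(expectation_sx(1, 1))                        # relative phase 0    -> +1
print(expectation_sx(1, np.exp(1j * np.pi / 2)))   # relative phase pi/2 ->  0
print(expectation_sx(1, -1))                       # relative phase pi   -> -1
```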

But both parameters are equally physical and equally important. In fact, one may calculate the average vector of the spin in the state \(\ket{s}\). It is given by \[

\langle\vec s\rangle = \frac{\hbar}{2} \bra{s} \vec \sigma \ket{s}

\] which is a vector equation, i.e. a triplet of equations, where you may replace \(\vec s\) and \(\vec \sigma\) by their \(x,y,z\) components to get three equations and where \(\vec \sigma\) are the Pauli matrices\[

\sigma_1 = \sigma_x =
\begin{pmatrix}
0&+1\\
+1&0
\end{pmatrix}\\
\sigma_2 = \sigma_y =
\begin{pmatrix}
0&-i\\
+i&0
\end{pmatrix}\\
\sigma_3 = \sigma_z =
\begin{pmatrix}
+1&0\\
0&-1
\end{pmatrix}

\] Using traces of products of two Pauli matrices, you may quickly prove that the length of the three-dimensional vector \(\langle \vec s\rangle\) is actually \(\hbar/2\), i.e. "one" if we omit the \(\hbar/2\) factor everywhere. (I included the factor for the reader to see that all these things are totally observable quantities, the same angular momentum that can make a gyroscope or the Earth spin a little bit faster, too.)

So the average vector of the spin \(\langle \vec s\rangle\) actually lies on a two-sphere, and that's why it's described by two parameters. In terms of the usual up-down basis for the spin, the "real parameter" \(|\alpha|/|\beta|\) whose existence 't Hooft accepts describes the latitude on the two-sphere; the relative phase between \(\alpha,\beta\) that 't Hooft doesn't accept determines the longitude.
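Here is a short numerical confirmation of both statements (my own sketch with \(\hbar=1\), so the length \(\hbar/2\) appears as \(0.5\)): for a random normalized spinor, the vector \(\langle\vec s\rangle\) has length \(1/2\), its polar angle (the latitude) is fixed by the ratio of \(|\alpha|\) and \(|\beta|\), and its azimuthal angle (the longitude) equals the relative phase, written as \({\rm arg}(\beta/\alpha)\) because of the sign conventions in the code.

```python
# My own check, hbar = 1: the Bloch vector of any normalized spinor has length
# 1/2, its latitude set by |alpha|, |beta| and its longitude by the relative phase.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_z

rng = np.random.default_rng(1)
s = rng.normal(size=2) + 1j * rng.normal(size=2)
s /= np.linalg.norm(s)                                   # normalize the spinor
alpha, beta = s

spin = 0.5 * np.array([(s.conj() @ S @ s).real for S in sigma])

print(np.linalg.norm(spin))                          # always 0.5, i.e. hbar/2
print(np.arccos(2 * spin[2]))                        # polar angle (latitude) ...
print(2 * np.arctan(np.abs(beta) / np.abs(alpha)))   # ... fixed by |beta|/|alpha|
print(np.arctan2(spin[1], spin[0]))                  # azimuthal angle (longitude) ...
print(np.angle(np.conj(alpha) * beta))               # ... equal to arg(beta/alpha)
```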

But both latitude and longitude are real and equally important to describe a point on a sphere which, in this case, encodes the direction of the spin!

The argument above shows that every normalized 2-complex-component spinor describes a pure state which is a pure "up state" relative to a general axis going from the origin to a point on the two-sphere, whose latitude is determined by the ratio of absolute values \(|\alpha|/|\beta|\) and whose longitude is encoded in the relative phase between \(\alpha,\beta\). It's that simple.

You can't dismiss the second, relative phase variable as being less physical than the first one. Doing so would be like telling your pilot that the latitude of his destination is enough. It's not enough! The longitude matters, too. It is critical to incorporate both coordinates and treat them equally in order to preserve the \(SO(3)\) or \(Spin(3)\) rotational symmetry!

The spin of the electron is an extremely simple system you should fully master and, much like the double slit experiment, it is totally enough to see that the complex amplitudes in quantum mechanics have to have the usual "probability amplitude" interpretation clarified by the Copenhagen folks! Whenever you try to "reinterpret" the spin in an "ontological way", you inevitably make the choice of the original up-down axis visible and you break the rotational symmetry. The rotational symmetry of a theory with a spinor describing a single particle may only hold if the components of the spinor are probability amplitudes and if any basis is treated on par with any other basis. After all, as I reminded you, the choice of an orthonormal basis on the space of 2-complex-component spinors is exactly the same thing as the choice of the axis with respect to which we may talk about "up" and "down" spin states.

It should be totally clear to you why the relative phase is as important for the spin as the real-valued parameters connected with the absolute values. If you can't get this point, you should ask your instructor to give you a failing grade in your undergraduate quantum mechanics course. And if Prof 't Hooft wants to propose that there is a preferred ontological basis of the two-dimensional Hilbert space of the electron's spin, he should fail the undergraduate quantum mechanics course, too. (Nobel prizes may be given to people without bachelor's degrees, however, so the failing grade is just fine.)

In the case of the spin, the meaning of the relative phase was particularly clear – it informs us about the longitude of the end point of the axis with respect to which the electron is spinning up. But in all other examples in quantum mechanical theories (including quantum field theory and string theory, of course), one may show that relative phases always influence predictions of observable phenomena.

In the double slit experiment – which is also enough to understand all of quantum mechanics if you think about it carefully enough, as Feynman pointed out – the relative phase between the amplitudes for the particle to be near the "left slit" and for the particle to be near the "right slit" informs you about the position of the interference maxima and minima. The interference minima will be at points\[

x = D \left(N+\frac{\varphi}{2\pi}\right), \quad N\in\ZZ

\] where \(N\) is an integer, \(D\) is the distance between adjacent minima, and \(\varphi\) is the same angle determining the relative phase as before. The relative phase just moves the "strips" in the interference pattern in the direction transverse to these strips.
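A toy far-field model (my own sketch; the slit amplitudes and conventions are chosen for simplicity, not taken from any particular textbook) shows the shift directly: changing the relative phase \(\varphi\) translates the whole comb of interference minima by \(D\varphi/2\pi\).

```python
# My own toy model of the two-slit pattern: the screen amplitude is
# psi_L + exp(i*phi)*psi_R, and changing phi shifts the minima by D*phi/(2*pi).
import numpy as np

D = 1.0                                       # spacing between adjacent minima
x = np.linspace(-2, 2, 4001)                  # screen coordinate

def intensity(phi):
    psi_L = np.exp(+1j * np.pi * x / D)       # amplitude via the left slit
    psi_R = np.exp(-1j * np.pi * x / D)       # amplitude via the right slit
    return np.abs(psi_L + np.exp(1j * phi) * psi_R) ** 2

for phi in [0.0, np.pi / 2, np.pi]:
    minima = x[np.isclose(intensity(phi), 0.0, atol=1e-6)]
    print(f"phi = {phi:4.2f}   first few minima: {np.round(minima[:3], 3)}")
# In this convention the minima sit at x = D*(N + 1/2 + phi/(2*pi)): the whole
# pattern is translated by D*phi/(2*pi), exactly the shift discussed above.
```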

Similarly, if you have the usual wave function of the position, \(\psi(x,y,z)\), something that the average quantum mechanics textbooks want to start with, it is also true that not just the squared absolute values \(|\psi(x,y,z)|^2\) but also the relative phases matter. They affect the Fourier transform \(\tilde \psi(p_x,p_y,p_z)\). The more quickly the phase of \(\psi(x,y,z)\) is changing in the \(x\)-direction, the higher the momentum \(p_x\) – and the corresponding velocity – the particle has! After all, in the position representation, the momentum operator essentially measures the rate at which the phase is changing! And vice versa. If you choose the momentum representation of the wave function, the relative phases affect where the particle is sitting in the \(x\)-space.
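A quick sketch of the same point for the position wave function (again my own illustration, with \(\hbar=1\) and a Gaussian packet as the example): multiplying \(\psi(x)\) by \(\exp(ik_0 x)\) – i.e. making its phase wind faster in \(x\) – leaves \(|\psi(x)|^2\) untouched but shifts the whole momentum distribution by \(k_0\).

```python
# My own sketch, hbar = 1: changing only the relative phases of psi(x) by the
# factor exp(i*k0*x) shifts the mean momentum from 0 to k0.
import numpy as np

x = np.linspace(-50, 50, 2**12)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)     # momentum grid (hbar = 1)

psi = np.exp(-x**2 / 4)                          # Gaussian packet, <p> = 0
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx) # normalize

def mean_momentum(psi_x):
    prob_k = np.abs(np.fft.fft(psi_x))**2        # |psi(k)|^2 up to normalization
    return np.sum(k * prob_k) / np.sum(prob_k)

k0 = 3.0
print(mean_momentum(psi))                        # ~ 0.0
print(mean_momentum(np.exp(1j * k0 * x) * psi))  # ~ 3.0: same |psi(x)|^2, new phases
```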

I have been talking about spins, positions, and momenta, but the message is completely universal. For every observable dynamical quantity which is given – as quantum mechanics says – by a Hermitian operator \(L\), there exist operators such as \(M\) that don't commute with \(L\),\[

LM-ML \neq 0.

\] You see that \(LM\) isn't the same thing as \(ML\). For example, the former is your humble correspondent while the latter is not. And it means that if you change the relative phases in the basis of \(L\) eigenstates, you will change the predictions for observations of \(M\). So the relative phases do matter.

Perhaps, someone could suggest, a "refinement" of quantum mechanics would only be built from observables like \(x,y,z\) that commute with each other. Can you always find an operator \(M\) that refuses to commute with \(L\)? You bet. For \(x\), it is \(p\), and vice versa. For \(\sigma_x\), it's the other components \(\sigma_y,\sigma_z\) of the angular momentum that don't commute with a chosen component \(\sigma_x\), but the existence of non-commuting observables that are "perfectly physical and measurable" holds in general. It's particularly clear if we show it in the Heisenberg picture where the evolution of \(L\) is given by the Heisenberg equation\[

i\hbar\frac{d}{dt} L = [L,H]

\] So as long as \(L\) is dynamical and changes with time, i.e. as long as the left hand side is nonzero, there must be operators that don't commute with \(L\), such as the Hamiltonian \(H\). Of course, the operator \(H\) isn't the only operator that refuses to commute with \(L\). There are usually "simpler" or "more elementary" operators \(M_i\) that don't commute with \(L\), either, and the reason why \([L,H]\neq 0\) is that \(H\) depends on this (or these) \(M_i\), too. Imagine that \(L=x\) and \(M=p\) if you have trouble seeing what I am saying. It is really trivial.
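You can also verify the Heisenberg equation directly on a computer. The following is a small sketch of mine (\(\hbar=1\), with random Hermitian matrices standing in for \(L\) and \(H\)): the finite-difference time derivative of \(L(t)=U^\dagger(t)\,L\,U(t)\) agrees with \([L(t),H]/i\), and \([L,H]\neq 0\) precisely because \(L(t)\) actually moves.

```python
# My own numerical check of the Heisenberg equation i*hbar*dL/dt = [L, H]
# (hbar = 1), with random Hermitian matrices playing the roles of L and H.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
N = 3

def random_hermitian():
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (A + A.conj().T) / 2

H, L = random_hermitian(), random_hermitian()

def L_heisenberg(t):
    U = expm(-1j * H * t)
    return U.conj().T @ L @ U                   # Heisenberg-picture operator L(t)

t, eps = 0.7, 1e-6
dL_dt = (L_heisenberg(t + eps) - L_heisenberg(t - eps)) / (2 * eps)
commutator_term = (L_heisenberg(t) @ H - H @ L_heisenberg(t)) / 1j

print(np.allclose(dL_dt, commutator_term, atol=1e-6))   # True: dL/dt = [L, H]/i
print(np.allclose(L @ H, H @ L))                        # False: [L, H] != 0
```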

So if \(L\) is time-independent by the laws of physics, e.g. if \(L=\pi\approx 3.14\) or any \(c\)-number (a mathematical constant or a constant of Nature that doesn't really change), then you will be unable to find an operator that has a nonzero commutator with \(L\). But the evolution of a physical system can't be captured by these constant numbers alone. The very word "evolution" means that there are things that depend on time. And for each observable that depends on time, there exist operators that don't commute with it. The uncertainties of these complementary operators will obey the Heisenberg uncertainty principle. And the relative phases in the probability amplitude written relatively to the basis of \(L\) eigenstates will influence the predictions for the measurements of \(M\) – and vice versa, of course.

It's really right to define an observable quantity as any – generic – Hermitian operator on the Hilbert space. For each such matrix, one may find a gadget that measures it. Because a matrix refuses to commute with almost all other matrices (matrix multiplication is noncommutative), it's obvious that a generic pair of observables will be non-commuting. You can't avoid the nonzero commutators in quantum mechanics. Not only do all the "new aspects" of quantum mechanics, such as the uncertainty principle, boil down to nonzero commutators. Even the things we thought we knew in classical physics – the time evolution of observables – depend on nonzero commutators according to quantum mechanics! You would destroy everything if you overtly or covertly assumed that the commutators are zero.



These relative phases always matter and these relative phases in the initial state may always be changed by a physical procedure. For example, in the case of the electron spin, the relative phase between \(\alpha,\beta\) that we discussed may be changed by simply rotating the spin around the \(z\)-axis – that's how we change the longitude. The rotation of the axis of rotation is known as "precession" (the animation of the gyroscope above only differs from the electron in a magnetic field by the large value of the spin, \(|\vec j|\gg \hbar\), which justifies the classical pictures and makes components of \(\vec j\) effectively continuous) and you can achieve this "precession" around the vertical \(z\)-axis if you surround the electron by the magnetic field \[

\vec B = (0,0,B_z).

\] If the electron sits in this magnetic field, the spin-up amplitude's phase is changing at a different rate than the spin-down amplitude's phase – because the spin-up and spin-down states have different energies in the magnetic field, due to the extra \(\Delta E = -\vec m\cdot \vec B\) term in the Hamiltonian that pretty much defines the magnetic moment as a coefficient in front of \(-\vec B\). And if the phases are spinning at different rates, the relative phase is changing with time!
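A sketch of this precession (again my own illustration, with \(\hbar=1\) and a Larmor frequency \(\omega\) standing in for the product of the magnetic moment and \(B_z\)): evolving under \(H=\frac{\omega}{2}\sigma_z\), the up and down amplitudes acquire phases at different rates, so the longitude of \(\langle\vec s\rangle\) advances linearly in time while the latitude – i.e. the probabilities \(|\alpha|^2,|\beta|^2\) – stays fixed.

```python
# My own sketch, hbar = 1: precession of the spin in a z-directed magnetic field.
# H = (omega/2)*sigma_z is diagonal, so evolution multiplies alpha and beta by
# phases rotating at different rates; only the relative phase (longitude) moves.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

omega = 2.0                                          # assumed Larmor frequency ~ |m|*B_z
s0 = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)   # some initial spinor

def bloch(s):
    return np.array([(s.conj() @ S @ s).real for S in (sigma_x, sigma_y, sigma_z)])

for t in [0.0, 0.5, 1.0, 1.5]:
    s_t = np.array([np.exp(-1j * omega * t / 2) * s0[0],    # spin-up amplitude
                    np.exp(+1j * omega * t / 2) * s0[1]])   # spin-down amplitude
    v = bloch(s_t)
    print(f"t = {t:3.1f}   longitude = {np.arctan2(v[1], v[0]):+5.2f}   "
          f"latitude (v_z) = {v[2]:+5.2f}")
# The longitude grows linearly (at the rate omega with these sign conventions);
# v_z and hence |alpha|^2, |beta|^2 never change.
```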

The relative phases are spinning (or undergoing "precession", if you wish), but at the end, you may only measure one bit of information about the spin (up or down relative to an axis?) – and the probabilities may be predicted from the one quantum bit of quantum information. Nothing else is possible, as experiments clearly show and as consistency in fact demands.

Again, the case of the spin is particularly clear and geometric but the relative phases in the initial states may always be changed by a procedure. There is always something analogous to the \(z\)-directed magnetic field in the example above. If you want the relative phase of \(\psi(x,y,z)\) to change more quickly in the \(z\)-direction, just accelerate the particle in this direction! That will increase its momentum which has exactly the required effect on the relative phases. Such an observable must exist because, as I have already explained or proven, the Hamiltonian must depend on these observables \(M\) that don't commute with \(L\), otherwise \(L\) would be constant in time and therefore uninteresting for physics (non-dynamical)!

So Gerard 't Hooft may have already eliminated some of his previous "really large and silly" deviations from quantum mechanics but those that are left in his current "cellular automaton interpretation" may still be seen to be wrong – to be the ultimate causes of the wrongness of the predictions of any 't Hooft "theory".