For a week, I have wanted to promote one topic from my 2011 text about Paul Dirac's forgotten quantum wisdom into a separate blog post.
Finally, I managed to tunnel through the barrier of hesitation and moderate laziness.
What is it? It's his observation – understood by most physicists between 1900 and 1930 – that most of the internal structure of the atoms has to be invisible because it doesn't contribute to the specific heat. It was one of the reasons that made it possible for gentlemen like Dirac to see that classical, realist theories had to be abandoned, and that allowed them to complete their understanding of their revolutionary new framework within months of some important insights – even though so many people remain clueless even now, 90 years later.
Many of the chronic interpreters – a term that is mostly synonymous with the anti-quantum zealots – should be able to see that their favorite classical theory – now renamed as a "realist interpretation of quantum mechanics" – has to be wrong. It can be killed in seconds. In this blog post, I won't copy and paste my 2011 essay. Instead, I will try to be complementary. But the section "Absent heat capacity from electrons" in my 2011 text overlaps with the content of this blog post.
When I say that all realist "interpretations" of quantum mechanics are dead for thermodynamic (and many other) reasons, I mean all of them: the Bohm-de Broglie pilot wave theory and its variations, realist Everettian many-worlds "interpretations", Ghirardi-Rimini-Weber "collapse theories", and other classes.
What all of these attempts share is that they are "realist interpretations". The term "realist interpretation" is of course nothing else than a childish synonym for what the mature physicists call the "classical theories". They are theories assuming that at every moment \(t\), the "state of the Universe" is described by some objective information that is knowable in principle and that all well-informed observers would agree upon.
Quantum mechanics denies this assumption.
In those "realist interpretations", the "state of the universe" is described by some data that are often called the "beables" – because John Bell didn't consider the term "realist interpretations" childish enough. Needless to say, the adult physicists have a synonym for the "beables" as well. They are known as the "coordinates on the phase space" or, somewhat less accurately, a "classical degree of freedom". They are all the data that identify which of the a priori allowed states at time \(t\) is actually realized by the physical system.
In the pilot wave theory, the "beables" include the coordinates \(x_i,p_i\) of the "real particles" as well as the classical wave \(\psi(x_1,y_1,z_1,x_2,y_2,z_2,\dots)\) that emulates the wave function in quantum mechanics but is "interpreted" objectively.
In the many worlds interpretation, one assumes that there are many universes, so for each allowed "state of one Universe", one has some occupation numbers (or fractions) that say how many Universes are found in one state or another. (For obvious reasons, the Universes in MWI have to be ludicrously considered as bosons, because copies are allowed but their "locations" within the multiverse cannot be distinguished.) MWI has never been well-defined – it is just a vague philosophy hyped by people who are constantly high – but I think it's obvious that the previous sentence is the most concise formulation of the most intrinsic assumption of any theory deserving the MWI label.
In the GRW collapse theories, the wave function \(\psi(x_1,y_1,z_1,x_2,y_2,z_2,\dots)\) is again "reinterpreted" as a classical wave, so the space of possible waves is the phase space. However, the evolution involves random jumps so the phase space admits stochastic discontinuous jumps. This is just a detail about the dynamics of these theories. The random discontinuous jumps may still be described by laws evolving the point in the phase space. Place lots of wormholes with some "jump there from here" banners all over the phase space.
Heat capacities
Heat capacity \(C\) is the amount of energy (heat) you need to increase the temperature of an object by one unit of temperature. We often talk about the specific heat (per unit of mass of the material) and the molar heat capacity (the heat capacity of one mole). Up to the factor of Avogadro's constant \(N_A\), the molar heat capacity is nothing else than the heat capacity of one atom or one molecule of the material.
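For reference – the numbers are standard and not in the original text – the conversion is just\[
C_{\rm molar} = N_A\, C_{\rm molecule}, \qquad R = N_A k \approx 8.31\,{\rm J/(mol\cdot K)},
\] so a molar heat capacity of order \(R\) corresponds to a heat capacity of order \(k\) per molecule.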
How much does the average energy per atom (or molecule) change when the temperature is increased?
These numbers may be measured. It has been known for quite some time that the monoatomic gas has\[
E = \frac 32 kT, \quad \hat c_V = 3/2
\] where \(k\) is Boltzmann's constant and \(T\) is the absolute temperature (or its increase, if \(E\) represents the corresponding increase of the energy). When this \(E\sim T\) dependence is linear, the heat capacity is simply \(E/T\), e.g. \(C=3k/2\) per atom of a monoatomic gas. The energy carried by one atom of the monoatomic gas directly "translates" to the heat capacity at constant volume. If you want to know the heat capacity at constant pressure \(\hat c_p\), you may get the answer by replacing the energy \(U\) (the Hamiltonian) with the enthalpy \(H=U+pV\) (which is not the Hamiltonian, despite the letter \(H\)).
To keep the pressure constant, the increase of the internal energy must be accompanied by the increase of the volume (otherwise the pressure would increase), and the extra mechanical work tells you that\[
\hat c_p = \hat c_V + 1
\] So it's \(\hat c_p=5/2\) for the monoatomic gas and \(\hat c_p=7/2\) for the diatomic gas. All these numbers may be measured with tons of different chemicals. The energy per atom or molecule is always comparable to \(kT\); the heat capacity of an atom or a molecule is comparable to \(k\). The additional coefficient never leaves the interval between one and five or so. This may be puzzling for many reasons because the atoms and molecules of different elements and compounds dramatically differ.
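To see where the extra unit comes from, here is a one-line check (my sketch, using the ideal gas law for a single molecule), with \(\hat c\) denoting the heat capacity per molecule in units of \(k\):\[
H = U + pV, \qquad pV = kT \;\;\Rightarrow\;\; \frac{dH}{dT} = \frac{dU}{dT} + k, \qquad \hat c_p = \hat c_V + 1.
\] The extra mechanical work done during the expansion is exactly \(k\) per molecule and per unit of temperature.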
In fact, this insight that the heat capacities are of order "one" holds for solids, too. It seems that one atom in a crystal lattice carries the energy\[
E = 3kT
\] and this law – known as the Dulong-Petit law, formulated a whopping 196 years ago – applies with great accuracy, too. Why don't the materials care how complicated the atoms and molecules are?
Einstein did lots of things, but one of his great insights that is not terribly well known is the 1907 Einstein solid. The crystal is composed of atoms, and each atom is pretty much a harmonic oscillator – because it's attached by some springs to the adjacent atoms in the crystal lattice.
Well, a three-dimensional harmonic oscillator carries the energy \(E=3kT\) at temperature \(T\). Why? Because classical physics allows you to derive a simple thing. Each degree of freedom with a quadratic contribution to the energy contributes \(kT/2\) to the energy. Monoatomic gases have \(\hat c_V=3/2\) because they have three quadratic contributions from the momentum components; diatomic gases have \(\hat c_V=5/2\) because there are two additional rotational degrees of freedom (two components of the angular momentum; the third one – the rotation around the diatomic axis of the molecule – is unphysical). More complicated, non-linear gas molecules have \(\hat c_V=3\) because all three rotations matter (no axial symmetry), and excited vibrational modes push the value even higher.
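As an illustration of this counting – my own sketch, not part of the original text – here is a tiny Python snippet that adds up the quadratic terms in the energy and prints the classical equipartition prediction for \(\hat c_V\):

```python
# Classical equipartition: each quadratic term in the energy contributes kT/2,
# so the dimensionless heat capacity per particle is (number of quadratic terms)/2.
quadratic_terms = {
    "monoatomic gas (He, Ar)":   3,  # p_x, p_y, p_z
    "diatomic gas (N2, O2)":     5,  # 3 momenta + 2 rotations
    "non-linear polyatomic gas": 6,  # 3 momenta + 3 rotations (vibrations not counted)
    "Einstein solid, one atom":  6,  # 3 momenta + 3 harmonic potential-energy terms
}

for system, n in quadratic_terms.items():
    print(f"{system:28s}  c_V = {n / 2} k")
```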
The virial theorem gives you a simple expression for the average potential energy for potentials of the form \(\alpha r^N\), too.
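For the record – a standard classical result, not spelled out in the original – the virial theorem says\[
2\langle E_{\rm kin}\rangle = N \langle V(r)\rangle \quad {\rm for} \quad V(r)=\alpha r^N,
\] so for a harmonic potential (\(N=2\)) the average potential energy equals the average kinetic energy, and the three-dimensional oscillator indeed carries \(3\cdot kT/2 + 3\cdot kT/2 = 3kT\).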
Entropy from phase spaces
It's important to realize that these formulae for the temperature-dependent average energy – and for the heat capacity – apply to every simple enough classical system, including all the "realist interpretations" of quantum mechanics. Some straightforward manipulations in thermodynamics allow you to find out that the heat capacity is also equal to\[
C_V = T\zav{ \pfrac ST }
\] Note that all these conclusions in thermodynamics are "phenomenological" in character. They're derived from thermodynamic experiments, if you wish. What is a priori open is the relationship between \(C,S,T\) on one side and your microscopic theory on the other side.
However, things are unequivocal as soon as you visit Boltzmann's tomb in Zentralfriedhof, Vienna. The entropy is equal to\[
S = k\cdot \ln W
\] where \(W\) is the number of mutually exclusive, in principle (but not in practice) distinguishable microstates that look like the same macroscopic configuration we are able to observe. The constant \(k\), Boltzmann's constant, has to be added for dimensional reasons. It has the unit of entropy (which may be chosen dimensionless, but originally wasn't so) and converts the information in nats to entropy.
One has to choose the logarithmic dependence because \(\ln(W_1 W_2)=\ln(W_1)+\ln(W_2)\) which is needed for the entropy to be additive (and extensive). Note that the number of possibilities for two independent systems is \(W_1 W_2\). It's combined multiplicatively; the set of options is a Cartesian product.
As you have heard, in quantum mechanics, we have \(W=d\), the integer labeling the dimension of the Hilbert space of states that look the same macroscopically. The constant \(k\) is cleverly Boltzmann's constant, so that curious schoolkids may ask why Boltzmann's constant is involved in a formula on Boltzmann's grave.
However, in classical physics, it is unavoidable that\[
W = \frac{{\rm Vol(phase\, space)}}{(2\pi\hbar)^\alpha}
\] the argument of Boltzmann's logarithm is the volume of the relevant phase space (either a shell in it, or the whole interior, it doesn't matter in the thermodynamic limit). The volume of the phase space is divided by a certain unit – which I wrote as the denominator, with \(\alpha\) counting the coordinate-momentum pairs, so that the transition to quantum mechanics is straightforward. But the denominator is really an "undetermined constant" in classical physics – a constant describing the accuracy with which the "beables" may be measured. Because we take the logarithm of the ratio, the multiplicative uncertainty of the denominator translates to an additive uncertainty – an overall shift – of the entropy.
So if you know nothing about quantum mechanics, the entropy of the Universe may be redefined by\[
S \to S + \Delta S
\] where \(\Delta S\) is a state-independent constant (shift of the entropy), and with this redefinition, you will get "equally good" thermodynamics because only the entropy changes/differences may be measured classically. Quantum mechanically, the denominator is fixed – and so is \(\Delta S\) – by the uncertainty principle which dictates the size of the "smallest cell" on the phase space. Because of that, we also have \(S=0\) at \(T=0\). When a typical system is completely frozen, it sits in its unique ground state (eigenstate with the lowest energy).
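A quick sanity check of the size of that "smallest cell" (my example, not in the original): for a one-dimensional harmonic oscillator, the region of the phase space with energy below \(E\) is an ellipse of area\[
\oint p\, dx = \frac{2\pi E}{\omega},
\] so dividing by the quantum cell \(2\pi\hbar\) gives \(W\approx E/\hbar\omega\), which is indeed (up to an additive constant of order one) the number of energy eigenstates below \(E\).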
All these things should be standard. But what is apparently not standard is to realize that these insights really hold for any classical theory – and "realist interpretations" of quantum mechanics belong to this set.
The measured heat capacities look so small
As I have already announced, the problem that kills all the classical and realist theories is that they predict way too huge heat capacities \(C\). Dirac dedicated a few sentences to this observation in his well-known textbook of quantum mechanics; around 1930, this insight was well appreciated by everyone. Equivalently, the classical or realist theories predict much higher energies than \(kT\) at temperature \(T\). Also equivalently, they predict that the entropy \(S\) – or the volume of the relevant phase space – grows way too quickly with \(T\) because there are apparently "too many degrees of freedom".
Take a planetary model of a nitrogen atom – and pray that Encyclopedia Britannica allows me to embed images in this way. There's a nucleus in the middle and lots of electrons orbiting the nucleus. Each electron has the kinetic energy \(p^2/2m\). So each electron should contribute \(3kT/2\) to the energy at temperature \(T\), shouldn't it? Similarly, there should be a contribution from the potential energy. For example, if some harmonic-oscillator-based "valley" potential energy keeps the electrons in the orbits, there should be some \(3kT/2\) contribution to the energy from each electron's potential energy, too.
You may see that the total energy should scale like \(3kT\cdot Z\) where \(Z\) is the atomic number. But the actual measured value is just \(3kT\) or so. Why don't the \(Z\) individual electrons increase the energy?
This problem is morally equivalent to the "ultraviolet catastrophe" in classical electromagnetism – which is a special example of the inadequacy of classical physics discussed in this blog post. The electromagnetic field (even in a box) may be decomposed into infinitely many Fourier components. Each Fourier component of the electromagnetic field should carry some \(kT/2\) as well. So the total energy carried by the electromagnetic field in a box should be infinite, as Brian Greene nicely explained in Chapter 4 of The Elegant Universe, years before he began to write pure bullšit about quantum mechanics.
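Concretely – a standard formula added here for illustration – equipartition assigns \(kT\) to each oscillator mode of the field (i.e. \(kT/2\) per quadratic term) and yields the Rayleigh-Jeans spectral energy density\[
u(\nu)\, d\nu = \frac{8\pi \nu^2}{c^3}\, kT\, d\nu,
\] whose integral over all frequencies diverges; Planck's quantum formula replaces \(kT\) by \(h\nu/(e^{h\nu/kT}-1)\) and makes the total finite.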
The solution has to be that all these extra "beables" – the additional degrees of freedom like the momenta of the individual electrons or the electromagnetic Fourier modes at too high frequencies – must become invisible, frozen, effectively non-existent. The classical information carried by them must be zero. There's no other way to avoid the huge (or infinite) entropies and heat capacities that would otherwise be predicted. If there were too many switches – too many "beables" that may have one value or another – the incoming heat could flip and change all of them, just as if you had too many atoms, and the object would be able to absorb too much energy per unit of temperature difference. Its heat capacity would be too high – typically infinite. And yes, any viable "realist" (classical) laws of physics would have to define a conserved energy on the phase space which becomes higher for the "more wiggly" values of the "beables"; otherwise the laws would break the energy conservation law by huge amounts.
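To illustrate quantitatively what a "frozen" degree of freedom looks like, here is a small sketch – my own, using the standard quantum harmonic oscillator rather than anything specific from the original post – that evaluates the heat capacity of one oscillator (the building block of the Einstein solid) as a function of \(x=\hbar\omega/kT\):

```python
import math

def oscillator_c_over_k(x):
    """Heat capacity of one 1D quantum harmonic oscillator in units of k,
    where x = hbar*omega / (k*T).  Classical equipartition would always give 1."""
    em = math.exp(-x)                 # written with exp(-x) to avoid overflow at large x
    return x * x * em / (1.0 - em) ** 2

# High T (x << 1): c/k -> 1, the classical value (one kinetic + one potential term).
# Low T  (x >> 1): c/k -> 0, the oscillator is frozen and invisible to thermodynamics.
for x in (0.01, 0.1, 1.0, 5.0, 20.0):
    print(f"hbar*omega/kT = {x:5.2f}   c/k = {oscillator_c_over_k(x):.4f}")
```

Three such oscillators per atom reproduce the Dulong-Petit value \(3k\) at high temperatures, while all of them drop out of the heat capacity as \(T\to 0\) – exactly the behavior the measured data demand and no classical ("realist") theory can deliver.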
In Boltzmann's formula \(S=k\cdot \ln W\), we may somewhat carefully define \(W\) as
the number \(W\) of mutually exclusive microstates – in principle distinguishable within the accuracy of the best apparatuses – that macroscopically look like the same macroscopic configuration we deal with.
Too much classical information – too many "beables" – will inevitably make \(W\) grow too quickly with \(T\). You may see that the "hidden variables" and similar šit are a complete catastrophe – literally, because this failure is equivalent to what is technically known as the ultraviolet catastrophe. The folks who add "hidden variables" think that they are helping to avoid the contradictions. But in reality, they are making the contradiction much worse.
The correct theory – quantum mechanics – in some sense makes and has to make the "set of possible states" in which the physical systems may be found much smaller than it is in simple Newtonian mechanics!
The "pilot wave" in Bohmian mechanics is interpreted as a classical wave in a multi-dimensional space. The motion of one particle may be claimed to emulate the right behavior predicted by quantum mechanics if you neglect all problems with spin, relativity, preparation of the initial pilot wave, the fate of the pilot wave when the particle is observed, and all the conceivable measurements inequivalent to a position measurement. But can this "pilot wave" theory be extended so that you could use it to predict thermodynamic phenomena involving larger objects? Can the right "mechanisms of absorption" of the particle be ever well-defined?
The answer is obviously No. The phase space of possible "pilot waves" that may evolve in equilibrium – when you add the necessary mechanisms for the "sweeping" of the "pilot wave" after a particle is observed somewhere – will inevitably be huge, and the corresponding \(W\) will increase way too quickly (well, really infinitely quickly) with \(T\), thus predicting huge (and probably infinite) heat capacities. It's similar in the GRW "collapse theories".
And the problems are the same or even worse in a realist "many worlds interpretation". This armchair physics paradigm supposes that one has many universes at a given moment and keeps track of the number (or fractions) of universes that are objectively found in each allowed classical state of one universe. The "beables" include a huge (infinite) number of occupation numbers (or fractions). Those will make the number of in principle distinguishable microstates increase way too quickly with temperature.
Think about any "realist interpretation" you wish. You will quickly see that whatever it is, it simply obeys all the normal requirements for a "classical theory", and will predict too huge "phase spaces" i.e. "too huge entropies" and "too huge heat capacities".
If you try to invent a "realist interpretation" with too few "beables", you will obviously fail, too. You need something at least as powerful as the wave function to describe the state of the physical systems in realist theories.
How does quantum mechanics solve the problem?
What is the magic by which quantum mechanics cures this otherwise lethal disease? Quantum mechanics really doesn't change anything about the text above. In particular, it preserves \(S=k\cdot \ln W\), the equation on Boltzmann's grave, without any modifications, and it even allows us to keep the definition of \(W\) as
the number \(W\) of mutually exclusive microstates – in principle distinguishable within the accuracy of the best apparatuses – that macroscopically look like the same macroscopic configuration we deal with.
So why does quantum mechanics predict the small heat capacities of order \(\O(k)\) per atom, regardless of the number of electrons inside? And why is the black body radiation finite? It's mainly because of the words "mutually exclusive" in the definition above.
The Hilbert space is basically\[
\HH = \CC^{d}
\] and requires us to specify \(d\) complex numbers or so to uniquely isolate a pure vector. If the wave function were a classical observable – a "beable" – then the phase space could distinguish about \(100^d\) different points, assuming that the real and imaginary part of each amplitude were known with an accuracy of \(0.1\), and the entropy would have to be\[
S = k\cdot \ln W = k\cdot \ln(100^d) = kd \ln 100.
\] The entropy would be proportional to the dimension of the Hilbert space \(d\) – which however has to grow exponentially with the number of particles! That's too bad. The entropy wouldn't be additive.
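To make the non-additivity concrete – an illustration of the argument above, not something from the original – take \(n\) two-level atoms: the Hilbert space dimension is \(d=2^n\), so treating the wave function as a "beable" would give an entropy growing like \(2^n\), while the correct quantum counting gives at most \(k\,n\ln 2\), which is additive in \(n\):

```python
import math

k = 1.380649e-23                       # Boltzmann's constant in J/K

for n in (1, 10, 100):                 # number of two-level atoms
    d = 2 ** n                         # dimension of the Hilbert space
    s_quantum = k * n * math.log(2)    # correct bound: S <= k ln d = k n ln 2 (additive)
    s_beable = k * d * math.log(100)   # "wave function as a beable": S ~ k d ln(100), explodes
    print(f"n = {n:3d}   S_quantum <= {s_quantum:.3e} J/K   S_beable ~ {s_beable:.3e} J/K")
```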
The main novelty of quantum mechanics that is relevant in this context is the insight that only orthogonal states are mutually exclusive. So we can't treat the wave function as a classical "beable". Only the vectors of a basis of the relevant Hilbert space are mutually exclusive, and it's nothing else than the dimension of the Hilbert space \(d\), and not its exponential, that should be substituted as \(W\) into Boltzmann's grave formula.
Everything works perfectly. In particular, if your macroscopic measurements tell you that you have frozen a crystal (or vacuum) perfectly at \(T=0\), the microstate that may give rise to the observed macrostate is completely unique and Boltzmann's \(W=d=1\) leading to \(S=0\). The entropy \(S\) at the absolute zero \(T=0\) is equal to a universal constant, namely zero. (Before quantum mechanics, the precise value of the entropy at \(T=0\) was a constant \(\Delta S\) that could have been changed with your conventions, as I wrote above.)
You must realize that the low heat capacities that are compatible with the observations only arise if the number of "mutually exclusive microstates" is much smaller than the "number of all vectors" in the Hilbert space of allowed microstates. In other words, it is totally critical for the right theory to imply that two generic allowed states \(\ket \phi\) and \(\ket\psi\) are not mutually exclusive. This "refusal to be mutually exclusive" may also be said to be the reason why the phase space is separated into cells (and why \(W=d\) may be one or a small integer, after all). Smaller cells in the phase space aren't allowed due to the uncertainty principle, and overlapping cells correspond to states of the particle that are not mutually exclusive.
In quantum mechanics, two generic states are not mutually exclusive. If the system is in the state \(\ket\phi\) with \(\bra \phi \phi\rangle=1\), then we can't be sure whether the system is in the state \(\ket\psi\neq \ket\phi\) with \(\bra\psi\psi\rangle=1\). Instead, the laws of physics say that the probability is\[
P_{\psi=\phi} = \abs{ \bra \phi \psi\rangle }^2
\] that the system in the state \(\ket\phi\) is in the state \(\ket\psi\). That's the "most elementary" form of Born's rule, I would say, one that only uses pure vectors and no specific observables (although we always determine pure vectors by measuring observables). Feel free to call it the Lumo-Born rule if you haven't heard of it in this form before. There can't exist any objective answer to the question whether the system is in the state \(\ket\psi\) or the state \(\ket\phi\) for two non-orthogonal states. They are simply not mutually exclusive. The state that will be used to describe the physical system unavoidably depends on the knowledge of the observer – on what measurements he did (and acknowledged to be measurements, i.e. a source of information). Wigner's friend thought experiment makes it clear that the two men will use different wave functions at some moments – and this difference doesn't imply a contradiction because the two wave functions are not orthogonal.
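A minimal numerical illustration of this most elementary Born rule – my sketch, with arbitrarily chosen example states – showing that for two normalized, non-orthogonal qubit states the probability \(\abs{\bra\phi\psi\rangle}^2\) is neither 0 nor 1:

```python
import numpy as np

# Two normalized, non-orthogonal pure states of a qubit (arbitrary example vectors).
phi = np.array([1.0, 0.0], dtype=complex)                        # |phi> = |0>
psi = np.array([np.cos(0.3), np.exp(1j * 0.7) * np.sin(0.3)])    # some other pure state

overlap = np.vdot(phi, psi)            # <phi|psi>; vdot conjugates the first argument
prob = abs(overlap) ** 2               # Born rule: P = |<phi|psi>|^2

print(f"|<phi|psi>|^2 = {prob:.4f}")   # ~0.91 -- neither 0 nor 1, not mutually exclusive
```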
If your theory is realist, i.e. one that allows you to distinguish all the possible values of the wave function or something similar in principle, then this theory will predict way too huge values of \(N,W,C,S,E\) at a finite temperature \(T\), and these huge values of the heat capacity will completely contradict the experiments.
The only way out is a non-realist theory called quantum mechanics where Boltzmann's grave \(W\) is interpreted as the dimension \(d\) of the relevant complex linear space. All the linear superpositions may be "represented" by the basis which is why the "number of options" remains low and only slowly increases with the temperature.
Do I expect the Bohmian, Everettian, GRWian, and even worse anti-quantum zealots to understand these simple matters about thermodynamics that all good physicists understood in 1930 – and much of it was well appreciated long before the birth of quantum mechanics in 1925 – and admit that they have been totally deluded throughout their lives? I don't. But I do hope that many new TRF readers will be able to appreciate how staggeringly obviously wrong all the anti-quantum zealots are.
And that's the memo.