Sean Carroll has ignited another Cosmic Variance discussion on the legitimacy and predictivity of multiverse theories, Does This Ontological Commitment Make Me Look Fat?

About one-half of the points that people make in that thread are right (and some of the people, such as Moshe Rozali, are likely to make valid points only); the other half is demonstrably wrong. It always puzzles me how so many people fail to think rationally about so many rudimentary issues.
Carroll begins by quoting an interview with Craig Callender, a San Diego philosopher. Callender says that positing too many universes is too high a price to pay for an explanation of the low entropy of the newborn universe. Carroll correctly disagrees: it's not a price at all, it's really a prediction of the theory; I will discuss this point later, too.
However, Callender is right when he says that one shouldn't pay a high price for a new explanation of the low entropy. The reason is that there is actually no open problem associated with this general fact at all. The interviewer was pushing Callender to an irrational mode of reasoning by these misleading comments:

... But there is nothing in the second law of thermodynamics to explain why the universe starts with low entropy. Now maybe its just a brute fact that there’s nothing to explain. But some physicists believe they need to explain it. So Sean Carroll develops an idea of a multiverse to explain the low entropy. ...

We are told that some scientists may believe that they need to search for an explanation of the low entropy our world started with. However, science is not about beliefs, science is about facts and explanations. The low entropy of the initial state is a fact and another fact (one that proves the previous one) is that it trivially follows from the second law of thermodynamics, in contradiction with the first sentence above.
Why? By definition, the initial entropy of the Universe is the entropy of the earliest possible moment of the Universe we may consider. What does it mean? It means a moment such that we can no longer ask "What was there before that moment?". We can no longer ask "What were the events that led to that moment and that explained the properties of the Universe at that moment?" This is the only legitimate model-independent description of what the adjective "initial" or the phrase "beginning of the Universe" may mean.
Low initial entropy is a tautology
Now, why can't we ask "What was there before that moment?"
We can't ask this question because there couldn't have been any other moment before this particular moment that we decided to call the "beginning of the Universe". And why couldn't there have been any moment before that? There had to be a law that would prevent us from reconstructing the history to even earlier moments. What could this law look like? The only law we know that has the capability of "terminating the time in the past" is the second law of thermodynamics. Even if you disagreed that it's the only one, it wouldn't matter: the uniqueness isn't essential.
What is essential is that the second law says that the entropy has to be a non-decreasing function of time; and away from thermal equilibrium – which is an irrelevant concept for the early Universe that was almost certainly very far from thermal equilibrium – the entropy of a connected system has to increase with time. But entropy can't be negative, either. So when we are reconstructing the history of a physical system and we reach a moment at which the entropy of this system was zero (or approximately zero, within error margins of various origins), we know that there couldn't have been any earlier moment.
A moment with a (nearly) vanishing entropy is the only moment for which this argument applies; it's the only moment that can be called "the beginning of the Universe". For any other moment, i.e. if the entropy were any other number, we could always keep on asking "What was there before that" and continue the backwards reconstruction of the history, thus falsifying the proposition that the moment we started with was the ultimate beginning. So the vanishing entropy of the newborn Universe is really a tautology; it trivially follows from the second law of thermodynamics.
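For readers who prefer to see this argument in symbols, here is a minimal way to write it down (my own schematic notation, not anything taken from Carroll's post):

\[
t_1 < t_2 \;\Rightarrow\; S(t_1) \le S(t_2), \qquad S(t) \ge 0 \text{ for all } t.
\]

If the backwards reconstruction reaches a moment \(t_0\) with \(S(t_0)\approx 0\), then any hypothetical earlier moment \(t < t_0\) would have to obey \(0 \le S(t) \le S(t_0) \approx 0\), leaving no room for any nontrivial earlier history; the reconstruction terminates at \(t_0\), which is exactly what the phrase "the beginning of the Universe" means.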
The people who try to make the vanishing of the entropy look mysterious are probably making one more general mistake in their failure to reconstruct the valid stream of ideas above: they are imagining that "the beginning of the Universe" is some God-given concept that doesn't have to be verified or thought about rationally (and they also like to evolve this God-given moment to the future as well as into the past while they're using completely wrong methods to evolve it into the past – they use the method of predictions even though the right methods of retrodictions are completely different and require a form of a Bayesian analysis with subjective priors and other things). But much like with any other concept in science, we must be careful how it is really defined and whether a particular observed object or pattern or property agrees with the definition.
If we are trying to identify or locate the "beginning of the Universe" today, in 2012, we must carefully study what we mean by that moment, and "a moment that had no other moment before it" is obviously the only refinement of the definition we may think of. Once we have this refinement and as we're trying to learn more about that moment, we must ask why there would be no moment before the moment declared to be "the beginning" and we will find out that the entropy is, to say the least, the most universal explanation of why one can't get further into the past.
The people who think irrationally about these matters don't really think at all. They want to isolate the term "beginning of the Universe" from all of our observations and irrationally postulate that the research of the entropy of "this holy moment" should be a job that cosmologists work on for the whole eternity, regardless of whether they know how "this holy moment" is defined and what its relationship is to future moments and a priori plausible previous moments. But the proper definition of the concept, "beginning of the Universe", and the relationship to the past and future that this definition involves automatically (with the help of the well-known second law of thermodynamics) answer the question about the entropy, too.
Any claim that we need to pay any price – e.g. flood the archives with extravagant new theories – for another explanation of the low entropy of the initial state is a sign that someone totally misunderstands thermodynamics and statistical physics. Sean Carroll certainly does. There is nothing more to explain here. Having another scheme inequivalent to the second law that would claim to dictate the entropy trends would mean that there would be a contradiction with thermodynamics and statistical physics. You can't have two different, inequivalent laws controlling the same questions.
Does the large number of worlds hurt?
So one shouldn't pay any price for such explanations of the low initial entropy; one shouldn't look for new explanations of this well-understood fact at all. We may do nontrivial research into the detailed properties of the "initial state"; but its low entropy surely isn't among its characteristics that remain mysterious. Everyone who makes it sound mysterious and who is doing "important new research" of it is doing pseudoscience for the very same reason why the researchers of the phlogiston are doing pseudoscience: they are working on alternative, non-scientific explanations of features of Nature (either low entropy of the initial state or heat in general) that have perfectly well-known scientific explanations, namely those rooted in thermodynamics and the microscopic framework that explains it, i.e. statistical physics (the explanations are the second law and the thermal motion of the molecules, respectively).
On the other hand, if there were other things to explain – e.g. the smallness of the cosmological constant which still looks mysterious (although the degree to which this mystery has been worshiped has certainly been over the edge) – it would be legitimate to consider theories that require or predict the multiverse. For those cases, Carroll, Callender, and others are right that "quantitative simplicity" (a feature of a theory that predicts that only a small number of copies of an object exist) shouldn't be considered a virtue when we decide whether the theory is more or less acceptable. If this principle of "quantitative simplicity" has ever helped people locate the right theories in the past, it was pretty much a coincidence that it worked.
Having a small number of visible universes (e.g. one) and having a huge number of them are very different hypotheses and various "numerical parameters" by which we may describe them have very different values. But they are two competing, qualitatively different answers to a question, and unless we can falsify one of them by the evidence – and we cannot at this point – we must assign them comparable prior probabilities. It's a rule I often like to state that

A scientist or a person who is not prejudiced assigns comparable prior probabilities to all qualitatively different, a priori plausible hypotheses.

It's an important principle and people unfortunately love to violate it on both sides.
Some people want to use some perverse form of "Occam's razor" (and they use lots of other demagogic terms) to argue that theories having a large multiverse or a large anything or a large number of any objects must be eliminated a priori, even without any evidence; they think that these theories are just immensely contrived. Other people take the opposite but equally flawed approach: they "reward" theories with large multiverses; they argue that such large multiverses have a huge number of observers so "most observers" find themselves in the large multiverse which is supposed to be the mechanism by which the multiverse wins and becomes much more likely. They think that the vast number of the observers that their multiverse picture predicts or postulates is a big enough argument that may even defeat the "risk" that the multiverse doesn't exist at all. ;-)
Note that the conclusions reached by these two camps are opposite to one another. But both arguments are utterly irrational. If the actual observations are compatible with both of these frameworks, there can't exist any valid logical consideration that could use these observations to increase the probability of one explanation relative to the other. It's just not possible! So all that the people talking about "generic observers" on one side or "Occam's razors" on the other side reveal in these monologues is their philosophical prejudices and their inability to look at the evidence impartially. The actual evidence remains silent about this big question.
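Just to make the bookkeeping explicit, here is a tiny sketch in Python (all numbers invented purely for illustration): if the observed data are equally likely under both hypotheses, Bayes' theorem returns posterior odds equal to the prior odds, so the data cannot shift the balance in either direction.

    # Minimal Bayesian bookkeeping: equal likelihoods leave the odds untouched.
    # All numbers below are purely illustrative.
    def posterior_odds(prior_a, prior_b, likelihood_a, likelihood_b):
        """Posterior odds of hypothesis A versus hypothesis B via Bayes' theorem."""
        return (prior_a * likelihood_a) / (prior_b * likelihood_b)

    prior_single_universe = 0.5   # comparable priors for qualitatively different hypotheses
    prior_multiverse = 0.5
    likelihood = 0.3              # P(data | hypothesis), the same for both by assumption

    print(posterior_odds(prior_single_universe, prior_multiverse, likelihood, likelihood))
    # -> 1.0: the evidence is silent; only prejudice could move the odds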
It is only legitimate to use the "genericity" arguments favoring the "large ensembles" and "theories with large ensembles" if there is an actual, demonstrable (or likely to be true) mechanism that makes the probabilities of the individual elements equal or comparable (like the probabilities of microstates in statistical physics after thermalization – in the far enough future; this point will be revisited at the end of this blog entry). But if there's no such mechanism, and there obviously isn't any reason why there should be democracy e.g. between the different, inequivalent vacua of string theory, the "genericity" arguments are illegitimate.
An analogous comment holds for "Occam's razor" or other weapons by the opposing extreme camp. Occam's razor is a valid consideration in some situations. It is legitimate to disfavor theories that make too many independent assumptions, that contain too many permanently independent elementary building blocks or too many permanently uncalculable parameters. The reason why such theories should be "punished" is that they must inevitably share the prior probability with very many similar theories of the same kind, very many other theories that have too many assumptions, building blocks, and parameters, too.
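Here is a hedged sketch of why "too many independent assumptions" genuinely costs prior probability (a toy calculation of my own, not anything from the thread): with k independent binary choices there are 2^k rival theories of the same kind competing for the same prior mass, so a fair assignment leaves each of them with roughly 2^(-k).

    # Each of k independent yes/no assumptions doubles the number of rival theories
    # of the same kind, so a fair prior leaves each with roughly 2**(-k) of the mass.
    for k in (1, 5, 10, 20):
        rivals = 2 ** k
        print(f"{k:2d} independent assumptions -> {rivals:8d} rivals, prior ~ {1 / rivals:.1e}")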
However, the multiverse isn't immediately eliminated by this reasoning because a large number of the universes in a multiverse doesn't translate to a large number of assumptions. As many people on Cosmic Variance, including Sean Carroll, correctly state, a large number of the universes is a consequence of a theoretical framework, a feature of it, not an arbitrary assumption, so Occam's razor can't be allowed to "cut necks" here. Even if the large number of universes were considered an assumption, it's just one assumption so the "punishment" would be negligible. (However, the idea that the multiverse must play a key role in the explanation of the features or initial conditions of the visible Universe is a completely different issue, an unjustified and probably incorrect assumption, as I will discuss below.)
As Carroll, Callender, and others rightfully say, one must distinguish "qualitative simplicity" (a small number of assumptions etc.) which is indeed favored by a legitimate version of Occam's razor from the "quantitative simplicity" (a large number of copies of a kind of an object predicted by a theory) which is neutral and has no impact on the plausibility of a theory as long as there isn't a way to empirically falsify the theory. The terminology, "qualitative and quantitative simplicity", is due to David Lewis, a philosopher, but be sure that we could invent such words (and maybe better words), too, so if the question were whether a philosopher would deserve a salary for the creation of such buzzwords, my answer would still be No.
Qualitative simplicity and explanations
Despite the apparent tension, I actually do feel that the people on that thread would ultimately agree that "quantitative simplicity" – e.g. the harassment of a theory that dares to predict that the number of electrons in the Universe is really high – isn't a valid argument to abandon a theory or a hypothesis. Many quantities in Nature possess high values and correct theories must agree with this fact, whether or not you like large numbers. Laymen of course love to think in similarly unthoughtful ways and anti-science hate blogs are encouraging them to make such simple conclusions but that can't change the fact that the identification of "theory predicts many things or deals with large numbers" with "theory is bad" is utterly illogical and unjustifiable.
On the other hand, the ability of a theory to explain things is surely important, and so is the number of independent assumptions that must be chosen, independent building blocks that have to be inserted, and independent unexplainable parameters that have to be adjusted. I have emphasized the adjective "independent" because a theory may postulate many new things that are shown not to be independent so you can't use their multiplicity against the theory. The Standard Model contains lots of 2-component spinor fields but they're organized into representations (color triplets, electroweak doublets, etc.) and you should count the representations as wholes (and maybe some extra unification of leptons and quarks and unification of families is achieved by a deeper framework e.g. in grand unification and string theory).
Analogously, electric and magnetic fields are different 3-dimensional vector fields but the special theory of relativity shows that one of them implies the other (one field emerges from the viewpoint of a moving inertial system), so relativity shows that the number of independent building blocks in electricity and magnetism is lower than people had thought (surely one of the reasons that made Einstein certain that he was on the right track). Also, there are many superpartners in supersymmetry but they're not independent assumptions; all of them follow from a single assumption of supersymmetry (combined with the list of known particle species).
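To recall the standard textbook form of the statement that one field implies the other: under a boost with velocity \(\vec v\) (SI units, \(\gamma = 1/\sqrt{1-v^2/c^2}\)), the components parallel and perpendicular to \(\vec v\) transform as

\[
\vec E'_{\parallel} = \vec E_{\parallel}, \qquad \vec B'_{\parallel} = \vec B_{\parallel}, \qquad
\vec E'_{\perp} = \gamma\,(\vec E + \vec v \times \vec B)_{\perp}, \qquad
\vec B'_{\perp} = \gamma\left(\vec B - \frac{\vec v \times \vec E}{c^{2}}\right)_{\perp},
\]

so a purely electric field in one inertial frame already carries a magnetic piece in another; both are components of a single tensor \(F_{\mu\nu}\) rather than independent building blocks.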
I have also included the word "unexplainable" above because if an effective theory needs many parameters to describe the reality, it's just how it works. These parameters may ultimately be explained by a deeper theory so the larger number of parameters in an effective theory isn't really a fundamental problem; in this sense, the larger number of parameters in an effective theory is just a consequence of a deeper theory. If you had another effective theory that had a lower number of parameters, you could favor it but you must be careful because once you admit (or even have some more detailed explanation) that these theories are not really fundamental, your preference could be similar to the preference of "quantitative simplicity" I mentioned above.
And it's simply not rational to prefer a theory just because it predicts a lower number of electrons in the Universe; it's up to Nature to decide how high this number should be (and it incidentally turns out to be rather high) so you must be careful to avoid unsubstantiated prejudices. In the same way, it is probably irrational to prefer an effective field theory with a somewhat lower number of soft parameters. It's up to a deeper theory to decide how many soft parameters will be needed in its effective descriptions and as long as you don't understand this deeper theory well enough, you shouldn't impose any conditions or prejudices about whether this number should be high or small.
The construction of models that fit the observed data from the bottom up is a different enterprise, of course: in that case, you typically start with "minimal" theories in the naive sense and you add fields (particle species) and interaction terms "one by one". But this semi-mechanical strategy to enrich your theories one step after another has nothing to say about the validity or relative likelihood of top-down theories that are making greater leaps. Just because you like to pursue the bottom-up strategy (one step after another) doesn't mean that you have a valid argument for or against theoretical claims that address realms where your down-to-Earth approach hasn't reached yet.
Many worlds in quantum mechanics and cosmology
Let me say some words about the appraisal of particular theories involving many worlds. I completely agree with the first comment by Matt Leifer:
The large size and flatness of the visible Universe is a strong piece of evidence supporting a mechanism that produces a large flat Universe, and those explanations pretty much inevitably involve scalar fields (fundamental or effective ones). We know that such fields may roll and tunnel so it's pretty reasonable to say that with some loopholes, the existence of the multiverse may follow from the cosmic inflation which does follow from the observed flatness and size of the visible Universe. String theory also implies the existence of a complicated configuration space, the landscape, although the existence of a landscape doesn't "immediately" imply that the landscape (of possibilities) is realized in an actual multiverse (composed of tangible universes); a few more steps (with a risk that they don't occur in Nature) are needed but they're at least "reasonably plausible".
So there's actually some semi-convincing empirical framework supporting the inflationary multiverse. (Again, let me emphasize that I am not saying that this multiverse – especially the detailed pre-history of the ancestor universes that have led to ours – is a good starting point for predicting the properties of our Universe. I surely tend to think that this is way too constrained and therefore not necessarily a correct method to think about the initial conditions of our Universe.)
On the other hand, Leifer is also right that the many worlds in Everett's interpretations only look convincing to those who can't abandon the idea that the wave function is a real object that objectively exists (that's what the omnipresent buzzword "ontological" is supposed to say).
There's ample evidence that this assumption isn't valid; the wave function isn't an objective, real entity. It is a semi-finished product used to prepare probability distributions which are fundamentally subjective although many of their inter-subjective if not "objective" features may be demonstrated, too. Once you get rid of the prejudice that objects such as the wave function should be fundamentally real and objective entities, the a priori preference for frameworks such as Everett's many worlds evaporates completely. I did this step more than 20 years ago because the evidence had overwhelmingly convinced me this is the right and necessary step. I am surprised if not flabbergasted that many people fail to realize this point after decades of looking at the subject.
Jeremy in the comment #2 correctly points out that in the tree of the many worlds, one branch has to be ours. Because the theory offers no mechanism that could tell us anything about the question which branch is ours, the extra labeling that is needed – to mark our branch – is an extra assumption of this theory, one that turns all the remaining branches into a non-explanatory "bloat", using Jeremy's words (or "unnecessary superstructure", using Einstein's words for the de Broglie-Bohm pilot wave theory which is analogous from the epistemological perspective).
The impossibility to actually derive the generally unequal probabilities from the many worlds "interpretation" – and these probabilities, a priori arbitrary numbers between 0 and 1, are what almost 100% of the knowledge in quantum mechanics is about – only underscores the insight that Everett's interpretation is a worthless pile of philosophically flavored emotions and irrationality from a scientific viewpoint.
Even if the "more philosophical" problems about the other branches – such as those in the paragraph preceding the previous one – were absent, I just can't understand why a serious scientist would ever consider a theory that (in its present form or any form imaginable in a foreseeable future) doesn't have anything to do with the numbers we actually want to be calculated – and all of them in quantum mechanics come in the form of probability amplitudes (and their functions).
If you don't have any deeper explanation for these particular numbers, the probability amplitudes, then you have no deeper explanation for anything in modern physics that makes a scientific sense because all of the information and all of the predictions in modern physics boil down to complex probability amplitudes. Is this obvious conclusion really so controversial? I don't believe it should be allowed to be controversial.
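As a reminder of what "the numbers we actually want" look like in practice, here is a trivial sketch (the amplitudes are made up): whatever a candidate theory ultimately predicts, it has to reduce to squared absolute values of complex probability amplitudes.

    # The only output quantum mechanics owes us: probabilities |amplitude|^2.
    # The amplitudes below are invented purely for illustration.
    import math

    amplitudes = {"spin up": complex(0.6, 0.0), "spin down": complex(0.0, 0.8)}

    norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes.values()))
    probabilities = {outcome: abs(a / norm) ** 2 for outcome, a in amplitudes.items()}

    print(probabilities)  # {'spin up': 0.36, 'spin down': 0.64} up to rounding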
People who don't care whether their "theory" has anything to do with the probability amplitudes are simply not doing physics.
Moshe on the vanishing explanatory power of the inflationary multiverse
I also subscribe to this comment #4 by Moshe Rozali:
There's no direct or indirect evidence for this claim and it is still an assumption that may very well be wrong, and I think it is wrong. If we take the risky position of adopting an assumption that may very well be wrong, we are increasing the probability that we will find ourselves on a wrong track and we should at least get something in exchange, something that partly compensates the risk of being wrong. But the anthropic explanations don't really explain anything and the amount of uncertainty concerning all of its parts makes it impossible to find any positive evidence in a foreseeable future.
So the attempts by some people to promote the anthropic thinking are just an ideological movement that has nothing to do with the evidence or logically or mathematically justifiable arguments. It wants the people to adopt one particular way of thinking even though it's clearly not the only possible framework for thinking about the relevant set of empirical questions, in this case the initial conditions and particle properties in our visible Universe. Clarifications of the Hartle-Hawking wave function and similar approaches are arguably a more promising route towards an explanation than the anthropic ones.
Sean Carroll responds to Moshe Rozali – with some unbelievable, hollow promises:
It reminds me of the people who say that a crackpot theory is "exactly like string theory" which was also studied by only O(10) people in the 1970s but the number of researchers later jumped to thousands once the theory was proven to have some remarkable properties. Well, there's still a difference: for string theory it's been proven while for the crackpot theories, it hasn't. One can't treat some speculative wishful thinking about the future as facts. It's much more likely for a crackpot theory to remain on the dumping ground of science together with thousands of other crackpot theories than for it to gain the status comparable to that of string theory. It's just bad if some people don't distinguish facts and evidence from speculations and wishful thinking or if they suggest that the difference isn't important.
If some accurate and impressive evidence supporting the anthropic multiverse theory were found, sensible people would start to consider it more seriously, to say the least. But this isn't what has actually happened in this world as of 2012 so sensible people don't consider the anthropic multiverse theory as a serious explanation for anything physical at this point. Carroll tries to obscure the not-so-subtle fact that the evidence he is promising doesn't exist at this moment so no one knows whether it may be found in the future. What he is left with are prejudices, bias, and a reduced degree of scientific integrity.
You know, this is a very different situation from the discussion of the parameters in SUSY models. I wrote that the large number of new fields and parameters mustn't be counted against SUSY because all the new fields follow from one principle of SUSY, one assumption (this is a perfectly well understood point at present); and the large number of (soft SUSY-breaking) parameters results from a more accurate short-distance theory. The accurate short-distance theory relevant for our world isn't known at this point, of course; but the fact that the low-energy soft SUSY-breaking parameters follow from this deeper theory (that may in principle calculate all the parameters) is a well-understood general fact that has been realized in practice in many semi-realistic examples.
However, there's one critical difference between the two situations: in the case of soft SUSY-breaking parameters, no one actually claims that some unquestionable empirical evidence for the picture has been found (beyond the several motivating arguments such as string theory, the hierarchy problem, dark matter, and gauge coupling unification). Carroll not only wants to act as if such arguments existed in the case of the anthropic multiverse; he wants to act in this way before anyone finds any clue about how the predictions could actually be extracted from some detailed underlying anthropic theory. So he's making two unjustified quantum leaps and not just one.
Chicken and egg and North of the North Pole
A slightly different topic. The comment #6 by Josh is playful but it is also misguided:
These models with "nothing else before the beginning" correspond to the established Big Bang cosmology. More speculatively, one may consider cosmological models with a pre-history, including the eternal inflation. It's OK, there are many possibilities to "extend the picture". However, Josh is wrong about another point. He suggests that the pre-history before the Big Bang is necessary to avoid some singularities (in the negative sense) or other unsatisfactory properties of the Big Bang cosmology. This claim simply isn't true.
No valid implication of this sort can be deduced from any empirical data and/or any well-established theory. In particular, the Hartle-Hawking wave function is a way to completely eliminate all the singular features of the Big Bang singularity while refusing any "pre-Big-Bang prehistory" at the same moment. Moreover, one must be very careful about the word "singularity" itself. It may mean various things. Some of the singularities imply an inconsistency but others don't. Some quantities may be strictly singular at the beginning of the Universe – in a certain description – but the theory involving such singularities may still be totally healthy, satisfactory, and complete. A related example: String theorists have learned how to describe the motion of strings on some singular spacetimes (orbifolds, conifolds, and others, including topology changing ones) and they have proved that the physics is totally consistent (and in many cases, equivalent to a different string description that uses a fully non-singular spacetime manifold).
Attempts to promote the concept of a pre-Big-Bang cosmology to the role of a vital tool in the struggle against inconsistencies are just an example of sloppy thinking or demagogy. There exists no scientifically justifiable link of this kind.
Josh's comparison to heliocentrism is an even more obvious piece of demagogy. There is no analogy between the question whether there was something before the Big Bang and the question whether and why the Sun sits at/near the center of the Solar System – beyond the fact that both of these questions are questions. So whether one question is legitimate and/or may be answered (and whether the answer is Yes or No) has absolutely no implication for the other. Heliocentric theories are more accurate, natural, and correct than the geocentric ones because the Sun is heavier than the Earth which means that the same forces manifest themselves as a lower acceleration of the Sun, because of F=ma.
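A back-of-the-envelope check of the F=ma point (standard values of the masses and the Earth-Sun distance): the mutual gravitational force is the same on both bodies, so the Sun's acceleration is smaller by the mass ratio, roughly 3e-6.

    # Same mutual gravitational force, very different accelerations: a = F / m.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    m_sun = 1.989e30     # kg
    m_earth = 5.972e24   # kg
    r = 1.496e11         # Earth-Sun distance, m

    force = G * m_sun * m_earth / r**2   # Newton's law of gravitation
    a_earth = force / m_earth            # ~ 6e-3 m/s^2
    a_sun = force / m_sun                # ~ 2e-8 m/s^2

    print(a_earth, a_sun, a_sun / a_earth)   # the ratio equals m_earth / m_sun ~ 3e-6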
The opposite viewpoint, the geocentric one, used to be favored as a dogma for religious and philosophical reasons. All things that are fundamental should be close to us, people thought. It was wrong to believe this assumption as a dogma; but the opposite extreme dogma – everything that is fundamental must be as far from us as possible – would be equally wrong. Some important things may be close to us. It depends which things. Different questions may have different answers.
The reasons why cosmology without any pre-Big-Bang prehistory may be a viable explanation of the evolution of our Cosmos have nothing to do with geocentrism – instead, return a few paragraphs above to recall what the reasons are – so linking these two things is just illogical demagoguery.
I fully agree with another comment by Moshe, too:
However, the actual reason why he hasn't gotten any answer is that those folks actually do believe that the world is obliged to fundamentally follow the framework of classical physics in which the "state of the system" is described by objective numbers at each time, or by objective numbers that describe all the events in the spacetime. The only reason why they were afraid to say so explicitly is that they vaguely realized that in the context Moshe prepared, their position was incredibly narrow-minded and childish, indeed.
Moshe wants to suggest that one may talk about some versions of "ontology" that are compatible with the facts we know about the quantum phenomena. His comment that "ontology" is ill-defined is similar to the proposition that the validity of Genesis as a cosmological and geological theory is disputable because it depends how you interpret it etc. Except that those "pundits in the real world" who consider Genesis to be a serious cosmological model do interpret it literally or at least literally enough so that one may still easily show that the model is invalid. In the very same sense, the "pundits in the real world" who love to talk about "ontology" in the context of quantum mechanics take the classical framework for the real world as a dogma. By "ontology", they do mean that the world must be fundamentally classical. So the "pundits in the real world" are incompatible with the available scientific insights whether we talk about the Christian fundamentalists or the ontological quantum philosophers.
But I primarily wanted to endorse the last sentences in Moshe's comment: even for a completely well-defined toy model of the anthropic reasoning, no one knows how to make predictions. That's totally true. In most cases, the anthropic advocates love to talk about "subtle technical problems such as the problem of the measure" (how to count "majorities" in infinite sets of observers in a curved and complicated multiverse). They want to suggest that these "technical problems" may be resolved and the anthropic principle will become a predictive theory right afterwards.
These technical problems with majorities of infinite sets aren't really technical details; they're a part of the problems in principle that prevent the anthropic principle from transforming itself into a functional theory. However, even if the problems with infinite sets were absent, the anthropic advocates would still have no way to deduce predictions from an anthropic theory. It would still be meaningless as well as unjustifiable to say that we're generic observers. Such a proposition requires one to sharply count "individual souls"; and to adopt some probability distribution for "it is me" over these souls.
But both steps suffer from lethal problems. In politics, we may define who is a citizen, what his or her age or gender has to be in order to become a citizen, how many citizens Siamese twins are, and so on. But all these things are just political conventions. Each of the answers to these questions – and I could tell you thousands of them – depends on new assumptions. They're independent moving parts of your legal system (the political counterpart of a theory). If life gets more diverse, the ambiguity becomes much more dramatic.
If we're looking for generic intelligent observers in the Universe, should we count a human as 1 observer or as a society of 10 trillion cells that collaborate? The cells on another planet could have sizes somewhere in between those of our cells and our mammals and they could collaborate in various semi-loosely bound communities; would they be cells or whole observers? Should North Korea be counted as 1 entity, a communist nation with a certain structure, or as a conglomerate of 24 million people? If citizens of another planet look like Intel quad-core processors, should we count the processors or the individual cores? Should we count chimps, horses, bacteria, viruses? If an extraterrestrial animal gets killed and resuscitated with some new material 1 billion times, will you count it as one life or 1 billion lives? Should you prefer the long-lived civilizations at all (i.e. do you want to count "genericity" in the whole spacetime or just in one slice of the spacetime)? Each of these options may change the relative odds by many orders of magnitude, sometimes by dozens of orders of magnitude. None of the answers to these questions may really be justified by anything that makes sense, so even with the dozens-of-orders-of-magnitude error margins, you're still far from being certain about anything.
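To see how quickly these "political" choices pile up orders of magnitude, here is a toy tally (every number is invented for illustration): merely switching between counting organisms, cells, and processor cores moves the relative anthropic weight of two imagined civilizations across many orders of magnitude.

    # Toy illustration: different "who counts as one observer" conventions change
    # the relative anthropic weight of two imagined civilizations.
    # Every number below is invented purely for illustration.
    civilizations = {
        "carbon planet": {"organisms": 1e10, "cells_per_organism": 1e13, "cores_per_organism": 1},
        "silicon planet": {"organisms": 1e6, "cells_per_organism": 1, "cores_per_organism": 4},
    }

    conventions = {
        "count organisms": lambda c: c["organisms"],
        "count cells": lambda c: c["organisms"] * c["cells_per_organism"],
        "count cores": lambda c: c["organisms"] * c["cores_per_organism"],
    }

    for name, weight in conventions.items():
        odds = weight(civilizations["carbon planet"]) / weight(civilizations["silicon planet"])
        print(f"{name:16s}: carbon/silicon odds ~ {odds:.1e}")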
There's really no preferred probability distribution and that's the main point Moshe is making. The anthropic people want to introduce some "egalitarianism between observers" into the probability that "I am a particular observer". Except that it is obvious that no universal egalitarianism may be defined by the laws of physics themselves – because observers are clearly inequivalent and differently "worthy" and any convention requires one to make some arbitrary "cuts" – and even if we could define a unique system of rules for this "egalitarianism between observers", we would still have no evidence that this "egalitarian measure" is the right one.
There are infinitely many measures (infinity to the power of infinity, if you allow me to describe the large size a bit more suggestively) and if you randomly pick one of them, it doesn't mean that you actually have a good reason why you haven't picked another one. Even if one could identify an "egalitarian or uniform" measure as a canonical or special one, it would still not mean that it is the right one. To transform the "egalitarian anthropic reasoning" into a theory, you also have to supplement the theory with a methodology to deduce the preferred probability distribution from some deeper principles or mechanisms. There isn't a glimpse of what these deeper principles or mechanisms could look like. In fact, it seems to be the very point and main philosophical assumption of the anthropic reasoning that there should be no deeper principles like that: the counting of observers should be the most fundamental layer of the explanations of the features of the Universe we happen to be surrounded by.
So saying that "observers are equal in a multiverse" involves the need to give arbitrary answers to infinitely many political questions about the Siamese twins, viruses, and Intel processor cores. Because there doesn't exist any plausible way how these choices could be derived from a "more complete short-distance theory" (in fact, the very point of the anthropic reasoning is that the counting of observers is the fundamental layer of the probability distributions so there can't be anything more fundamental!), if I recall the example of the soft SUSY-breaking parameters, these choices must be considered irreducible and independent moving parts in the theory and the anthropic theory – once it becomes a theory – is therefore inevitably infinitely contrived and infinitely arbitrary.
Even though the anthropic advocates try to dismiss the criticism of these ambiguities that have to be decided as mere epistemological extravagance, Moshe is completely right that in order to actually convert their philosophy into a theory, one has to brutally violate the rules of the "qualitative simplicity". By adopting the anthropic reasoning, one still abandons many other, perhaps more plausible types of explanations (of the cosmological and particle physics parameters and other things), and it's just wrong to confine yourself in a straitjacket if you have absolutely no evidence that this should be done.
In the Cosmic Variance comment thread, many other commenters are offering their opinions and an increasing fraction of them is confused and confusing. Let me jump to Sean Carroll's comment #21 where he is displaying his illogic concerning the low initial entropy again:
But it is not true that this "special" distribution is the relevant one for answering any question about distributions. Any other distribution among the infinitely many distributions is a priori a comparably likely candidate for the right expectations as the uniform one; that's the rule of "avoiding prejudices" I have previously mentioned. We only know one context – and one required mechanism – in which the "uniform measure" is the right one: thermal equilibrium. If a physical system evolves for a long enough time, it may converge to equilibrium. Due to the chaotic evolution, each microstate with the right values of conserved quantities will be equally likely in the final state. But this situation – equilibrium – isn't the right description of any and all situations. It's only the right description for the "distant enough future", for physical systems that have had enough time to reach the equilibrium by the chaotic evolution that has diluted all the non-uniformities. But without this "waiting time", the assumptions behind the thermalization mechanism are explicitly violated.
In particular, they are maximally violated if we consider the initial state of the Universe. It is complete bullshit to think that the "default state" should be one that has no non-uniformities to start with. In fact, it is really shocking that Carroll, a self-described cosmologist, fails to understand this point. The main reason why we consider cosmic inflation is that the cosmic microwave background is observed to be nearly uniform – but the Big Bang cosmology can't offer an explanation exactly because the photons hadn't had enough time to interact with each other and achieve the uniform temperature (according to the simple Big Bang cosmology). Equilibrium may only be achieved after a long enough time in which parts of the system may interact. That requires the Penrose causal diagram to be sufficiently "tall". Whenever we encounter equilibrium without such a previous process in which the thermalization could occur (or even without a region of the spacetime in which such a process could occur at all, due to the causal restrictions), it's a mystery why the equilibrium is there.
Carroll says exactly the opposite: he thinks that things must be in equilibrium if they haven't interacted before. It's just pure bullshit and I am not willing to endorse the opinion that people who fail to see why this is bullshit are intelligent, impartial people. This is such a fatal hole in someone's ability to think rationally that it decides my opinion about that person's rationality.
Even Carroll's example with the dark and light grains of sand on the beach – which was probably chosen because the answer should be obvious – is utterly irrational. The formation of sand and beaches is a complicated conglomerate of processes. If we knew nothing about geology and other historical sciences and nothing about physics – and people once knew nothing about them – we couldn't have any opinion about the uniformity or non-uniformity of sand. If one were impartial, he would consider "uniform sand" and "non-uniform sand" to be two a priori equally likely hypotheses. However, he would also observe the sand empirically and he would find out that the "non-uniform sand hypothesis" is heavily favored.
Now, when we know lots of physics, geology, and other pieces of science, we know lots of mechanisms that favor "non-uniform sand" and some mechanism(s) that favor "uniform sand". Differently colored grains of sand may have different sizes and densities so they may naturally order themselves gravitationally, thus increasing non-uniformity: heavier particles drop lower and smaller particles may fill the holes between the larger particles at the bottom, too. The sand also arises from the fragmentation of rocks and the rocks are ordered as well; differently colored rocks originated in different geological epochs and the chronology mostly gets imprinted into the non-uniformity of the sand, too.
There's really only one mechanism that would favor uniform sand: it requires one to assume that the differently colored particles are equally heavy and it requires one to shake or otherwise mix the sand for a long enough time so that the arrangement of the grains becomes chaotic. But this is one of the least important mechanisms underlying the dynamics of sand in this actual Universe – and the theory considering this mechanism as the "core" is one of the most inaccurate approximate theories of sand you may think of. And even if this shaking process were the most important one for a particular beach, it only applies in the future – it requires a long enough time for "shaking". If you consider the initial state of rocks etc., it is of course much more reasonable to expect that everything is hugely non-uniform. This expectation agrees with the evidence, too. At different depths in the Earth, you find different materials with different colors. This non-uniformity boils down to non-uniform initial conditions of the Earth as well as processes that actually favor non-uniform distributions (such as the simple fact that heavier materials want to go down).
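Here is a minimal simulation (Python, parameters chosen arbitrarily) of the point about shaking: a segregated two-color arrangement of grains only approaches uniformity after a long enough sequence of random swaps; with little or no mixing, there is simply no reason to expect uniformity.

    # Shaking as thermalization: random swaps slowly destroy an initially
    # segregated (non-uniform) arrangement of dark (0) and light (1) grains.
    import random

    random.seed(0)
    grains = [0] * 500 + [1] * 500   # fully segregated initial state

    def segregation(g):
        # 1.0 = fully segregated, ~0.5 = well mixed ("uniform")
        half = len(g) // 2
        return sum(1 for x in g[:half] if x == 0) / half

    checkpoints = {0, 100, 1_000, 10_000, 100_000}
    for swap in range(100_001):
        if swap in checkpoints:
            print(f"after {swap:6d} swaps: segregation = {segregation(grains):.2f}")
        i, j = random.randrange(len(grains)), random.randrange(len(grains))
        grains[i], grains[j] = grains[j], grains[i]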
The uniform distributions may be said to correspond to many microstates – and they consequently win in the "higher entropy contest" – but this high-entropy macrostate is just one among many different qualitatively distinguishable theories so it probably loses in the competition of "many qualitatively different theories that were fairly assigned comparable priors" (there are many versions of non-uniform hypotheses that may win). But neither of the two contests – which have opposite results – may be identified with the probability that one of the contestants is actually the right theory. If there's no genuine evidence, you can't choose a winning theory by screaming that numbers should be low or high (even though you don't actually know the answer)! Some things in the world are (nearly) uniform and others are (highly) non-uniform. There's clearly no universal answer that everything is uniform. Almost all things we care about are non-uniform; uniform objects are just an extreme approximate description of a small but nonzero fraction of the objects and situations and the uniform descriptions only hold if many conditions are satisfied. It can't be a "default assumption for all of physics", if not all of science, because it's clearly wrong in most cases.
If Sean Carroll is only able to see the process(es) or argument(s) that favor(s) a uniform sand and if he moreover believes that the resulting state arising from such processes should actually naturally occur even if no processes have occurred at all, then I must emphasize that he is just dumb beyond any reasonable imagination. But he's not the only one. There is a whole army of irrational people who got used to parroting totally wrong and scientifically indefensible claims about uniform measures etc., claims that a slightly intelligent 11-year-old schoolkid must see to be patently wrong. The underlying driver behind all this lunacy is, I believe, some kind of a political ideology – egalitarianism etc. These people don't dare to question preposterous claims such as "non-uniformly colored beaches are so insanely unlikely that they require a special investigation while a beach with perfectly mixed up grains of two colors doesn't require any" because these breathtaking delusions sound left-wing enough and anything that is left-wing, even if it is the smelliest pile of feces in the world, is just okay as an "intellectual material" they may spread anywhere.
And that's the memo.
Does This Ontological Commitment Make Me Look Fat?About one-half of the points that people make in that thread are right (and some of the people, such as Moshe Rozali, are likely to make valid points only); the other half is demonstrably wrong. It always puzzles me how so many people fail to think rationally about so many rudimentary issues.
Carroll begins by quoting an interview with Craig Callender, a San Diego philosopher. He says that positing too many universes is too high a price to pay for an explanation of the low entropy of the newborn universe. Carroll correctly disagrees that it's not a price at all, it's really a prediction of the theory; I will discuss this point later, too.
However, Callender is right when he says that one shouldn't pay a high price for a new explanation of the low entropy. The reason is that there is actually no open problem associated with this general fact at all. The interviewer was pushing Callender to an irrational mode of reasoning by these misleading comments:
... But there is nothing in the second law of thermodynamics to explain why the universe starts with low entropy. Now maybe its just a brute fact that there’s nothing to explain. But some physicists believe they need to explain it. So Sean Carroll develops an idea of a multiverse to explain the low entropy. ...We are told that some scientists may believe that they need to search for an explanation of the low entropy our world started with. However, science is not about beliefs, science is about facts and explanations. The low entropy of the initial state is a fact and another fact (one that proves the previous one) is that it trivially follows from the second law of thermodynamics, in contradiction with the first sentence above.
Why? By definition, the initial entropy of the Universe is the entropy of the earliest possible moment of the Universe we may consider. What does it mean? It means a moment such that we can no longer ask "What was there before that moment?". We can no longer ask "What were the events that led to that moment and that explained the properties of the Universe at that moment?" This is the only legitimate model-independent description of what the adjective "initial" or the phrase "beginning of the Universe" may mean.
Low initial entropy is a tautology
Now, why can't we ask "What was there before that moment?"
We can't ask this question because there couldn't have been any other moment before this particular moment that we decided to call the "beginning of the Universe". And why couldn't have been any moment before that? There had to be a law that would prevent us from reconstructing the history to even earlier moments. What could this law look like? The only law we know that has the capability of "terminating the time in the past" is the second law of thermodynamics. Even if you disagreed it's the only one, it doesn't matter: the uniqueness isn't essential.
What is essential is that the second law says that the entropy has to be a non-decreasing function of time and away from the thermal equilibrium – which is an irrelevant concept for the early Universe that was almost certainly very far from thermal equilibrium – the entropy of a connected system has to increase with time. But entropy can't be negative, either. So when we are reconstructing the history of a physical system and we reach a moment in which the entropy of this system was zero (or approximately zero, within error margins of various origins), we know that there couldn't have been any earlier moment.
A moment with a (nearly) vanishing entropy is the only moment for which this argument applies; it's the only moment that can be called "the beginning of the Universe". For any other moment, i.e. if the entropy were any other number, we could always keep on asking "What was there before that" and continue the backwards reconstruction of the history, thus falsifying the proposition that the moment we started with was the ultimate beginning. So the vanishing entropy of the newborn Universe is really a tautology; it trivially follows from the second law of thermodynamics.
The people who try to make the vanishing of the entropy look mysterious are probably making one more general mistake in their failure to reconstruct the valid stream of ideas above: they are imagining that "the beginning of the Universe" is some God-given concept that doesn't have to be verified or thought about rationally (and they also like to evolve this God-given moment to the future as well as into the past while they're using completely wrong methods to evolve it into the past – they use the method of predictions even though the right methods of retrodictions are completely different and require a form of a Bayesian analysis with subjective priors and other things). But much like with any other concept in science, we must be careful how it is really defined and whether a particular observed object or pattern or property agrees with the definition.
If we are trying to identify or locate the "beginning of the Universe" today, in 2012, we must carefully study what we mean by that moment, "and a moment that had no other moment before that" is obviously the only refinement of the definition we may think of. Once we have this refinement and as we're trying to learn more about that moment, we must think why there would be no moment before the moment declared to be "the beginning" and we will find out that the entropy is the most universal explanation, to say the least, why one can't get further into the past.
The people who think irrationally about these matters don't really think at all. They want to isolate the term "beginning of the Universe" from all of our observations and irrationally postulate that the research of the entropy of "this holy moment" should be a job that cosmologists should work on for the whole eternity, regardless of whether they know how "this holy moment" is defined and what is its relationship to future moments and a priori plausible previous moments. But the proper definition of the concept, "beginning of the Universe", and the relationship to the past and future that this definition involves automatically (with the help of the well-known second law of thermodynamics) answer the question about the entropy, too.
Any claim that we need to pay any price – e.g. flood the archives by extravagant new theories – for another explanation of the low entropy of the initial state is a sign that someone totally misunderstands thermodynamics and statistical physics. Sean Carroll certainly does. There is nothing more to explain here. Having another scheme inequivalent to the second law that would claim to dictate the entropy trends would mean that there would be a contradiction with thermodynamics and statistical physics. You can't have two different, inequivalent laws controlling the same questions.
Does the large number of worlds hurt?
So one shouldn't pay any price for such explanations of the low initial entropy; one shouldn't look for new explanations of this well-understood fact at all. We may do nontrivial research into the detailed properties of the "initial state"; but its low entropy surely isn't among its characteristics that remain mysterious. Everyone who makes it sound mysterious and who is doing "important new research" of it is doing pseudoscience for the very same reason why the researchers of the phlogiston are doing pseudoscience: they are working on alternative, non-scientific explanations of features of Nature (either low entropy of the initial state or heat in general) that have perfectly well-known scientific explanations, namely those rooted in thermodynamics and the microscopic framework that explains it, i.e. statistical physics (the explanations are the second law and the thermal motion of the molecules, respectively).
On the other hand, if there were other things to explain – e.g. the smallness of the cosmological constant which still looks mysterious (although the degree to which this mystery has been worshiped has certainly been over the edge) – it would be legitimate to consider theories that require or predict the multiverse. For those cases, Carroll, Callender, and others are right that "quantitative simplicity" (a feature of a theory that predicts that only a small number of copies of an object exist) shouldn't be considered a virtue when we decide whether the theory is more or less acceptable. If this principle of "quantitative simplicity" has ever helped the people to locate the right theories sometime in the past, it was pretty much a coincidence that it had worked.
Having a small number of visible universes (e.g. one) and having a huge number of them are very different hypotheses and various "numerical parameters" by which we may describe them have very different values. But they are two competing, qualitatively different answers to a question, and unless we can falsify one of them by the evidence – and we cannot at this point – we must assign them with comparable prior probabilities. It's a rule I often like to state that
A scientist or a person who is not prejudiced assigns comparable prior probabilities to all qualitatively different, a priori plausible hypotheses.It's an important principle and people unfortunately love to violate it on both sides.
Some people want to use some perverse form of "Occam's razor" (and they use lots of other demagogic terms) to argue that theories having a large multiverse or a large anything or a large number of any objects must be eliminated a priori, even without any evidence; they think that these theories are just immensely contrived. Other people take the opposite but equally flawed approach: they "reward" theories with large multiverses; they argue that such large multiverses have a huge number of observers so "most observers" find themselves in the large multiverse which is supposed to be the mechanism by which the multiverse wins and becomes much more likely. They think that the vast number of the observers that their multiverse picture predicts or postulates is a big enough argument that may even defeat the "risk" that the multiverse doesn't exist at all. ;-)
Note that the conclusions reached by these two camps are opposite to one another. But both arguments are utterly irrational. If the actual observations are compatible with both of these frameworks, there can't exist any valid logical consideration that could use these observations to increase the probability of one explanation relatively to the other. It's just not possible! So everything that the people talking about "generic observers" on one side or "Occam's razors" on the other side reveal in these monologues are their philosophical prejudices and their inability to look at the evidence impartially. The actual evidence remains silent about this big question.
It is only legitimate to use the "genericity" arguments favoring the "large ensembles" and "theories with large ensembles" if there is an actual, demonstrable (or likely to be true) mechanism that makes the probability of each element equally or comparably like (like the probabilities of microstates in statistical physics after thermalization – in the far enough future: it will be revisited at the end of this blog entry). But if there's no such mechanism, and there obviously isn't any reason why there should be democracy e.g. between the different, inequivalent vacua of string theory, the "genericity" arguments are illegitimate.
An analogous comment holds for "Occam's razor" or other weapons by the opposing extreme camp. Occam's razor is a valid consideration in some situations. It is legitimate to disfavor theories that make too many independent assumptions, that contain too many permanently independent elementary building blocks or too many permanently uncalculable parameters. The reason why such theories should be "punished" is that they must inevitably share the prior probability with very many similar theories of the same kind, very many other theories that have too many assumptions, building blocks, and parameters, too.
However, the multiverse isn't immediately eliminated by this reasoning because a large number of the universes in a multiverse doesn't translate to a large number of assumptions. As many people on Cosmic Variance, including Sean Carroll, correctly state, a large number of the universes is a consequence of a theoretical framework, a feature of it, not an arbitrary assumption, so Occam's razor can't be allowed to "cut necks" here. Even if the large number of universes were considered an assumption, it's just one assumption so the "punishment" would be negligible. (However, the idea that the multiverse must play a key role in the explanation of the features or initial conditions of the visible Universe is a completely different issue, an unjustified and probably incorrect assumption, as I will discuss below.)
As Carroll, Callender, and others rightfully say, one must distinguish "qualitative simplicity" (a small number of assumptions etc.) which is indeed favored by a legitimate version of Occam's razor from the "quantitative simplicity" (a large number of copies of a kind of an object predicted by a theory) which is neutral and has no impact on the plausibility of a theory as long as there isn't a way to empirically falsify the theory. The terminology, "qualitative and quantitative simplicity", is due to David Lewis, a philosopher, but be sure that we could invent such words (and maybe better words), too, so if the question were whether a philosopher would deserve a salary for the creation of such buzzwords, my answer would still be No.
Qualitative simplicity and explanations
Despite the apparent tension, I actually do feel that the people on that thread would ultimately agree that "quantitative simplicity" – e.g. the harassment of a theory that dares to predict that the number of electrons in the Universe is really high – isn't a valid argument to abandon a theory or a hypothesis. Many quantities in Nature possess high values and correct theories must agree with this fact, whether or not you like large numbers. Laymen of course love to think in similarly unthoughtful ways and anti-science hate blogs are encouraging them to make such simple conclusions but that can't change the fact that the identification of "theory predicts many things or deals with large numbers" with "theory is bad" is utterly illogical and unjustifiable.
On the other hand, the ability of a theory to explain things is surely important, and so is the number of independent assumptions that must be chosen, independent building blocks that have to be inserted, and independent unexplainable parameters that have to be adjusted. I have emphasized the adjective "independent" because a theory may postulate many new things that are shown not to be independent so you can't use their multiplicity against the theory. The Standard Model contains lots of 2-component spinor fields but they're organized into representations (color triplets, electroweak doublets, etc.) and you should count the representations as wholes (and maybe some extra unification of leptons and quarks and unification of families is achieved by a deeper framework e.g. in grand unification and string theory).
Analogously, electric and magnetic fields are different 3-dimensional vector fields but the special theory of relativity shows that one of them implies the other (one field emerges from the viewpoint of a moving inertial system), so relativity shows that the number of independent building blocks in electricity and magnetism is lower than people had thought (surely one of the reasons that made Einstein certain that he was on the right track). Also, there are many superpartners in supersymmetry but they're not independent assumptions; all of them follow from a single assumption of supersymmetry (combined with the list of known particle species).
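Just to remind the reader how relativity ties the two fields together – these are the standard textbook transformation rules (written in SI units; v is the boost velocity, γ the Lorentz factor), quoted here only for completeness:

$$\mathbf{E}'_{\parallel}=\mathbf{E}_{\parallel},\qquad \mathbf{B}'_{\parallel}=\mathbf{B}_{\parallel},\qquad
\mathbf{E}'_{\perp}=\gamma\left(\mathbf{E}+\mathbf{v}\times\mathbf{B}\right)_{\perp},\qquad
\mathbf{B}'_{\perp}=\gamma\left(\mathbf{B}-\frac{\mathbf{v}\times\mathbf{E}}{c^{2}}\right)_{\perp},\qquad
\gamma=\frac{1}{\sqrt{1-v^{2}/c^{2}}}$$

A pure electric field in one inertial frame therefore already implies a nonzero magnetic field in a boosted frame, so the two fields can't be counted as independent building blocks.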
I have also included the word "unexplainable" in the previous paragraph because if an effective theory needs many parameters to describe the reality, it's just how it works. These parameters may ultimately be explained by a deeper theory so the larger number of parameters in an effective theory isn't really a fundamental problem; in this sense, the larger number of parameters in an effective theory is just a consequence of a deeper theory. If you had another effective theory that had a lower number of parameters, you could favor it but you must be careful because once you admit (or even have some more detailed explanation) that these theories are not really fundamental, your preference could be similar to the preference of "quantitative simplicity" I mentioned above.
And it's simply not rational to prefer a theory just because it predicts a lower number of electrons in the Universe; it's up to Nature to decide how high this number should be (and it incidentally turns out to be rather high) so you must be careful to avoid unsubstantiated prejudices. In the same way, it is probably irrational to prefer an effective field theory with a somewhat lower number of soft parameters. It's up to a deeper theory to decide how many soft parameters will be needed in its effective descriptions and as long as you don't understand this deeper theory well enough, you shouldn't impose any conditions or prejudices about whether this number should be high or small.
The construction of models that fit the observed data from the bottom up is a different enterprise, of course: in that case, you typically start with "minimal" theories in the naive sense and you add fields (particle species) and interaction terms "one by one". But this semi-mechanical strategy to enrich your theories one step after another has nothing to say about the validity or relative likelihood of top-down theories that are making greater leaps. Just because you like to pursue the bottom-up strategy (one step after another) doesn't mean that you have a valid argument for or against theoretical claims that address realms where your down-to-Earth approach hasn't gotten yet.
Many worlds in quantum mechanics and cosmology
Let me say some words about the appraisal of particular theories involving many worlds. I completely agree with the first comment by Matt Leifer:
Carroll: “I’m the first to admit that there are all sorts of very good objections to the cosmological multiverse (fewer for the many-worlds interpretation, but there are still some there, too).”

I completely agree.
I find it amusing that you, as a cosmologist, believe this, but I, as someone who works on the foundations of quantum theory, see it the other way round. On my understanding, we have pretty good evidence for believing inflation, and pretty good reasons for believing that the best way of implementing inflation involves a cosmological multiverse. Those “other universes” would be unambiguously real, in just the same sense that ours is.
On the other hand, the many-worlds multiverse is only compelling if you believe that the wavefunction should be treated ontologically, as a literal description of reality, but there are many compelling arguments that suggest it should be treated as an epistemic object, more like a probability distribution. The supposed “killer argument” that wavefunctions can interfere is not compelling, because it has been shown that interference can arise naturally in an epistemic interpretation. Therefore, I would say that the many-worlds multiverse rests on much shakier ground than the cosmological one, and that is even before we start to think about probabilities.
The large size and flatness of the visible Universe is a strong piece of evidence supporting a mechanism that produces a large flat Universe, and those explanations pretty much inevitably involve scalar fields (fundamental or effective ones). We know that such fields may roll and tunnel so it's pretty reasonable to say that with some loopholes, the existence of the multiverse may follow from the cosmic inflation which does follow from the observed flatness and size of the visible Universe. String theory also implies the existence of a complicated configuration space, the landscape, although the existence of a landscape doesn't "immediately" imply that the landscape (of possibilities) is realized in an actual multiverse (composed of tangible universes); a few more steps (with a risk that they don't occur in Nature) are needed but they're at least "reasonably plausible".
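For definiteness – this is standard inflationary lore rather than anything specific to the multiverse debate – the "rolling" is usually quantified by the slow-roll parameters of the inflaton potential V(φ), with M_Pl the reduced Planck mass and primes denoting derivatives with respect to φ; inflation lasting long enough requires both of them to be small:

$$\epsilon \equiv \frac{M_{\rm Pl}^{2}}{2}\left(\frac{V'}{V}\right)^{2}\ll 1,\qquad |\eta| \equiv M_{\rm Pl}^{2}\left|\frac{V''}{V}\right|\ll 1.$$

The tunneling between different minima of a landscape is, analogously, the standard Coleman–De Luccia bubble nucleation.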
So there's actually some semi-convincing empirical framework supporting the inflationary multiverse. (Again, let me emphasize that I am not saying that this multiverse – especially the detailed pre-history of the ancestor universes that have led to ours – is a good starting point for predicting the properties of our Universe. I surely tend to think that this is way too constrained and therefore not necessarily a correct method to think about the initial conditions of our Universe.)
On the other hand, Leifer is also right that the many worlds in Everett's interpretations only look convincing to those who can't abandon the idea that the wave function is a real object that objectively exists (that's what the omnipresent buzzword "ontological" is supposed to say).
There's ample evidence that this assumption isn't valid; the wave function isn't an objective, real entity. It is a semi-finished product ready to prepare probabilistic distributions which are fundamentally subjective although many of their inter-subjective if not "objective" features may be demonstrated, too. Once you get rid of the prejudice that objects such as the wave function should be fundamentally real and objective entities, the a priori preference for frameworks such as Everett's many worlds evaporates completely. I did this step more than 20 years ago because the evidence has overwhelmingly convinced me this is the right and necessary step. I am surprised if not flabbergasted that many people fail to realize this point after decades of looking at the subject.
Jeremy in the comment #2 correctly points out that in the tree of the many worlds, one has to be ours. Because the theory offers no mechanism that could tell us something about the question which branch is ours, the extra labeling that is needed – to mark our branch – is an extra piece of assumptions in this theory, one that turns all the remaining branches into a non-explanatory "bloat", using Jeremy's words (or "unnecessary superstructure", using Einstein's words for the de Broglie-Bohm pilot wave theory which is analogous from the epistemological perspective).
The impossibility of actually deriving the generally unequal probabilities from the many worlds "interpretation" – and probabilities, a priori arbitrary numbers between 0 and 1, are what almost 100% of the knowledge in quantum mechanics is about – only underscores the insight that Everett's interpretation is a worthless pile of philosophically flavored emotions and irrationality from a scientific viewpoint.
Even if the "more philosophical" problems about the other branches – such as those in the paragraph preceding the previous one – were absent, I just can't understand why a serious scientist would ever consider a theory that (in its present form or any form imaginable in a foreseeable future) doesn't have anything to do with the numbers we actually want to be calculated – and all of them in quantum mechanics come in the form of probability amplitudes (and their functions).
If you don't have any deeper explanation for these particular numbers, the probability amplitudes, then you have no deeper explanation for anything in modern physics that makes a scientific sense because all of the information and all of the predictions in modern physics boil down to complex probability amplitudes. Is this obvious conclusion really so controversial? I don't believe it should be allowed to be controversial.
People who don't care whether their "theory" has anything to do with the probability amplitudes are simply not doing physics.
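To show what "the numbers we actually want" look like in practice, here is a minimal sketch assuming nothing beyond textbook quantum mechanics, with an arbitrarily chosen two-component state: the complex amplitudes of a state expanded in a measurement basis give the generally unequal Born-rule probabilities, and it is exactly these unequal numbers that a mere counting of branches fails to reproduce.

```python
import numpy as np

# Born rule in a two-level toy system: the probabilities are |amplitude|^2,
# and they are generally unequal; these are the numbers that any candidate
# "interpretation" actually has to reproduce.
psi = np.array([0.6, 0.8j])                  # a normalized state, <psi|psi> = 1
assert np.isclose(np.vdot(psi, psi).real, 1.0)

probabilities = np.abs(psi) ** 2             # Born rule
print(probabilities)                         # [0.36  0.64], not 50:50
```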
Moshe on the vanishing explanatory power of the inflationary multiverse
I also subscribe to this comment #4 by Moshe Rozali:
I think this is not a fair criticism. I also tend to think that what should count is not the number of ontological units (whatever that may mean), but the number of independent assumptions that goes into the theory. So, as far as the existence of other universes, or other branches of the wave function is concerned, I agree with your sentiment. But, you and others are trying something harder — to explain some features of our own universe, or our own branch of the wave function, in terms of some structure on the space of all possible worlds/branches. For that you need know many things beyond the mere existence of other universes: e.g. what those other universes look like, what made them come into being, and even what questions make sense in this context. I think that amount of uncertainty in all these questions currently (and perhaps even in principle) makes the properties of all the other universes/branches (or their distribution, which amounts to the same thing) independent moving parts of the theory. In this context, I think it is not ridiculous to complain about excessive ontological baggage, or (if you don’t care for ontology since you don’t know a sharp definition of “existing”) about the lack of “compression” in any multiverse-based explanation, these are basically isomorphic questions.

He's just using slightly different (but not too different) words for something I've said many times, too. The problem in the anthropic multiverse approaches isn't that they use theories that imply that there are many objects of a certain kind, namely many universes in a multiverse. The problem is that these folks – without any real evidence – advocate the idea that these other universes should play a central role in the explanation of the properties of our visible Cosmos.
There's no direct or indirect evidence for this claim and it is still an assumption that may very well be wrong, and I think it is wrong. If we take the risky position of adopting an assumption that may very well be wrong, we are increasing the probability that we will find ourselves on a wrong track and we should at least get something in exchange, something that partly compensates the risk of being wrong. But the anthropic explanations don't really explain anything and the amount of uncertainty concerning all of its parts makes it impossible to find any positive evidence in a foreseeable future.
So the attempts by some people to promote the anthropic thinking are just an ideological movement that has nothing to do with the evidence or logically or mathematically justifiable arguments. They want people to adopt one particular way of thinking even though it's clearly not the only possible framework for thinking about the relevant set of empirical questions, in this case the initial conditions and particle properties in our visible Universe. Clarifications of the Hartle-Hawking wave function and similar approaches are arguably a more promising route towards an explanation than the anthropic ones.
Sean Carroll responds to Moshe Rozali – with some unbelievable, hollow promises:
... But that’s a criticism of our current state of theory-building, and would hopefully be a temporary condition; if we knew the underlying theory better, that vagueness would dissipate. ...

If we knew more details, the anthropic theory would become accurate and exactly agree with the data; the anthropic thinkers would get lots of Nobel prizes, too. Cute. Except that this is just a belief, not a fact. Much more likely, if we knew the relevant features of the anthropic theories more accurately, we could much more crisply show that it doesn't work, we could unambiguously falsify it. We don't know for sure which of the scenarios is right and Carroll just adopts his "hopeful" position. He can't really separate facts and logical arguments from beliefs and hopes.
It reminds me of the people who say that a crackpot theory is "exactly like string theory" which was also studied by only O(10) people in the 1970s before the number of researchers jumped to thousands once the theory was shown to have some remarkable properties. Well, there's still a difference: for string theory it's been proven while for the crackpot theories, it hasn't. One can't treat some speculative wishful thinking about the future as facts. It's much more likely for a crackpot theory to remain on the dumping ground of science together with thousands of other crackpot theories than for it to gain a status comparable to that of string theory. It's just bad if some people don't distinguish facts and evidence from speculations and wishful thinking or if they suggest that the difference isn't important.
If some accurate and impressive evidence supporting the anthropic multiverse theory were found, sensible people would start to consider it more seriously, to say the least. But this isn't what has actually happened in this world as of 2012 so sensible people don't consider the anthropic multiverse theory as a serious explanation for anything physical at this point. Carroll tries to obscure the not-so-subtle fact that the evidence he is promising doesn't exist at this moment so no one knows whether it may be found in the future. What he is left with are prejudices, bias, and a reduced degree of scientific integrity.
You know, this is a very different situation from the discussion of the parameters in SUSY models. I wrote that the large number of new fields and parameters mustn't be counted against SUSY because all the new fields follow from one principle of SUSY, one assumption (this is a perfectly well understood point at the present); and the large number of (soft SUSY-breaking) parameters result from a more accurate short-distance theory. The accurate short-distance theory relevant for our world isn't known at this point, of course; but the fact that the low-energy soft SUSY-breaking parameters follow from this deeper theory (that may in principle calculate all the parameters) is a well-understood general fact that has been realized in practice in many semi-realistic examples.
However, there's one critical difference between the two situations: in the case of soft SUSY-breaking parameters, no one actually claims that some unquestionable empirical evidence for the picture (except for the several motivating arguments such as string theory, hierarchy problem, dark matter, and gauge coupling unification) has been found. Carroll not only wants to act as if such arguments existed in the case of the anthropic multiverse; he wants to act in this way before they actually find any clue how the predictions could actually be extracted from some detailed underlying anthropic theory. So he's making two unjustified quantum leaps and not just one.
Chicken and egg and North of the North Pole
A slightly different topic. The comment #6 by Josh is playful but it is also misguided:
Sean, I really appreciate your willingness to critique what I call “deontological arrogance”, the idea that the possibility a question may be too corrupt to give a meaningful answer is more likely than there being an explanation. This reminds me of the “North of the North Pole” answer that used to be one of the most common replies given to the layperson as to “what happened before the Big Bang?” Honest experts, I contend, recast this awkward layman’s question which is sometimes simply dismissed as a corrupt chicken-and-egg recursion argument into a more full discussion of models that potentially avoid Big Bang singularities or other proposals that may show time-like worldlines could extend beyond a 13.7 billion year old “initial state”.

The analogy between "time before the Big Bang" and "North of the North Pole" isn't just a simplifying story for the laymen. It is a rather accurate description that may, in fact, be almost exact (if we add extra dimensions). In fact, one may mention the Hartle-Hawking wave function which also assumes some singular beginning in the Minkowski spacetime but also constructs states in the Euclidean spacetime in which the neighborhood of the "ultimate beginning" is exactly as smooth as the vicinity of the North Pole. Because time is measured as a distance from this point, the analogy between "time before the Big Bang" and "North of the North Pole" becomes more than an analogy; it's actually a mathematical isomorphism. So Josh isn't right when he dismisses this comparison.
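The mathematical content of the analogy can be made very explicit with a two-dimensional toy version of the smoothing – written down here just to fix the picture, not as anyone's specific model. On a round sphere of radius R, the Euclidean "time" τ is simply the distance from the pole, and the metric

$$ds^{2} = d\tau^{2} + R^{2}\sin^{2}\!\left(\frac{\tau}{R}\right)d\phi^{2}$$

is perfectly smooth at τ = 0 even though no points with τ < 0 exist; asking what happened before τ = 0 is literally asking what lies north of the North Pole.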
Another example might be the heliocentric models of Copernicus or Kepler that lacked an explanation as to why the Sun was at the center of the universe. It really took a Newtonian perspective to explain its prominence in the solar system in terms of gravitational mass. I can imagine Copernicans and Keplerians responding with deontological assurances before Newton that the Sun being at the center was a bald fact of our universe that need not be explained any more than the question as to why the Earth had a single moon or a 23 degree axis tilt. While it was to some extent not really possible to answer the question as to why heliocentrism “worked” until Newton, the question wasn’t really corrupt and we have pretty good explanations for this question today that aren’t just “the question is meaningless”.
These models with "nothing else before the beginning" correspond to the established Big Bang cosmology. More speculatively, one may consider cosmological models with a pre-history, including the eternal inflation. It's OK, there are many possibilities to "extend the picture". However, Josh is wrong about another point. He suggests that the pre-history before the Big Bang is necessary to avoid some singularities (in the negative sense) or other unsatisfactory properties of the Big Bang cosmology. This claim simply isn't true.
No valid implication of this sort can be deduced from any empirical data and/or any well-established theory. In particular, the Hartle-Hawking wave function is a way to completely eliminate all the singular features of the Big Bang singularity while refusing any "pre-Big-Bang prehistory" at the same moment. Moreover, one must be very careful about the word "singularity" itself. It may mean various things. Some of the singularities imply an inconsistency but others don't. Some quantities may be strictly singular at the beginning of the Universe – in a certain description – but the theory involving such singularities may still be totally healthy, satisfactory, and complete. A related example: String theorists have learned how to describe the motion of strings on some singular spacetimes (orbifolds, conifolds, and others, including topology changing ones) and they have proved that the physics is totally consistent (and in many cases, equivalent to a different string description that uses a fully non-singular spacetime manifold).
Attempts to promote the concept of a pre-Big-Bang cosmology to the role of a vital tool in the struggle against inconsistencies are just an example of sloppy thinking or demagogy. There exists no scientifically justifiable link of this kind.
Josh's comparison to heliocentrism is an even more obvious piece of demagogy. There is no analogy between the question whether there was something before the Big Bang and the question whether and why the Sun sits at/near the center of the Solar System – beyond the fact that both of these questions are questions. So whether one question is legitimate and/or may be answered (and whether the answer is Yes or No) has absolutely no implication for the other. Heliocentric theories are more accurate, natural, and correct than the geocentric ones because the Sun is heavier than the Earth which means that the same forces manifest themselves as a lower acceleration of the Sun, because of F=ma.
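Just to put rough numbers on this last point – a minimal sketch using the standard rounded values of the masses and the Earth-Sun distance: Newton's third law says the gravitational force on the Sun from the Earth equals the force on the Earth from the Sun, but F=ma then gives the Sun an acceleration smaller by the mass ratio, roughly a factor of 300,000.

```python
# Same mutual gravitational force, very different accelerations (F = m a).
# Standard rounded values:
G       = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_sun   = 1.989e30     # kg
M_earth = 5.972e24     # kg
r       = 1.496e11     # m, one astronomical unit

F = G * M_sun * M_earth / r**2        # the single force acting on both bodies
a_sun, a_earth = F / M_sun, F / M_earth

print(a_earth / a_sun)   # ~3.3e5: the Sun barely accelerates, the Earth orbits it
```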
The opposite viewpoint, the geocentric one, used to be favored as a dogma for religious and philosophical reasons. All things that are fundamental should be close to us, people thought. It was wrong to believe this assumption as a dogma; but the opposite extreme dogma – everything that is fundamental must be as far from us as possible – would be equally wrong. Some important things may be close to us. It depends which things. Different questions may have different answers.
The reasons why cosmology without any pre-Big-Bang prehistory may be a viable explanation of the evolution of our Cosmos have nothing to do with geocentrism – instead, return about 3-4 paragraphs above to recall what the reasons are – so linking these two things is just illogical demagoguery.
I fully agree with another comment by Moshe, too:
I am not going to argue too hard for any ontological position, since in my current state of knowledge any question of ontology, as applied to regimes far from our daily experience, is ill-defined (or at least I cannot make sense out of it). But, I do think that the criticism of epistemological extravagance is not all that different, and I tend to sympathize with it, and also tend to think that it is not just a matter of our current knowledge but more of a matter of principle. For example, even in the context of simple toy models in which every concept is finite and calculable, it is not clear to me what a multiverse based prediction might look like (e.g. what physical principle picks the measure, and what prevents me from picking another one).

Except that I think that Moshe is being too kind here. In previous CV discussions on the foundations of quantum mechanics, he was trying to get an answer to a simple question he asked the many-worlds-interpretation and other wave-function-is-real advocates: How does their cherished "ontology" differ from the assumption that the world fundamentally follows the framework of classical physics? He hadn't gotten any answer to speak of, which may be why he says that the term "ontology" is ill-defined.
However, the actual reason why he hadn't gotten any answer is that those folks actually do believe that the world is obliged to fundamentally follow the framework of classical physics in which the "state of the system" is described by objective numbers at each time or objective numbers that describe all the events in the spacetime. The only reason why they were afraid to say so explicitly is that they dimly realized that, in the context Moshe could prepare, their position was incredibly narrow-minded and childish, indeed.
Moshe wants to suggest that one may talk about some versions of "ontology" that are compatible with the facts we know about the quantum phenomena. His comment that "ontology" is ill-defined is similar to the proposition that the validity of Genesis as a cosmological and geological theory is disputable because it depends how you interpret it etc. Except that those "pundits in the real world" who consider Genesis to be a serious cosmological model do interpret it literally or at least literally enough so that one may still easily show that the model is invalid. In the very same sense, the "pundits in the real world" who love to talk about "ontology" in the context of quantum mechanics take the classical framework for the real world as a dogma. By "ontology", they do mean that the world must be fundamentally classical. So the "pundits in the real world" are incompatible with the available scientific insights whether we talk about the Christian fundamentalists or the ontological quantum philosophers.
But I primarily wanted to endorse the last sentences in Moshe's comment: even for a completely well-defined toy model of the anthropic reasoning, no one knows how to make predictions. That's totally true. In most cases, the anthropic advocates love to talk about "subtle technical problems such as the problem of the measure" (how to count "majorities" in infinite sets of observers in a curved and complicated multiverse). They want to suggest that these "technical problems" may be resolved and the anthropic principle becomes a predictive theory right afterwards.
These technical problems with majorities of infinite sets aren't really technical details, they're a part of the problems in principle that prevent the anthropic principle from transforming itself into a functional theory. However, even if the problems with infinite sets were absent, they would still have no way to deduce predictions from an anthropic theory. It would still be meaningless as well as unjustifiable to say that we're generic observers. Such a proposition requires one to sharply count "individual souls"; and to adopt some probability distribution of "it is me" on these souls.
But both steps suffer from lethal problems. In politics, we may define who is a citizen, what his or her age or gender has to be in order to become a citizen, how many citizens Siamese twins are, and so on. But all these things are just political conventions. Each of the answers to these questions – and I could tell you thousands of them – depends on new assumptions. They're independent moving parts of your legal system (the political counterpart of a theory). If life gets more diverse, the ambiguity becomes much more dramatic.
If we're looking for generic intelligent observers in the Universe, should we count a human as 1 observer or a society of 10 trillion cells that collaborate? The cells on another planet could have the size somewhere in between our cells and our mammals and they could collaborate in various semi-loosely bound communities; would they be cells or whole observers? Should North Korea be counted as 1 entity, a communist nation with a certain structure, or a conglomerate of 24 million people? If citizens of another planet look like Intel quad-core processors, should we count the processors or the individual cores? Should we count chimps, horses, bacteria, viruses? If an extraterrestrial animal gets killed and resuscitated with some new material 1 billion times, will you count it as one life or 1 billion lives? Should you prefer the long-lived civilizations at all (i.e. do you want to count "genericity" in the whole spacetime or just one slice of the spacetime)? Each of these options may change the relative odds by many orders of magnitude, sometimes by dozens of orders of magnitude. None of the answers to the questions may really be justified by anything that makes sense so even with the dozens-of-orders-of-magnitude error margins, you're still far from being certain about anything.
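Here is a toy version of how much these "political" conventions matter, with numbers that are obviously made up and chosen only for illustration: take two hypothetical universes and count their "observers" either as whole organisms or as the cells that compose them, and the relative anthropic weight of the two universes swings by some eleven orders of magnitude.

```python
# Hypothetical illustration with made-up numbers: two imaginary universes and
# two equally arbitrary conventions for what counts as "one observer".
universes = {
    "A": {"organisms": 1e9,  "cells_per_organism": 1e13},   # few, big observers
    "B": {"organisms": 1e15, "cells_per_organism": 1e2},    # many, small ones
}

def weight(u, convention):
    """Anthropic weight of a universe under a given counting convention."""
    if convention == "count organisms":
        return u["organisms"]
    return u["organisms"] * u["cells_per_organism"]          # count cells instead

for convention in ("count organisms", "count cells"):
    ratio = weight(universes["A"], convention) / weight(universes["B"], convention)
    print(convention, "-> relative odds A : B =", ratio)

# "count organisms" gives 1e-6 while "count cells" gives 1e5: eleven orders
# of magnitude of difference coming from a pure bookkeeping choice.
```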
There's really no preferred probability distribution and that's the main point Moshe is making. The anthropic people want to introduce some "egalitarianism between observers" into the probability that "I am a particular observer". Except that it is obvious that no universal egalitarianism may be defined by the laws of physics themselves – because observers are clearly inequivalent and differently "worthy" and any convention requires one to make some arbitrary "cuts" – and even if we could define a unique system of rules for this "egalitarianism between observers", we would still have no evidence that this "egalitarian measure" is the right one.
There are infinitely many measures (infinity to the power of infinity, if you allow me to describe the large size a bit more suggestively) and if you randomly pick one of them, it doesn't mean that you actually have a good reason why you haven't picked another one. Even if one could identify an "egalitarian or uniform" measure as a canonical or special one, it would still not mean that it is the right one. To transform the "egalitarian anthropic reasoning" into a theory, you also have to supplement the theory with a methodology to deduce the preferred probability distribution from some deeper principles or mechanisms. There doesn't exist a glimpse of what these deeper principles or mechanisms could look like. In fact, it seems to be the very point and main philosophical assumption of the anthropic reasoning that there should be no deeper principles like that: the counting of observers should be the most fundamental layer of the explanations of features of the Universe we happen to be surrounded by.
So saying that "observers are equal in a multiverse" involves the need to give arbitrary answers to infinitely many political questions about the Siamese twins, viruses, and Intel processor cores. Because there doesn't exist any plausible way how these choices could be derived from a "more complete short-distance theory" (in fact, the very point of the anthropic reasoning is that the counting of observers is the fundamental layer of the probability distributions so there can't be anything more fundamental!), if I recall the example of the soft SUSY-breaking parameters, these choices must be considered irreducible and independent moving parts in the theory and the anthropic theory – once it becomes a theory – is therefore inevitably infinitely contrived and infinitely arbitrary.
Even though the anthropic advocates try to dismiss the criticism of these ambiguities that have to be decided as mere epistemological extravagance, Moshe is completely right that in order to actually convert their philosophy into a theory, one has to brutally violate the rules of the "qualitative simplicity". By adopting the anthropic reasoning, one still abandons many other, perhaps more plausible types of explanations (of the cosmological and particle physics parameters and other things), and it's just wrong to confine yourself in a straitjacket if you have absolutely no evidence that this should be done.
In the Cosmic Variance comment thread, many other commenters are offering their opinions and an increasing fraction of them is confused and confusing. Let me jump to Sean Carroll's comment #21 where he is displaying his illogic concerning the low initial entropy again:
Igor – well, there is a measure on the space of states. Defining that entropy relies on that measure. Saying that the early universe had a low entropy is saying that it is in a very tiny region of the space of states. If we hadn’t observed anything about the universe, but were told what its space of states looked like, we would have expected it to be more likely to be high-entropy than low-entropy. The fact that it’s not suggests that there is possibly more going on to things than a universe just picked out of a hat. It’s not that the measure is some sort of “correct” probability distribution (it’s clearly not), but that its failure suggests that we can learn something by trying to explain this feature of the universe.

This is completely wrong. Start with the first sentence. Carroll tells Igor that "there is a measure on the space of states". Well, a more accurate sentence is that there are infinitely many measures on the space of states (the number of the measures is really infinity to the power of infinity: each allowed density matrix gives us an allowed measure on the space of states). In this case, there also exists an "algebraically special" or "canonical" measure, namely one associated with the density matrix that is proportional to the identity (unit) matrix. (This is only possible if the number of states in the Hilbert space is finite; once it gets infinite, the distribution couldn't be normalized. There can't exist any uniform measures on infinite sets.)
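A minimal sketch of this counting in a toy finite-dimensional Hilbert space – my own illustration, with an arbitrarily chosen dimension: every density matrix defines a measure over the states, the maximally mixed ρ = 1/N is merely the one "egalitarian" choice, its von Neumann entropy ln N is the maximal value, and a pure state, the analogue of the low-entropy beginning, has entropy zero.

```python
import numpy as np

# Toy finite-dimensional "space of states": every density matrix rho defines
# a measure; the maximally mixed rho = I/N is merely one special choice.
N = 8

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-(evals * np.log(evals)).sum())

rho_uniform = np.eye(N) / N                           # the "egalitarian" measure
rho_pure = np.zeros((N, N)); rho_pure[0, 0] = 1.0     # a single microstate

print(von_neumann_entropy(rho_uniform))   # ln 8 ~ 2.08, the maximal value
print(von_neumann_entropy(rho_pure))      # 0.0, the low-entropy extreme
```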
It’s easy to find analogies. Say you’re on a beach where there are both light and dark grains of sand. If the grains are all mixed together, there’s nothing to be explained. If all the dark grains are piled in one particular location, you figure there is probably some explanation.
But it is not true that this "special" distribution is the relevant one for answering any question about distributions. Any other distribution among the infinitely many distributions is a priori a comparably likely candidate for the right expectations as the uniform one; that's the rule of "avoiding prejudices" I have previously mentioned. We only know one context – and one required mechanism – in which the "uniform measure" is the right one: thermal equilibrium. If a physical system evolves for a long enough time, it may converge to equilibrium. Due to the chaotic evolution, each microstate with the right values of conserved quantities will be equally likely in the final state. But this situation – equilibrium – isn't the right description of any and all situations. It's only the right description for a "distant enough future", for physical systems that have had enough time to reach the equilibrium by the chaotic evolution that has diluted all the non-uniformities. But without this "waiting time", the assumptions behind the thermalization mechanism are explicitly violated.
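A cartoon of the "waiting time" point, with an obviously made-up toy model of Carroll's own beach: start with all the dark grains piled on one side and let random pair swaps play the role of the chaotic evolution; the coarse-grained entropy only approaches its maximum after many swaps, and there is no reason whatsoever to expect the uniform, high-entropy configuration at the initial moment.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 grains: dark (1) piled on the left, light (0) on the right, a highly
# non-uniform, low-entropy initial state.  Random pair swaps play the role of
# the chaotic evolution; the coarse-grained entropy only approaches its
# maximum after many swaps, not "by default" at the initial moment.
grains = np.array([1] * 100 + [0] * 100)

def coarse_entropy(g, cells=10):
    """Shannon entropy of the dark-grain fractions in a few coarse cells."""
    p = np.clip(g.reshape(cells, -1).mean(axis=1), 1e-12, 1 - 1e-12)
    return float(-(p * np.log(p) + (1 - p) * np.log(1 - p)).sum())

print("before mixing:", coarse_entropy(grains))    # ~0: grains fully separated
for _ in range(5000):                              # the "waiting time"
    i, j = rng.integers(len(grains), size=2)
    grains[i], grains[j] = grains[j], grains[i]
print("after mixing: ", coarse_entropy(grains))    # close to 10 * ln 2 ~ 6.9
```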
In particular, they are maximally violated if we consider the initial state of the Universe. It is complete bullshit to think that the "default state" should be one that has no non-uniformities to start with. In fact, it is really shocking that Carroll, a self-described cosmologist, fails to understand this point. The main reason why we consider cosmic inflation is that the cosmic microwave background is observed to be nearly uniform – but the Big Bang cosmology can't offer an explanation exactly because the photons hadn't had enough time to interact with each other and achieve the uniform temperature (according to the simple Big Bang cosmology). Equilibrium may only be achieved after a long enough time in which parts of the system may interact. That requires the Penrose causal diagram to be sufficiently "tall". Whenever we encounter equilibrium without such a previous process in which the thermalization could occur (or even without a region of the spacetime in which such a process could occur at all, due to the causal restrictions), it's a mystery why the equilibrium is there.
Carroll says exactly the opposite: he thinks that things must be in equilibrium if they haven't interacted before. It's just pure bullshit and I am not willing to endorse the opinion that people who fail to see why this is bullshit are intelligent, impartial people. This is such a fatal hole in someone's ability to think rationally that it decides about my opinions about someone's rationality.
Even Carroll's example with the dark and light grains of sand on the beach – which was probably chosen because the answer should be obvious – is utterly irrational. The formation of sand and beaches is a complicated conglomerate of processes. When people knew nothing about geology and other historical sciences and nothing about physics, they couldn't have had any opinion about the uniformity or non-uniformity of sand. If one were impartial, he would consider "uniform sand" and "non-uniform sand" to be two a priori equally likely hypotheses. However, he would also observe the sand empirically and he would find out that the "non-uniform sand hypothesis" is heavily favored.
Now when we know lots of physics, geology, and other pieces of science, we know lots of mechanisms that favor "non-uniform sand" and some mechanism(s) that favor "uniform sand". Differently colored grains of sand may have different sizes and densities, so they may naturally order themselves gravitationally, thus increasing non-uniformity: heavier particles drop lower and smaller particles may fill the holes between the larger particles at the bottom, too. The sand also arises from the fragmentation of rocks and the rocks are ordered as well; differently colored rocks originated from different geological epochs and the chronology gets mostly imprinted into the non-uniformity of the sand, too.
There's really only one mechanism that would favor a uniform sand: it requires one to assume that the differently colored particles are equally heavy and it requires one to shake or otherwise mix the sand for a long enough time so that the arrangement of the grains becomes chaotic. But this is one of the least important mechanisms underlying the dynamics of sand in this actual Universe – and the theory considering this mechanism as the "core" is one of the most inaccurate approximate theories of sand you may think of. And even if this shaking process were the most important one for a particular beach, it only applies in the future – it requires a long enough time for "shaking". If you consider the initial state of rocks etc., it is of course much more reasonable to expect that everything is hugely non-uniform. This expectation agrees with the evidence, too. At different depths in the Earth, you find different materials with different colors. This non-uniformity boils down to non-uniform initial conditions of the Earth as well as processes that actually favor non-uniform distributions (such as the simple fact that heavier materials want to go down).
The uniform distributions may be said to correspond to many microstates – and they consequently win in the "higher entropy contest" – but this high-entropy macrostate is just one among many different qualitatively distinguishable theories so it probably loses in the competition of "many qualitatively different theories that were fairly assigned comparable priors" (there are many versions of non-uniform hypotheses that may win). But neither of the two contests – which have opposite results – may be identified with the probability that one of the contestants is actually the right theory. If there's no genuine evidence, you can't choose a winning theory by screaming that numbers should be low or high (even though you don't actually know the answer)! Some things in the world are (nearly) uniform and others are (highly) non-uniform. There's clearly no universal answer that everything is uniform. Almost all things we care about are non-uniform; uniform objects are just an extreme approximate description of a small but nonzero fraction of the objects and situations, and the uniform descriptions only hold if many conditions are satisfied. Uniformity can't be a "default assumption" for all of physics, let alone all of science, because it's clearly wrong in most cases.
If Sean Carroll is only able to see the process(es) or argument(s) that favor(s) a uniform sand and if he moreover believes that the resulting state arising from such processes should actually naturally occur even if no processes have occurred at all, then I must emphasize that he is just dumb beyond any reasonable imagination. But he's not the only one. There is a whole army of irrational people who got used to parroting totally wrong and scientifically indefensible claims about uniform measures etc., claims that a slightly intelligent 11-year-old schoolkid must see to be patently wrong. The underlying driver behind all this lunacy is, I believe, some kind of a political ideology – egalitarianism etc. These people don't dare to question preposterous claims such as "non-uniformly colored beaches are so insanely unlikely that they require a special investigation while a beach with perfectly mixed up grains of two colors doesn't require any" because these breathtaking delusions sound left-wing enough and anything that is left-wing, even if it is the smelliest pile of feces in the world, is just okay as an "intellectual material" they may spread anywhere.
And that's the memo.