Three days ago, I discussed a new paper by Susskind that promoted the idea that the quantum theory of black holes can be and should be rephrased in terms of complexity theory – basically a branch of computer science. It seems to me that some people who defended Susskind's view were pure computer scientists who had no idea about physics – or even the very meaning of the word "physics" – at all.
But Susskind's paper was probably not the best one to explain what is really so utterly irrational about the attempts to rebrand fundamental physics as a part of computer science. Meanwhile, David Brown asked me about the 2017 paper
Computational complexity of the landscape II - Cosmological considerations (by Denef, Douglas, Greene, and Zukowski)

I have known the three male co-authors well and I think that they're powerful minds but writing things like that is just plain stupid. The boldly phrased paper has 8 followups after 16 months, so I believe it's right to say that almost all the people in the field share my skepticism. But it's normal to express the skepticism by silence and lack of interest. However, science is really powerful in clearly proving things to be wrong – not right – and because this whole line of reasoning is wrong, it's appropriate to discuss why.
First, amusingly enough, the 2017 paper is titled as the second part of a two-part paper. That's cute except that the first part was published in 2006, more than 11 years earlier:
Computational complexity of the landscape I (Denef, Douglas)

Those who noticed the numeral "I" in the title were waiting for a "companion paper" cited as
[48] F. Denef and M. R. Douglas, “Computational Complexity of the Landscape II: Cosmological Considerations,” to appear.

Well, it was going to appear – but 11 years later and with a doubled number of authors. I think that this unexpected delay indicates that Denef and Douglas had pre-decided to write a paper with certain conclusions before they knew whether the evidence would add up. And that's just wrong.
OK, the 2006 paper shows that the problem of finding a vacuum with a tiny cosmological constant in the "Bousso-Polchinski model of a discretuum of very many random flux vacua" is NP-complete, in the usual computer scientists' sense of the term, and that's important because there's a possibility that
...even if we were to find compelling evidence that some vacuum of string theory describes our universe, we might never be able to find that vacuum explicitly.

As you can see, there are two very different assertions in that paper. One of them is very technical – namely that a problem analogous to the traveling salesman problem (which is NP-complete) is indeed analogous and NP-complete, too. The second one is that we should basically give up the search for additional details about the laws of physics. They more or less claim that the first implies the second. Does it?
The implication surely doesn't exist as a solid logical one – and their suggestion that it is strong enough evidence is pure ideology.
To be sure that you understand the meaning of the key word here, the 2000 Bousso-Polchinski paper was an early toy model for the "string theory landscape". They suggested that a stringy compactification on a qualitatively realistic compactification manifold may be decorated with one hundred or so extra integers \(K_i\) where \(i=1,2,\dots 100\), the generalized electromagnetic fluxes through non-contractible cycles (submanifolds) of the compactification manifold.
If the fluxes \(K_i\) may be assumed to be between \(1\) and \(100\), then you have about \(100^{100}\) (a googol squared, with my choice of numbers) possible values of the 100-tuples \(\{K_i\}\). The cosmological constant depends on the numbers \(K_i\) in some rather generic way (it is typically increasing) but the consequences will be similar if we simplify the dependence to something like \[
\Lambda = -1 + \sum_{i=1}^{100} f_i K_i
\] with some fixed random values of the coefficients \(f_i\). Some of the choices of \(K_i\) may accidentally produce a \(\Lambda\) that is extremely close to zero, say \(|\Lambda| \leq 10^{-122}\). But those are basically random choices of the integers that randomly produce a physically interesting result, one with a small \(\Lambda\), even though there is nothing fundamentally interesting about them.
If that is so and if you want to find the right vacuum with the small \(|\Lambda|\), you basically need to go through a majority of the "googol squared" possibilities by brute force, one by one. That can't be done in any realistic time, and that's why we could never find the right assignment of the fluxes in practice.
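To get a feeling for the combinatorics, here is a minimal sketch – my own toy script, not anything from the Denef-Douglas paper – of a scaled-down Bousso-Polchinski scan: random coefficients \(f_i\), a brute-force loop over the flux tuples, and a count of how many tuples land within a chosen window around \(\Lambda=0\). All the parameters (the number of fluxes, their range, the window) are arbitrary small values picked so that the script finishes quickly; the only point is that the number of tuples grows as \(K_{\rm max}^N\) while the tiny-\(\Lambda\) hits stay rare.

```python
# Toy Bousso-Polchinski scan (illustration only; all parameters are arbitrary).
# Lambda = -1 + sum_i f_i * K_i with integer fluxes K_i in {1, ..., K_MAX}.
import itertools
import random

N_FLUXES = 6     # the toy model in the text has ~100 fluxes; 6 keeps the scan fast
K_MAX = 8        # fluxes range over 1..K_MAX, so there are K_MAX**N_FLUXES tuples
EPSILON = 1e-4   # "tiny cosmological constant" window (nothing like 1e-122 here)

random.seed(0)
f = [random.uniform(0.0, 2.0 / (N_FLUXES * K_MAX)) for _ in range(N_FLUXES)]

hits = 0
for K in itertools.product(range(1, K_MAX + 1), repeat=N_FLUXES):
    Lambda = -1.0 + sum(fi * Ki for fi, Ki in zip(f, K))
    if abs(Lambda) <= EPSILON:
        hits += 1

print(f"scanned {K_MAX ** N_FLUXES} flux tuples, {hits} of them have |Lambda| <= {EPSILON}")
# With 100 fluxes ranging over 1..100, the same loop would have 100**100 iterations –
# the "googol squared" that makes the brute-force scan hopeless.
```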
Does it mean that it has been shown that you cannot find the right vacuum in string theory? No, because:
- It is not clear at all whether the right vacuum is a nearly generic element of some huge set of candidates – so that the number of candidates is comparable to a googol or more: the anthropic if not multiverse paradigm may be wrong and our vacuum might be rather special, e.g. the heterotic compactification to one of the simplest orbifolds
- Even if it were an element of such a huge set, it may refuse to be a generic element and some early cosmological "vacuum selection" processes may prefer an element that is also easier to be found by physicists (just like by Nature)
- Even if our vacuum were an element of a huge set and even if it were a generic element, there may exist special properties of the assignment of the cosmological constant – roughly speaking, special properties of the coefficients \(f_i\) in the model above (but that model isn't an actual accurate Ansatz describing string theory precisely!) – that allow a much faster algorithm to search for the promising options. For example, some UV/IR connections may encode the small cosmological constant into some UV properties of the string vacuum (a toy example of such a shortcut is sketched right after this list).
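To illustrate the third loophole with a toy example of my own (nothing of the sort appears in the papers), suppose the coefficients happened to be structured, say \(f_i = 100^{-i}\). Then a flux tuple is essentially a base-100 expansion of the target and a greedy, digit-by-digit choice pushes \(|\Lambda|\) down to \(100^{-N}\) after only \(N\) steps – no scan over the \(\sim 100^N\) tuples is needed. The only point is that the hardness of the search depends on the structure of the \(f_i\), and that structure is exactly what is not known for the actual string-theoretical problem.

```python
# Greedy construction of a tiny Lambda for structured coefficients f_i = 100**(-i),
# with fluxes K_i in {0, ..., 99} (a slight variant of the toy model above; the
# structured f_i are my own assumption, not a claim about string theory).
from fractions import Fraction

N = 50                     # number of fluxes; the leftover will be 100**(-N) ~ 10**(-100)
remaining = Fraction(1)    # we want sum_i f_i * K_i = 1, i.e. Lambda = -1 + sum = 0

fluxes = []
for i in range(1, N + 1):
    f_i = Fraction(1, 100 ** i)
    K_i = min(99, int(remaining / f_i))   # greedy: take the largest allowed "digit"
    fluxes.append(K_i)
    remaining -= K_i * f_i

Lambda = -remaining        # Lambda = -1 + sum_i f_i * K_i = -(leftover)
print("first few fluxes:", fluxes[:5], "...")
print("|Lambda| =", float(abs(Lambda)))   # 100**(-N), found after only N greedy steps
```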
Lots of people promoting these defeatist conclusions have presented incomplete, intrinsically ideological arguments to make you think that science is hopeless – or at least that the search for the truly deep insights about the Universe cannot succeed. It may be true but it may be false. As long as your "proof" is incomplete to the extent that loopholes exist and are perfectly conceivable, you simply shouldn't claim that you have made a big step towards proving one possible answer. Their paper basically tells you "you should overlook the loopholes" and has no evidence for it – so the paper is propaganda trying to manipulate, not a package of persuasive evidence.
NP-completeness is an absolutely inapplicable label for any calculation or decision problem within string theory
In 2006, Douglas and Denef were really addressing two very different problems – and the whole "apparent power" of their paper was based on the suggestion that these problems "are the same" even though they are not. One of these problems is a technical problem similar to the traveling salesman problem:
Decide about the number \(N\) – it was one hundred in my example – of cities or non-contractible cycles. Find the fastest algorithm that takes at most \(T(N)\) operations to be executed, where \(T(N)\) is the maximum number of steps that the program needs among the exponentially many possible values of the parameters such as \(f_i\) – the coefficients in front of the fluxes, the distances between the cities, etc. Study how this maximized \(T(N)\) scales with \(N\), its powers, and exponentials as \(N\to \infty\).

By construction, this is a standardized computer science problem which is similar to the traveling salesman problem. And indeed, it may be shown that it is "equally parametrically difficult" in the computer scientists' understanding of the equivalence; a minimal illustration of this worst-case definition of \(T(N)\) is sketched below.
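As a minimal numerical illustration of that definition – again my own toy script, using the flux model as a stand-in – one can run a simple early-stopping scan on many random draws of the coefficients \(f_i\) and record, for each \(N\), the maximum number of tuples the scan had to examine. That maximum over instances is the analogue of \(T(N)\):

```python
# Worst-case cost T(N): the maximum work, over many random instances (draws of f_i),
# of an early-stopping scan that looks for |Lambda| <= EPSILON. Toy illustration only;
# the instance distribution and all the constants below are arbitrary choices of mine.
import itertools
import random

K_MAX, EPSILON, TRIALS = 5, 1e-3, 50
random.seed(1)

def steps_to_find(n, f):
    """How many flux tuples the scan examines before the first |Lambda| <= EPSILON (or all of them)."""
    count = 0
    for K in itertools.product(range(1, K_MAX + 1), repeat=n):
        count += 1
        if abs(-1.0 + sum(fi * Ki for fi, Ki in zip(f, K))) <= EPSILON:
            break
    return count

for n in range(2, 7):
    worst = max(
        steps_to_find(n, [random.uniform(0.0, 2.0 / (n * K_MAX)) for _ in range(n)])
        for _ in range(TRIALS)
    )
    print(f"N = {n}: worst instance examined {worst} of {K_MAX ** n} tuples")
```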
But do the equivalent problems exist within string theory? Not really. Why? Because string theory is a unique theory. Its set of vacua and their properties are completely uniquely determined. The search for a vacuum that obeys some properties is a single and specific problem. This problem isn't parameterized by any \(N\) at all. For example, the number of cycles of a Calabi-Yau three-fold is believed to be bounded (by a thousand or so) which means that you cannot send any such hypothetical \(N\to\infty\) and discuss the asymptotic behavior of the "complexity" for large \(N\) at all.
On top of that, even if you decided that some value of \(N\) is fair for the "actual problem to search for a good stringy vacuum", the definition of the complexity wouldn't involve any maximization of the time over possible values of \(f_i\), the "distances between the cities", because all these constants \(f_i\) are completely uniquely determined by string theory.
In fact, all the amplitudes in all string vacua should be considered elements of a class of special functions of a new stringy kind. String theory is a unique theory much like \(\zeta(s)\) is a unique function with certain properties. So all functions that describe aspects of string theory are unique and important, obey lots of identities, and there are usually lots of simplifications and alternative ways to determine all these functions. Any suggestion that these functions simply "have to" be searched for by the stupidest, brute-force method because they're just some random gibberish is bound to be wrong. To say the least, the statement about the "gibberish" hasn't been demonstrated and it seems unlikely to ever be. The properties of string vacua weren't picked by any simple random generator – so they probably disagree with the numbers that you would get from a simple random generator.
So when you want to find a compactification with some properties, it's not the search for the "worst case scenario". Instead, it's analogous to the traveling salesman problem for a single particular distribution of the cities that the salesman should visit. And be sure, one can arrange the cities so that the shortest path through these cities is found very quickly. And you can even quickly prove that it's the shortest one, indeed.
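A concrete toy example of such an easy instance (again my own illustration, with arbitrary numbers): put the cities on a circle. For points in convex position, the optimal traveling-salesman tour is known to be the one that visits them in their angular order, so the answer is found by sorting rather than by a search, and for a small \(N\) one can confirm it against brute force:

```python
# An "easy" traveling-salesman instance: cities placed on a circle (convex position).
# The optimal tour is simply the angular order – found by sorting – and for a small N
# we can verify that against an exhaustive search. N and the coordinates are arbitrary.
import itertools
import math
import random

N = 8
random.seed(2)
angles = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
cities = [(math.cos(a), math.sin(a)) for a in angles]   # all cities on the unit circle

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % N]]) for i in range(N))

# "Clever" answer: visit the cities in the order of their angles, i.e. just sort them.
angular_order = sorted(range(N), key=lambda i: angles[i])

# Brute force over all (N-1)! tours, with the starting city fixed to remove rotations.
brute_force_best = min(tour_length([0] + list(p)) for p in itertools.permutations(range(1, N)))

print("angular-order tour length:", round(tour_length(angular_order), 9))
print("brute-force optimum:      ", round(brute_force_best, 9))
```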
Now, is the stringy problem analogous to the "worst case scenario" or to one of the "easy or easier examples" of the traveling salesman problem? Douglas and Denef didn't really have any evidence for either answer to this fundamental question. They assumed that the stringy problem is close to the "worst case scenario", and then they proudly "almost proved" that the prospects are indeed close to the "worst case scenario". Their reasoning was absolutely circular.
And I am generously overlooking the fact that even for the "worst case scenario", it hasn't really been proven that no reasonably fast algorithm exists. In particular, \(P=NP\) is still possible. But even if you decided to believe that \(P\neq NP\) is a safe enough assumption, my point is that they're making very many additional – and perhaps stronger – assumptions on top of that. Their conclusions almost trivially follow from these assumptions and they celebrate these conclusions as if they demonstrated something nontrivial. But they haven't.
The paper's role was an ideological one, a support for the defeatist attitude. Don't look for additional facts about the right theory of Nature or the right compactification. You're just a little germ who can't find anything. This ideology could have been used – and has been used – to discourage people from science at many moments in the past. Some people continued doing proper research and they have made huge progress, however. Of course the ideology "science would never make substantial progress again" was always based on some rationalization or predetermined pessimistic conclusions and the arguments always assumed some "worst case scenario". These pessimistic claims always assumed that there would be no new patterns and the remaining unknown facts about Nature would be impenetrable random gibberish. But there were always new patterns, disagreeing with the "random gibberish" assumptions. Science has repeatedly shown that these assumptions were way too strong – Nature has no reason to pay lip service to "worst case scenarios".
Jump to the 2017 paper now
OK, the 2017 paper has two more authors and assumes that the reader buys everything the two chaps wrote in 2006. But their thinking is even more unscientific than the thinking in the 2006 paper. Among other things, it's all about "simulations of the multiverse".
You know, I translated Greene's popular book on the multiverse and a chapter is dedicated to Ms Simulator – all of us may live in Her computer game. It's OK to include such a chapter in a popular book of this kind – but mostly for entertainment reasons. To think that this is really how research in cosmology may be done is too bad.
In the abstract, the Lady and Gentlemen announce that they incorporate complexity into the "measure factors" that are considered in many papers about the multiverse. It already sounds bad but the following sentence of the abstract must make you say "WTF":
By defining a cosmology as a space-time containing a vacuum with specified properties (for example small cosmological constant) together with rules for how time evolution will produce the vacuum, we can associate global time in a multiverse with clock time on a supercomputer which simulates it.

First, the authors decide to "define cosmology" (they really mean "redefine cosmology") as a spacetime containing a vacuum with specified properties. Why should "cosmology" – something that should represent the science about the Cosmos, something that exists independently of our desires – be "defined" by arbitrary properties that humans have specified?
If "cosmology" has some rules, it may also produce spacetime that do not obey these properties invented by humans. If the deepest known rules of cosmology that we have also produce spacetimes where the cosmological constant is never tiny, then these spacetimes are still products of cosmology according to the deepest known rules of cosmology. Saying that you can invalidate this principle – basically a tautology – by "defining cosmology" in your own way is utterly irrational.
You can't define whole disciplines of science to agree with random constraints that you invented. Instead, the purpose of disciplines of science is to decide whether your assumptions about the Cosmos and other things are correct. If there is a disagreement between the best theory and your assumptions, it's your assumptions that are wrong according to science.
So I think that this thinking about "defining cosmology" involves a misunderstanding of the basic logic of the scientific method. Like in so many other cases, the authors simply want to make up constraints that they find psychologically pleasing and dictate what properties the final laws of physics should obey.
But another problem with the first part of the sentence is that they think that research may be done by dividing objects into classes that obey or don't obey some cherry-picked properties. But this is a characteristic procedure for social sciences, not natural sciences. You know, social sciences may divide organisms into humans and non-humans – and assign vastly different rights to the humans than to the non-humans, despite the fact that the differences between pairs of humans are often comparable to the differences between some humans and some non-humans.
I am not saying that it's wrong to allow civil rights to humans, no civil rights to animals, and draw a thick line in between them. It's a convention that works fine for most societies. But the thick line is a social construct. Natural scientists know that nothing like that exists at the fundamental level. When a geneticist can distinguish a chimp from a human, she can also distinguish two humans from each other. The idea that some qualitative properties that may distinguish two objects are metaphysically more important than all other parameters is a pure superstition, something that no real scientist may believe. Physics and other natural sciences are quantitative, so they don't really rely on categorizing real-world objects into boxes by inventing arbitrary thick lines.
However, these superstitions are common among the fans of the anthropic principle. They divide Universes by thick lines into those that contain "intelligent beings" and those that don't. But the definition of an "intelligent being" contains a randomly cherry-picked subset of properties of humans or beings in our Universe. Why did you require some properties and not others, Gentlemen? This whole procedure is another social convention. It self-evidently cannot have any true physical significance.
If you use the existence (somewhere in the Universe) of beings that have some human properties as a condition to pick the vacua or Universes, it's just fine – because the existence of objects sharing some features with the humans is an experimentally proven fact. It's a fact simply because humans have been observed. But by describing the properties of humans in some "neutral language", you don't make your explanation less dependent on the empirical data. And to cherry-pick some properties of our Universe while "pretending ignorance" of others is just utterly irrational. Once you are allowed to use the existence of animals as a criterion to pick the vacua, you're also allowed to use the value of the fine-structure constant \(\alpha\approx 1/137.036\) or so – and all other observed facts.
You can play a game in which you challenge yourself and try to find your vacuum as accurately as you can by using just some empirically observed facts. But it's just an arbitrary game, not science. A scientist is always allowed to use all empirically known facts to refine his knowledge of the right theory and/or parameters that need to be substituted into the theory. Indeed, it's the goal of science to extract the right theory by studying the empirical facts cleverly! At the end, the set of empirical facts that are sufficient to identify the right theory may be greatly reduced. But you don't know the reduced collection of facts from the beginning – you can only determine this collection when the correct theory is found.
But I had to laugh when I read the words about "defining the cosmic time for the whole multiverse as the time shown by the simulation". What!? What can it possibly mean and why would you write such a thing in a paper posted to a physics archive? Which simulation do they discuss? What is the exact program to simulate the Universe? Does this simulation properly reflect the actual laws of physics? If it does not, why would a random caricature of physics – some computer game – be relevant for physics? And if it does, why don't you discuss physics directly instead of its simulations?
In Greene's book about the multiverse, he discussed a scenario in which he is an NPC in a computer simulation and the boss – Miss Simulator (probably George Soros with a lipstick) – decides to kick him out of the computer game because Greene says something politically incorrect. That was cute but I was assuming that he was just mocking religions. When I saw this paper with Denef and others, it seemed that he was damn serious. He wants everyone to be an NPC who just blindly worships some hypothetical Miss Simulator who is in charge of the Universe. This is not only "like" religions. It is completely isomorphic to religions.
If a programmer writes a computer game marketed as a simulation of the multiverse which has some cosmic time, it doesn't mean that her choice of the cosmic time agrees with how cosmic time works in physics. The cosmic time may be incorporated in tons of ways – some of them are more physically realistic, others less so. In fact, mere talk about the cosmic time in a simulation doesn't even imply that it makes any sense to define a universal time in the multiverse. Different patches of the multiverse may very well be mutually exclusive. By the horizon complementarity, the quantum fields in different patches may refuse to commute with each other. They don't have to "exist simultaneously" at all.
Just because you envision a would-be authoritative "programmer who created a simulation" along certain lines doesn't mean that you have any evidence that these lines are physically correct, sensible, or realistic.
Simulations and computer games may strikingly differ from reality and in most cases, they do. NPCs in computer games don't really behave like intelligent humans because they have lots of limitations. Computer games often allow things that are prohibited in the real world – such as the superluminal motion of rockets. On the other hand, computer programs are often unable to do things that are trivial to do for Nature – such as the calculation of the energy spectrum of a complicated molecule.
If you allow imperfect simulations, the imperfections may be huge and sufficient for a sensible person to see that simulations and the reality are completely different things. You may hypothetically think about some very precise representations of the laws of physics. But if you don't know something about these laws of physics, just talking about the "equivalent simulation" won't bring you any closer to the answers.
At the end, I think that the authors think like the social pseudoscientists. They think that someone – like a coder – may be placed above physics and physicists. He or more likely she studies the world – including the multiverse – by some categorization that would be enough for comparative literature, by arbitrarily defining cosmic time in some extremely stupid ways, and many other things, and physicists are obliged to take this stuff seriously.
It is pretty much exactly like the postmodern sociologists or anthropologists who want to study the scientific community using similar methods they use to study savages in Polynesia. Can't a sociologist simply stand above the physicists and understand everything that is truly important about them, their community, and their activities – much more than they understand it themselves?
Well, it's not possible. A social scientist is still a relatively clueless moron. If she weren't a moron, she could become a theoretical physicist instead of a social scientist. She may be smarter than savages in Polynesia but she's not smarter than physicists, at least the bright ones. So she's simply not standing above the physicists and by superficially looking at some people's behavioral patterns, she still completely misses the key things. The key things do depend on the validity of the theories, strength of the evidence, and the arguments. If she understands nothing about those, she can't understand anything truly important about the interactions between physicists! She's still similar to a puppy who learns the right reaction to several words used by the owner. By learning them, the puppy doesn't become a top expert in physics or neuroscience.
Denef et al. did something analogous to those sociologists or anthropologists. They envisioned some hypothetical authority, a programmer, and made guesses about her choices of how to write the program. And because She is such a divine figure in our multiverse, Her choices must be considered serious insights about physics. I am sorry, Lady and Gentlemen, but readers with an IQ above 70 still see that those are your choices, not a divinity's choices, and they see that there is no evidence that you have found any picture that makes sense. Even if that divine programmer existed, her program would still be just a simulation that could give a misleading picture of physics.
The simplest point they seem not to get is that programming, categorization, social sciences and all activities like that are emergent – they cannot possibly be fundamental in the sense of fundamental physics. This statement is tautologically true; it is true by construction. We know that animals, humans, societies, their conventions, and also computer programs have evolved from the pre-existing laws of physics. So no insight about these complex things – humans, societies, programs – can give us any reliable insights about the fundamental laws of physics. Do they really disagree with this trivial assertion?
In particular, if you pick some random conventions – basically social conventions or some conventions extracted from your arbitrary assumptions about how some simulation of a multiverse should be written – it is absolutely obvious that a measure that you "calculate" out of these conventions is just another convention. Garbage in, garbage out. In fact, you have inserted some arbitrary garbage as the starting point but you have manipulated it in some even weirder and more arbitrary way so the "measure" you ended up with must be even greater garbage than what you assumed at the beginning.
The main verdict is that there are no justified results or conclusions backed by arguments in such papers. It's just about the transformation of some garbage into another garbage. The last paragraph of their introduction says:
Finally, we make some comments about a more abstract version of this discussion, which defines the complexity class of a cosmology. Our proposal was inspired by computational complexity theory, and particularly the idea of computational reduction. Can we give meaning to questions such as “is the problem of finding a vacuum with small cosmological constant in P, NP or some larger complexity class?”

No, you can't give a meaning to such questions. As I said, finding a string vacuum isn't a problem parameterized by an adjustable \(N\) and adjustable parameters \(f_i\). But more generally, you are mixing up complexity and cosmology even though you have absolutely nothing coherent to say about the union – but you know that such a mixture will be welcomed by certain people for basically ideological reasons (it may be welcome e.g. to coders with a big ego who want to be told that by being coders, they indirectly know everything important about physics as well – and perhaps they are analogous to God). But this is very bad science.
The paper has 57 pages and one could write 570 pages to clarify why many detailed assertions in the paper are ludicrous. For example, by worshiping Miss Simulator, they claim to "solve the Boltzmann Brain problem", among others. But the "Boltzmann Brain problem" is just another pseudo-problem that arose from irrational ways to think about the Universe – ways that are completely analogous to this paper. We can easily empirically exclude the theory that we're Boltzmann Brains – and no theory that has actually been successful in science predicts that we should be Boltzmann Brains. Only completely flawed and irrational applications of the probability calculus and crackpot theories about cosmology suggest that we "should be" Boltzmann Brains.
Developing a theory that is free of the problem "the theory predicts that we are Boltzmann Brains" isn't a difficult task – you just need to throw away the stupidest possible approaches to probability and physics. Because it's not a difficult task, it's ludicrous to view the "cure for the Boltzmann Brain problem" as significant evidence that your theory of physics is valid.