
Why competent physicists can't remain calm when seeing an apparent fine-tuning

Sabine Hossenfelder boasts that "students" are asking her what to think about major research topics in theoretical physics and cosmology:

Good Problems in the Foundations of Physics

and she likes to give them the same answer as the rubbish on her atrociously anti-scientific blog: almost none of the research makes any sense, problems aren't problems, physics should stop. Dear students, let me point out that
she is just an average incompetent layman who was allowed to pretend to be a physics researcher just because she has a non-convex reproductive organ. That's why, in the contemporary environment of extreme political correctness, many people are afraid to point out the obvious fact that she is just an arrogant whining woman who has no clue about any of these physics problems.
But every actual physicist will agree with an overwhelming majority of what I am going to say.

If you misunderstand the basic issues of the current state of theoretical physics (and particle physics and cosmology) as much as she does, you simply cannot get a postdoc job in any research group, with the possible exception of totally ludicrous, low-quality, or corrupt places. And most of the relevant people would probably agree with me that if you still haven't figured out why her musings are completely wrong, you shouldn't really be a physics graduate student, either.



She starts by saying that advances in physics may be initiated by theorists or by experimenters – so far so good – but she quickly gets to a list of 12 defining problems of modern theoretical physics (and/or particle physics and cosmology) and she says that almost none of them is really a good problem deserving research. Most of them are problems with some apparent fine-tuning similar to the hierarchy problem.



Let's discuss them one by one. We begin with the problems that are not problems of the fine-tuning type:
Dark matter: S.H. thinks it's an inconsistency between observations and theory. For this reason, it's a good problem but it's not clear what it means to solve it.
Dark matter isn't an inconsistency between observations and theory. It's an inconsistency between observations and a theory supplemented with an extra, theoretically unmotivated assumption, namely that our telescopes are capable of seeing all sources of the gravitational field.

This assumption shouldn't be called "a theory" – and not even "a hypothesis" – because there's no coherent framework for such "a theory". The assumption is ad hoc, doesn't follow from any deeper principles, doesn't come with any nontrivial equations, and doesn't imply any consequences that would be "good news" i.e. that would have some independent reasons to be trusted.

In principle, there are two possible classes of explanations for the galactic rotation curves that disagree with the simplest assumption: either general relativity is subtly wrong (that's the MOND theories), or there are extra masses that source the gravitational field (dark matter). There are good reasons why physicists generally find the second answer to be more likely – general relativity is nice and its deformations seem pathological, while there's nothing wrong with theories in which some matter doesn't interact through the electromagnetic fields.
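To see why the rotation curves point to missing mass, recall the elementary Newtonian balance for a star on a circular orbit far from the galactic center (a back-of-the-envelope sketch; the Newtonian approximation is fine at these accelerations):
\[
\frac{v^2(r)}{r}=\frac{G\,M(r)}{r^2}\quad\Rightarrow\quad v(r)=\sqrt{\frac{G\,M(r)}{r}}.
\]
If the visible matter were everything, \(M(r)\) would be roughly constant outside the luminous disk and \(v(r)\) would fall off as \(1/\sqrt{r}\). The observed flat curves, \(v(r)\approx{\rm const}\), instead require \(M(r)\propto r\), i.e. a halo with the density \(\rho(r)\propto 1/r^2\). Dark matter postulates the halo; MOND deforms the left-hand side of the equation.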

To solve the problem of "dark matter" usually means to understand the microscopic and non-gravitational properties of this new stuff.
Grand unification: There's no reason to expect any unification because 3 forces are just fine. Maybe the value of the Weinberg angle is suggestive, she generously acknowledges, and it may or may not have an explanation.
Three non-gravitational forces may co-exist at the level of effective quantum field theory but it's a fact that at the fundamental level, forces cannot be separated. In string/M-theory, all forces and their messengers ultimately arise as different states of the same underlying objects (e.g. the vibrating string in perturbative string theory). Even if you decided that string/M-theory isn't the right description of the Universe, whatever would replace it would probably share the same qualitative properties.

Grand unification is the oldest scenario for how the three forces arise from the fundamental theory: they merge into one force even at the level of effective quantum field theory. Grand unified theories may arise as limits of string compactifications. But string/M-theory doesn't make grand unification mandatory. The three forces, more precisely the three factors of the \(U(1)\times SU(2)\times SU(3)\) gauge group, may look separate in every field theory approximation of the physics. That's the case in the braneworlds, for example. But even in such vacua with separate forces, the three forces are fundamentally unified – for example, the branes where the forces live influence each other in the extra dimensions and the stabilization needs to involve all of them.
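The suggestive value of the Weinberg angle she mentions is quantitative, not a vague feeling. In the minimal \(SU(5)\) normalization of the hypercharge – the standard textbook computation – the unified theory predicts, at the unification scale,
\[
\sin^2\theta_W=\frac{g'^2}{g^2+g'^2}=\frac{3}{8},
\]
and the renormalization-group running down to accessible energies brings the value close to the measured \(\sin^2\theta_W\approx 0.23\); with low-energy supersymmetry, the three gauge couplings are known to meet remarkably accurately.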
Quantum gravity: S.H. thinks that QG cures an inconsistency and is a solution to a good problem. But there may be other solutions than "quantizing gravity".
First, there is no inconsistency between gravity and quantum mechanics. It is hard to reconcile these two key principles of physics because in combination, they're even more constraining than separately, but it is not impossible. String/M-theory is at least an example – a proof of the existence of at least one consistent theory that obeys the postulates of quantum mechanics and also includes Einstein-like gravitational force based on the curved spacetime. So the claim that there's an inconsistency is just wrong. There is only an inconsistency between quantum mechanics and the most naive way how to make Einstein's equations quantum mechanical. It is the direct "quantization of gravity" that is inconsistent (at least non-renormalizable)!
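The inconsistency of the direct quantization is visible from simple power counting, the textbook argument: Newton's constant carries a negative mass dimension, so the effective dimensionless coupling grows with energy,
\[
G_N=\frac{1}{M_{Pl}^2},\qquad G_N E^2=\left(\frac{E}{M_{Pl}}\right)^2\gg 1\text{ for }E\gg M_{Pl},
\]
and every new order of the loop expansion demands new counterterms such as \(R^2\) or \(R_{\mu\nu}R^{\mu\nu}\). Naively quantized Einstein gravity therefore loses all predictive power near the Planck scale.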

Instead, the right picture is a theory that exactly obeys the postulates of quantum mechanics while it only includes Einstein's equations as an approximation at long distances and in the classical limit. So everything that S.H. writes is upside down. "Quantization of gravity" is the inconsistent approach while the consistent "quantum gravity" is something else. And the statement that there are other ways to achieve the consistency also seems to be wrong – all the evidence indicates that there is only one consistent theory of quantum gravity in \(d\geq 4\), string/M-theory. It may have many descriptions and definitions as well as many solutions or vacua but all of them are related by dualities or dynamical processes.
Black hole information problem: A good problem in principle but S.H. thinks that it is not a "promising research direction" because there's no way to experimentally distinguish between the solutions.
The fact that black hole thermodynamics and especially the "statistical physics" of black hole microstates would be inaccessible to experiments has been known from the very beginning, when these words were first combined. It didn't mean that it wasn't a promising research direction. Instead, it's been demonstrated that it was an immensely successful research direction. Ms Hossenfelder knows absolutely nothing about it – although she wrote (totally wrong) papers claiming to be dedicated to the issue – but this changes nothing whatsoever about the success of this subdiscipline of science.

The laymen may know the name of the recently deceased Stephen Hawking. A big part if not most of his well-deserved scientific fame boils down to the quantum mechanics of black holes and the black hole information paradox.

Lots of questions were answered by purely theoretical or mathematical methods. It's possible. And the consistency constraints are so stringent that the black hole information puzzle is more or less a partially unsolved yet very well-defined problem of a mathematical character. The best theoretical physicists of the world have surely spent some time with this theorists' puzzle par excellence.
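To recall what the "statistical physics" of black holes is about: the Hawking temperature and the Bekenstein-Hawking entropy of a Schwarzschild black hole of mass \(M\) and horizon area \(A\) are
\[
T_H=\frac{\hbar c^3}{8\pi G M k_B},\qquad S_{BH}=\frac{k_B c^3 A}{4G\hbar}.
\]
The puzzle is how the seemingly thermal radiation can carry away the information distinguishing the \(\exp(S_{BH}/k_B)\) microstates. String theory's counting of D-brane microstates by Strominger and Vafa reproduced \(S_{BH}\) exactly for a class of black holes – one example of the successes of this research direction.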
Misunderstandings of quantum field theory: S.H. believes that the Landau pole and the infrared behavior of QFT aren't as well understood as advertised years ago.
This is pretty much complete garbage as well. She doesn't understand these things but that doesn't mean that genuine physicists don't understand them. The infrared behavior of QFTs has been mastered in principle and it's being studied separately for each QFT or a class of QFTs. There is no real obstacle – just new theories obviously sometimes produce new infrared or ultraviolet questions that take some time to be answered. In the same way, the Landau pole is known to be a non-perturbative inconsistency unless the theory is UV-completed in some way and physicists have a good idea which theories have the Landau pole and which don't, which theories can be completed and which can't.
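For concreteness, the simplest example of a Landau pole that she should know: the one-loop running of the QED fine-structure constant, written here for a single electron flavor (the standard leading-log formula),
\[
\frac{1}{\alpha(\mu)}=\frac{1}{\alpha(m_e)}-\frac{2}{3\pi}\ln\frac{\mu}{m_e},
\]
blows up at \(\mu_L=m_e\exp(3\pi/2\alpha)\sim 10^{277}\,{\rm GeV}\), absurdly far above the Planck scale. That's one reason why pure QED is understood as an effective theory that must be UV-completed, e.g. by its embedding into the Standard Model and ultimately into a deeper theory.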

Like in most cases, Ms Hossenfelder just admits that she has no idea about these physics issues and she wants her brain-dead readers to believe that her ignorance implies that genuine physicists are also ignorant. But this implication never works. She doesn't know anything about the existence of the Landau pole or about the infrared problems in given theories – but genuine physicists know lots about these things and others that she constantly spits upon.
The measurement problem: According to her, it's not a philosophical problem but "an actual inconsistency" because the measurement is inconsistent with reductionism.
As explained in a hundred TRF blog posts or so, there is absolutely nothing inconsistent and absolutely nothing incomplete about the measurement in quantum mechanics. A measurement is a well-defined prerequisite needed to apply quantum mechanics and it requires an observer who identifies himself, or the degrees of freedom whose values may be perceived by him. The rest of the Universe is described by the Hilbert space. There is no violation of reductionism here. Reductionism means that the behavior of the observed physical systems may be reduced to the behavior of their building blocks. From his own viewpoint, the observer isn't a part of the observed physical system, so it is perfectly legitimate not to decompose him into building blocks. Instead, it's the very purpose of the term "observer" that it is a final, irreducible entity that shouldn't be reduced to anything more fundamental because he is one of the fundamental entities whose existence must be postulated and guaranteed before the theory may be applied.

All the research claiming that the measurement problem is a real problem, paradox, or inconsistency is a worthless pseudoscience that has produced zero scientifically valuable outcomes and there is no reason to think that something will ever change about it.
Dark energy
The hierarchy problem
Particle masses
The flatness problem
The monopole problem
Baryon asymmetry and the horizon problem
These are the problems of the fine-tuning type. Hossenfelder says that none of these are problems and they don't deserve any research because all the numbers may be fine-tuned and there is no inconsistency about it.

She just proves she is a moron who completely misunderstands the scientific method. None of these examples of fine-tuning represents a true logical inconsistency but virtually no inconsistencies in natural sciences may ever be truly logical. Instead, natural sciences deal with observations and certain observations look unlikely. Inconsistencies always arise when the observed phenomena are predicted to be extremely unlikely according to the current theory, framework, model, or paradigm.

For example, the Standard Model without a light Higgs boson wasn't "logically" inconsistent with observations. Instead, the LHC observed some excess of diphoton and other final states. This excess could have been considered a coincidence. But when the coincidence becomes too unlikely – like "one in a few million" unlikely (the 5-sigma evidence) – physicists may announce that the null hypothesis has been disproven and a new phenomenon has been found.
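The translation between sigmas and probabilities is elementary Gaussian integration. The one-sided tail beyond five standard deviations is
\[
p=\int_5^\infty\frac{e^{-x^2/2}}{\sqrt{2\pi}}\,dx\approx 2.9\times 10^{-7},
\]
i.e. roughly one chance in 3.5 million that a pure fluctuation fakes the signal – the particle physicists' conventional discovery threshold.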

This is how it always works. Scientists always need to have some rules that say which observations should be more likely and which less likely, and when observations that are predicted to be insanely unlikely emerge in an experiment nevertheless, the null hypothesis is falsified.

When it comes to the parameters above – the vacuum energy (in Planck units), the Higgs mass (in Planck units), the relative deviation of the Universe from a flat one, the low concentration of the magnetic monopoles, the large baryon-antibaryon asymmetry in the present Universe, and the small variations in the cosmic microwave background temperature – all of them have very unlikely, typically very small (much smaller than one) values.

By Bayesian inference, such small values are unlikely, and this simply poses a problem that is, in principle, the same as the 5-sigma excess of diphoton states that could be explained by the Higgs boson – or any other experimental observation that is used as evidence for new phenomena in any context of natural sciences.
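The underlying logic is nothing else than Bayes' theorem. For a theory \(T\) and the observed data \(D\),
\[
P(T|D)=\frac{P(D|T)\,P(T)}{P(D)},
\]
so if a theory assigns the probability \(P(D|T)\approx 10^{-123}\) to what we actually observe, its posterior probability is suppressed by the same factor relative to any competing theory in which the observation is natural.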

Every meaningful paradigm must say at least qualitatively what the parameters of the theories may be and what they may not be. For dimensionless parameters, there must be a normalizable statistical distribution that the scientist assumes, otherwise he doesn't know what he is doing. To say the least, such a statistical distribution must follow from a deeper theory.

The fine-structure constant \(\alpha\approx 1/137.036\) should ideally be calculable from a deeper theory. But in the absence of a precise calculation, one should still ask for at least a more modest, approximate explanation – one that gives an order-of-magnitude estimate for this constant and, analogously, for the more extremely fine-tuned constants in the list above.

The problem is that even the order-of-magnitude estimates seem to be vastly wrong in most cases. The tiny values are extremely unlikely and according to Bayesian inference, tiny likelihoods of the observations according to a theory translate to a tiny likelihood of the theory itself! That's true regardless of the precise choice of the probability distribution, as long as the distribution is natural by itself. Normalizable, smooth, non-contrived distributions simply have to be similar to the uniform one on \((0,1)\) or the Gaussian around zero. The probability that a special condition such as \(|x|\lt 10^{-123}\) holds is tiny, about \(p\approx 10^{-123}\).

A deeper theory could indeed say that a dimensionless real parameter has the value \(\Lambda=10^{\pm 123}\) but every real scientist has the curiosity to ask "Why!?" If he could talk to God and God wanted to keep the answer classified, the scientist would insist: "But please, God, tell me at least roughly why such a tiny number arises." You can't really live without the question why.

So this fine-tuning problem is a problem in all the cases where Hossenfelder claims that no problem exists. And some of these problems have been given a solution that is almost universally accepted – especially inflationary cosmology, which solves the flatness, monopole, and horizon problems. The monopole problem only exists if we adopt some grand unified or similar theory that implies that magnetic monopoles exist in principle. (String theory probably says that they must exist; it's a general principle of the same kind as e.g. the weak gravity conjecture.) And once they may exist, a generic early cosmology would probably generate too many of them, in contradiction with the observations (of zero magnetic monopoles so far). That's where inflation enters and dilutes the concentration of magnetic monopoles to a tiny density, in agreement with observations.

The flatness and horizon problems were severe – almost arbitrarily severe – in the sense that the probability that the initial conditions would agree with the nearly flat and nearly uniform observations could go like \(10^{-V}\) where \(V\) is the volume of the Universe in some microscopic units. The greater the part of the Universe you see, the more insanely unlikely the flatness or homogeneity would be. This would be unacceptable, which is why some explanation – inflation or something with almost identical implications – has to operate in Nature.
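Quantitatively, in the standard FRW bookkeeping, the deviation from flatness evolves as
\[
|\Omega-1|=\frac{|k|}{a^2H^2},
\]
which grows during decelerating expansion, so the near-flatness today requires insanely tuned initial conditions. During inflation, \(H\) is nearly constant while \(a\propto e^{Ht}\), so \(N\) e-folds suppress \(|\Omega-1|\) by \(e^{-2N}\); the usual \(N\gtrsim 60\) supplies a factor of \(e^{-120}\approx 10^{-52}\), more than enough.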

In the case of dark energy and the hierarchy problem, the fine-tuning is even more surprising because the natural fundamental parameters aren't even close to zero – yet the observed values sit much closer to zero than any uniform distribution would indicate. The natural parameter must be very precisely tuned to some very special nonzero value because its finite, large part is compensated by loop effects in quantum field theory and similar phenomena based on quantum mechanics.
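The textbook estimate of this compensation in the Higgs case: the top-quark loop shifts the Higgs mass squared by roughly
\[
\delta m_H^2\sim\frac{3y_t^2}{8\pi^2}\,\Lambda^2
\]
(the precise coefficient depends on the regularization, which is why this is only an estimate). If the cutoff \(\Lambda\) is the Planck scale \(\sim 10^{19}\,{\rm GeV}\), the bare parameter must cancel the loop to roughly one part in \((M_{Pl}/m_H)^2\sim 10^{34}\) to leave the observed \(m_H\approx 125\,{\rm GeV}\). The cosmological constant is the same story with the even more extreme ratio \(\rho_\Lambda^{\rm obs}/\rho_{\rm Planck}\sim 10^{-123}\).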

For this reason, the tiny value of the cosmological constant must be considered an experimental proof of some qualitative mechanism. We are not sure what the mechanism is but it could be the anthropic selection. The anthropic selection is unattractive but it could be considered a solution of the cosmological constant problem. The constant is tiny because it can be anything, it tries all values somewhere, and no observers arise in the Universe where the value is large because these values create worlds that are inhospitable for life. That's the anthropic explanation. If it is illegitimate, there must exist another one – probably a better one – but one that is comparably qualitative or philosophically far-reaching.

The baryon problem is a problem of the opposite type, in some sense, because we observe a much larger asymmetry between matter and antimatter than what would follow from simple theories and generic initial conditions. The observed matter-antimatter asymmetry in the Universe is therefore more or less an experimental proof of some special era in cosmology which created the asymmetry. If you had a theory that naturally predicted a symmetry between matter and antimatter on average, the two would have largely annihilated each other and the probability that as much matter survives as we see would again go like \(10^{-V}\). Although it's not a "logical" contradiction, the probability is zero in practice.
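The measured asymmetry is quantified by the baryon-to-photon ratio known from nucleosynthesis and the cosmic microwave background,
\[
\eta_B=\frac{n_B-n_{\bar B}}{n_\gamma}\approx 6\times 10^{-10},
\]
a small number but vastly larger than the \(\sim 10^{-18}\) or so that would survive the annihilation in a baryon-symmetric Universe. Sakharov spelled out in 1967 the three conditions – baryon number violation, C and CP violation, and a departure from thermal equilibrium – that any asymmetry-creating era must satisfy.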

Hossenfelder says that none of those things are good research directions because we may fudge all the numbers and shut up. But that's exactly what a proper scientist will never be satisfied with. Alessandro wrote the following analogy:
A century ago somebody could have written: "Atomic Masses: It would be nice to have a way to derive the masses of the atoms from a standard model with fewer parameters, but there is nothing wrong with these masses just being what they are. Thus, not a good problem." Maybe particle masses are a good problem, maybe not.
Right. We could go further. The Universe was created by God and all the species are what they are, planetary orbits are slightly deformed circles, everything is what it is, the Pope is infallible, and you should shut up and stop asking any questions. But that's a religious position that curious scientists have never accepted. It's their nature that they cannot accept such answers because these answers are clearly no good. If something – like the observed suggestive patterns – seems extremely unlikely according to a theory that is being presented as a "shut up" final explanation, it's probably because it's not the final explanation. And scientists always wanted a better one. And they got very far. And the scientists in the present want to get even further – that's what their predecessors also wanted.

Hossenfelder doesn't have any curiosity. As a thinker, she totally sucks. She isn't interested in any problem of physics, let alone a deep one. She should have been led to Kinder, Küche, Kirche but instead, to improve their quotas, some evil people have violently pushed her into the world of physics, a world she viscerally hates and she has zero skills to deal with. She isn't interested in science – just like a generic stoner may be uninterested in science. He's so happy when he's high and he doesn't care whether the Earth is flat or round and whether the white flying thing is a cloud or an elephant. But that doesn't mean that physics or science or problems of state-of-the-art physics aren't interesting. It doesn't mean that all great minds unavoidably study these problems. Instead, it means that Hossenfelder and the stoner are lacking any intellectual value. They are creatures with dull, turned-off brains, mammals without curiosity, creativity, or a desire for a better understanding of Nature.

I find it extremely offensive that fake scientists such as Hossenfelder who are wrong about literally every single entry in this list – because they just articulate the most ordinary misconceptions of laymen who have no clue about the field – are being marketed as real scientists by the fraudulent media. This industry is operated by the same scammers who like to prevent the father of DNA from communicating his answers to the question whether the DNA code affects intelligence. It surely does, James Watson knows that, every scientifically literate person knows that, and everyone who doubts it is a moron or a spineless, opportunist, hypocritical poser.

Every competent physicist also knows that Hossenfelder's opinions on the promising research directions are pretty much 100% wrong and only serve to delude the laymen – while their effect on the actual researchers is zero.