One of the ideas I have found irresistible in my research during the last three weeks is the multiple point criticality principle, mentioned in a recent blog post about a Shiu-Hamada paper.
Froggatt's, Nielsen's, and Donald Bennett's multiple point criticality principle says that the parameters of quantum field theory are chosen to lie on the boundaries of a maximum number of phases – i.e. so that something maximally special happens over there.
This principle is supported by a reasonably impressive prediction of the fine-structure constant, the top quark mass, the Higgs boson mass, and perhaps the neutrino masses and/or the cosmological constant related to them.
In some sense, the principle modifies the naive "uniform measure" on the parameter space that is postulated by naturalness. We may say that the multiple point criticality principle not only modifies naturalness; it almost exactly negates it. The places with \(\theta=0\), where \(\theta\) is the distance from some phase transition, are of measure zero, and therefore infinitely unlikely, according to naturalness. But the multiple point criticality principle says that they're really preferred. In fact, if there are several phase transitions and the \(\theta_i\) measure the distances from several domain walls in the moduli space, the multiple point criticality principle wants to set all the parameters \(\theta_i\) equal to zero.
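To put the contrast bluntly, here is a minimal Monte Carlo sketch – the uniform prior on a single parameter \(\theta\) is my illustrative assumption, not anything derived from the principle:

```python
# The measure-zero point above, in Monte Carlo form: under a uniform
# ("naturalness") prior, a sampled parameter never lands exactly on a
# phase boundary, whereas the multiple point criticality principle
# concentrates the measure precisely there (a point mass at theta = 0).
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(-1.0, 1.0, size=10**6)  # naive uniform prior on theta
print(np.sum(theta == 0.0))                 # 0 hits: the boundary has measure zero
```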
Is there an everyday life analogy for that? I think so. Look at the picture at the top and ignore the boat with the German tourist in it. What you see is the Arctic Ocean – with lots of water and ice over there. What is the temperature of the ice and the water? Well, it's about 0 °C, the melting point of water. In reality, the melting point is a bit lower due to the salinity.
But in this case, there exists a very good reason to conclude that we're near the melting point: we can see that the water and the ice co-exist. The water may only exist above the melting point; the ice may only exist beneath it. The intersection of these two intervals is a narrow interval – basically the set containing the melting point only. If the water were much warmer than the melting point, it would quickly cool down by melting the colder ice underneath – and the ice itself can't really be above the melting point.
(The heat needed to melt ice is equal to the heat needed to warm the same amount of water by some 80 °C, if I remember correctly.)
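That figure is easy to check with standard textbook constants (the values below are my inputs, from common tables, not from the post):

```python
# Back-of-envelope check of the "80 °C" figure, using standard values
# for water: latent heat of fusion and specific heat of the liquid.
latent_heat_fusion = 334.0    # J/g, heat needed to melt ice at 0 °C
specific_heat_water = 4.186   # J/(g·K), heat needed to warm liquid water

equivalent_warming = latent_heat_fusion / specific_heat_water
print(f"Melting 1 g of ice takes as much heat as warming 1 g of water by "
      f"{equivalent_warming:.0f} °C")   # prints ~80 °C, as claimed
```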
How is it possible that the temperature 0 °C, although it's a special value of measure zero, is so popular in the Arctic Ocean? It's easy. If you study what happens when you warm the ice – starting with a body of ice only – you will ultimately get to the melting point and a part of the ice will melt. You will obtain a mixture of ice and water. Now, if you keep adding heat, the ice no longer heats up. Instead, the extra heat is used to transform an increasing fraction of the ice into water – i.e. to melt the ice.
So the growth of the temperature stops at the melting point. Instead of the temperature, what the additional incoming heat increases is the fraction of the H2O molecules that have already adopted the liquid state. Only when the fraction reaches 100% do you get pure liquid water, and additional heating may then increase the temperature above 0 °C.
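This plateau is trivial to simulate. A minimal sketch – heating 1 g of ice that starts at −20 °C, with standard heat capacities and latent heat as my assumed constants:

```python
# Minimal sketch of the melting plateau: add heat to 1 g of ice initially
# at -20 °C and track (temperature, liquid fraction). Standard constants:
c_ice, c_water, L = 2.09, 4.186, 334.0   # J/(g·K), J/(g·K), J/g

def state_after_heat(q, t0=-20.0):
    """Return (temperature in °C, liquid fraction) after adding q joules."""
    q_to_melting = c_ice * (0.0 - t0)    # heat to bring the ice to 0 °C
    if q <= q_to_melting:
        return t0 + q / c_ice, 0.0       # still pure ice, warming up
    q -= q_to_melting
    if q <= L:
        return 0.0, q / L                # plateau: T is stuck, fraction grows
    return (q - L) / c_water, 1.0        # pure water, warming again

for q in range(0, 500, 100):
    t, frac = state_after_heat(float(q))
    print(f"q = {q:3d} J: T = {t:6.1f} °C, liquid fraction = {frac:.2f}")
```

The printout shows the temperature climbing, then sticking at exactly 0 °C while only the liquid fraction grows, then climbing again – the Arctic Ocean's version of a preferred "special value".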
In theoretical physics, we want things like the top quark mass \(m_t\) to be analogous to the temperature \(T\) of the Arctic water. Can we find a similar mechanism in physics that would just explain why the multiple point criticality principle is right?
The easiest way is to take the analogy literally and consider the multiverse. The multiverse may be just like the Arctic Ocean. Parts of it may be analogous to the floating ice, parts of it to the water underneath. There could be some analogue of the "heat transfer" that forces something like \(m_t\) to be nearly the same in nearby parts of the multiverse. But the special values of \(m_t\) that allow several phases may occupy a finite fraction of the multiverse, and what varies in this region isn't \(m_t\) but rather the percentage of the multiverse occupied by the individual phases.
There may be regions of the multiverse where several phases co-exist and several parameters analogous to \(m_t\) appear to be fine-tuned to special values.
I am not sure whether an analysis of this sort can be quantified and embedded into a proper full-blown cosmological model. It would be nice. But maybe the multiverse isn't really needed. It seems to me that at these special values of the parameters where several phases co-exist, the vacuum states could naturally be superpositions of quantum states built on several classically very different configurations. Such a law would make it more likely that the cosmological constant is described by a seesaw mechanism, too.
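For concreteness, the generic seesaw mechanism is the statement that a 2×2 matrix mixing a light scale \(m\) with a heavy scale \(M\) has one eigenvalue suppressed to order \(m^2/M\). A toy numerical check – the scales are invented, and this is no claim about the actual cosmological constant model:

```python
# Generic 2x2 seesaw illustration (hypothetical scales m and M):
import numpy as np

m, M = 1.0, 1.0e6                     # light and heavy scales, invented numbers
seesaw = np.array([[0.0, m],
                   [m,   M]])
print(np.linalg.eigvalsh(seesaw))     # ~ [-1e-6, 1e6]: one tiny eigenvalue
print(-m**2 / M)                      # the standard seesaw estimate, -m^2/M
```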
If it's true and the multiple-phase special points are favored, it must be because of some "attraction of the eigenvalues". If you know random matrix theory, i.e. the statistical theory of many energy levels in nuclei, you know that the energy levels tend to repel each other. It's because a Jacobian factor – the product of the differences of eigenvalues – is very small in the regions where the energy eigenvalues approach each other. Here, we need the opposite effect. We need the values of parameters such as \(m_t\) to be attracted to the special values where several phases may be degenerate.
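The Jacobian in question is the Vandermonde-like factor \(\prod_{i<j}|\lambda_i-\lambda_j|^\beta\). A quick simulation of 2×2 GOE matrices makes the repulsion visible – the ensemble and sample size below are arbitrary choices of mine:

```python
# Numerical illustration of level repulsion: for random 2x2 real symmetric
# (GOE) matrices, the Jacobian contains prod |lambda_i - lambda_j|, so tiny
# spacings are suppressed; uncorrelated (Poisson) levels show no such dip.
import numpy as np

rng = np.random.default_rng(0)
spacings = []
for _ in range(20000):
    a = rng.normal(size=(2, 2))
    h = (a + a.T) / 2.0                  # random GOE-like symmetric matrix
    lo, hi = np.linalg.eigvalsh(h)
    spacings.append(hi - lo)
spacings = np.array(spacings) / np.mean(spacings)

print((spacings < 0.1).mean())  # ~0.008: small spacings are rare (Wigner ~ s)
# For independent (Poisson) levels, the same fraction would be ~0.095.
```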
So maybe even if you avoid any assumption about the existence of any multiverse, you may invent a derivation at the level of the landscape only. We normally assume that the parameter spaces of the low-energy effective field theory (or their parts allowed in the landscape, i.e. those draining the swamp) are covered more or less uniformly by the actual string vacua. We know that this can't quite be right. Sometimes we can't even say what the "uniform distribution" is supposed to look like.
But this assumption of uniformity could be flawed in very specific and extremely interesting ways. It could be that the actual string vacua love to be degenerate – "almost equal" superpositions of vacua that look classically very different from each other. In general, there should be some tunneling between the vacua, and the tunneling gives you off-diagonal matrix elements (between different phases) for many parameters describing the low-energy physics of the vacua (coupling constants, the cosmological constant).
And because of the off-diagonal elements, the actual vacua we find when we're careful aren't "straightforward quantum coherent states" built around single classical configurations. Very often, they may instead be superpositions – with non-negligible coefficients – of many phases. If that's so, even the single vacuum – the one in our visible Universe – could be analogous to the Arctic Ocean in my metaphor, and an explanation of the multiple point criticality principle could exist.
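A two-state caricature of this argument – with all numbers invented for illustration – shows how the ground state turns into an even superposition exactly when the diagonal (classical) energies become degenerate:

```python
# Two-state caricature of the mixing argument: e1, e2 are the energies of
# two classically distinct vacua, t is a tunneling-induced off-diagonal
# element. All numbers are invented for illustration.
import numpy as np

def ground_state_weights(e1, e2, t):
    h = np.array([[e1, t], [t, e2]])
    vals, vecs = np.linalg.eigh(h)
    return vecs[:, 0]**2              # phase weights in the lowest eigenstate

print(ground_state_weights(0.0, 1.0, 0.01))  # far from degeneracy: ~[1, 0]
print(ground_state_weights(0.0, 0.0, 0.01))  # at criticality: [0.5, 0.5]
```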
If it were right qualitatively, it could be wonderful. One could try to look for a refinement of this Arctic landscape theory – a theory that tries to predict more realistic probability distributions on the low-energy effective field theories' parameter spaces, distributions that are non-uniform and at least morally compatible with the multiple point criticality principle. This kind of reasoning could even lead us to a calculation of some values of the parameters that are much more likely than others – and it could be the right ones which are compatible with our measurements.
A theory of the vacuum selection could exist. I tend to think that this kind of research hasn't been sufficiently pursued partly because of the left-wing bias of the research community. Researchers may be impartial in many ways but the biases often show up even in faraway contexts. Leftists may instinctively feel that non-uniform distributions are politically incorrect, so they prefer the uniformity of naturalness or the "typical vacua" of the landscape. I have always felt that these Ansätze are naive and on the wrong track – and that the truth is much closer to their negations. The apparent numerical, empirical success of the multiple point criticality principle is another reason to think so.
Note that while we're trying to calculate some non-uniform distributions, the multiple point criticality principle is a manifestation of egalitarianism and multiculturalism from another perspective – because several phases co-exist as almost equal ones. ;-)