
Why Penrose is one of many crackpots when it comes to inflation

When talking about cosmic inflation, Roger Penrose and many others display a complete lack of understanding of the principles of rational reasoning

Sean Carroll discusses cosmic inflation, especially the eternal one. In the context of cosmology, Carroll is a semi-confused person. Everything he was taught at the university, hasn't managed to forget, and hasn't had time to "independently" revisit is right; everything he has added to or modified in this background is pure nonsense.

In this text, I want to sketch what cosmic inflation is; why it explains what it explains; and why all the criticisms claiming that it doesn't explain those things are symptoms of the critics' fatal brain dysfunction. This doesn't prove that inflation is right; I am just saying that all existing criticisms of it are worthless and nonsensical. If the critics were using their logic consistently, this dysfunction would prevent them from understanding any scientific or rational explanation of anything, in science and in everyday life as well.




Sketching the problems that are solved by inflation

The standard Big Bang cosmology, with its expansion of the Universe dictated by power laws, beautifully describes the evolution of the Cosmos since the moments when it was seconds (or fractions of a second) old. But it leaves some questions unresolved. Work with the Planck units where all quantities are divided by the product of appropriate powers of the speed of light, Planck's constant, and Newton's constant (as well as Boltzmann's constant) so that they become dimensionless.

Then ask: if this Universe is allowed to wait for the moment when it becomes matter-dominated (no longer radiation-dominated) and an observer measures the total mass/energy of all the particles in the Universe visible to him, how much will he get? Well, by dimensional analysis, the only conceivable answer should be \(M \sim O(1)\). The total mass of the Universe should be comparable to one in Planck units. One Planck mass is something like 20 micrograms: it's tiny and clearly smaller than the actual mass of the Universe, which is over \(10^{52}\) kilograms.
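To get a feel for the hierarchy, one may convert the numbers with a few lines of code; the constants below are standard SI values, and the \(10^{53}\,\mathrm{kg}\) figure is just an order-of-magnitude stand-in for the mass quoted above:

```python
import math

# Standard SI constants (not taken from the article)
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s
G = 6.67430e-11         # m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)   # ~2.18e-8 kg, i.e. ~20 micrograms
m_universe = 1e53                    # kg, order of magnitude only

ratio = m_universe / m_planck        # the Universe's mass in Planck units
print(f"Planck mass: {m_planck:.2e} kg")
print(f"Universe mass in Planck units: {ratio:.1e}")  # ~5e60 -- far from O(1)
```

The dimensionless mass comes out around \(10^{60}\) rather than \(O(1)\), which is exactly the kind of "unnaturally large number" the text is describing.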

Why is the Universe so massive? Where did the huge number determining the mass in natural units come from? In the same way, where did the large radius of the visible Universe at a transition point – relative to the tiny Planck length – come from? We may also observe the curvature of the spatial slices of our spacetime at a fixed time (measured e.g. by the local temperature of the cosmic microwave background). Shockingly enough, this spatial part of the curvature ends up much lower (flatter) than the curvature of the spacetime as a whole; you could think that such a fact morally contradicts relativity. Where does this hierarchy come from?

Also, one may ask what the number of magnetic monopoles in the visible Universe is. If this part of the Universe had ever been very hot – like GUT-temperature hot – it should have created lots of magnetic monopoles and topological defects of similar types (not excessively far from the number of ordinary particles). But we see almost none: assuming that you believe me that the fundamental theory allows them to exist, where did they go? Why is the number of observed magnetic monopoles, cosmic strings, and other objects so much smaller than the most straightforward prediction of the Big Bang cosmology extrapolated to truly early moments of the life of our world?

I have deliberately formulated all these problems as "hierarchy problems". All of them may be reduced to the observation that some observable quantities describing the Universe, when converted to the most natural units, end up being nonzero numerical constants that are nevertheless much lower than one (if a "very large" number is discussed instead, invert it). If the Big Bang cosmology were really explaining the "beginning", all the nonzero dimensionless numbers should be of order one. If you have a natural probabilistic distribution for a dimensionless number, the probability that it ends up in a very special place, like in the very close vicinity of zero, is extremely small – proportional to the length of the interval where you require the number to appear. For this number to be small is unlikely.
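The claim that the probability is proportional to the length of the interval can be checked with a trivial Monte Carlo sketch (the cutoff \(10^{-3}\) is an arbitrary illustrative choice, not a number from the text):

```python
import random

random.seed(0)
N = 1_000_000
eps = 1e-3  # the "very close vicinity of zero"

# Draw a dimensionless number uniformly from [0, 1] and ask how often
# it lands within eps of zero: the frequency tracks the interval length.
hits = sum(1 for _ in range(N) if random.random() < eps)
print(hits / N)  # ~1e-3, i.e. proportional to eps
```

For a uniform prior, landing within \(10^{-60}\) of zero would correspondingly have probability \(\sim 10^{-60}\): that is the quantitative content of the hierarchy problems above.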

So all these observed numbers that are problematic give rise to hierarchy problems: according to the Big Bang cosmology or its most straightforward extrapolation, the smallness (or, equivalently by inversion, largeness) of the observed dimensionless parameters describing the Universe and mentioned above is very unlikely. When something – the value of an "index" describing your object or situation – is very unlikely according to your hypothesis (some calculated number is extremely large or extremely small even though it should be fundamental), your hypothesis is in trouble. You need to improve it, add something to it, or replace it (or at least abandon it). You should be looking for an explanation. You should look for a more detailed theory where the observed "unusually small or large" values are likely.

Why does inflation solve them?

Inflation is an outburst of exponential expansion of the Universe that occurred when the Universe was something like \(10^{-30}\) seconds old. During a very short time, the linear dimensions of the Universe grew exponentially and they increased by a factor of \(\exp(60)\) or more. This extra era inserted into the CV of our Universe explains why it is so large (in its dimensions), and because the Universe also came equipped with a nonzero energy density that got converted to particles when inflation ended, we also explain why the mass of the observable Universe today is so much higher than the natural unit of mass, the Planck mass.

The expansion of the Universe dramatically diluted magnetic monopoles, cosmic strings, and other topological defects, explaining their present low density. In the same way, one may also say that the spatial curvature dramatically decreased when the Universe was inflating and flattening. We explain all the mysteries described in the previous section (and also get a remarkable explanation for the detailed shape of the WMAP spectral curve as a bonus prediction). What does it mean that we explain them?

Well, the parameters with large values we mentioned – such as the total mass \(M\) of the "dust" (particles slow relative to the speed of light) in the observable Universe measured at the moment when the total energy stored in mass beats the total energy stored in radiation for the first time – may be calculated, according to the inflationary rules, from other, more fundamental parameters \(P_i\). The calculations involve exponentiation: you get relationships such as
\[ M = \exp[f(P_i)] \] where \(f(P_i)\) is some natural function of the more fundamental parameters \(P_i\). For much more reasonable values such as \(f\sim \pm 100\), you get the exponentially large masses, linear sizes, and exponentially small spatial curvatures and densities of magnetic monopoles and other topological defects.
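As a numerical illustration (the \(10^{61}\) figure is just the order of magnitude of the Universe's mass in Planck units quoted earlier), the logarithm of the huge observed number is a perfectly reasonable, order-100 input:

```python
import math

# The observed "unnatural" number: the Universe's mass in Planck units
M = 1e61
f = math.log(M)        # the fundamental-parameter side of M = exp[f(P_i)]
print(f"f = {f:.0f}")  # ~140: an entirely natural O(100) number
```

Exponentiation maps natural inputs to hierarchically large outputs, which is why a hierarchy of sixty orders of magnitude stops being a fine-tuning.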

That's what it means to explain an "unnaturally small" or "unnaturally large" number in physics.

How does inflation achieve the exponential expansion? Well, it postulates that there is some additional scalar field (or several scalar fields) with a potential having a maximum and a minimum. At the beginning, the scalar field(s) was/were sitting near the maximum of the potential energy (density). At those moments, the potential energy (because it's a Lorentz-invariant vacuum energy density) behaves as a positive cosmological constant which forces the spacetime to resemble de Sitter space. It's much like the de Sitter space today except that the relevant vacuum energy and the relevant curvature in the inflationary case was greater by dozens of orders of magnitude than it is today. So the doubling of the Universe's size took \(10^{-30}\) seconds rather than billions of years.

When you try to determine where the temperature was equal to a constant in this de Sitter space, you will find out that this de Sitter space naturally comes sliced by flat slicing. In the flat slicing, the geometry of de Sitter space may be written as
\[ ds^2 = -dt^2 + \exp(2t/\alpha) (dx^2+dy^2+dz^2) \] You see that it's the normal Minkowski metric with a modification: while the temporal dimension \(t\) is not changed, the proper distances measured in the spatial directions \(x,y,z\) are exponentially increasing: the Universe exponentially grows. If the coordinate \(t\) changes by \(60\alpha\), which is a very short time (\(\alpha\) is some microscopic time scale, and it's really the proper time because the temporal term in the metric isn't modified in these coordinates), you will guarantee that the proper distances in the directions of the coordinates \(x,y,z\) will increase by a factor of \(\exp(60)\), which is enough to solve the various problems above.
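A quick sketch of the arithmetic behind those 60 \(e\)-foldings (with \(\alpha\) set to one, since only the ratio \(t/\alpha\) enters the metric):

```python
import math

alpha = 1.0  # the microscopic time scale from the metric, in its own units

def scale_factor(t):
    # flat-slicing de Sitter metric: proper spatial distances grow as exp(t/alpha)
    return math.exp(t / alpha)

# growth of linear dimensions after 60 e-foldings
growth = scale_factor(60 * alpha) / scale_factor(0.0)
print(f"linear growth: {growth:.2e}")    # exp(60) ~ 1.1e26

# volumes, and hence e.g. monopole number densities, change by the cube
dilution = growth ** 3                   # exp(180) ~ 1.5e78
print(f"volume dilution: {dilution:.2e}")
```

The cube of the linear growth is the factor by which any pre-existing density of monopoles or defects gets diluted, which is the quantitative content of the monopole solution above.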

The scalar field sits near the maximum – an unstable point – for a long enough time so that those 60 \(e\)-foldings may be achieved (an \(e\)-folding is the time or the corresponding process after which the linear dimensions increase \(e\approx 2.71828\) times). It doesn't happen for all potentials but it happens for a significant portion of the potentials you may think of: it's surely not the case that it only works for a \(10^{-100}\) fraction of the potentials, and this modest conclusion is enough to claim that inflation is a huge improvement. In some measure – e.g. by counting models in string theory – you might say that at least 1% of the maxima of the would-be inflaton scalar fields (or many more than that) will be enough to produce a sufficiently long era of inflation. So the probability that you get roughly the observed figures for the mass, size, flatness, low concentration of cosmic strings, and so on is of order one percent or several percent. It is no longer \(10^{-100}\) as it would have been with the Big Bang cosmology.

The scalar field eventually rolls down near the minimum of the potential where you have a stable point (it is there today). The energy difference – which was mostly acting as the cosmological constant driving the rapid inflationary expansion – is converted to the inflaton field's kinetic energy and then, because of its interactions with other fields, it is converted to the energy of newly produced particles in the final stage of inflation known as reheating. So inflation gives you a long enough exponential expansion followed by the conversion of the capital to a nicely large, flat Universe equipped with lots of particles (which are going to be used to build galaxies) and a small number of exotics. That's exactly what you need.

The initial conditions are natural if not very natural – a small region of space where the inflaton scalar field happens to sit near the maximum (not necessarily too close), plus the assumption that the potential for the inflaton obeys some conditions such as the slow-rolling conditions which are not too unlikely. In some counting, they're really generic if not inevitable.

If you don't "care" about naturalness and you're ready to say that the Universe just evolved to be large, heavy, nearly flat, and nearly free of monopoles just because Nature wanted it this way, you won't ever need or appreciate inflation. But this approach of yours is equivalent to the opinion (of Bishop Berkeley?) that the fossils of dinosaurs etc. were found on Earth simply because God created them and placed them under the soil some 6,000 years ago. Most sensible people are not satisfied with this explanation because it needs to make too many unnatural – and therefore unlikely – assumptions about some events or conditions in a particular epoch of history. For the same reason, reasonable people who study cosmology think that the Big Bang power-law expansion can't be the whole story. An ancient Earth combined with geology and evolution makes the observations as natural as inflation makes the observed properties of the Universe mentioned above natural.

Eternal inflation, tunneling in the landscape

So the places where the inflaton is near the maximum are inflating; the places where it's near the minimum of the potential energy are those where inflation has already stopped. As Sean Carroll correctly explains, inflation never quite stops. The inflaton wants to approach the minimum everywhere so you might think that the fraction of the space where the inflation has already stopped is increasing. However, you shouldn't forget that the places where the inflaton is still near the maximum (where the inflation hasn't stopped yet) are exponentially expanding (because the inflation hasn't stopped yet), so they are pushing the ratio in the opposite direction and they're trying to encourage the regions where inflation still continues.

In his latest popular book, The Hidden Reality, which does a great job in explaining many issues in modern cosmology, Brian Greene compares the situation to the fight against some pandemics or pests or whatever it was. They copulate and exponentially expand while they're being beaten by public health officials. It's not clear who wins: it's a brutal fight. What actually happens is that peaceful regions where the inflation has already stopped are separated by inflating, expanding regions where inflation still continues. Within those regions, the inflation stops at various places but not others, and so on.

This is a simple picture of inflation where the inflaton is essentially a continuous function of the spacetime coordinates. Nevertheless, it's already enough for the birth of the "pocket Universes", the inflating bubbles that are separated by regions where inflation is already over. The geometry of such a spacetime is very complex and hugely differs from the simple flat spacetime of 1905: the effects of general relativity are profound.

If you want to get to the state-of-the-art picture of inflation, you must also appreciate the fact that the inflaton may be discontinuous: it may tunnel to other places of the configuration space, a topic that was sketched in the article about cosmic catastrophes as well. This allows the bubbles in the inflating Universe to have different local environments (with different spectrum and masses of elementary particles and different interactions and their strengths) – to have fields that occupy different regions of the "landscape" or their "configuration space" – and that's how you produce the diverse multiverse with many non-equivalent chances for life to emerge.

We may be in one of these bubbles. Whether the existence of the "parent bubbles" is real in the most relevant physical sense, and whether it may be physically useful to study such "parent bubbles" of ours, remains controversial. My answer is "probably no" to the second question and "we don't know" to the first. In one of the sections below, I will discuss the "measure problem" and the "anthropic principle" as well.

Why low entropy at the beginning is never an enigma

This section is the section that gave the name to the whole blog entry but I will try to be brief. Decades ago, Roger Penrose raised an objection. He complained that inflation isn't any progress because it requires a "special state" at the beginning. Well, I have explained that the initial state was natural, so how could it be "special"?

Well, the real problem that Roger Penrose has is that the initial state for inflation was and had to be a low-entropy state. One way to summarize why Penrose has been utterly irrational is to say that he must completely misunderstand the second law of thermodynamics or he must misunderstand that physical systems are almost never at equilibrium. The second law of thermodynamics says that the entropy of the initial state is never greater – and is almost always strictly smaller – than the entropy of the final state. Just to be sure: indeed, saying that \(A\) is smaller than \(B\) is completely equivalent to saying that \(B\) is greater than \(A\) even though I feel that Penrose and many others must have a psychological problem with this simple assertion as well.

Indeed, the total entropy of a patch of the Universe was rapidly increasing during the inflation; equivalently, the initial entropy had to be much smaller than the final entropy. Is it a problem? Obviously, if you say that the increase of the entropy during inflation was a problem that makes inflation unconvincing or whatever negative adjective you choose, you should consistently say the same thing about any other process in the Universe – any other process described by science, whether or not it explains something – because the total entropy increases in all processes that may ever occur (except for those where the entropy is already maximized and constant, some equilibrium situations).

Cosmic inflation is "just" another era in the history of the Universe whose existence makes the observed largeness, flatness, high mass, low density of exotics etc. of the current Universe understandable because those quantities may be calculated from more fundamental, pre-inflation quantities (such as parameters of the inflaton potential) and the required values of the more fundamental quantities are natural: they are not fine-tuned. That's why inflation is another explanatory victory in science.

Let me mention one analogy: I could obviously use any explanation in science to make the same point but I will choose Darwin's evolution. We currently observe billions of organisms (if not trillions: insects) belonging to millions of species. Each of the organisms has something like billions of bases in the DNA and many properties. People may remember tens of thousands of words, to say the least, and can see (and evaluate) lots of pixels with their eyes. Try to express some miracles of life in a quantitative way; when you do so, you will get quantities whose "natural value" in a world that could have occurred (a lifeless world) would be tiny but they are large on Earth. We have many copies of many species that have much longer DNAs and that are much nicer, brighter, more skillful etc. than what you would expect "naturally", without any "intelligent design".

So you encounter lots of apparent "fine-tunings" and "hierarchy problems" and dull science used to be "clearly" incapable of explaining these miracles of life. That's why the people, until very recently, automatically assumed that there had to be an intelligent Creator. Darwin's evolution didn't make religion impossible but it made irreligion possible, as Steven Weinberg said. It gave us an explanation of all the large numbers. You may start with an inanimate Earth with some simple organic compounds (amino acids) that may be easily created. Chemical processes combined with emergent processes such as the reproduction of DNA and natural selection will ultimately lead to lots of skillful and complex life forms. The outcome that looked unnatural (probably not following from the dynamical processes in Nature) was suddenly identified as a natural one (because we learned something about the laws of Nature that was previously unknown).

If Roger Penrose were consistent, he would dismiss evolution as a failed theory because it requires the initial conditions of the Solar System (plus the radiation emitted by it) that have a lower entropy than what we get at the end: every process that is necessary for evolution of life to proceed (for example, sex) leads to the increase of the total entropy (the friction is nonzero and produces some heat etc.). That proves that the initial state of "everything" has a lower entropy than the final state which Penrose considers a "problem".

But it is not a problem: it is just the second law of thermodynamics in action – the entropy never decreases and almost always increases – an insight that every undergraduate student of physics should be totally sure about. This law is completely universal: it holds for sex, natural selection, cosmic inflation, or any other process in the Universe as long as the number of degrees of freedom is much higher than one. The "visual content" may be different and the formulae for the entropy differ as well; but it's still true that the entropy goes up which means that it was lower at the beginning. When you try to study as distant past events as you can, you will encounter the total entropy of the Universe that is ever lower. This is always true, it is not a problem, and it is – on the contrary – something we can prove by elementary statistical or logical arguments.

What all these deeply confused people assume is some particular "measure", namely that microstates of a physical system should be "generic" (high-entropy) at all times. But this is complete bullshit. Any observation – a single observation – of the reality is enough to falsify this preposterous conjecture. As long as one is thinking scientifically, this simply closes the story. The hypothesis that the states of the Universe are always maximizing the entropy is totally absurd and indefensible. It is junk science, complete nonsense, a sign of a profound brain dysfunction of anyone who has ever proposed such a "law".

What's true is that "finite enough" systems that are allowed to evolve for a "long enough" time will ultimately be led to the maximum entropy they may have; the chaotic evolution ultimately makes every allowed point of the phase space (or state) equally likely. In the far future, such systems will be "generic" in the sense of statistical mechanics. They will maximize the entropy. However, it's equally true that the long waiting – when you're strict, an infinitely long waiting (but at least waiting for a time comparable to the so-called "thermalization time") – is necessary for a physical system to approach its high-entropy configuration. It simply doesn't work at "any time" and it surely doesn't work for the "initial state".
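The approach to the maximum entropy – and the fact that it takes a thermalization time rather than holding at "any time" – can be illustrated with the standard Ehrenfest urn model (a generic toy system chosen for illustration, nothing specific to cosmology):

```python
import math
import random

random.seed(1)
N = 1000   # "many degrees of freedom": N particles in a box with two halves
left = N   # low-entropy initial state: every particle in the left half

def entropy(n_left):
    # coarse-grained entropy: log of the number of microstates with
    # n_left particles on the left, i.e. log of a binomial coefficient
    return math.lgamma(N + 1) - math.lgamma(n_left + 1) - math.lgamma(N - n_left + 1)

S_initial = entropy(left)
# Ehrenfest-urn dynamics: repeatedly pick a random particle and move it
# to the other half; this is a reversible microscopic rule
for step in range(20 * N):
    if random.random() < left / N:
        left -= 1
    else:
        left += 1
S_final = entropy(left)
print(S_initial, S_final)  # climbs from 0 toward the maximum ~ N*ln(2)
```

The entropy starts at zero, rises, and only after many steps (the thermalization time) hovers near its maximum: generic, high-entropy states describe the late-time behavior, not the initial state.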

Quite on the contrary, as the second law of thermodynamics guarantees, the entropy of the initial state is low and because the non-negativity of the total entropy is the only thing that prevents us from reconstructing ever more distant moments in the past, we may say that the truly initial state of the Universe really had to have a vanishing entropy. There's nothing wrong with that. The natural initial entropy of the Universe is either zero or a number of order one; a value comparable to the present entropy, \(10^{105}\) or \(10^{120}\) if we include the cosmic horizon, would be totally unnatural for the same reason why superlarge numbers are always unnatural. Roger Penrose's definition of "natural" and "unnatural" is upside down.

Why you should look for no new "explanations" of the arrow of time, using inflation or anything else

What Roger Penrose and others have effectively done is to misunderstand – or, using their self-confident viewpoint, "deny" – the second law of thermodynamics. And because inflation is just another process that agrees with the second law of thermodynamics, and they dislike it for some completely different reasons (e.g. that it was discovered by their contemporaries and not by themselves), they decided to use inflation's adherence to the second law as an argument against inflation.

But this is cherry-picking. They should admit that their logic or brain prevents them from believing any explanation in science because all explanations in science involve processes in which the total entropy increases. The very second law may be easily proved by general methods – and Boltzmann did it in his H-theorem more than a century ago – and this just totally closes the question. Physics has studied many other physical systems with different configuration spaces or Hilbert spaces and different Hamiltonians but the second law has been valid for all of them and it will hold for all the future systems as well. It's a universal, and in some sense trivial, result of logical or statistical reasoning.

Indeed, Boltzmann's proof of the H-theorem uses some past-future asymmetry, the logical arrow of time, as a starting point or a necessary prerequisite. But the logical arrow of time is a part of science and it may never be separated from it. The logical arrow of time says that if event \(B\) (or a property of an object at time \(t_B\)) is a direct consequence of event \(A\) (or a property of an object at time \(t_A\)), then \(B\) must belong to the future light cone of \(A\), thus introducing a future-past asymmetry into the logical reasoning applied to any physical question.

This contrived statement just says that the future evolves from the past and it cannot work in the opposite direction. All unambiguous predictions, including unique well-defined probabilistic ones, only allow us to determine the odds in the future from those in the past (predictions). The opposite reasoning needed for a "retrodiction" requires "logical [e.g. Bayesian] inference" because the initial states we may want to determine are competing "hypotheses". These "retrodictions" will always inevitably depend on arbitrary, subjective priors. There is no unique way to calculate the past from the present or the present from the future. The latter thing would be useless, anyway, because we don't know the future. But the first thing is impossible, too. The method to make "retrodictions" is completely inequivalent to (and more indirect than) the method to make "predictions".

In particular, the squared probability amplitudes in quantum mechanics may only be used to predict the properties of the system at a later moment from the properties at an earlier moment. It simply doesn't work in the opposite direction. When talking about uncertain propositions or ensembles of microscopic states, the probability of a transition has to be summed over the final microstates ("OR" applied to outcomes means "addition") but averaged over initial ones ("OR" applied to initial states means that the prior probability must be divided between all alternatives). This asymmetry between the past and the future, one that has been discussed in dozens of TRF articles, is a part of the common sense, it is totally essential for any rational reasoning about the world, and I will never hide that those who aren't capable of understanding this totally fundamental point suffer from some flagrant dysfunction of a key part of their brain, whether or not their name is Roger Penrose, Brian Greene, or Sean Carroll.
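The asymmetry may be demonstrated on a toy two-state system (the transition matrix is invented purely for illustration): the forward prediction is unique and prior-free, while the retrodiction visibly changes when the prior over initial states changes.

```python
# Transition probabilities for a toy 2-state system: T[i][j] = P(j at t+1 | i at t)
T = [[0.9, 0.1],
     [0.5, 0.5]]

def predict(p_initial):
    # prediction: unique, prior-free forward evolution of the probabilities
    return [sum(p_initial[i] * T[i][j] for i in range(2)) for j in range(2)]

def retrodict(observed_j, prior):
    # retrodiction: Bayes' theorem, inevitably weighted by a prior over pasts
    post = [prior[i] * T[i][observed_j] for i in range(2)]
    z = sum(post)
    return [x / z for x in post]

print(predict([1.0, 0.0]))         # [0.9, 0.1] -- no prior needed
print(retrodict(0, [0.5, 0.5]))    # one answer for the past...
print(retrodict(0, [0.01, 0.99]))  # ...and a different one with another prior
```

The forward direction needs nothing but the dynamical law; the backward direction cannot even be stated without a subjective prior, which is the past-future asymmetry of logical inference described above.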

Really, I can't believe that someone who has studied physics for years didn't manage to notice that the mathematical logic applies differently to the past and the future so that any rational reasoning about physics inevitably involves a past-future asymmetry that doesn't require any additional "explanation", especially not a cosmological one (cosmology has nothing to do with the reasons why the arrows of time exist; cosmology is just one among hundreds of scientific disciplines in which the laws of thermodynamics hold and the arrows of time exist).

Other indefensible dogmas involving "measures"

One of the ramifications of the dysfunctional reasoning about the initial state – namely the idea that it should be "generic" in the sense of statistical physics, i.e. it should maximize the entropy – is a special example of various arbitrary "measures" that have become popular especially among the people promoting the anthropic principle.

As Moshe Rozali says in the comment section of Carroll's blog, he can't imagine that there is a preferred measure (on the landscape of string theory, for example) that is "more correct" than others. Well, I can imagine that there is a measure generated by something like the Hartle-Hawking state (which we still don't understand too well) which makes universes with some shapes of the extra dimensions evolve out of the "initial nothingness" with a higher probability than others. But what seems obviously wrong to me is the idea that the "right measure" could be obtained by "counting all observers in the Universe or the multiverse or its whole history".

Such an assumption – believed by the anthropic people – is completely fallacious. It disagrees with many principles such as the objectivity of science. Science, if it works, must work for everyone. So an argument that allows you to "derive" that you can't be a citizen of a small nation is obviously not scientific because the Pope couldn't use it (the Vatican has 1,000 citizens or so which would be excluded at 5 sigma). And when something is science, it should be verifiable even by the Pope.

The people who think in this flawed way – and despite all the denial and bogus agreement with Moshe, this set obviously includes Sean Carroll – think that all such things are just small technical problems they call "the problem of the measure". But these are no small technical problems. One may see lots of other fundamental and unavoidable reasons why every potential "clarification" of the way to count "generic observers" in order to get the "right measure" contradicts basic logical principles. The problem is not just that it is hard to decide whether cells, people, or nations should be counted as observers and which hierarchy is exactly the most relevant one and which properties should be expected to be generic (one may always pick a subset of properties in which we will be non-generic and there can't be any "uniform measure" in the space of properties).

For example, when someone thinks that predictions may be based on properties of "generic observers" in the whole spacetime of the multiverse, he counts the future as well. But such an approach allows him to "determine" that there won't ever be quadrillions of intelligent beings in the Milky Way – because it would then be contrived that we live in a relatively small minority of 7 billion people. So all these "spacetime versions of the anthropic principle" predict the doomsday scenario: mankind can't have a glorious future.
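The arithmetic behind this doomsday-style argument is trivial (the quadrillion figure is the same hypothetical future population used in the text):

```python
# Toy "spacetime typicality" arithmetic that the text criticizes:
# if 10**15 observers will ever exist, the chance of a randomly chosen
# observer sitting among the first 7e9 of them is tiny.
total_ever = 1e15        # hypothetical: quadrillion observers in all of history
born_so_far = 7e9        # observers alive / born up to now
p_typical = born_so_far / total_ever
print(p_typical)         # 7e-06 -- "typicality" would call our position contrived
```

So the genericity assumption converts any glorious future into a multi-sigma "exclusion" of our own existence, which is precisely the inconsistency with the dynamical laws discussed next.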

However, this conclusion can't be compatible with the rest of science. While I can't prove that there will be trillions or quadrillions of people in the future, I nevertheless do know that such questions will be decided by the known dynamical laws of physics – by the evolution of the state we have today, according to laws that are effectively known – and these dynamical laws therefore imply everything about the future population (in a probabilistic way because we live in a quantum mechanical world). This obviously can't have anything to do with some extra rules about how to count objects in the whole spacetime.

If you allow both the dynamical laws we know (such as General Relativity and the Standard Model) as well as the "genericity assumption", you clearly get an internally inconsistent system of axioms because the Standard Model predicts that the probability that mankind will expand by another factor of one thousand (in the distant future) is significant while the probability according to the spacetime anthropic reasoning would be smaller than 0.001. In fact, you could move this whole discussion into the past. If ancient Greek physicists had used this spacetime anthropic principle, they would have derived wrong apocalyptic predictions about the future of mankind (from their viewpoint).

Pythagoras could have written down such an experiment – a prediction that the world population would never exceed 1 billion people. I have performed Pythagoras' experiment today and falsified the anthropic reasoning. This just closes the question; it kills all spacetime-based "typicality" hypotheses. For other reasons, the "spatial slice" typicality hypotheses are excluded as well. They really contradict relativity because there can't be any preferred slices. And so on. It's just completely wrong to assume that we may ever derive new and valid insights out of the assumption that "we are generic". No clarification of this paradigm that works can ever be found. This is not just a "pessimism about the anthropic principle"; it's an outcome that may be logically proved.

As I said, this doesn't mean that there can't be a vacuum-selection rule that tells us which region of the landscape is more likely to be occupied or evolve from some truly initial starting point. But this probability measure, if it exists, has nothing to do with a counting of observers; the latter is ill-defined, acausal, subjective, and generally unscientific.

Some other nonsensical criticisms against inflation

Of course, generic people raise lots of other (but related) nonsensical complaints against inflation. On the Cosmic Variance page,
AI said: Inflation is a cure much worse then the disease.

Invoking inflation to explain away initial state of the Universe works just as well as invoking god to explain it’s existence, in both cases you are just hiding the real well-defined problem behind an elaborate abstract and ill-defined concept invented for exactly this purpose.

This seems to be an example of human bias – to prefer some explanation no matter how incomprehensible, arbitrary or superficial to no explanation at all. I guess questions to which we know there are no answers are more unsettling to us then questions to which we think there are satisfactory answers but we are just too dumb/uneducated/lazy to properly understand them.
Holy cow. What a breathtaking idiocy. Inflation isn't a tool to emotionally please mentally challenged people – to replace God in the emotional sense and produce "any" initial state. Inflation is a scientific theory; its purpose and achievement lie in its ability to explain particular and "technical" observed properties of the Cosmos around us. Whether or not inflation manages to do so can't be discussed if you completely avoid these "technical" points.

If you analyze what really drives the moron to write the nonsense he wrote, you will easily find the answer. Inflation, even though it's really about some simple dynamics of a scalar field coupled to the metric tensor in general relativity, is just too complicated for him. He says that "it is a human bias" to prefer any explanation even if it is abstract. However, "a human bias" is exactly the opposite of what he says. Nature doesn't give a damn whether things look abstract to us or not, whether they're comprehensible or not. "A human bias" is the idea – clearly believed by AI – that the correct theories of Nature are especially those that are comprehensible to generic people. They're not. After all, why should they be? And why shouldn't it be the puppies or the Jewish physicists who determine what the right degree of comprehensibility is? Most theories of modern physics (and arguably of the rest of modern science as well) are incomprehensible to half-men, half-pigs, half-bears such as AI.

To be frank, I am kind of happy that I haven't made any discovery that is really in the same league as cosmic inflation. The excitement during the discovery had to be great – proportionally greater than the excitement accompanying the things that I did discover (so far). However, watching the dumb mankind pour lots of vitriol and pseudoscientific opposition over your discovery – realizing that you are really throwing pearls before swine – must be pretty frustrating. When I find a theory of everything, I will probably keep it for myself.

And that's the memo.
Reviewed by MCH on October 22, 2011