
Why the feedback amplification can't be both positive and high

When no feedbacks are included, the greenhouse effect caused by CO2 adds about 1.2 °C per doubling of the CO2 concentration. This is a result of a rather clean physics problem. There's no real "complexity" in this problem: we reduce the Earth to a pretty manageable differential equation.

The doubling from the pre-industrial concentration of 280 ppm to 560 ppm of CO2 in the atmosphere will occur slightly before 2100, assuming business as usual. If the figure 1.2 °C were the total answer, and assuming that mankind has caused the whole 0.6-0.8 °C of warming we may have seen in the last century or so, it would mean that only 0.4-0.6 °C of man-made warming would be left by 2100 - less than the innocent 20th century change.

That's a completely unspectacular change. So this elementary greenhouse effect is not enough for the "applications" of the physical effect in policymaking. The advocates of carbon regulation and threats depend on some amplification of the man-made greenhouse effect, i.e. on positive feedbacks. The IPCC would like the warming per CO2 doubling to go as high as 5 °C and some people would be thrilled to see even higher figures - figures that seem to completely disagree with the small rate of the recent warming.

Feedbacks: geometric series

Imagine that you add some CO2. That changes the temperature by the "bare mechanism" of the greenhouse effect. But the modified temperature also changes some other things in the climate that may change the temperature again. These "second round" effects are called feedbacks and they may change the temperature in both directions.

If the "simply calculated" bare temperature change was "ΔT" and if the new increment was "f.ΔT" where "f" is a dimensionless coefficient, this "f.ΔT" of extra warming must actually be inserted to the feedback as an input once again. That adds additional "f^2.ΔT" of warming. And so on. The total warming is
ΔT(total) = ΔT (1 + f + f^2 + f^3 + ...) = ΔT / (1-f).
Yes, it's called the geometric series. While the total warming depends on "f" nonlinearly, it is the coefficient "f" itself whose statistical distribution should be simple and well-behaved. After all, the feedback "f" is a sum of many diverse effects. It's "f" that behaves as an additive quantity, not "1/(1-f)".
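As a quick numerical sanity check, here is a minimal Python sketch (the values "f = 0.5" and "ΔT = 1.2 °C" are just illustrative choices) comparing the truncated series with the closed form "ΔT/(1-f)":

```python
# Partial sum of the feedback series 1 + f + f^2 + ... for a sub-critical f,
# compared with the closed form 1/(1-f). Illustrative values only.
f = 0.5          # dimensionless feedback coefficient (assumed, f < 1)
dT_bare = 1.2    # bare no-feedback warming per CO2 doubling, in °C

partial = sum(f ** n for n in range(50))   # 50 "rounds" of feedback
print(dT_bare * partial)                   # ≈ 2.4 °C, approaching the limit
print(dT_bare / (1.0 - f))                 # closed form: 2.4 °C
```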

The alarming scenarios depend on the assumption that "f" is really close to one, something like "f=0.8" if not "f=0.9", and the corresponding total warming is then very high. For example, for "f=0.8", we obtain "ΔT(total) = 1.2 °C/0.2 = 6 °C". This is the type of result that people like James Hansen would love to be true (or at least believed to be true).
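To see how sensitive the total becomes once "f" approaches one, here is the same closed form evaluated for a few assumed values of "f" (a sketch, not a prediction):

```python
# Total warming per CO2 doubling, dT_bare / (1 - f), for several assumed
# feedback coefficients. dT_bare = 1.2 °C is the bare greenhouse value.
dT_bare = 1.2
for f in (0.0, 0.5, 0.8, 0.9, 0.95):
    print(f"f = {f:4}:  total warming = {dT_bare / (1.0 - f):5.1f} °C")
# f = 0.8 gives 6.0 °C, f = 0.9 gives 12.0 °C: the result blows up as f -> 1.
```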

However, the values of "f" above one are almost strictly ruled out because the geometric series above is actually divergent. That would physically mean that any initial perturbation would be amplified exponentially: the deviation from the would-be equilibrium would be exponentially increasing with time. (The normal behavior is that you approach an equilibrium in the future, and your distance from the equilibrium is exponentially shrinking.) The Earth's temperature would soon (in a logarithmic time) escape from a hospitable interval. Everything would freeze over or evaporate.

This arguably hasn't happened for billions of years.

It follows that "f" can't exceed one, at least not too often. It can be positive - feedbacks can be positive - but they can't be too positive. However, we may make a much stronger statement than this one. Why?

Because physical mechanisms make it pretty inevitable that "f" is not a universal dimensionless constant. For different "quasi-equilibriums", different chemical compositions of the atmosphere and the biosphere, the amount of ice in the Arctic, positions of the continents, and so on, i.e. for various changes that the Earth has seen during its history, the values of the total feedback coefficient "f" must have been different. The coefficient "f" is inevitably variable. (The "f" is also dependent on the location, but let's look at the global mean temperature only.)

By the central limit theorem, we may assume that at a random moment of the Earth's history, "f" took values from a normal distribution with the mean "f_0" and the standard deviation "SD". Because "f" is approximately the sum of contributions from many effects, there's no way "f" could be "automatically" prevented from exceeding one.

So by looking at the statistical distribution, we may determine the percentage of the Earth's history when "f" actually exceeded "1". Whenever this occurred, if it ever occurred, the temperature was exponentially running away from the equilibrium value. So within a few decades, it would reach the boiling point or drop well below the freezing point. Life would die out. The geology would be very different.

Let's assume that such an uncontrollable exponential runaway, triggered by "f" exceeding one, would destroy life on Earth within 47 years, to make the numbers simpler. (I was approximately inspired by a stupid movie, Age of Stupid, when I chose this figure.) The Earth is 4.7 billion years old, so its history contains 100 million periods of 47 years each.

Because none of those 100 million periods has contained the deadly exponential runaway behavior we are just discussing, it follows that the probability that "f" exceeds one should be lower than "one in 100 million".
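The arithmetic behind this bound, as a trivial sketch (4.7 billion years and the 47-year window are the figures chosen above):

```python
age_years = 4.7e9       # age of the Earth used in the text, in years
window_years = 47.0     # assumed time for a runaway to destroy life, in years
n_windows = age_years / window_years
print(n_windows)        # 1e8: one hundred million 47-year periods
print(1.0 / n_windows)  # 1e-8: the implied upper bound on P(f > 1)
```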

Inserting the numbers

But we had an explicit formula for the probability that "f" exceeded one. We said that "f" was distributed according to the normal distribution around "f_0" as the mean value, with the standard deviation of "SD".

The result is not obvious in advance, so let's be surprised by the power, or lack of power, in this argument. (I hadn't made any calculation before I wrote this text: it is being written from scratch.)

My estimate for the fluctuations of "f" depending on the "regime" of the Earth is "SD=0.1" (for feedbacks "f" comparable to one, this is something like a 10% error). I think it's unlikely that "f" is determined much more accurately than that: it's much more likely that the uncertainty of "f" is higher than that. Now, what is the mean value "f_0" such that the probability that "f" exceeds one, given the standard deviation "SD=0.1", is lower than "1 in 100 million"?

Well, it's simple. If you look at the numbers describing confidence intervals, you will see that "1 in 100 million" is approximately equivalent to a "6 sigma" deviation from the mean. So the mean value must be at least 6 standard deviations below 1. But because I decided that "SD=0.1", it follows that "f_0" must be at most "1 - 6*0.1 = 0.4", which leads to the total warming of 1.2 °C / 0.6 = 2 °C per CO2 doubling. About 1 °C would be left for the 21st century.

If you managed to show that the standard deviation for "f" is "SD=0.2", the maximum allowed mean value of "f" would be "f_0 = 1 - 6*0.2 = -0.2". If you demonstrated that the deviations are as big as "SD=0.2", that would prove that the (average over time and space) feedback coefficient "f" actually has to be negative!
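Here is a sketch of the two calculations above, assuming the normal distribution for "f". The exact one-sided distance for a 1e-8 tail is about 5.6 sigma, which is rounded to 6 above; both versions are shown:

```python
from scipy.stats import norm

p_runaway = 1e-8          # allowed probability that f exceeds one
dT_bare = 1.2             # bare warming per CO2 doubling, in °C
z = norm.isf(p_runaway)   # ≈ 5.61 sigma; rounded up to 6 in the text

for sd in (0.1, 0.2):
    f0_max = 1.0 - 6 * sd         # the rounded bound on the mean f_0
    f0_max_exact = 1.0 - z * sd   # the same bound with the exact z
    print(sd, round(f0_max, 2), round(f0_max_exact, 2),
          round(dT_bare / (1.0 - f0_max), 1))   # implied max warming, in °C
# SD = 0.1: f_0 <= 0.4 (0.44 exactly), i.e. at most ~2 °C per CO2 doubling
# SD = 0.2: f_0 <= -0.2 (-0.12 exactly), i.e. the mean feedback must be negative
```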

Now, I don't know how much "SD" actually is. One would have to look at the typical variability of the water vapor and cloudiness feedbacks in different epochs of the paleoclimatological and geological history. But whatever the exact numbers are, I think that this argument is very powerful and largely excludes the values of "f" - and distributions for "f" - that are too close to "f=1". Also, note that the normal distribution decreases very quickly: if I used a distribution with fatter tails, I would get even stricter conditions for "f_0"!
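As a rough illustration of that last point, one can repeat the exercise with a heavier-tailed distribution - say, the logistic distribution matched to the same standard deviation (an arbitrary choice made here just for the comparison). The allowed mean "f_0" then drops even further:

```python
import math
from scipy.stats import norm, logistic

p, sd = 1e-8, 0.1   # tail probability and assumed standard deviation of f

# Normal distribution: required gap between the mean f_0 and f = 1
gap_norm = norm.isf(p) * sd                                    # ≈ 0.56
# Logistic distribution with the same SD (scale = sd*sqrt(3)/pi), heavier tails
gap_logistic = logistic.isf(p) * sd * math.sqrt(3) / math.pi   # ≈ 1.02
print(1 - gap_norm)       # max f_0 ≈ 0.44 for the normal distribution
print(1 - gap_logistic)   # max f_0 ≈ -0.02: even stricter with fatter tails
```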

I feel that the argument above is a quantitative explanation of the intuition that feedbacks in systems without runaway behavior are much more likely to be negative than positive: they must be "repelled" from the unphysical runaway region of the parameter space. The argument above is no "rigorous proof" that the feedbacks can't be high, but I think it is a sensible starting point for choosing the "priors" for the different values of "f" that are a priori conceivable. The priors should follow a natural distribution that should be pretty much negligible at "f=1". That mostly excludes any significant amplification of the bare greenhouse effect.

Of course, I have no doubts that the alarmists will deny the existence of general theoretical arguments that make similar "catastrophes" very unlikely. But others may want to look at arguments in both directions.

And that's the memo.