
RealClimate vs Roy Spencer: non-feedback changes in clouds

Roy Spencer is a rising star and public face of climatology - not only because of his bestseller, Climate Confusion, and the occasionally inconvenient UAH MSU satellite data, but also because of his meticulous recent theoretical work (including his work on cloud oscillations and several new papers that will be published soon) - and RealClimate.ORG has now provided him with a positive feedback. ;-)
Ray Pierrehumbert: How to cook a graph in three easy lessons
When RealClimate.ORG starts to dedicate special articles to you (and plans to publish new ones), you know that you have made it. ;-)



Spencer's talk in New York overlapping with this text

When you do a Google search for Roy Spencer's name, the third hit leads to RealClimate.ORG's friends at ExxonSecrets.ORG and the fourth hit goes to DeSmogBlog.COM, but Prof Spencer has surely learned how to live with such things and he's doing fine.
Commercial break: Roy Spencer's answer to RealClimate.ORG is here
Pierrehumbert begins with the expected rhetorical procedure to mildly defame Spencer. Because Spencer's publications are rather impressive and, despite the huge recent alarmist bias, comparable to Pierrehumbert's record (which Gentleman is better depends on how you do the search), Pierrehumbert chooses a combination of the arguments that "Spencer is not so special" and that "Spencer's papers don't contain any evidence for the skeptical viewpoints anyway".




Oh, really?

The first point by Pierrehumbert is that he - Pierrehumbert - completely misunderstands what the "internal radiative forcing" is and why it is something different from the "feedbacks". While I am no universal specialist in these matters, Pierrehumbert's comments (and graphs) make it very clear that he really has no clue, so let me give him a crash course.

Radiative forcing is the average imbalance between incoming and outgoing energy (in watts per square meter) that is expected to lead to warming (an increase of the equilibrium temperature) or cooling (a decrease), depending on the sign. Various effects contribute their own radiative forcings. When these effects change, the equilibrium temperature changes as well.

For example, the greenhouse effect caused by carbon dioxide contributes some bare radiative forcing that can be amplified or reduced by positive and negative feedbacks, respectively. Water vapor is a positive feedback (warmer air carries more of the greenhouse gas H2O), low-lying clouds are a negative feedback (the extra water vapor also condenses into droplets, and when the resulting clouds are low-lying, they cool the surface), and high-lying clouds are likely to be a positive feedback.
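To see quantitatively how a feedback modifies the bare forcing, here is a minimal sketch using the standard gain formula Delta T = lambda0 * F / (1 - f); the no-feedback sensitivity lambda0 and the feedback factors below are illustrative placeholders, not numbers taken from Spencer or Pierrehumbert.

```python
# A minimal sketch of how a feedback factor modifies the no-feedback warming.
# All numbers below are illustrative placeholders, not values claimed by either side.

def equilibrium_warming(forcing_wm2, lambda0=0.3, feedback_factor=0.0):
    """Equilibrium warming (K) for a given radiative forcing (W/m^2).

    lambda0         -- no-feedback sensitivity, K per (W/m^2)
    feedback_factor -- f in the gain formula dT = lambda0 * F / (1 - f);
                       f > 0 amplifies, f < 0 damps, f -> 1 means a runaway
    """
    return lambda0 * forcing_wm2 / (1.0 - feedback_factor)

co2_doubling_forcing = 3.7  # W/m^2, the commonly quoted bare forcing from doubled CO2

print(equilibrium_warming(co2_doubling_forcing))                        # ~1.1 K, no feedback
print(equilibrium_warming(co2_doubling_forcing, feedback_factor=0.5))   # ~2.2 K, net positive feedback
print(equilibrium_warming(co2_doubling_forcing, feedback_factor=-0.5))  # ~0.7 K, net negative feedback
```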

The IPCC, whose task is to "prove" a dangerous man-made climate change, assumes that a lot of effort should be devoted to studying the positive, catastrophic feedbacks - especially to methods that would suggest that these feedbacks are really strong - and that the negative feedbacks either don't exist or should be neglected.

What do we mean by a feedback?

A feedback is a process that transforms the incoming signal (a variation of some quantity) into other forms and eventually back into the same kind of signal, which is subsequently "fed back" (hence the name) as the input, with some delay (usually known only as a fuzzy number).

Imagine that the temperature anomaly, Delta T, is a "chaotic" function of time, noise(t), that oscillates around zero. It has some color etc. but let us ignore these subtleties. But there is also another driver of the temperature that reflects what the temperature was doing a moment (epsilon) earlier. We have something like
Delta T(t) = noise(t) + C . Delta T(t-epsilon)
What is the effect of the last, feedback term? Well, it primarily depends on the sign of the coefficient "C". Imagine it is positive: we are dealing with a positive feedback. If so, the following has to happen sooner or later: the value of the signal, Delta T(t), eventually exceeds the typical (time-averaged) magnitude of noise(t), or even the typical magnitude of noise(t) divided by C.

Once that happens, the noise(t) term can be essentially neglected and the feedback term dominates. The solution is an exponential of time. In the equation above, it is increasing only if C is greater than one. In reality, the correct equation has a time derivative on the left-hand side, guaranteeing continuity of Delta T(t). Consequently, the exponential is increasing for any positive feedback, as soon as the other terms become negligible.
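You can check this behavior with a few lines of code. The sketch below just iterates the toy recursion above with arbitrary coefficients of my own choosing (it illustrates the stability argument, it is not anyone's climate model): the anomaly stays bounded as long as |C| is below one and runs away once C exceeds one.

```python
import random

def simulate(C, n_steps=2000, noise_amplitude=0.1, seed=0):
    """Iterate the toy recursion Delta T(t) = noise(t) + C * Delta T(t - epsilon)."""
    random.seed(seed)
    dT = 0.0
    for _ in range(n_steps):
        dT = random.gauss(0.0, noise_amplitude) + C * dT
    return dT

for C in (-0.5, 0.5, 0.9, 1.05):
    print(f"C = {C:+.2f}  ->  anomaly after 2000 steps: {simulate(C):.3g}")
```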

Because the Earth hasn't seen any exponential growth of the temperature anomaly for nearly 5 billion years (it would be incompatible with the continuous existence of life for billions of years), it is very likely that such a runaway behavior cannot occur and the feedbacks that control the behavior at large variations (anomalies) are negative feedbacks.

How are feedbacks created?

The example of clouds is instructive. The temperature helps to create some clouds, after some delay, and these clouds influence the temperature, after another delay. The total delay is essentially the sum of the two delays while the total feedback coefficient "C" is essentially the product of the two individual coefficients. You need causal relationships in both directions to get a feedback. Such a feedback may explain why the temperature and the cloud cover are correlated. You would expect a very high correlation coefficient: it should be close to one.

However, you may obtain correlation even if the relationship goes in one direction only. Clouds may influence the temperature even if the influence of temperature on the total cloud cover happens to be negligible. You still obtain a correlation but it is not as strong. Also, in this case, you cannot meaningfully eliminate one of the quantities (e.g. clouds) because it is a true system with (at least) two degrees of freedom, even if you're interested in the effective theory at long time scales only.

When the temperature doesn't affect the clouds too much, you can still ask what drives them. There are a lot of other things that can drive them, including ENSO patterns, PDO oscillations, and un-averaged artifacts of daily stochastic changes of the weather. These might be the dominant terms on the right-hand side of the equation for the cloud cover.
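A toy calculation makes the one-way case concrete. In the sketch below (my own illustration with made-up coefficients, not a model from either article), an ENSO-like index plus weather noise drives the cloud cover, the cloud cover drives the temperature, and the temperature never feeds back on the clouds - yet the two series still end up strongly (negatively) correlated.

```python
import math
import random

random.seed(1)

n = 5000
enso = [math.sin(2 * math.pi * t / 400.0) for t in range(n)]  # slow ENSO-like oscillation

clouds = [0.0] * n
temp = [0.0] * n
for t in range(1, n):
    # clouds driven by the ENSO-like index plus weather noise; no temperature term at all
    clouds[t] = 0.6 * enso[t] + random.gauss(0.0, 0.3)
    # temperature driven (with damping) by the clouds; clouds do NOT respond to temperature
    temp[t] = 0.9 * temp[t - 1] - 0.2 * clouds[t - 1] + random.gauss(0.0, 0.05)

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

# More clouds cool the surface, so the correlation comes out substantially negative
print("cloud-temperature correlation:", round(corr(clouds, temp), 2))
```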

Now, it is damn important whether the temperature drives the clouds or the clouds drive the temperature or both.

While it is damn important, the observed correlation itself is usually insufficient to determine the coefficients separately. Certain observations can only determine a combination of these two - let me call it the product. But even if the product is zero - if there is no feedback - there can still be a substantial correlation. When you look at high-frequency data, you can perhaps see a lag that will reveal what is the cause and what is the consequence or, at least, which of the two causal relationships is stronger.

But because there are other terms in the "cloud equation" as well as the "temperature equation", even the measurement based on lags can be subtle.
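Here is a sketch of the lag idea, again with a made-up toy series in which the clouds lead the temperature by construction: the cross-correlation is essentially zero when the temperature is taken to lead and substantial when the clouds are taken to lead, which is the kind of asymmetry one would look for in high-frequency data.

```python
import math
import random

random.seed(2)

# A toy pair of series in which the clouds lead the temperature by construction.
n = 4000
clouds = [random.gauss(0.0, 1.0) for _ in range(n)]
temp = [0.0] * n
for t in range(1, n):
    temp[t] = 0.8 * temp[t - 1] - 0.3 * clouds[t - 1] + random.gauss(0.0, 0.1)

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t + lag); a positive lag means x leads y."""
    if lag >= 0:
        xs, ys = x[: n - lag], y[lag:]
    else:
        xs, ys = x[-lag:], y[: n + lag]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

for lag in (-3, -2, -1, 0, 1, 2, 3):
    print(f"corr(clouds(t), temp(t{lag:+d})) = {lagged_corr(clouds, temp, lag):+.2f}")
```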

Now, Spencer's argument - as I understand it - is that various climatologists are making an unjustified assumption that there exists a strong influence in both directions - one that essentially allows you to eliminate the cloud cover and study the temperature equation separately. Consequently, one can talk about the behavior of clouds as a feedback and only the coefficients from the feedback equation are important. Instead, Spencer says that it is important to know both coefficients separately - i.e. to know which of these two, clouds or temperature, is the cause or the driver and which of them is the effect or the consequence.

More concretely, he says that ENSO, PDO, unaveraged daily variations etc. can be the drivers of the cloud cover, and the cloud cover then helps to change the temperature. This extreme description clearly refers to a non-feedback mechanism because when things only go in one direction, there is no feedback (this very sentence already seems to be too high-tech for Pierrehumbert).

When someone tries to measure the feedback coefficient alone - from one measured quantity in reality (or one function of them) - it is very clear that he must make some assumptions about the relative magnitudes of the coefficients of the "clouds drive temperature" and "temperature drives clouds" effects in the equations. Spencer simply says that people have been making an incorrect assumption about this point.
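To see why that assumption matters, here is a hedged sketch in the spirit of Spencer's argument (my own toy energy-balance recursion with arbitrary coefficients, not code from either article). The "satellite" flux is regressed against the temperature to diagnose the feedback parameter, once when all of the flux variability is a feedback response to non-radiative (e.g. oceanic) noise, and once when internal, non-feedback cloud noise is added. The very same true feedback parameter is diagnosed quite differently in the two cases.

```python
import random

def diagnose_feedback(sigma_radiative_noise, sigma_non_radiative_noise,
                      lam_true=3.0, heat_capacity=30.0, n_days=60000, block=30, seed=4):
    """Simulate a toy energy balance and diagnose the feedback parameter by regression.

    Radiative noise (e.g. internal cloud variations) appears in the measured flux;
    non-radiative noise (e.g. ocean heat exchanges) does not.  All numbers are arbitrary.
    """
    random.seed(seed)
    T = 0.0
    monthly_T, monthly_R = [], []
    sum_T = sum_R = 0.0
    for day in range(1, n_days + 1):
        N = random.gauss(0.0, sigma_radiative_noise)      # internal (non-feedback) cloud forcing
        S = random.gauss(0.0, sigma_non_radiative_noise)  # non-radiative forcing
        R = N - lam_true * T                              # flux a satellite would measure
        T += (R + S) / heat_capacity                      # daily temperature update
        sum_T += T
        sum_R += R
        if day % block == 0:                              # store "monthly" means
            monthly_T.append(sum_T / block)
            monthly_R.append(sum_R / block)
            sum_T = sum_R = 0.0
    # Ordinary least-squares slope of the monthly flux against the monthly temperature
    m = len(monthly_T)
    mT, mR = sum(monthly_T) / m, sum(monthly_R) / m
    slope = sum((t - mT) * (r - mR) for t, r in zip(monthly_T, monthly_R)) / \
            sum((t - mT) ** 2 for t in monthly_T)
    return -slope

print("true feedback parameter (W/m^2 per K):", 3.0)
print("diagnosed, only non-radiative noise  :", round(diagnose_feedback(0.0, 5.0), 2))  # recovers ~3.0
print("diagnosed, internal cloud noise added:", round(diagnose_feedback(5.0, 5.0), 2))  # biased well below 3.0
```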

Fine-tuning and discrete choices

Finally, Pierrehumbert spends hours showing that he is unable to reproduce Spencer's graphs. I find it pretty obvious that with suitably chosen coefficients that couple clouds, temperature, ENSO, PDO, and some weather effects with each other in all directions, one can qualitatively reproduce the graphs of both Gentlemen. Whether the corresponding models are consistent with everything else is a different question.

While Pierrehumbert suggests that Spencer has "cooked the data", he is more specific about the accusations in one paragraph that I choose to reproduce in its entirety:
My graph is not absolutely identical to Roy's, because there are minor differences in the initialization, the temperature offset used to define anomalies, and the temperature data set I'm using as a basis for comparision. My point though, is that this is not an exacting recipe: it's hash — or Hamburger Helper — not soufflé. Following Roy's recipe, you can get a reasonable-looking fit to data with very little fine-tuning because Roy has given himself a lot of elbow room to play around in: you have the choice of any two variability indices among dozens available, you make an arbitrary linear combination of them to suit your purposes, you choose whatever mixed layer depth you want, and you finish it all off by allowing yourself the luxury of diddling the initial condition. With all those degrees of freedom, I daresay you could fit the temperature record using hog-belly futures and New Zealand sheep population. Anybody want to try?
Well, yes, exactly. Many skeptics have been saying similar things for quite some time! ;-) If you want to obtain a graph of some shape, to support a theory, choose your two favorite variability indices, make an arbitrary linear combination, etc. etc. Then you can reproduce the temperature record using the New Zealand sheep population or the number of SUVs in the U.S., if you prefer the latter. And many high-school dropouts who own more private jets than SUVs surely prefer the latter. ;-)

For example, choose the surface temperature instead of the tropospheric temperatures to determine the sensitivity (even though the greenhouse effect is clearly more linked to the tropospheric temperature), choose the principal component analysis that mines for hockey sticks to get the hockey stick graph of the reconstructed temperatures since 1000 AD, and set dozens of coefficients in your equations equal to zero (all of them except the CO2 greenhouse effect - especially those that could generate natural variations whose ultimate driver is not CO2). Then you may easily conclude that the temperature is driven by the SUVs in the U.S. And dozens of people have actually done so.
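To illustrate how much "elbow room" a free choice of indices provides, here is an entirely synthetic toy demonstration of my own: random walks stand in for the hog-belly futures and the New Zealand sheep, and a made-up trend-plus-noise series stands in for the temperature record. Picking the best-fitting pair out of a few dozen meaningless indices typically yields a deceptively respectable fit.

```python
import random
from itertools import combinations

random.seed(5)

n = 120  # say, ten years of monthly anomalies

# A synthetic "temperature record": a smooth trend plus noise (purely illustrative).
target = [0.01 * t + random.gauss(0.0, 0.1) for t in range(n)]

# "Dozens of variability indices": random walks with no physical link to the target.
def random_walk(length):
    x, walk = 0.0, []
    for _ in range(length):
        x += random.gauss(0.0, 1.0)
        walk.append(x)
    return walk

indices = [random_walk(n) for _ in range(30)]

def ols_r2(y, X):
    """R^2 of an ordinary least-squares fit of y on the columns of X (plus an intercept)."""
    cols = [[1.0] * len(y)] + X
    k = len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(len(y))) for j in range(k)] for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(len(y))) for i in range(k)]
    for i in range(k):                    # Gaussian elimination on the normal equations
        for j in range(i + 1, k):
            factor = A[j][i] / A[i][i]
            for c in range(k):
                A[j][c] -= factor * A[i][c]
            b[j] -= factor * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][c] * beta[c] for c in range(i + 1, k))) / A[i][i]
    fitted = [sum(beta[i] * cols[i][t] for i in range(k)) for t in range(len(y))]
    mean_y = sum(y) / len(y)
    ss_res = sum((a - f) ** 2 for a, f in zip(y, fitted))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Pierrehumbert's "elbow room": pick whichever pair of indices happens to fit best.
best = max(ols_r2(target, [indices[i], indices[j]]) for i, j in combinations(range(30), 2))
print("best R^2 using two meaningless indices:", round(best, 2))
```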

It is not a new insight that one can obtain many qualitatively different kinds of predictions not only by "fine-tuning" legitimate physical continuous parameters but also by "fine-tuning" of parameters that superficially seem to be pure conventions or by making various "discrete choices". This observation has been repeated in the string-theory debates many times, too.

Nevertheless, one collection of ideas about which of these hundreds of relationships are really important, which of the coefficients are large enough to decide the qualitative behavior, and which of the indices for the various quantities describe the phenomena most accurately at the fundamental level has to be better than the others. And it is far from obvious that Pierrehumbert's choices are better than Spencer's choices and that the SUVs or the sheep in New Zealand are better than the galactic cosmic rays or the ocean turbulence.

Some detailed analyses in Spencer's papers indicate that just the opposite is true. Moreover, the alarmist region of the parameter space is really of measure (essentially) zero, so to speak, because you must assume that all the effects that don't boil down to CO2 in the end are (essentially) absent and that all the data are processed in such a way that you avoid the otherwise inevitable contradictions.

Climatologists should do their best to rationally determine - while avoiding preconceptions about the "right predictions for the future" - what these individual coefficients actually are, which of the effects are actually important, which altitude is most natural for defining the "climate sensitivity", and which of the indices or their linear combinations are most accurate as players in the equations, among many other open questions.

And that's the memo.