Stephen Schwartz vs scientific consensus

Michael Hansen has pointed out that the "scientific consensus" people around RealClimate.ORG have expanded their criticism of the paper we have discussed previously:
Stephen Schwartz and low climate sensitivity
For your convenience, here is the PDF file with Schwartz's paper.
See also a comment by UC at ClimateAudit
Recall that the main quantitative question about man-made climate change is usually phrased in terms of climate sensitivity: if you increase the CO2 concentration from the pre-industrial value of 280 ppm to 560 ppm expected before 2100, how much does the temperature increase because of the CO2-related changes?
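
For orientation, recall the standard simplified formula for the extra radiative forcing caused by a CO2 increase, which is logarithmic in the concentration - this is why each additional CO2 molecule matters less than the previous one:

$$ \Delta F = 5.35 \,\ln\frac{C}{C_0}\ {\rm W/m^2}, \qquad \Delta F_{2\times} = 5.35\,\ln 2 \approx 3.7\ {\rm W/m^2} $$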

The correct answer is probably around 1 Celsius degree - which means 0.4 additional degrees expected during the 21st century - but you need much more for a catastrophe, so the IPCC magicians try to get 3 or 5 or even more degrees by some black magic. These results are almost certainly nonsensical. At the same time, 3-5 degrees would still not be enough for a catastrophe, but that's not what we want to discuss here.

Why is the sensitivity around 1 Celsius degree?

Well, there are two simple and relatively reliable ways to see that the climate sensitivity should be around 1 Celsius degree and can't be much greater:
  1. the calculation of the absorption by CO2 molecules, ignoring all feedbacks, leads to a sensitivity around 1 Celsius degree. The feedbacks are likely to be small or negative because a climate with strong positive feedbacks would be an unstable system, spoiled by exponentially growing perturbations
  2. according to the greenhouse calculus, the last 107 years have already made up about 70% of the effect of the doubling (because the effect of each additional CO2 molecule is diminishing), and we have seen that this has led to 0.7 Celsius degrees of warming. That means that the full doubling is expected to generate about 1 Celsius degree of warming - see the arithmetic sketched below.
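
For completeness, here is the trivial arithmetic of point 2 as a minimal Python sketch. The two inputs are simply the figures quoted above, taken as given, not computed here:

    # Back-of-envelope arithmetic for point 2; both inputs are the
    # figures quoted in the text above, taken as given
    observed_warming = 0.7       # Celsius degrees over the last ~107 years
    realized_fraction = 0.7      # share of the doubling effect already realized

    sensitivity = observed_warming / realized_fraction
    print(f"Implied warming per CO2 doubling: {sensitivity:.1f} C")   # ~1.0 C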

Everything else is unnecessary and shaky maths and bias. Climate sensitivity can't really be much greater than 1 Celsius degree. In the article mentioned above, Stephen Schwartz of Brookhaven obtained 1.1 Celsius degrees, with a standard deviation of about 0.5 Celsius degrees. That's fully compatible with the solid results above, but his method was different. He represented the climate as a machine with some noise that tries to reach equilibrium, much like a bowl of soup cooling down in the kitchen.

In his setup, the sensitivity is obtained as the ratio of two quantities: the time constant - the time after which a deviation of the temperature from the idealized equilibrium decreases by a factor of e ≈ 2.718 - and the effective heat capacity. The time constant was obtained from high-frequency statistics of the measured temperature autocorrelation. According to Schwartz, it is between 4 and 6 years. This quantity is the main source of the controversy.
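
In symbols - this is the standard single-box energy balance that such a setup amounts to, written in my own notation:

$$ C\,\frac{dT}{dt} = F - \lambda T, \qquad \tau = \frac{C}{\lambda}, \qquad \Delta T_{2\times} = \frac{\Delta F_{2\times}}{\lambda} = \frac{\Delta F_{2\times}\,\tau}{C} $$

so the sensitivity is indeed proportional to the ratio of the time constant and the effective heat capacity.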

Annan et al.

James Annan wrote a vacuous rant against Schwartz's paper on his - Annan's - blog, claiming things that one could say without knowing anything whatsoever about the particular paper (which is probably the case for Annan anyway): everything that Schwartz writes is unreliable, all the numbers are ambiguous, and his method is surely biased in the "right" direction, so the truth must certainly be much more catastrophic than he can even imagine.

Annan picked three more hardcore alarmists and together they converted Annan's blog rant into another format that looks like a scientific article. At the same time, Michael Hansen has informed us about a related attack on Schwartz's work by a blogger called Tamino. It is kind of fascinating how often the "professionals" at RealClimate.ORG rely on random bloggers or astrophysicists even when the basic quantity of global warming - the climate sensitivity - becomes the question of the day. Here are the two attacks:

Tamino's attack
Foster, Annan, Schmidt, Mann's attack

I will refer to them as Tamino and Foster et al., respectively. They share the same scientific meme, copied from Annan's blog, but there are some specific differences. Let's look at the two texts.

Tamino

Tamino is a textbook example of an ill-informed environmentalist activist. In his text, he uses the word "denialist" five times. More entertainingly, he or she argues that Sen. James Inhofe shouldn't celebrate Schwartz's paper because Schwartz is not a denialist. This fact apparently means, according to Tamino, that Schwartz himself doesn't take his own paper seriously and instead thinks that only calculations that end up in consensus with Al Gore can be trusted. Some modest and careful quotes from Schwartz's paper are used by Tamino to argue that Schwartz surely thinks that the right result must be 3 times higher than what Schwartz himself has obtained. ;-)

Some of the scientific-sounding statements by Tamino are identical to those in Foster et al. - all of them were copied from Annan's blog anyway - so, without loss of generality, let us look at the attack by Foster.

Foster et al.

First, a minor detail. You can see that they're probably not the most experienced statisticians in the field from their typos. At the bottom of page 4, they use the term "autogressive". That's a typical typo of people who call themselves "progressive" because they tend to think that the root is just "gressive". I wonder why they think that the acronym is AR1 if it doesn't stand for "autoregressive". ;-)

Back to their scientific claims. They argue that

  1. Schwartz needs to assume that the climate is a combination of a linear trend plus a first-order Markov (AR(1)) process, which is unrealistic, especially because of the presence of many time scales - the model is sketched below
  2. Schwartz's method of extracting the time constant yields time constants that are too low (and thus a low sensitivity) because of a "bias" that would exist even if the assumption in (1) were right.
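
To make point (1) concrete, here is a minimal sketch - my own illustration, not code from either paper - of the statistical model under dispute: a linear trend plus a first-order Markov (AR(1)) process. All the parameters are invented for the illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    n_years = 125     # length of the annual series
    trend = 0.005     # linear trend, Celsius degrees per year (illustrative)
    tau = 5.0         # relaxation time constant in years (illustrative)
    sigma = 0.1       # innovation standard deviation (illustrative)

    # The AR(1) coefficient implied by the time constant: r(1) = exp(-1/tau)
    phi = np.exp(-1.0 / tau)

    # First-order Markov noise: each year remembers a fraction phi of the
    # previous year's anomaly and adds a fresh random kick
    noise = np.zeros(n_years)
    for t in range(1, n_years):
        noise[t] = phi * noise[t - 1] + sigma * rng.standard_normal()

    temperature = trend * np.arange(n_years) + noise   # trend + AR(1)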

It is not hard to see that they first decided what the conclusions of their "paper" should be and only then added the fog in the bulk of the text. Schwartz's result is inconvenient, so all of his methods must be attacked. And Schwartz's value is too low, so one must selectively invent arguments that could raise the value. The bias of Foster et al. is so obvious that I think a scientifically inclined person would have to be completely blind not to see it.

Why is Schwartz's approach sensible

The difference between Schwartz on one side and Foster et al. on the other side is analogous to the difference between science and a new kind of science represented by Stephen Wolfram. What is this difference all about?

In science, we are always trying to understand a system by decomposing it into components. For each component, we try to find the best possible observation that allows us to study the component separately from others. Once we know how the components work, we study their interactions and complex systems containing these components. Particle physics is extreme in this reductionist approach because elementary particles are the components we are keen on. Anyway, reductionism is the right strategy not to confuse different possible causes of observed phenomena and to get as detailed information about the system as possible.

In the new kind of science, the approach is the opposite one: much like Gaia, it is holistic. For example, Stephen Wolfram writes a program for a cellular automaton. If you look at the result of this program, it visually looks like the skin of a tiger. That implies that your computer model has to describe genetics and biology, Wolfram argues. In the very same way, the proponents of the existing climate models claim that their naive oversimplified computer games describe the climate of the Earth because one resulting graph for the global mean temperature looks roughly like the observed one: an increasing line with a wiggle. ;-) Whether or not any of the details of their computer program describes reality becomes secondary: these questions are never tested in isolation.

Back to science. Schwartz's model, of course, can't be used to describe everything about the climate. Indeed, there are many diverse time scales that are relevant for the climate. This fact is actually one of my favorite observations, one that is usually neglected by the consensus advocates. The climate naturally changes at the decadal scale, the centennial scale, and the sub-millennial scale; ice ages take tens of thousands of years; and there are additional phenomena at much longer time scales, too. Proponents of the man-made climate change theory tend to assume that only interannual variations exist and that the climate is otherwise stationary, so any temporary trend is bound to be created by SUVs.

Fine. Now they agree that the climate is pretty complex - because they need to attack an inconvenient paper. Their admission of the complexity is progress anyway.

But if a system is complex, it doesn't mean that it is impossible to say anything about any of its components separately. If we design the right measurement, we can do so. For example, a physician can find the "bare" weight of a fat woman in a heavy fur coat in winter. It is brutal but he can do it: he asks her to get naked.

Analogously, Schwartz looks at the character of the autocorrelations extracted at very short time scales - much shorter than 5 years or so. This allows him to figure out that there is a restoring force in the climate that pulls the temperature back toward the equilibrium, with a time constant of approximately 5 years.
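
A minimal sketch of this kind of extraction - my own illustration of the idea, not Schwartz's actual code, whose details differ - is to detrend the series, compute the autocorrelation r(k) at short lags k, and invert the relation r(k) = exp(-k/tau):

    import numpy as np

    def estimate_time_constant(series, max_lag=3):
        """Estimate tau (in time steps) from short-lag autocorrelations,
        assuming r(k) = exp(-k/tau) at small lags k."""
        t = np.arange(len(series))
        # At short times, the slow components look like a linear trend,
        # so subtracting a fitted line is enough before computing r(k)
        detrended = series - np.polyval(np.polyfit(t, series, 1), t)
        estimates = []
        for k in range(1, max_lag + 1):
            r = np.corrcoef(detrended[:-k], detrended[k:])[0, 1]
            if 0 < r < 1:
                estimates.append(-k / np.log(r))   # invert r(k) = exp(-k/tau)
        return np.mean(estimates)

    # Applied to the synthetic 'temperature' series generated above, this
    # recovers roughly the 5-year time constant that was put in:
    # print(estimate_time_constant(temperature))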

This result, 5 years, is extracted from the data in several ways. The method is controllable enough that Schwartz can also determine the error, about 1 year. The statement by Foster et al. that the time constant could also be 30 years instead of 5 years is technically correct: it's possible that the result is 30 years, it is just very unlikely. The distribution is not Gaussian, but the values far away from the central value are still very unlikely. These high values of the time constant are less likely, but they are more interesting for the IPCC. Guess which adjective is more important for the IPCC: do they prefer the more accurate & likely & central values, or the more interesting ones at the tail?

Removing the slow signals

Are there other time scales that drive the climate to the equilibrium at a different speed? Yes, but only slower ones. There are surely parts of the Earth whose inertia is much larger and that are able to store heat for 20 years or 1,000 years or longer. But these slower effects won't influence the very-short-time-scale behavior entering Schwartz's analysis. If you study the chaotic motion of a bee, it won't be influenced too much by the waves under your ferry, because those waves are just too slow.

At timescales much shorter than 20 years or 1,000 years, the waves (or exponential damping) whose timescale is 20 years or 1,000 years look like a linear trend. That's exactly what Schwartz needs for his high-frequency calculations to be legitimate and that's why the criticism of Foster et al. that the non-noise component is assumed to be "linear" is irrelevant.
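
In formulas: a slow mode with time constant tau_slow, watched over times t much shorter than tau_slow, is indistinguishable from a linear trend because

$$ A\,e^{-t/\tau_{\rm slow}} = A\left(1 - \frac{t}{\tau_{\rm slow}} + O\!\left(\frac{t^2}{\tau_{\rm slow}^2}\right)\right) $$

i.e. a constant plus a linear term, up to corrections that are negligible at the short time scales that Schwartz actually uses.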

Now, the slower components of the climate will surely make a model that only contains the fastest component incorrect for all low-frequency questions. But that's not a problem for Schwartz's method, because Schwartz wisely extracts the high-frequency information and all of his calculations are based on it.

Foster et al. either don't understand this approach based on the decoupling of scales or they pretend that they don't understand it. Either way, the main conclusion is the same: their paper is a lame attack on an approach to climate sensitivity that is more scientific and much more controllable than the approach favored by most people in the IPCC - random games with overly complicated models in which the different effects are never separately understood. And it also leads to a much more realistic value of the climate sensitivity - a feature that really drives Foster et al. up the wall.

And that's the memo.
