
Climate: are models better than theory?

Judith Curry wrote an essay about climate models:
Climate model verification and validation
While it's nice reading and I don't disagree too resolutely with too many of her comments, I am not in harmony with the overall tone. Let us summarize the crucial issues into several points:
  1. Does "sanctioning" of models by institutions matter?
  2. How many types of experimentally known facts should be verified?
  3. Is a model universally superior over a theory or an explanation or an Ansatz that doesn't require powerful computers?
  4. Do the existing climate models actually teach us something that is both new and true?

To be sure, my answers are "No; as many as possible; no; probably no."




Verification and validation

Curry talks about verification and validation. Why do we use these two similar words and how do they differ? Well, verification is supposed to be an internal process: you should catch the errors during the "production" of your models.

On the other hand, validation is the quality control procedure that comes after the model is released: one measures how good the product actually is. Clearly, you wouldn't lose much science if you didn't distinguish these two V-words because the moment when a model is released is not a "breakthrough" from a scientific viewpoint. It's just an event that is "politically" interesting.

This is a good context to discuss the first question:

Does "sanctioning" matter?

Curry isn't terribly explicit about it but in between her lines, I see something that I totally disagree with:
Another factor is the “sanctioning” of these models by the IPCC (with its statements about model results having a high confidence level).

Knutti states:  “So the best we can hope for is to demonstrate that the model does not violate our theoretical understanding of the system and that it is consistent with the available data within the observational uncertainty.”  Expert judgment plays a large role in the assessment of confidence in climate models.
Knutti just says that the models shouldn't contradict anything we actually know; there's no problem with this statement. However, the verb "sanction" used by Curry sounds extremely authoritative. Well, if a random group of people such as the IPCC gives its "blessing" to a model, does the model become more reliable?

In general, the answer is "No."

A more accurate answer depends on what the group is and what methods it is using to reach the conclusion. Indeed, if we talk about the IPCC, it's just the other way around. If a notoriously dishonest, biased, pre-decided, politically, ideologically, and financially controlled group led by sub-par scientists and bureaucrats gives its "blessing" to a particular model, it is a reason to be deeply concerned that the model is almost certainly invalid.

If you managed to change the IPCC so that it wouldn't be controlled by atrocious crooks of the Pachauri type, you could improve the "brand" of such a blessing. But it could never become perfect. Models and theories simply can't become correct by being "sanctioned" by a group of people, not even a respectable one. Models and theories are identified as correct after they manage to explain many past observations and/or predict many future observations that would remain unexplained by other models or theories, while they simultaneously avoid any clear contradictions with the empirical data.

To find some new important science, one has to be honest. But honesty is not a sufficient condition. One has to be smart, hard-working, and a bit lucky, too.

The truth in science has simply nothing to do with politics, popularity contests, or unspecifiable "expert judgment". An expert can make an "expert judgment" but if there's not even a glimpse of an argument why he or she reached one conclusion and not another, there's no reason to believe this judgment. Science certainly differs from magic in this sense. There are no scientific arguments that are valid but that cannot be "presented". Of course, there may exist a reason why an expert believes "X" or "Y" that is hard to explain to laymen. However, it must be possible to explain it to the scientist's peers - otherwise the evidence doesn't exist.

In my opinion, by her comments about the importance of the IPCC processes, Judith Curry has revealed her other face - a person who has spent more time in recent years on the politics of climate change than on the actual science. I am sure she's still much better in this respect than most other people in the IPCC but it's still bad enough. The IPCC is bad and of course, if you average the dozens of Pachauri-like opinions that matter in the IPCC, you will get a result that is much further from the truth than if you took the opinion of one good atmospheric physicist such as Richard Lindzen or Roy Spencer. I hope that you don't need to hear Feynman's story about the Emperor's nose again.

How many types of empirical data should be checked?

In most cases, climate models are not being verified or validated at all. People just don't want to throw them away. They're not doing science. We see proof of this statement every day. Yesterday, when I discussed James Hansen's IAS lecture, a commenter from Prague - probably a climatologist - defended a model by pointing to its prediction of increasing droughts in the Amazon basin.

The only problem is that in the real world, there is no increasing drought in the Amazon basin. Just look at this graph obtained from the GHCN NCDC datasets at NOAA.



The annual rainfall in Amazonia may have increased a bit since 1920 but there is no significant change

So any model that predicts a significant decrease in the rainfall in the Amazon basin during the 20th century - if the purpose of the model is to tell us something about the rainfall - has to be abandoned. It's been falsified, OK? It's as simple as that. If it gave a spectacularly wrong prediction about this question, chances are that it will produce a wrong answer to (many or most) other questions, too.
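For concreteness, here is a minimal sketch - in Python, on a synthetic stand-in series rather than the actual GHCN/NCDC station data - of the kind of trend-plus-uncertainty test that such a falsification rests on. The array `rainfall` and its numbers are purely illustrative assumptions, not real observations.

```python
import numpy as np

# Hypothetical annual rainfall totals (mm/yr) for the Amazon basin, 1920-2009.
# A real test would aggregate the GHCN/NCDC station records instead.
rng = np.random.default_rng(0)
years = np.arange(1920, 2010)
rainfall = 2200.0 + 150.0 * rng.standard_normal(years.size)  # stand-in data

# Ordinary least-squares trend and its standard error.
x = years - years.mean()
slope = (x * (rainfall - rainfall.mean())).sum() / (x ** 2).sum()
resid = rainfall - rainfall.mean() - slope * x
se = np.sqrt((resid ** 2).sum() / (x.size - 2) / (x ** 2).sum())

print(f"trend: {slope:+.2f} +- {2 * se:.2f} (mm/yr)/yr, approx. 95% interval")
# If the interval comfortably contains zero, the observations show no
# significant drying, and a model that predicts strong drying is falsified.
```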

The standards in the climate modeling community have dropped below the level of the stinkiest excrements, however. People don't mind being totally sloppy and dishonest because virtually all of their colleagues are sloppy and dishonest, too. The interdisciplinary comparisons have been banned because all competent scientists from other disciplines are called "deniers". This whole discipline has been filled with garbage people; it has to be fully abolished, all of its people have to be fired, and, if needed, we have to start from scratch.

The climate modeling community as a whole is bringing zero to society and to science.

But even if you look at some climate modelers who actually care about the verification and validation - and there are not too many of them - they typically verify only a tiny portion of the models' predictions. In reality, if a model of precipitation in the Amazon basin is correct, it should probably predict the right trends (zero), variability, seasonal dependence of the rainfall, typical length scale and duration of the clouds, color of the noise in various autocorrelation functions, correlations between temperature, wind speed, pressure, and/or moisture, lags in various correlations, and lots of other things.

If some of these predictions seriously fail, it's likely to be due to a serious problem with the model, and this problem is enough to invalidate many other predictions. To summarize, even the modelers who are sometimes trying to verify and (in)validate the models - and care about the results - don't do it right and deny most of the empirical evidence we actually possess.
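To illustrate what even a minimal validation battery could look like, here is a rough Python sketch. The arrays `obs` and `model` are assumed to be monthly rainfall anomalies of equal length from observations and from a model; nothing here comes from any specific model or dataset.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation - a crude proxy for the 'color' of the noise."""
    x = x - x.mean()
    return (x[1:] * x[:-1]).sum() / (x * x).sum()

def seasonal_cycle(x, period=12):
    """Mean annual cycle of a monthly series."""
    return x[: x.size // period * period].reshape(-1, period).mean(axis=0)

def compare(obs, model):
    """Print a few of the diagnostics a precipitation model should reproduce."""
    t_obs, t_mod = (np.polyfit(np.arange(s.size), s, 1)[0] for s in (obs, model))
    print("linear trend (obs, model):      ", t_obs, t_mod)
    print("standard deviation (obs, model):", obs.std(), model.std())
    print("lag-1 autocorr. (obs, model):   ", lag1_autocorr(obs), lag1_autocorr(model))
    print("RMS error of the seasonal cycle:",
          np.sqrt(((seasonal_cycle(obs) - seasonal_cycle(model)) ** 2).mean()))
```

Cross-correlations with temperature or pressure, their lags, and spatial statistics would extend the list; the point is only that each such check is cheap compared to the model run it scrutinizes.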

Is a model automatically better than a theory?

This is an important point and I think that Judith Curry herself would answer "Yes". The "Yes" answer is a manifestation of the "computer model addiction" which is totally irrational and unscientific. Some people think that if they use a computer, especially an expensive one, their scientific work is immediately more scientific or more valid. They think that their brain has doubled in size and that a new aura of intellectual authority has started to orbit their skulls.

What a pile of crap.

A good theorist usually doesn't need any computer to produce the right answers to many questions, to make many predictions. He can sort things out in his head. He can do the calculations manually. It's because he actually understands the relevant physical phenomena - and their manifestations in many situations. He knows many actual phenomena and many angles from which he can look at them.

The "fundamentalist climate modelers" are very different. They're sub-par scientists who actually don't know the physical phenomena and can't calculate the predictions with their own brains. They use a computer program but they don't really know why the computer produces one answer or another: the computer is a "black box" that they decided to mindless trust. They're not bothered by their ignorance. In fact, most of them think that it is an "advantage" not to understand what's happening inside the computer program.

But this kind of "science" should be classified as occultism. It's always a disadvantage to be unable to see, without the help of a machine, why an algorithm produces a particular result. And it's not just an aesthetic imperfection. You could say that many ignorant sub-par scientists together with a computer can replace a good scientist.

But they cannot. Because the sub-par scientists don't understand what's going on, they also don't understand what kinds of predictions are "easy", what kinds of predictions are "hard", what kinds of predictions are "similar" or "equivalent" to those that were checked in the past, and what kinds of predictions are "new" and at risk of running into a disagreement.

The sub-par scientists simply don't understand the science so they can't even do a good job in verification and validation of models even if they wanted to do these things. A typical climate modeler doesn't even know the relevant maths about correlation coefficients, Fourier expansions etc., so he couldn't do the verification even if he tried. He can't even understand what a typical test should be calculating and why it is relevant for validation.

To summarize, there's way too much "modeling" in contemporary climate science and way too little "theory". That's unfortunate because theory - and even many well-known phenomenological descriptions of the climatic phenomena - actually gives us predictions that are much better than (or at least as good as) those of the contrived climate models.



For example, even if you consider the overhyped notion of the global mean temperature, the climate models may do lots of complicated stuff. But it's still true that the resulting curve looks like a kind of pink noise, or an AR(1) process, with suitably chosen parameters. And this simple description in terms of noise with the right parameters fits the data in a more satisfactory way than the contrived models do.
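As a toy illustration of what such a two-parameter "noise model" amounts to, here is a Python sketch that fits an AR(1) process to a temperature-anomaly series. The series below is synthetic and purely an assumption; a real comparison would load HadCRUT or GISS anomalies instead.

```python
import numpy as np

# Hypothetical monthly global-mean temperature anomalies (synthetic stand-in).
rng = np.random.default_rng(1)
T = np.zeros(1500)
for t in range(1, T.size):
    T[t] = 0.9 * T[t - 1] + 0.1 * rng.standard_normal()

# Fit the AR(1) model T[t] = rho * T[t-1] + eps by least squares.
rho = (T[1:] * T[:-1]).sum() / (T[:-1] ** 2).sum()
sigma = (T[1:] - rho * T[:-1]).std(ddof=1)
print(f"estimated rho = {rho:.3f}, innovation std = {sigma:.3f}")

# Two numbers, rho and sigma, define the null model that a climate model with
# hundreds of tuned parameters has to beat before it can claim added value.
```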

Even if it were just "equally" good, the simpler "noise model" should be preferred because of Occam's razor. A contrived climate model has too many arbitrary and separately unverified components and they should simply not be added unless it is necessary. Clearly, if we really want to validate a model that depends on 1,000 formulae that are "not obviously correct" (and that decided to neglect 1,000 other terms), we must successfully test at least 1,000 types of quantities predicted by the model to see that the pieces are separately right.

You should simply not use the model if it is not teaching you anything new and true. Whether there are correct "pieces of code" in the model is secondary. You are using the model as a whole, so the whole has to be useful if a rational person uses it.

No comprehensive tests of this kind have been done, of course. So while it's likely that the climate models contain very many pieces and lines of code that are extremely close to the truth, the probability that there are at least some important errors that seriously and qualitatively destroy the models' predictive power is almost 100%. In fact, we know it is 100% because there are no models that agree with all the available data. You know, to get the right quantitative predictions, the model really has to be 100% right. One qualitative error is enough to neutralize the hard work done by all the correct lines of the code.

I want to compare this situation with string theory that also has many "models" - or "vacua". However, the difference is that all string theorists understand the obvious point that the detailed low-energy quantitative predictions (of particle masses etc.) do depend on the model and because we don't know which of them is correct, we simply can't make low-energy quantitative predictions for particle physics. We can still do many other important things - especially conceptual clarification of phenomena in the Planck regime, e.g. the behavior of black hole microstates - but clearly, they will not be useful to those who are only interested in some measurable numbers. No string theorist is pretending that we can give a prediction - or consensus prediction - of the electron mass.

There are no useful climate models yet

The meteorological models have clearly become useful. The predictions of weather for several days - and sometimes weeks - have clearly become reliable enough so that they tell you much more than guesswork, to say the least.

However, the same is not true about the climate models yet.

Predicting the climate superficially looks like predicting weather in an extremely distant future. This doesn't work. Of course, if one only predicts the "climate", i.e. "averaged weather", he may hope that many high-frequency phenomena will get averaged out and the predictive power may return.

This hypothesis is plausible. But there's actually no positive evidence at all that it is true. The atmosphere is controlled by lots of chaotic processes. Some of them may get averaged out if you compute long-timescale averages. However, it's enough if some of the important ones don't get averaged out - and long-term climate predictions become impossible.
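A toy Python sketch of how one could probe this question: generate a synthetic red-noise series standing in for an internal climate mode, average it into decadal blocks, and ask how much one decade tells you about the next. Everything here (the AR(1) form, the value of rho, the block length) is an illustrative assumption, not a claim about the real atmosphere.

```python
import numpy as np

rng = np.random.default_rng(2)

def red_noise(n, rho):
    """Synthetic AR(1) ('red') series standing in for an internal climate mode."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

# A long monthly series, averaged into consecutive 10-year (120-month) blocks.
series = red_noise(120 * 200, rho=0.95)
decadal = series.reshape(-1, 120).mean(axis=1)

# Correlation between one decadal mean and the next: a crude measure of how
# much the averaging has tamed the internal variability.
r = np.corrcoef(decadal[:-1], decadal[1:])[0, 1]
print(f"decade-to-decade correlation: {r:.2f}")
# With rho = 0.95 the monthly memory largely averages out; a much slower mode
# (rho closer to 1) leaves large unforced decade-to-decade swings that no
# boundary-condition model can be expected to predict.
```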

Such a problem is much more serious than a particular problem with predictions by a particular model. It is a potential problem of a whole class of models: they can't really be made predictive. Some "fundamentalist climate modelers" don't want to hear about this likely verdict. They feel as irritated as a Muslim who sees a picture of Mohammed. They respond like wild cattle.

But the answer is likely to be true because there doesn't seem to be any "magic timescale" similar to 30 or 50 years above which the averages suddenly become "predictable" again. We can only become reasonably certain that the long-term climate is predictable once we actually gather enough data - a few centuries of detailed data - that can be usefully described by a model that was fed by much more limited "input" than the demonstrably correct "output" we learned from it.

However, the existing "climate models" only claim to be able to predict 50-year (or longer) averages of the weather - well, it's because they have repeatedly failed in their predictions of the climate at any time scale shorter than 50 years. Yet we only have 50+ years of reliable meteorological data. So there can clearly be no nontrivial prediction that has worked.

It's obvious that the only way this problem could be resolved - in favor of climate models - is to make sure that the climate models may actually predict the averages over much shorter periods than those for which we have acquired the weather data. For example, if the climate models become able to predict the regional weather for 2020-2029 (the average), we may learn that they work pretty well before 2030.

I think it's unlikely. To be sure, so far, the attempts to predict the climate for 10 years in advance have always failed.

It has become standard policy to only make predictions for a distant enough future so that the authors of the flawed predictions won't be held responsible for their failures. What is even worse is that there are many people in the AGW business who have made lots of insane predictions that should have already come true. They haven't, but the fearmongers are not held responsible for their wrong predictions and fearmongering. Society has largely lost its immunity - its self-protection against similar diseases and collective deception. The bulk of society no longer does any verification or validation.

Types of theories about complex phenomena in physics

There are many analogies in particle physics that one could find relevant. I want to mention the description of nuclear physics in the late 1960s.

You know, by the late 1960s, people understood classical general relativity (including black holes and the big bang) as well as quantum electrodynamics - electrons, photons, and the electromagnetic force they mediate - including the loops and many detailed corrections. They had also understood the weak nuclear interaction responsible for the beta decay.

However, the composition of particles in nuclear physics, controlled by the strong nuclear force, looked like an uncontrollable zoological garden: protons, neutrons, Lambda hyperons, pions, kaons, and particles denoted by any letter of the Greek or Latin alphabet you may think of. It was a complete mess.

Now, people could also have "fundamentalistically" tried to construct a theory of many elementary particles: all those particles looked pretty elementary at that time. They could have tried to adjust various interaction terms between the elementary eta-mesons and elementary Xi-baryons and to claim that they had a model.

This is what the "climate model fundamentalists" would do. They would insist on having a theory of the old kind regardless of the complexity of the data. Happily enough, particle physicists in the late 1960s were not such big idiots as the current climate scientists. They realized that this approach wouldn't lead anywhere. The strongly coupled phenomena looked complex and mysterious, and writing a convoluted Lagrangian or a computer model couldn't actually reduce the complexity and the mystery.

Instead, they looked at various patterns that they could actually understand - with their brains, not with uncontrollable computer programs. The patterns included the sum rules, scaling laws for scattering, Murray Gell-Mann's eightfold way - the SU(3) flavor symmetry of the spectrum of the hadrons - world sheet duality, and so on.

Careful thinking about these theoretical issues - not dependent on any computers - ultimately led physicists to several concepts: strings inside the strongly interacting particles; partons; and quarks. They couldn't have arrived at these concepts by the "straightforward" approach of throwing everything into a "computer model" at all.

In the early 1970s, it was understood that the string model was giving "too soft and too nice" predictions for nuclear processes and string theory was "re-employed" as a theory of quantum gravity a few years later. Meanwhile, the quarks and partons worked and they were identified as two different aspects of the same particles in Quantum Chromodynamics which correctly explained all of nuclear physics by the mid 1970s.

Two decades later, in 1997, it was realized that the string model of the nuclear physics was actually also right but the strings had to propagate in a higher-dimensional, anti de Sitter space. String theory - including excited strings, branes, and especially black holes and other gravitational objects and effects - was found to be exactly equivalent to the gauge theory (at least in the N=4 case).

For more than a decade, we have actually known that all the new major conceptual approaches to the strong force that were identified in the late 1960s were on the right track. They just needed to be refined. However, none of these correct approaches was found - and none of them could have been found - by the "modeling approach", by throwing the data into an old-style computer model that is simply obliged to describe the reality even if we don't understand it.

One actually needed a lot of deep thinkers and serious, hard theoretical work.

A computer model cannot describe the reality if no person was able to do so before the model was constructed. The very idea that climate models inevitably increase our understanding of the climate system is deeply flawed. Computer models have the potential to correctly describe many phenomena and to help us, but whether or not they succeed depends on lots of conceptual theoretical work, verification, and validation. And if you don't do these things correctly, your computer model is guaranteed to remain useless regardless of the billions of dollars that the taxpayers may pay for the required computers and for the sub-par scientists' work.

And that's the memo.



Cancún - news

Some people have already announced that the Cancún talks have failed and the remaining 10 days will be dedicated to recreational activities of the participants - starting with this fiesta. To compensate for the failure, the eco-Nazi inkspillers have doubled the estimates of the temperature growth rate to 8 °C per century. That's a lot given the fact that the measured warming rate is between 0.6 and 1.4 °C per century, depending on the dataset and the reference period, while theory predicts a value close to the lower bound of this interval. It's not hard to see that during the failing 2011 talks, the estimates will be 16 °C of warming per century. They will keep doubling their claims until someone finally commits them to a psychiatric asylum.

Others have claimed an unspecified "convergence" of the Chinese and American opinions about the quantification of the fight against climate change.



Most importantly, Japan has nuked a city called Kyoto - a funny Japanese name, isn't it? :-) Japan has refused to extend the Kyoto protocol because it doesn't include all the countries of the world.