Climate History: Cato Boffins Discovered “Anti-information”
By Patrick J. Michaels and Paul C. “Chip” Knappenberger
While doing some historical studies in preparation for an article in Cato’s Regulation magazine, we found that we had once discovered the information equivalent of antimatter, namely, “anti-information”.
This breakthrough came when we were reviewing the first “National Assessment” of climate change impacts in the United States in the 21st century, published by the U.S. Global Change Research Program (USGCRP) in 2000. The Assessments are mandated by the Global Change Research Act of 1990. According to that law, they are, among other things, for “the Environmental Protection Agency for use in the formulation of a coordinated national policy on global climate change…”
One cannot project future climate without some type of model for what it will be. In this case, the USGCRP examined a suite of nine climate models and selected two for the Assessment. One was the Canadian Climate Model, which forecast the most extreme 21st-century warming of all the models, and the other was from the Hadley Center at the U.K. Met Office, which predicted the greatest changes in precipitation.
We thought this odd and were told by the USGCRP that they wanted to examine the plausible limits of climate change. Fair enough, we said, but we also noted that there was no test of whether the models could simulate even the most rudimentary climate behavior of the past (20th) century.
So, we tested them on ten-year running means of annual temperature over the lower 48 states.
One standard method used to determine the utility of a model is to compare the “residuals”, or the differences between what is predicted and what is observed, to the original data. Specifically, if the variability of the residuals is less than that of the raw data, then the model has explained a portion of the behavior of the raw data and the model can continue to be tested and entertained.
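For readers who want to see what that test looks like in practice, here is a minimal sketch in Python. This is not the code we used and the two series are made-up placeholders, not the lower-48 record or any model output; it only illustrates the residual-variance comparison described above.

```python
# A minimal sketch of the residual-variance test: smooth both series with a
# ten-year running mean, take the residuals, and ask whether the residuals
# vary less than the (smoothed) data themselves. Placeholder data only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
observed = 11.5 + 0.005 * (years - 1900) + rng.normal(0, 0.5, years.size)  # placeholder observations (deg C)
modeled  = 11.5 + 0.020 * (years - 1900) + rng.normal(0, 0.5, years.size)  # placeholder model output (deg C)

def running_mean(x, window=10):
    """Ten-year running mean, as applied to the annual temperatures."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

obs_smooth = running_mean(observed)
mod_smooth = running_mean(modeled)
residuals = obs_smooth - mod_smooth

# If the model explains any of the behavior of the data, the residuals
# should have a smaller variance than the smoothed observations.
print("variance of smoothed observations:", np.var(obs_smooth))
print("variance of residuals:            ", np.var(residuals))
print("model adds information?           ", np.var(residuals) < np.var(obs_smooth))
```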
A model can’t do worse than explaining nothing, right?
Not these models! The differences between their predictions and the observed temperatures were significantly greater (by a factor of two) than what one would get just applying random numbers.
Ponder this: Suppose there is a multiple-choice test asking for the correct temperature forecast for 100 temperature observations, with four choices for each. Using random numbers, you would average one-in-four correct, or 25%. But the models in the National Assessment somehow managed only 12.5%!
“No information”—a random number simulation—yields 25% correct in this example, which means that anything less is anti-information. It seems impossible, but it happened.
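The 25% baseline is easy to check for yourself. Here is a quick sanity check in Python; the test itself and its answers are invented for illustration only.

```python
# Random guessing on a four-choice test converges on 25% correct, so any
# forecaster scoring 12.5% is doing worse than knowing nothing at all.
import numpy as np

rng = np.random.default_rng(1)
n_questions, n_choices, n_trials = 100, 4, 10_000

answers = rng.integers(0, n_choices, size=(n_trials, n_questions))  # "correct" answers
guesses = rng.integers(0, n_choices, size=(n_trials, n_questions))  # pure guessing
print("random guessing, mean score:", (answers == guesses).mean())  # about 0.25
```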
We informed the USGCRP of this problem when we discovered it, and they wrote back that we were right, and then they went on to publish their Assessment, undisturbed that they were basing it on models that had just done the impossible.
starzmom says:
May 23, 2013 at 5:35 am
Even though these models apparently are pretty good at predicting what will not happen, the EPA is going to set policy based on these models as if they predict what will happen. Is this Alice in Wonderland now?
———————————-
No. Try 1984.
cn
Indeed a breakthrough. Now, we are breathlessly awaiting the discovery of “dark information.”
Since USGCRP has done it, it is prima facie not impossible, by whatever means. However, it is certainly implausible. Ain’t political science wonderful!
hmmm. Does this mean we need a new definition of what GIGO is? FIFO? Fraud In, Fraud Out?
I think we have to be careful before we condemn short term weather forecasts. Every day there are some 300,000 scheduled commercial aircraft flights. Each flight must file a Flight Plan, including a weather forecast. There is absolutely no evidence whatsoever that the weather forecasts used by commercial airlines lead to any problems whatsoever.
thingodonta, you write “Yeah, its just like climate sensitivity less than 1.5degrees C being ‘ruled out’ in IPCC AR4. Yet that is what the temperatures are doing.”
I may have difficulty trying to explain my thoughts on this issue. I have been having an ongoing discussion with Steven Mosher over at Climate Etc on this sort of issue. My point is that, since climate sensitivity cannot be measured, using my definition of what measured means, we simply don’t know what the value of climate sensitivity is. Steven seems to argue that getting numeric values from climate models gives us more information. It seems to me that these guesses that “scientists” get for the value of climate sensitivity by using models are worse than useless. If models are behaving worse than using random numbers, then values of climate sensitivity derived from models are useless, and it is better to recognise that they are useless rather than pretending that they are giving us useful information.
The meteorologist Phil Preflester says “Panic is not advised although it is recommended.”
http://spongebob.wikia.com/wiki/Pineapple_Fever_(transcript)
This is another example of why Patrick Michaels is one of the most important of the analysts and thinkers among sceptics.
bobl says:
May 23, 2013 at 4:56 am
“It strikes me that if a model consistently does worse than randomness, then it is predicting against the hypothesis. For example it is showing CO2 warming when CO2 actually causes cooling. This means these models show what will more likely not happen than happen.”
Here is how I think it works. CO2 that is emitted by tropical oceans is rapidly transported by thunderclouds to the upper atmosphere. These CO2 molecules collide with air and water molecules in those clouds. It follows that the CO2 molecules will radiate at the temperatures at the TOA or the tops of clouds. Clouds should capture the IR toward the surface while there is an open window to space. This would mean that the CO2 “sensitivity” used in models should be negative.
In essence, it is not a “bunch of monkeys” developing these models. It is a bunch of developmentally challenged monkeys running them!
How about setting an upper bound on climate sensitivity by comparing the best possible reconstruction of “adjusted” average global temperature (assuming that’s possible) for the period 1850-80 with the period 1980-2010 and attributing all the “observed” warming to CO2 increase from 285 to 395 ppm (or whatever), then extrapolating at an appropriate curve to a doubling to 570 ppm later in this century or early in the next?
I haven’t done this, but IMO conducting such an operation would yield equilibrium CS under two degrees C for the first doubling. Adjusting for natural variability, I’d guess around one degree, i.e. about equal to the direct solar radiation effect, with positive & negative feedbacks cancelling each other out.
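For what it’s worth, here is a rough sketch in Python of the arithmetic I have in mind. The 0.8 deg C figure is my own assumption for the 1850-80 to 1980-2010 warming, not a number from the post, and the whole exercise assumes all of that warming came from CO2 and scales with the logarithm of concentration.

```python
# Back-of-envelope upper bound on equilibrium climate sensitivity: attribute
# all observed warming to CO2 rising from 285 to 395 ppm, then scale the
# logarithmic response up to a full doubling (285 -> 570 ppm).
import math

delta_t_observed = 0.8            # deg C, assumed 1850-80 to 1980-2010 warming
c0, c1, c_doubled = 285.0, 395.0, 570.0

ecs = delta_t_observed * math.log(c_doubled / c0) / math.log(c1 / c0)
print(f"implied equilibrium sensitivity: {ecs:.1f} deg C per doubling")  # roughly 1.7
```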
Actual data (even if adjusted) trump models in real science.
It’s tempting just to treat this as a punchline, but maybe there’s something really here.
Seriously. If it did worse than random, the model really does have utility. It implies that it is dealing with some of the correct variables and relationships and that some essential assumption in the theory is consistently wrong.
The “surprise” in this is that they even wrote back to tell you that you were right.
Mark Bofill says:
May 23, 2013 at 7:33 am
It’s tempting just to treat this as a punchline, but maybe there’s something really here.
Seriously. If it did worse than random, the model really does have utility. It implies that it is dealing with some of the correct variables and relationships and that some essential assumption in the theory is consistently wrong.
————-
Nah, I didn’t think that through. A model saying ‘temp = -170C’ would be consistently wrong, no utility there.
Oh well, just a punch line then.
Re-write: “We thought this odd and were told by the USGCRP that they wanted to ex[ploit] the [most im]plausible limits of climate change.” That becomes an assessment on the basis of the most extreme projections (are models anything but?) of temperature and precipitation. Fair enough? Not even close.
Need any more proof that this is not about science but generating fear so that people accept policies designed to destroy capitalism and freedom?
If I recall correctly, this was the assessment report which embarrassingly selected two models which were often directly opposite when it came to the change in soil moisture content we’d see from changes in temperature and precipitation.
I think it has long been established that people leaning to (what is conventionally called) the left of the political spectrum seem to think of themselves as intellectuals, and disparage the more practically minded as intrinsically stupid. Of course, real evidence and the grand experiment of the 20th century show that socialist/communist economic systems inevitably lead to decline (and often war), and yet latter-day socialists still think that theory trumps evidence and that all that is needed is the correct application of the theory. It takes a special kind of stupid to fly in the face of overwhelming evidence and still think of oneself as an intellectual, and yet here they are, making policy based on predictions that are always more wrong than right. The observation has been attributed to Einstein that the height of stupidity is doing the same thing over and over and expecting a different result.
I remember being told an anecdote (maybe an urban myth) about a wager between some engineering students and some meteorology students at MIT (or a similar prestige institution). Since I am an engineer, I declare my own bias upfront. The wager was who could more consistently predict tomorrow’s weather (i.e. a one-day forecast) over a several-month period. The engineers simply predicted that today’s weather would continue much the same tomorrow, and so on each day, whereas the meteorology students tried to predict with all the computational tools at their disposal, and no one here will have trouble believing that the meteorology students lost. Maybe the lesson there (if true) is that basic regression (on good data) is better than theoretical predictions of phenomena that are not fully understood.
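Here is a toy version of that wager in Python, just to show how a persistence baseline is scored. The temperature series and the “model” are both made up; the hypothetical model is simply noisier than persistence, which is the point of the anecdote, not a claim about any real forecast system.

```python
# Score a persistence forecast (tomorrow = today) against a hypothetical,
# noisier model forecast using mean absolute error. Placeholder data only.
import numpy as np

rng = np.random.default_rng(2)
temps = 20 + np.cumsum(rng.normal(0, 1.5, 120))   # placeholder daily temperatures

persistence_forecast = temps[:-1]                 # predict today's value for tomorrow
model_forecast = temps[:-1] + rng.normal(0, 3.0, temps.size - 1)  # noisier hypothetical model
actual = temps[1:]

mae = lambda forecast: np.abs(forecast - actual).mean()
print("persistence MAE:", mae(persistence_forecast))
print("model MAE:      ", mae(model_forecast))
```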
There is also the phenomenon described by the late Paul Smith (on his website “Safespeed”) that he called regression to the mean. Often statistical outliers, or concentrations of events, (like accident “hotspots” on the road or perceived concentrations of BSE/CJD outbreaks during the mad cow hysteria) simply go back to average with no outside intervention. However politicians and administrators put up a series of measures (like lower speed limits or gun control measures etc) and then congratulate themselves on the success of their actions which have no effect on what is simple statistical variability.
The statisticians among us, I am sure, will know well that to be more wrong than random has to contain some information. Negative information is information nonetheless, so if something is consistently more wrong than random, and fails to regress to the mean (which is random), then there must be some phenomenon that has the opposite effect to the one being touted in the hysteria. Maybe there is some value in the models?
It is too kind to just point out that they are using bad models. From an accounting point of view, an answer LESS THAN the random expectation is DECEPTION, as it clearly is designed to go away from the real data, not toward it.
One recalls how Steve McIntyre and Ross McKitrick demonstrated that random numbers correlated better to actual temps than Mann’s proxies.
Models are a priori constructs, not evidence of anything. They are nothing but a distraction from genuine science.
Mike jarosz says:
May 23, 2013 at 4:11 am
These guys cheat more than my golf buddies. It was never about the climate. It was never about the environment. It was never about conservation or pollution. It’s about the destruction of capitalism and control over the human race. Unlike my golf buddies, these people are evil.
___________________
It’s all too often about getting a big paycheck.
Worse than pure chance? That’s actually valuable information. Means one has a better chance to win by betting against the model’s projections.
Re: Chuck Nolan at 5:16AM, 5/23/13 (re: Doug Huffman at 4:26AM)
” [Anti-information] in science smells as sweetly as falsification! … mere ADHOCKERY shoring up a failed argument?” [Huffman]
———————————
“Nice term. … Ad Hockery: … or would that be Post Hockery … ?” [Nolan]
LOL, you clever gents, that rose would be called “hockeystickery,” a.k.a.,
a lie.
A Little Fable About a Little Mann
(about how “anti-information” (i.e., a lie) is worse than no answer at all)
Speeding along the highway, you come to a fork in the road. “Right? or Left?” you wonder. A leprechaun named Lucky suddenly appears, standing before you in the Y. “Which way to Belfast?” you ask. “We’re in a hurry!”
“That weh, me fine gentlemen,” says Lucky, confidently jerking his thumb toward his right. Off you go, booking along at top speed, down the broad, well-paved, left fork. Being leprechaun country, there are no warning signs, such as, “Road Ends in 500 feet,” as would have been apropos, here, aaaaand……………. off you go……down………. down…….. dooooooooowwwwwwwwwwn.
If Lucky had shrugged, “I don’t know,” at least, being the reasonably prudent driver that you are, you would have proceeded with caution — even more so if Lucky had been able (and willing) to at least tell you that one of the roads leads to a cliff.
Lesson: Never trust a leprechaun. (or any “green” person — bwah, ha, ha, ha, ha, haaaaaaaaaa!) and, also, as a rule, going Left is generally NOT a wise choice.
re: several bloggers above shrewdly observing that one could bet against the model and win:
JUST A LITTLE FUN
The ol’ headhunter-liar logic illustration 8[:o|]:
Running from your enemy, down a jungle path, you come to a fork in the way. You know that one path leads to the lying (they always lie) headhunters’ village — it’s called “Ip’c’c,” not that it really matters — and the other to the safe village (they always tell the truth) — it’s called Wattsville. You can’t remember which way to go! Ah, but there’s a fine local native standing at the Y. You can ask her! You don’t know which village she is from, however… Hm. What do you ask her? (you only have time to ask one quick question)
Answer below the lines of ###### below (so you can avert your eyes if you like):
################################################################################################################################################################################################################################################################
Answer: Which is the way to your village? [:)]
Whoa! Sorry about the loooong line of ###’s!!!
I didn’t realize I needed to do a line break — the comment box is constantly auto-inserting line breaks, so, I thought it would then, too! I just leaned on the “#” key for awhile and called it good.
Oops! (and, yes, I just changed that 1 to a ! … why-I-don’t-know — hubris, not doubt… (good call, Codetech, re: your above ON topic remark).