Guest post by Steven Goddard
In his recent article, NSIDC’s Dr. Meier answered Question #9 “Are the models capable of projecting climate changes for 100 years?” with a coin flipping example.
However, Willis claims that such a projection is not possible because climate must be more complex than weather. How can a more complex situation be modeled more easily and accurately than a simpler situation? Let me answer that with a couple more questions:
1. You are given the opportunity to bet on a coin flip. Heads you win a million dollars. Tails you die. You are assured that it is a completely fair and unbiased coin. Would you take the bet? I certainly wouldn’t, as much as it’d be nice to have a million dollars.
2. You are given the opportunity to bet on 10000 coin flips. If heads comes up between 4000 and 6000 times, you win a million dollars. If heads comes up less than 4000 or more than 6000 times, you die. Again, you are assured that the coin is completely fair and unbiased. Would you take this bet? I think I would.
Dr. Meier is correct that his coin flip bet is safe. I ran 100,000 iterations of 10,000 simulated random coin flips, which created the frequency distribution seen below.

The chances of getting less than 4,000 or greater than 6,000 heads are essentially zero. However, this is not an appropriate analogy for GCMs. The coin flip analogy assumes that each iteration is independent of all others, which is not the case with climate.
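For anyone who wants to check the intuition, the binomial math behind that distribution is straightforward:

$$\mu = np = 10000 \times 0.5 = 5000, \qquad \sigma = \sqrt{np(1-p)} = \sqrt{10000 \times 0.5 \times 0.5} = 50$$

so the 4,000 and 6,000 cutoffs each sit $1000/50 = 20$ standard deviations from the mean, and the chance of losing Dr. Meier’s bet really is vanishingly small.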
[Note: Originally I used Microsoft’s random number generator, which isn’t the best, as you can see below. The plot above, which I added within an hour of the first post, uses the GNU rand() function, which generates a much better-looking Gaussian.]

Climate feedback is at the core of Hansen’s catastrophic global warming argument. Climate feedback is based on the idea that today’s weather is affected by yesterday’s weather, and this year’s climate is dependent on last year’s. For example, climate models (incorrectly) forecast that Arctic ice would decrease between 2007 and 2010. This would have caused a loss of albedo and led to more absorption of incoming short-wave radiation – a critical calculation. Thus climate model runs in 2007 also incorrectly forecast the radiative energy balance in 2010, and that error cascaded into future year calculations. The same argument can be made for cloud cover, snow cover, ocean temperatures, etc. Each year and each day affects the next. If the 2010 calculations are wrong, then the 2011 and 2100 calculations will also be incorrect.
Because of feedback, climate models are necessarily iterative. NCAR needs a $500 million supercomputer to do very long iterative runs decades into the future. It isn’t reasonable to claim both independence (randomness) and dependence (feedback). Climate model errors compound through successive iterations rather than correcting themselves. How could they correct?
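To illustrate the compounding argument in code – a toy sketch only, with a made-up feedback gain, not anything resembling an actual GCM – consider an iteration where each step’s output is the next step’s input:

// feedback_error.cc – toy illustration only, not a climate model.
// The "true" run sits exactly at the fixed point of the recurrence;
// the "model" run uses the same recurrence but starts 0.1% off.
// With a hypothetical feedback gain above 1, the error grows every step.
#include <cstdio>
#include <cmath>

int main()
{
    const double gain = 1.05;   // hypothetical amplifying feedback per step
    double truth = 10.0;        // fixed point of x -> 1.05*x - 0.5
    double model = 10.01;       // same recurrence, 0.1% initial error

    for (int step = 0; step <= 100; step++)
    {
        if (step % 20 == 0)
            std::printf("step %3d: error = %.4f\n",
                        step, std::fabs(model - truth));
        truth = gain * truth - 0.5;  // stays exactly at 10.0
        model = gain * model - 0.5;  // error multiplies by 1.05 each step
    }
}

After 100 steps the initial error of 0.01 has grown by a factor of $1.05^{100} \approx 131$; nothing in the iteration ever cancels it.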
Speaking of Arctic ice cover and albedo, the sun is starting to get high in the sky in the Arctic, and ice extent is essentially unchanged from 30 years ago. How does this affect climate calculations?
GCMs are similar to weather models, with added parameters for factors which may change over time – like atmospheric composition, changes in sea surface temperatures, changes in ice cover, etc. We know that weather models are very accurate for about three days, and then quickly break down due to chaos. There is little reason to believe that climate models will do any better through successive iterations. The claim is that the errors average out over time and produce a regionally correct forecast, even if incorrect for a specific location.
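The breakdown-by-chaos point is easy to demonstrate with the classic logistic map – a standard chaos toy, not a weather model – where two runs that start one part in a billion apart disagree completely within a few dozen iterations:

// chaos_toy.cc – the classic logistic map, a standard chaos demonstration
// (not a weather model). Two trajectories starting 1e-9 apart diverge
// to completely different values within a few dozen steps.
#include <cstdio>
#include <cmath>

int main()
{
    const double r = 3.9;      // logistic-map parameter in the chaotic regime
    double a = 0.4;            // one "forecast"
    double b = 0.4 + 1e-9;     // the same forecast with a tiny initial error

    for (int step = 0; step <= 50; step++)
    {
        if (step % 10 == 0)
            std::printf("step %2d: difference = %g\n", step, std::fabs(a - b));
        a = r * a * (1.0 - a);
        b = r * b * (1.0 - b);
    }
}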
A good example of how inaccurate climate forecasts can be is shown in the two images below. NOAA’s Climate Prediction Center issued a long-range forecast for the past winter in February 2009. Brown and orange represent above-normal temperatures, and as you can see, they got most of the US backwards.
NOAA CPC’s long range forecast for winter 2009-2010
NOAA’s reported results for winter 2009-2010
The UK Met Office seasonal forecasts have also been notoriously poor, culminating in their forecast of a warm winter in 2009-2010.
The Met Office climate models forecast declining Antarctic sea ice, which is the opposite of what has been observed.
NSIDC’s observed increase in Antarctic sea ice
Conclusion: I don’t see much theoretical or empirical evidence that climate models produce meaningful information about the climate in 100 years.
Wren (12:41:26),
Bad example.
If you adjust the stock market for inflation between 1966 and 1982, for example, stock appreciation was flat. In constant dollars, the real stock appreciation was zero.
No GCMs [computer climate models] predicted the flat temperatures over the past 15 years [per Phil Jones].
None of them was right. Not a single one. They all predicted rising temperatures. That’s because they are programmed by people who are paid to show a particular outcome. Garbage in, Gospel out.
But think of the money we can make by claiming we know that which can never be known.
Simulated coin flips prove only that your computer is functioning.
Test the validity of your model against real coins.
CO2 causing global warming is analogous to California not defaulting on its debt in the next 30 years.
I’m beginning to have some sympathy for climate scientists like Dr Meier.
I read the letters to the editor in Aviation Week & Space Technology, and some fellow will make a point about some jet engine, then next week he’ll get nailed by a dozen specialists in rocket science, engineering, corporate accounting, etc., who will detail why he’s wrong.
Climate scientists try to tease a climate signal out of weather noise, then extrapolate to the future. They have to gather data, use statistics, program computers, deal with politicians, get grants and funding, and a dozen other things, then publicly produce a conclusion/result/paper. They have to be generalists.
Meanwhile, here’s WUWT.
We have generalist climate scientists the equal of any. SurfaceStation volunteers and UAH know in detail about gathering data. We have statisticians who can spot misapplications in an instant. We have computer experts who deconstruct their shaky code and expose the tricks, accidental and otherwise. We have government types who see through the machinations to find the motivations. We have academics who note the violations of publication protocols.
There is no skill that a climate scientist uses that the collective expertise of WUWT doesn’t far surpass.
Big Climate can rely on only the likes of the perky Katie Couric to hide the facts, and the crooked politicians to keep up the funding. We’ll see how that climate changes in November.
A tip of the hat to Dr Meier and the others who have ventured over here to make their case.
The important thing with computer models is that they have been shown to have predictive power over several cycles. The upside and the downside can have completely different characteristics. The failure to predict the recent downturn (or lack of increase) demonstrates that a large proportion of the forcings must not be being modelled – or that some inputs, not known in advance, completely counteract the ones being modelled. It is the inability to predict changes in direction which makes long-distance predictions implausible. I suspect the coin flippings were referring to a sort of Monte Carlo style simulation, i.e., a lot of random events occur, and so over a long period – because of the distribution you have shown above – the output is closer to a different simulation run or to the real system. The problem with that, of course, is that it suggests the forcings in the model are essentially random.
If anyone wants to reproduce the Gaussian on their favorite platform:
g++ -g -o coin_flip coin_flip.cc
I used gnuplot to generate the graph
// coin_flip.cc
// Usage: coin_flip <sample_size> <iterations>
// Prints a histogram of heads counts, one "count frequency" pair per line.
#include <iostream>
#include <vector>
#include <cstdlib>

int
main(int argc, char** argv)
{
    size_t sample_size = (size_t)atoi(argv[1]);  // flips per trial, e.g. 10000
    int iterations = atoi(argv[2]);              // number of trials, e.g. 100000
    // a trial can produce anywhere from 0 to sample_size heads, hence the +1
    std::vector<int> ones_count_vector(sample_size + 1, 0);
    for (int i = 0; i < iterations; i++)
    {
        int number_of_ones = 0;
        for (size_t j = 0; j < sample_size; j++)
        {
            int random_number = rand();
            if ((random_number & 1) == 1)
            {
                number_of_ones++;
            }
        }
        ones_count_vector[number_of_ones]++;
    }
    for (size_t i = 0; i <= sample_size; i++)
    {
        if (ones_count_vector[i])
        {
            std::cout << i << " " << ones_count_vector[i] << std::endl;
        }
    }
}
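A sample run (the output file name and plot command are just examples) would be:

./coin_flip 10000 100000 > hist.dat

and then, inside gnuplot:

plot "hist.dat" with impulses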
Wren said:
“I prefer the stock market analogy. I would have much more confidence in a forecasted rise in the Dow for a 50-year period than a 1-year period.”
————-
Bingo. GCMs are best when viewed with this kind of general trend analysis. The climate undergoes a “random walk” on shorter time scales, but the biggest influencers (Milankovitch Cycles) work on the longest cycles, while the intermediate influencers (Greenhouse Gases) work on the medium-term cycles, and the smallest influencers (solar cycles, ENSO, PDO, etc.) work on the shortest cycles and become the “noise” imposed on those longer cycles. It has been a major effort of climate scientists to weed out short-term noise from the longer cycles to understand the role that CO2 plays. A good example to see this directly can be found in this chart:
http://www.climate4you.com/Sun.htm#Global temperature and sunspot number
Notice the noise of the solar cycles imposed on the background of the steadily increasing temperatures. The dips in temperature rise occur during the solar minimums, as we saw during the last few years, but now the rise continues once more. 2010 is likely to be the warmest year on the instrument record. To what do the AGW skeptics attribute this year’s heat?
Looks like the html parser removed iostream and vector from the #include statements above
Steve Goddard (12:56:54) :
Wren (12:41:26) :
Your argument boils down to “temperatures are rising.” In that case, a simple extrapolation is more valuable than a $500 million supercomputer GCM simulation.
http://docs.google.com/View?id=ddw82wws_5998xrxhzhc
According to the long term GISS trend, temperatures will rise about 0.6C by the end of the century.
======
A simple extrapolation assumes the rise is simply a function of time. We know better than that, don’t we?
So, to describe what I mean in easier terms: suppose the coin flipper were instead a casino, and the casino owner was filing a fixed amount off the coin every week to increase the odds. The casino owner makes a computer model to predict the coin flips. You can imagine that if his filing was not perfect, the computer model would be better at predicting long range than short – the errors in the filing would average out over time.
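A toy simulation of this casino analogy – all numbers invented for illustration – shows the effect: the model mispredicts any single week, but its relative error on the cumulative amount filed shrinks as the horizon grows.

// casino_model.cc – toy sketch of the casino analogy; all numbers made up.
// The owner intends to file 1.0 mg off the coin each week, but the actual
// amount varies randomly by up to +/-0.5 mg. A model that assumes exactly
// 1.0 mg/week mispredicts any given week, yet its relative error on the
// cumulative total shrinks with the horizon as the weekly errors average out.
#include <cstdio>
#include <cstdlib>
#include <cmath>

int main()
{
    const double nominal = 1.0;   // intended mg filed per week
    double actual = 0.0;          // cumulative mg actually filed

    for (int week = 1; week <= 1000; week++)
    {
        double noise = ((double)rand() / RAND_MAX) - 0.5;  // +/-0.5 mg error
        actual += nominal + noise;
        if (week % 200 == 0)
        {
            double predicted = nominal * week;  // model: nominal amount, every week
            std::printf("week %4d: relative error = %.2f%%\n",
                        week, 100.0 * std::fabs(predicted - actual) / actual);
        }
    }
}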
The assertion is frequently made that it is easier to correctly model 100 years of climate than it is to model 10 years. The assertion is that the errors would cancel out!
For this to be true, each event must be totally random and independent of all previous events, which is NOT TRUE OF CLIMATE EVENTS!
This is nonsense. A model is like an algebra test with 10 questions where the answer to #1 is the input to question #2 and the answer to question 2 is the input to question 3 and so on.
If the feed forward is POSITIVE errors are amplified so that a tiny error in the output of question #1 becomes monstrous by answer #10. Even if the basic equations are perfectly understood [which they aren’t] the errors compound until the output is worthless.
In a negative feedback system, which the climate alarmists assure us is not the case, the errors would drive the answer toward the set point, and they would not accumulate as badly, but they would still accumulate.
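In symbols (a standard linear-feedback result, added here for illustration): for the iteration

$$x_{n+1} = g\,x_n + b + \varepsilon \;\Longrightarrow\; x_\infty = \frac{b+\varepsilon}{1-g} \quad (|g| < 1),$$

a persistent per-step error $\varepsilon$ does not blow up, but it settles at a permanent offset of $\varepsilon/(1-g)$ from the true set point; with $g > 1$ the same recurrence multiplies any initial error by $g^n$ after $n$ steps.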
Steve in SC (13:03:22) :
One billion real coin flips would be needed to reproduce the experiment. That would take over 31 years at one toss per second.
And you are going to come up with the same answer anyway.
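For what it’s worth, the arithmetic checks out:

$$10{,}000 \times 100{,}000 = 10^9 \text{ flips}, \qquad \frac{10^9\ \text{s}}{3.15 \times 10^7\ \text{s/yr}} \approx 31.7\ \text{years.}$$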
At least Microsoft’s random function produces epiphanies…
However, according to even longer-term climate trends, the temperature will cycle warmer and colder during the next 100 years, and without accurate knowledge of where in the “LONG” term trend we are, we cannot know where we will be 100 years from now. We can only know where we are going after we get there, unless we have an accurate road map.
The coin toss thing is an attempt to draw attention away from the main issue of models based on bad input: “GIGO.”
The WORST Gaussian distribution EVER!
Steve Goddard (12:25:19) :
I tried generating the coin flip Gaussian using g++ rand() and it is remarkable how much better a job it does than Visual Studio.
http://docs.google.com/View?id=ddw82wws_597cn34skhf
I hope the irony implicit in this and subsequent discussion about statistical accuracy of coin flipping isn’t lost in the bigger concern over climate model tossing… (not picking on you or anyone specifically, Steve.)
Smokey (13:00:45) :
Wren (12:41:26),
Bad example.
If you adjust the stock market for inflation between 1966 and 1982, for example, stock appreciation was flat. In constant dollars, the real stock appreciation was zero.
No GCMs [computer climate models] predicted the flat temperatures over the past 15 years [per Phil Jones].
None of them was right. Not a single one. They all predicted rising temperatures. That’s because they are programmed by people who are paid to show a particular outcome. Garbage in, Gospel out.
====
I know what you are saying about inflation, but 1966-82 doesn’t look like 50 years to me.
Flat temperatures over the past 15 years? If you are referring to what Jones said about “statistical significance” I’m afraid you are misinterpreting what the term means.
Hansen’s projections made back in the 1980’s are looking better and better. No wonder he won an award for his contributions to climate modeling.
“No GCMs [computer climate models] predicted the flat temperatures over the past 15 years [per Phil Jones].
None of them was right. Not a single one. They all predicted rising temperatures. That’s because they are programmed by people who are paid to show a particular outcome. Garbage in, Gospel out.”
…Exactly.
Are local weather forecasts even in the 50/50 range? I bet not…
Keep running the Microsoft random number generator, looks like a Time Lord is sending you a message.
(For those of us who wonder what “quirks” M$ programmers stuck in their code… Does an “easter egg” mystery image sound all that far-fetched?)
I’m able to flip a fair coin so that it reliably comes up heads.
I’ve learned how much force to apply in the toss so the coin does one and a half rotations before coming down. It then becomes a matter of (by feel) making sure the coin is tails-up at the point that I toss it.
Wren (12:41:26) :
(….)
I prefer the stock market analogy.
————
I submit you’re forgetting two parallels:
The US becoming a completely socialist/Marxist nation – a recent study showed that countries that did so (and this wasn’t a future projection, prediction, or prognostication; it was based on case studies) saw an immediate 40% drop in GDP.
The world dropping out of the Holocene Epoch into the next Ice Age.
Both are catastrophic to their participants.
As others have pointed out, the context of the bet is important, e.g., is the coin flipper an expert flipper? The conditions under which the bet is made also matter, since they actually change the expected value of the bet. If I have been given 3 months to live, the bet looks very different than if I were in good health and had every expectation of living another 40 years or more.
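That point can be written as a one-line expected-utility sketch (standard decision theory, not from the comment itself):

$$\mathrm{EU} = p \cdot U(\text{win}) + (1-p) \cdot U(\text{death}),$$

where the same probability $p$ can flip the decision, because $U(\text{death})$ is far less negative for someone with three months to live.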
It still seems to me that significant variables relevant to understanding climate, and hence to GCMs, are not modeled very effectively because of our limited understanding of key climate-relevant processes and of the interactions among both understood and not-understood processes. If we don’t know, we don’t know, and the uncertainty of the models has to be acknowledged accordingly. Sometimes it is wise to say you don’t know and simply decline to make a prediction. As Matt Briggs keeps reminding us, “too many people are too certain of too many things.”
“I ran 100,000 iterations of 10,000 simulated random coin flips”
Not the real thing. Go back and flip coins that many times, and don’t control the experiment.
@Pwl,
Wouldn’t it be: 50-(x/2), x, 50-(x/2)?
A coin only has one edge and the chances of it landing on that edge would detract evenly from the chances of it landing on any particular side.
Just a thought…
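For completeness, the suggested split does sum correctly: with an edge probability of $x\%$,

$$P(\text{heads}) = P(\text{tails}) = 50 - \frac{x}{2}, \qquad \left(50 - \frac{x}{2}\right) + x + \left(50 - \frac{x}{2}\right) = 100\%.$$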