One Model, One Vote

Guest Post by Willis Eschenbach

The IPCC, that charming bunch of United Nations intergovernmental bureaucrats masquerading as a scientific organization, views the world of climate models as a democracy. It seems that as long as your model is big enough, they will include it in their confabulations. It has always seemed strange to me that they don't apply even the simplest of tests to weed out the losers.

Through the good offices of Nic Lewis and Piers Forster, who have my thanks, I've gotten a set of 20 matched model forcing inputs and corresponding surface temperature outputs, as used by the IPCC. These are the individual models whose average I discussed in my post called Model Climate Sensitivity Calculated Directly From Model Results. I thought I'd investigate the temperatures first, and compare the model results to the HadCRUT and other observational surface temperature datasets. I start by comparing the datasets themselves. One of my favorite tools for comparing datasets is the "violin plot". Figure 1 shows a violin plot of a random (Gaussian normal) dataset.

Figure 1. Violin plot of 10,000 random datapoints, with mean of zero and standard deviation of 0.12.

You can see that the "violin" shape, the orange area, is composed of two familiar "bell curves" placed vertically back-to-back. In the middle there is a "boxplot", which is the box with the whiskers extending out top and bottom. In a boxplot, half of the data points have a value in the range between the top and the bottom of the box; that box height is known as the "interquartile range", because it runs from the first quartile to the third quartile of the data. The "whiskers" extending above and below the box reach to the most extreme data points lying within 1.5 times the interquartile range beyond the box, which is the usual convention (and the default in the R boxplot and vioplot routines). The heavy black line shows, not the mean (average) of the data, but the median of the data. The median is the value in the middle of the dataset if you sort the dataset by size. As a result, it is less affected by outliers than is the average (mean) of the same dataset.

So in short, a violin plot is a pair of mirror-image density plots showing how the data is distributed, overlaid with a boxplot. With that as prologue, let’s see what violin plots can show us about the global surface temperature outputs of the twenty climate models.
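For anyone who wants to follow along in R, here's a minimal sketch of how a plot like Figure 1 can be made with the same vioplot package my code uses. The colors, labels, and random seed are just illustrative choices, not the actual code behind the figure.

```r
# Sketch of a Figure-1-style violin plot: 10,000 random normal values
# with mean zero and standard deviation 0.12, per the Figure 1 caption.
library(vioplot)   # violin plot add-on package from CRAN

set.seed(42)       # make the random draws reproducible
x <- rnorm(10000, mean = 0, sd = 0.12)

vioplot(x, names = "random normal", col = "orange")
title(main = "Violin plot of a random normal dataset", ylab = "Value")
```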

For me, one of the important metrics of any dataset is the “first difference”. This is the change in the measured value from one measurement to the next. In an annual dataset such as the model temperature outputs, the first difference of the dataset is a new dataset that shows the annual CHANGE in temperature.  In other words, how much warmer or cooler is a given year’s temperature compared to that of the previous year? In the real world and in the models, do we see big changes, or small changes?

This change in some value is often abbreviated with the Greek letter delta, "∆", which denotes the difference between a measurement and the previous one. For example, the change in temperature is written "∆T".
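As a concrete example, the first difference is a one-liner in R using the diff() function. The five temperatures below are made-up numbers for illustration, not the model or observational data.

```r
# First difference (delta-T) of a toy series of annual temperatures.
temps   <- c(14.00, 14.10, 13.90, 14.20, 14.15)  # invented annual values, deg C
delta_T <- diff(temps)       # year-on-year changes: 0.10, -0.20, 0.30, -0.05
max(abs(delta_T))            # largest single-year swing, up or down (0.30 here)
```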

So let’s begin by looking at the first differences of the modeled temperatures, ∆T. Figure 2 shows a violin plot of the first difference ∆T of each of the 20 model datasets, as numbers 1:20, plus the HadCRUT and random normal datasets.

Figure 2. Violin plots of 20 climate models (tan), plus the HadCRUT observational dataset (red), and a normal Gaussian dataset (orange) for comparison. Horizontal dotted lines in each case show the total range of the HadCRUT observational dataset. Click any graphic to embiggen.

Well … the first thing we can say is that we are looking at very, very different distributions here. I mean, look at GFDL [11] and GISS [12] as compared with the observations …

Now, what do these differences between, say, GFDL and GISS mean when we look at a timeline of their modeled temperatures? Figure 3 shows the two datasets, GFDL and GISS, along with my emulation of each result.

Figure 3. Modeled temperatures (dotted gray lines) and emulations of two of the models, GFDL-ESM2M and GISS-E2-R. The emulation method is explained in the first link at the top of the post. Dates of major volcanic eruptions are shown as vertical lines.

The difference between the two model outputs is quite visible. There is little year-to-year variation in the GISS results, half or less of what we see in the real world. On the other hand, there is very large year-to-year variation in the GFDL results, up to twice the size of the largest annual changes ever seen in the observational record …

Now, it’s obvious that the distribution of any given model’s result will not be identical to that of the observations. But how much difference can we expect? To answer that, Figure 4 shows a set of 24 violin plots of random distributions, with the same number of datapoints (140 years of ∆T) as the model outputs.

Figure 4. Violin plots of different random datasets with a sample size of N = 140, and the same standard deviation as the HadCRUT ∆T dataset.
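For those who want to reproduce this kind of small-sample experiment, here's a minimal R sketch along the lines of Figure 4. The 0.12 standard deviation is the one from Figure 1; the seed, colors, and layout are my own illustrative choices, not the code that produced the figure.

```r
# Twenty-four random normal samples, each N = 140, drawn as violin plots.
library(vioplot)

set.seed(1)
random_sets <- replicate(24, rnorm(140, mean = 0, sd = 0.12), simplify = FALSE)

# Pass all 24 vectors to vioplot() in one call, plus labels and a fill color.
do.call(vioplot, c(random_sets, list(names = as.character(1:24), col = "skyblue")))
title(main = "24 random normal samples, N = 140", ylab = "delta T")
```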

As you can see, with a small sample size of only 140 data points, we can get a variety of shapes. That is one of the problems in interpreting results from small datasets: it's hard to be sure what you're looking at. However, some things don't change much. The interquartile distance (the height of the box) does not vary a lot. Nor do the locations of the ends of the whiskers. Now, if you re-examine the GFDL (11) and GISS (12) modeled temperatures (as redisplayed in Figure 5 below for convenience), you can see that they are nothing like any of these examples of normal datasets.

Here’s a couple of final oddities. Figure 5 includes three other observational datasets—the GISS global temperature index (LOTI), and the BEST and CRU land-only datasets.

Figure 5. As in Figure 2, but including the GISS, BEST, and CRUTEM temperature datasets at lower right. Horizontal dotted lines show the total range of the HadCRUT observational dataset.

Here we can see a curious consequence of the tuning of the models; I'd never before seen how much the chosen tuning target affects the results. You see, you get different results depending on which temperature dataset you choose to tune your climate model to … and the GISS model [12] has obviously been tuned to replicate the GISS temperature record [22]. It looks like they've tuned it quite well to match that record, actually. And CSIRO [7] may have done the same. In any case, they are the only two that have anything like the distinctive shape of the GISS global temperature record.

Finally, the two land-only datasets [23, 24 at lower right of Fig. 5] are fairly similar. However, note the differences between the two global temperature datasets (HadCRUT [21] and GISS LOTI [22]), and the two land-only datasets (BEST [23] and CRUTEM [24]). Recall that the land both warms and cools much more rapidly than the ocean. So as we would expect, there are larger annual swings in both of those land-only datasets, as is reflected in the size of the boxplot box and the position of the ends of the whiskers.

However, a number of the models (e.g. 6, 9, and 11) resemble the land-only datasets much more than they do the global temperature datasets. This would indicate problems with the representation of the ocean in those models.

Conclusions? Well, the maximum year-to-year change in the earth’s temperature over the last 140 years has been 0.3°C, for both rising and falling temperatures.

So should we trust a model whose maximum year-to-year change is twice that, like GFDL [11]? What is the value of a model whose year-to-year changes are half those of the observations, like GISS [12] or CSIRO [7]?

My main conclusion is that at some point we need to get over the idea of climate model democracy, and start heaving overboard those models that are not lifelike, that don’t even vaguely resemble the observations.

My final observation is an odd one. It concerns the curious fact that an ensemble (a fancy term for an average) of climate models generally performs better than any model selected at random. Here’s how I’m coming to understand it.

Suppose you have a bunch of young kids who can’t throw all that well. You paint a target on the side of a barn, and the kids start throwing mudballs at the target.

Now, which one is likely to be closer to the center of the target—the average of all of the kids’ throws, or a randomly picked individual throw?

It seems clear that the average of all of the bad throws will be your better bet. A corollary is that the more throws, the more accurate your average is likely to be. So perhaps this is the justification in the minds of the IPCC folks for the inclusion of models that are quite unlike reality … they are included in the hope that they’ll balance out an equally bad model on the other side.

HOWEVER … there are problems with this assumption. One is that if all or most of the errors are in the same direction, then the average won’t be any better than a random result. In my example, suppose the target is painted high on the barn, and most of the kids miss below the target … the average won’t do any better than a random individual result.
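Here's a toy simulation of the mudball story, just to put numbers on it. Everything here is invented for illustration; it has nothing to do with how the models themselves are built.

```r
# Mudball experiment: each throw misses the target (at zero) by a random error.
# Compare the error of one random throw with the error of the average of all
# throws, first with unbiased errors, then with a shared bias (everyone low).
set.seed(123)
n_trials <- 10000   # repeat the experiment many times
n_kids   <- 20      # twenty throwers, like twenty models

one_trial <- function(bias) {
  throws <- rnorm(n_kids, mean = bias, sd = 1)            # individual errors
  c(single = abs(throws[1]), average = abs(mean(throws)))
}

unbiased <- replicate(n_trials, one_trial(bias = 0))
biased   <- replicate(n_trials, one_trial(bias = 2))      # shared miss in one direction

rowMeans(unbiased)  # the average beats a single random throw by a wide margin
rowMeans(biased)    # with a shared bias, averaging barely helps at all
```

In the unbiased case the error of the average shrinks roughly with the square root of the number of throwers; with a shared bias, it does not.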

Another problem is that many models share large segments of code, and more importantly they share a range of theoretical (and often unexamined) assumptions that may or may not be true about how the climate operates.

A deeper problem in this case is that the increased accuracy only applies to the hindcasts of the models … and they are already carefully tuned to create those results. Not the “twist the knobs” kind of tuning, of course, but lots and lots of evolutionary tuning. As a result, they are all pretty good at hindcasting the past temperature variations, and the average is even better at hindcasting … it’s that dang forecasting that is always the problem.

Or as the US stock brokerage ads are required to say, “Past performance is no guarantee of future success”. No matter how well an individual model or group of models can hindcast the past, it means absolutely nothing about their ability to forecast the future.

Best to all,

w.

NOTES:

DATA SOURCE: The model temperature data is from the study entitled Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models, by Forster, P. M., T. Andrews, P. Good, J. M. Gregory, L. S. Jackson, and M. Zelinka, 2013, Journal of Geophysical Research, 118, 1139–1150, provided courtesy of Piers Forster. Available as submitted here, and worth reading.

DATA AND CODE: As usual, my R code is a snarl, but for what it's worth it's here, and the data is in an Excel spreadsheet here.

88 Comments
H.R.
November 22, 2013 7:10 am

EternalOptimist says:
November 22, 2013 at 1:38 am
“I always feared this would happen. The CAGW debate has erupted into Violins”
==============================================================
Phweeeet! 15 minute time out in the corner, no recess, and no pudding for dessert!
As Willis wrote in the main post:
“Another problem is that many models share large segments of code, and more importantly they share a range of theoretical (and often unexamined) assumptions that may or may not be true about how the climate operates.”
If a model is really good, it should run long and well enough to predict the next glaciation, eh? One of my assumptions about climate is that there will be another glaciation.

ferdberple
November 22, 2013 7:11 am

In another recent thread Nick Stokes said that the models were not fitted to the temperature record,
“GCMs are first principle models working from forcings. However, they have empirical models for things like clouds, updrafts etc which the basic grid-based fluid mechanics can’t do properly. The parameters are established by observation. I very much doubt that they fit to the temperature record; that would be very indirect. Cloud models are fit to cloud observations etc.”
=============
If that were the case, then the models would all have the same values for aerosols within a very narrow margin. They don't. Aerosols, not CO2, are the tuning knob on the models.
First-principle models work fine in simple systems. In complex systems they are hopeless because of chaos. Round-off errors quickly accumulate in even the most precise models and overwhelm the calculation. This happens even in simple linear programming models with relatively few terms, such that you must back-iterate to minimize the error. As model size grows, the problem expands exponentially and overwhelms the result.
The classic example of the failure of first principles to predict complex systems is the ocean tides. You cannot calculate the tides with any degree of accuracy from first principles, yet we propose to calculate a much more complex problem, the climate of the earth, using the same failed methodology. A methodology that has already proven itself to be hopeless at predicting the tides, the economy, the market, or any other complex time series.
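To illustrate with a toy example (the logistic map, not a climate model): start two runs that differ by one part in ten billion, and within a few dozen steps they bear no resemblance to each other.

```r
# Sensitivity to initial conditions in a simple chaotic iteration (logistic
# map with r = 4). A difference of 1e-10 in the starting value grows until
# the two runs are completely different. Illustration only, not a GCM.
logistic_run <- function(x0, n = 60, r = 4) {
  x <- numeric(n)
  x[1] <- x0
  for (i in 2:n) x[i] <- r * x[i - 1] * (1 - x[i - 1])
  x
}

a <- logistic_run(0.3)
b <- logistic_run(0.3 + 1e-10)
round(abs(a - b)[c(10, 20, 30, 40, 50, 60)], 4)  # the gap grows from ~0 to order 1
```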
Early humans already discovered how to predict complex time series. Without any knowledge of first principles they learned to predict the seasons. As advanced as we believe ourselves to be, we have forgotten this fundamental lesson.

MarkW
November 22, 2013 7:12 am

If they weeded out the losers, there wouldn’t be any left.

ferdberple
November 22, 2013 7:26 am

Average climate sensitivity does exist as a single number, but it is a meaningless term. If you took 1000 identical earths and played out the future, you would find that on some the climate sensitivity was X, and on others it was Y, and you could average this out and say the average sensitivity was (X1+X2…+Xn)/n. However, this would be meaningless on each individual earth, because their sensitivities would still be X1, X2, etc. Now it could be argued that this difference is due to natural variability, but what is natural variability if not the difference between the effects that identical forcings would have in the future? How do we assign this to an unseen, hidden variable, and allow it to affect the climate system, while assuming that the known variables are unaffected?

November 22, 2013 7:45 am

Willis, are you colorblind?
Anyway, lovin’ the mudballs.

November 22, 2013 8:22 am

“In another recent thread Nick Stokes said that the models were not fitted to the temperature record,
“GCMs are first principle models working from forcings. However, they have empirical models for things like clouds, updrafts etc which the basic grid-based fluid mechanics can’t do properly. The parameters are established by observation. I very much doubt that they fit to the temperature record; that would be very indirect. Cloud models are fit to cloud observations etc.”
so which is it?
######################
A while back I passed Willis a paper on how calibration is done:
"Tuning the climate of a global model", by Thorsten Mauritsen, Bjorn Stevens, Erich Roeckner, Traute Crueger, Monika Esch, Marco Giorgetta, Helmuth Haak, Johann Jungclaus, Daniel Klocke, Daniela Matei, Uwe Mikolajewicz, Dirk Notz, Robert Pincus, Hauke Schmidt, and Lorenzo Tomassini.
First, a few definitions. When folks use the word "tuning", my sense is that most think of something like this.
Tuning: you would take the entire historical 2-meter temperature record (air and sea) and fiddle knobs (aerosols, for example) until you matched the metric.
That kind of tuning is not what is done, and that should be pretty obvious. They don't, for the most part, tune to match the entire land-plus-ocean series. That should be clear when you see that
A) they don't match absolute temperature very well, and
B) they miss hindcast peaks and valleys.
Instead, one adjusts parameters to achieve balance at the TOA and to match some subset of temperature anomalies.
Think of this as tuning to match known physics (energy out has to equal energy in) along with tuning to match an initial state.
In the beginning this was done by heat-flux adjustments:
"The need to tune models became apparent in the early days of coupled climate modeling, when the top of the atmosphere (TOA) radiative imbalance was so large that models would quickly drift away from the observed state. Initially, a practice to input or extract heat and freshwater from the model, by applying flux-corrections, was invented to address this problem [Sausen et al., 1988]. As models gradually improved to a point when flux-corrections were no longer necessary [Colman et al., 1995; Guilyardi and Madec, 1997; Boville and Gent, 1998; Gordon et al., 2000], this practice is now less accepted in the climate modeling community."
Now it's done like this: the radiation balance is controlled primarily by tuning cloud-related parameters at most climate modeling centers. In addition, some tune by adjusting ocean albedo, and others tune by adjusting aerosols.
But not to match the entire historical surface temperatures; rather, something like this: in one example they tune to hit the 1850-to-1880 temperature of 13.7C. Then they tune so that the energy out in the satellite era matches satellite observations.
The notion that they tune to match the entire historical series is somewhat of an urban legend. Real skeptics would question that legend. Ever see a skeptic challenge that assertion? Nope. Selective skepticism.
So: the initial state for the first 30 years is 13.7C. Parameters are adjusted so that the temperature matches at the beginning. Then the process is run forward, and you make sure that you match a different parameter at the end: energy out.
Got that? So you're not tuning to match the hump in the '30s or the rise since 1976.
You are tuning to make sure that the global average of 13.7C is set correctly at the start, and then closing on a different parameter at the end: energy out.
In between, the temperature is what it is: a function of the forcings.
Now scream and shout, but by all means don't read the papers or back up the urban legend with some actual evidence.
The urban legend: all GCMs are tuned to match the entire historical series.
Now quickly go google "tuning GCMs", find a quote, and use that quote as proof. But by no means should you actually read papers, or write to scientists who work on models, or visit a modeling center to actually watch a tuning process. Don't do that; your story might get very complicated very quickly. Stick with the simple legend.

G P Hanner
November 22, 2013 8:22 am

I knew a financial economist who was very good at tuning his investment model to accurately reflect past results. Still could not forecast investment results — at all. Knowing the past in no way gives the ability to see the future.

Epigenes
November 22, 2013 8:29 am

Eschenbach is tedious. This post is boring and detracts from the debate. How many of you studied his pictograms?
None of the warmists will give a damn. Eschenbach is preaching to the converted and on an ego trip. He needs to adopt a more political approach. Do not hold your breath, 'cos he has no political nous whatsoever.

November 22, 2013 8:34 am

Epigenes,
I like Willis. He is not tedious, he is interesting. Your criticism is tedious.
Regarding ‘political’, you have no concept of that concept. How is your comment politically wise? Willis often gets hundreds of comments — almost all of them favorable — under his articles. So far, you have one comment. Mine. And it is anything but favorable.

mpainter
November 22, 2013 8:52 am

“This has always seemed strange to me, that they don’t even have the most simple of tests to weed out the losers.”
<<<<<<<<<<<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
That could be because they are all losers.

Alan Robertson
November 22, 2013 9:07 am

Epigenes says:
November 22, 2013 at 8:29 am
“blah blah blah”
____________________________
I looked over the graphs as I read Willis’ text.
Epigenes, you should know better…
Where is your contribution?
You say Willis is on a trip, but you sound as if you're on Lithium and forgot your last few doses.

Jim G
November 22, 2013 9:12 am

Interesting, but methinks we too often try to out-statistically masturbate the warmistas. A simple plot of actual temperatures over time, as opposed to anomalies, says volumes about the over-exaggerations being employed to scare folks. Plotting anomalies, to me, is simply another way of rescaling to exaggerate what is actually going on. And yea, yea, I know how little change in temperature is needed to substantially affect our way of life. But in reality that is mostly if the change is to colder temps, which would put the hurt on our growing seasons and food supplies, increase disease, and probably lead to some sabre rattling and war in the end.

Alan Robertson
November 22, 2013 9:13 am

Nick Stokes says:
November 22, 2013 at 4:47 am
jimmi_the_dalek says: November 22, 2013 at 2:42 am
“In another recent thread Nick Stokes said that the models were not fitted to the temperature record,…
so which is it?”
Well, I'd invite people who think they are so fitted, or tuned, to say how they think it is done, and offer their evidence.
____________________________________
I won’t mention harry.readme, because I’m not much concerned with the practice of tuning models to fit the data. What I have a problem with, is your crowd’s practice of tuning the data to fit the models.

November 22, 2013 9:19 am

Willis: Did you get the book I sent you through Anthony ("Rapid Interpretation of EKG's")? I sent it to General Delivery in Chico, CA. It should be there (according to Amazon). Let me note, this isn't just a book on reading EKGs; it's the most concise cardiac education you can get in one book, and (with your abilities) you could go through it in about 3 days!

David A
November 22, 2013 9:34 am

Mosher ends a long post with, "Stick with the simple legend." (a snide suggestion that the skeptics do not understand the real work done by the modelers). Yet the truth is that it is the models that stick with the simple legend.
They have a simple legend about the power of CO2 and the climate sensitivity to that anthropogenic increase. So, no matter how they "tune" the past (and most skeptics I know never said they attempted to match the entire, and uncertain, historic record), their forward runs from their tuning point uniformly run considerably higher than observations.
I predict that the one factor that would improve all the models is tuning down the climate sensitivity to CO2. Yet the IPCC models stick with the "simple legend" of CAGW, and have the nerve to run their disaster scenarios based on the ensemble model mean of already-failed projections.
Mr. Mosher, step away from the trees, so you can see the forest.

Duster
November 22, 2013 10:40 am

DocMartyn says:
November 21, 2013 at 6:33 pm
What graphics package did you use for the violin plots?
They look lovely BTW

In R you can load specific libraries for specialized tasks. If you look at Willis' code, he has a line "require(vioplot)". That indicates he loads the "vioplot" package, which is, more or less, an extension to R downloadable from CRAN; it has to be installed in the local R installation before it can be used. There are two other related packages that do similar jobs. If the data points available are so limited that a violin or box plot is liable to be misleadingly generalized, you can use viopoints instead. They are an underused form of graphic.
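For example, a minimal session would look something like this (illustrative only):

```r
# One-time install of the vioplot package from CRAN, then load and use it.
install.packages("vioplot")
library(vioplot)   # same effect as require(vioplot) inside a script

vioplot(rnorm(140, mean = 0, sd = 0.12), names = "example", col = "orange")
```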

jimmi_the_dalek
November 22, 2013 10:47 am

The comments by Nick Stokes and Steven Mosher on tuning GCMs are interesting. I knew that they could not be fitting the entire temperature record; that is what the cyclomaniacs do. Perhaps someone should write a more comprehensive account. David A above says "one factor that would improve all the models is tuning down climate sensitivity to CO2", but I thought that the sensitivity was one of the results, not one of the inputs. Is this correct? I can see why, as Mosher says, they initialise to start in the 1850s or thereabouts (no accurate data prior to that), but the problem with starting so recently is that, if there are long-term cycles in ENSO or the like, which are believed to be important, they would not be detected. Long-term cycles would have to emerge from the model (they cannot be an input) if the physics is right. Has anyone ever started a GCM at a point thousands of years back, even though the initial conditions may be poorly defined, just to see if any cyclical behaviour emerges? Or are the models not sufficiently numerically stable to allow that?

Duster
November 22, 2013 10:50 am

Epigenes says:
November 22, 2013 at 8:29 am
Eschenbach is tedious. This post is boring and detracts from the debate. How many of you studied his pictograms?
None of the warmists will give a damn. Eschenbach is preaching to the converted and on an ego trip. He needs to adopt a more political approach. Do not hold your breathe ‘cos he has no political nous whatsoever.

What precisely is your point, other than that you skipped the introductory statistics courses, never learned about the process of exploratory data analysis, and don't want to? A "political approach" is what distinguishes "consensus" science from the real thing. It is irrelevant how many people "believe" something; what matters is whether the belief is true to fact or not. Willis is comparing the products of belief with fact here. If you want to be political, he's offering you the ammunition.

Duster
November 22, 2013 11:03 am

Otto Weinzierl says:
November 21, 2013 at 11:53 pm
Sorry to correct you but the “whiskers” extend 1.5 times of the height of the box. Data beyond that are classified as outliers.

It is not that simple. There are several types of boxplot. Even Wikipedia notes that the ends of the whiskers can represent several different things, only one of which would match your description.

Jimbo
November 22, 2013 11:34 am

David A says:
November 22, 2013 at 4:20 am
Jimbo says:
November 22, 2013 at 1:33 am
Would it not be a good idea for the IPCC to look at say 5 of the models that came closest to temperature observations…………………….
============================================
………………………If they follow your suggestion they will likely find that by tuning way down “climate sensitivity” to CO2, they can produce far more accurate predictions.

That is what I was hinting at. 🙂 If the IPCC did this their scary projections would no longer be scary and we would not have to act now.

November 22, 2013 12:02 pm

For those critics who find Willis overly detailed and arcane: you couldn't be more correct. Let's take a look at this 14-page classic:
http://people.csail.mit.edu/bkph/courses/papers/Exact_Conebeam/Radon_English_1917.pdf
This is Johann Radon's original paper on his "Transform". Clear, transparent, and the root mathematical basis of all CT work, MRI- or X-ray-based.
It should be obvious that Willis spends FAR too much time on WORDS, actually describing things, when he could be more direct with just the equations.
Max
PS: Am I supposed to put a \sarc tag in to indicate sarcasm is now off?
[Reply: Always use a /sarc tag when being sarcastic. Some folks take everything literally. — mod.]

3x2
November 22, 2013 12:15 pm

My final observation is an odd one. It concerns the curious fact that an ensemble (a fancy term for an average) of climate models generally performs better than any model selected at random. Here’s how I’m coming to understand it.
Suppose you have a bunch of young kids who can’t throw all that well. You paint a target on the side of a barn, and the kids start throwing mudballs at the target […].

Worse than that perhaps. It is like betting on every horse in a particular race. On paper it looks as though you have a ‘sure fire’ betting system because you always collect on the winner. Potential investors in your scheme only get wise once they figure out that, although they collect on every race, it is costing them, on average, a million each race for every 40,000 they collect.
Of course you could always refine your ‘sure fire’ scheme over time such that more money is placed on the current favourites in the race.
The horse racing analogy falls down with the IPCC simply because, if we were to treat the various AR’s (model elements) as a horse race, the IPCC swaps out horses mid race depending upon their performance. While, of course, claiming that it is still the same race. Were LT temps to fall over the coming decades then one could bet that AR9 will be bang on the money when it came to foreseeing that development (AR1-8 ‘model ranges’ conveniently forgotten).

November 22, 2013 3:46 pm

Willis, your observations about the average suggest to me the comparison between "accuracy" and "precision": in that regard, the models may be quite precise, but the accuracy sucks. If the modelers have "tuned" (i.e., targeted) the model on bad observational data, it may very well replicate that data set nicely, but if the observational data stink, well, so will the predictions of the model. Of course, if the models poorly replicate the annual, year-to-year changes, then that's a big problem in its own right.
Thanks for taking so much time to perform and present this analysis.

Scottish Sceptic
November 22, 2013 5:03 pm

At school they said we were to have a surprise fire bell at 2 pm one day the next week.
They couldn't have it on Friday, because, being the last day, it would not be a surprise.
So they couldn't have it on Thursday either, because with Friday ruled out, Thursday would be the last possible day, and so it would not be a surprise.
Likewise Wednesday, Tuesday and Monday.
So they couldn't have a surprise fire drill at all.
It's very similar with the climate models. They are set up in the belief that they model natural variability in the climate; in practice, the degrees of chaos are constrained so that they do not.

November 22, 2013 6:12 pm

Willis writes “My main conclusion is that at some point we need to get over the idea of climate model democracy, and start heaving overboard those models that are not lifelike, that don’t even vaguely resemble the observations.”
You can't post-hoc select based on the variable you're measuring, or you will most certainly select for models which produce hockey sticks. This is precisely the same (bad) reasoning as throwing out tree rings because they don't play ball. You're not allowed to do it.
Basically, if the model is believed to represent temperature, then it must stay.