New hockey team paper: models tuned to tree rings and other palaeo-climate reconstructions

A new paper is out in a special issue of Climate of the Past called Progress in paleoclimate modelling

It is from the ‘hockey team’ and titled “Using palaeo-climate comparisons to constrain future projections in CMIP5”. I loved this tweet from the LDEO Tree Ring Lab:

Models agreeing with palaeoclimate give different future results than others

Gosh, who woulda thought that model output in tune with palaeoclimate data would be different than output from others? Last I heard, tree rings, corals, and cave limestone deposits aren’t climate forcings, so from my perspective, the claim is meaningless. And of course, the word “robust” is used in the abstract, which I think is Gavin’s favorite word. On the plus side, the paper is open access, so we can examine why they claim that.

See the paper and the abstract

Schmidt, G. A., Annan, J. D., Bartlein, P. J., Cook, B. I., Guilyardi, E., Hargreaves, J. C., Harrison, S. P., Kageyama, M., LeGrande, A. N., Konecky, B., Lovejoy, S., Mann, M. E., Masson-Delmotte, V., Risi, C., Thompson, D., Timmermann, A., Tremblay, L.-B., and Yiou, P.: Using palaeo-climate comparisons to constrain future projections in CMIP5, Clim. Past, 10, 221-250, doi:10.5194/cp-10-221-2014, 2014.

Abstract. We present a selection of methodologies for using the palaeo-climate model component of the Coupled Model Intercomparison Project (Phase 5) (CMIP5) to attempt to constrain future climate projections using the same models. The constraints arise from measures of skill in hindcasting palaeo-climate changes from the present over three periods: the Last Glacial Maximum (LGM) (21 000 yr before present, ka), the mid-Holocene (MH) (6 ka) and the Last Millennium (LM) (850–1850 CE). The skill measures may be used to validate robust patterns of climate change across scenarios or to distinguish between models that have differing outcomes in future scenarios. We find that the multi-model ensemble of palaeo-simulations is adequate for addressing at least some of these issues. For example, selected benchmarks for the LGM and MH are correlated to the rank of future projections of precipitation/temperature or sea ice extent to indicate that models that produce the best agreement with palaeo-climate information give demonstrably different future results than the rest of the models. We also explore cases where comparisons are strongly dependent on uncertain forcing time series or show important non-stationarity, making direct inferences for the future problematic. Overall, we demonstrate that there is a strong potential for the palaeo-climate simulations to help inform the future projections and urge all the modelling groups to complete this subset of the CMIP5 runs.


Paper PDF: http://www.clim-past.net/10/221/2014/cp-10-221-2014.pdf

From what I can make of the paper, they seem to be providing a road map to help modelers tune their output to things like Mann’s hockey stick palaeo reconstructions, which they apparently still believe in, even though the palaeo tree ring data showed a dive after 1960, which is why the whole “hide the decline” issue came about.

Not only was the deletion of post-1960 values not reported by IPCC, as Gavin Schmidt implies, it is not all that easy to notice that the Briffa reconstruction ends around 1960. As the figure is drawn, the 1960 endpoint of the Briffa reconstruction is located underneath other series; even an attentive reader easily missed the fact that no values are shown after 1960. The decline is not “hidden in plain view”; it is “hidden” plain and simple.

Figure 3. Blowup of IPCC Figure 2-21.

Source: How “The Trick” was pulled off

I’m sure Steve McIntyre will weigh in on this new paper soon as he is best qualified to sort out what is “robust” in team science and what isn’t.

Jimbo
February 12, 2014 2:39 pm

Could it be that Michael Mann’s rings were also acting like rain gauges?
http://wattsupwiththat.com/2008/03/19/treemometers-or-rain-gauges/
http://dx.doi.org/10.1038/ngeo2053
http://wattsupwiththat.com/2009/09/28/a-look-at-treemometers-and-tree-ring-growth/
They keep tuning and constraining and keep getting it wrong. We will have to wait and see but their track record on climate projections / scenarios is not good.

Editor
February 12, 2014 2:46 pm

“The climate models are bust” more accurately describes the situation.

February 12, 2014 2:59 pm

The concept is pretty simple.
There are paleo climate runs for models.
You see which models do best on these runs by comparing the models to paleo recons.
The features they look at are relatively large scale features.. Like land ocean contrast.
Then you select those models.
pretty standard approach to model selection. It beats the current approach of model democracy.
We’ve done the same thing with models that have runs going back to 1750..
Lucia has done something similar with hindcast scoring.
The basic answer doesn’t say much that is different: models that perform well in paleo
have sensitivity between 1.7 and 4.6, or something close to that.
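The selection scheme described above (score each model's paleo hindcast against a reconstruction, then keep the best performers) can be sketched in a few lines. Everything here is made up for illustration — the model names, the five-point "reconstruction", and the scores are hypothetical, not taken from the paper:

```python
# Sketch of skill-based model selection: score each model's paleo
# hindcast against a reconstruction, keep the best-scoring subset.
# All names and numbers are hypothetical, for illustration only.

def rmse(a, b):
    """Root-mean-square error between two equal-length series."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# Hypothetical paleo reconstruction (e.g. LGM cooling over 5 regions)
recon = [-4.0, -6.5, -2.0, -8.0, -3.5]

# Hypothetical hindcasts from three models over the same regions
hindcasts = {
    "model_A": [-3.8, -6.0, -2.5, -7.5, -3.0],
    "model_B": [-1.0, -2.0, -0.5, -3.0, -1.0],
    "model_C": [-4.2, -7.0, -1.8, -8.4, -3.9],
}

# Rank models by hindcast skill (lower RMSE = better) and select
ranked = sorted(hindcasts, key=lambda m: rmse(hindcasts[m], recon))
selected = ranked[:2]  # keep the best-scoring subset
print(selected)
```

With these toy numbers, model_B's weak cooling scores worst and is dropped; the "selected" subset would then be the only models whose future projections are examined. This is the "model selection instead of model democracy" idea in miniature.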

RACookPE1978
Editor
February 12, 2014 3:08 pm

OK, Steve Mosher.
Name those models that have been rejected based on ridiculous results, results which have proved 95% dead wrong in only 10 years of real-world temperatures, just 5 years after the model results were released.
Your funding and professional reputation in the community rely on these 23 models. If only 2 are in error (but close!), missing by only 5% after 17 years, which are you going to reject funding for?
Which have been revised in the past 17 years so even ONE is now predicting accurately?

timetochooseagain
February 12, 2014 3:11 pm

Yup models that agree with junk data when forced by junk data fall within a large range of junk values. Very impressive.

Berényi Péter
February 12, 2014 3:17 pm

From what I can make of the paper, it seems they are providing a road map to help modelers tune their model output to things like Mann’s hockey stick palaeo reconstructions, which they apparently still believe in

It does not matter.
I am quite confident if one tried to make astrology “scientific” by setting up an Astrological Model Intercomparison Project, then tuned hindcasting on past constellations and synchronous historical events, substituting the real history with a false one would not decrease predictive skill of models at all.

Catcracking
February 12, 2014 3:18 pm

The team should hire CGI to develop the models based on their good experience with government contracts.

hunter
February 12, 2014 3:25 pm

Steve,
a range of 1.7 – 4.6 seems pretty useless, actually.

albertalad
February 12, 2014 3:32 pm

No need for more government-funded studies – they have achieved consensus! It’s global warming all the way down. We need to warm up now – cue the CO2 guys, pump more oil.

Peter Miller
February 12, 2014 3:36 pm

The ‘climate science’ troughs have allowed an explosion of models. Most/all bear little resemblance to reality because you simply cannot: i) model a chaotic system, and ii) expect realistic results when the modeller is required always to produce a prediction of Thermageddon to secure future funding.
Instead of modelling, it would be nice if climate scientists spent some time de-homogenising the pre-satellite temperature data in order to return it to the original, actual figures. I for one would really like to know how much the world has warmed over the past 150 years; 0.7 degrees C is currently my absolute maximum figure.
But give GISS a few more years and it will be >1.5 degrees C.

Jimbo
February 12, 2014 3:38 pm

Steven Mosher says:
February 12, 2014 at 2:59 pm
The concept is pretty simple.
There are paleo climate runs for models.
You see which models do best on these runs by comparing the models to paleo recons….

Now that gives me an idea. Why not do the same for the observed divergence of global surface temps vs. IPCC projections / scenarios? Let’s ONLY use those that came closest to observations. Is this a good idea? Can we use these for future projections / scenarios?
My advanced apologies as I am not a model or modeler though many honeys have told me in the past that I coulda and shoulda been.

Neville.
February 12, 2014 3:41 pm

I can’t wait for Steve McIntyre to have a “Captain Cook” at this latest HS team effort.

Jimbo
February 12, 2014 3:46 pm

I love models, they are so beautiful and nubile. As for climate models they are ugly, haggard, and full of garbage.

The key role of heavy precipitation events in climate model disagreements of future annual precipitation changes in California
Between these conflicting tendencies, 12 projections show drier annual conditions by the 2060s and 13 show wetter. These results are obtained from sixteen global general circulation models downscaled with different combinations of dynamical methods……
http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00766.1

OK enough of the poking fun, though I can easily continue. I will let the rest of you chaps bang away at this hopeless case of climate modelling – past, present, future and back to the future.

Leo Morgan
February 12, 2014 3:46 pm

I’m curious as to how the (very few) models that predicted the current temperature measurements differ from the majority that are running too hot.
Has anyone completed Mann’s original proxy set to show what ‘the hockey stick’ would look like today without the decline being hidden? Does anybody have a link? (Yes, I’m aware of Steve McIntyre’s yeoman work, I’m concerned with the issue of the data set being up (as close as possible) to today’s date, rather than the issues of principal component centring, CRU’s failure to include the full data set, or splicing thermometer records to the set.)
On a separate issue, comments on the believer sites are full of animosity, antagonism, and small-spiritedness. (Sometimes in the header articles too.) This makes them painful to read and deters the casual reader from revisiting them. Let’s not do the same to Anthony’s site.
Keep in mind, if we remain civil, we will take and keep the moral high ground. This will impress the uncommitted, and really annoy the climate faithful.

February 12, 2014 3:55 pm

Long past time to defund GISS.

February 12, 2014 3:56 pm

But…
Where is Briffa?
Cynically I can only paraphrase Willis: “The quality of the work is inversely proportional to the number of authors”.
I shall relax and donate to Climate Audit, secure in the knowledge there are gems of insanity buried throughout this government quality paper.
If and when he is ready, I am sure Steve will gently correct their errors.

Jimbo
February 12, 2014 4:04 pm

Sorry I couldn’t resist. I promise this is the last unless…..

Abstract – 3 June 2013
Historical Antarctic mean sea ice area, sea ice trends, and winds in CMIP5 simulations
“…most climate models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) archive simulate a decrease in Antarctic sea ice area over the recent past,…”
doi:10.1002/jgrd.50443

Gary Pearse
February 12, 2014 4:51 pm

Wow, MEM is happy to tag along in 10th place or so in the “discipline” he essentially owned. They must need the expertise in culling out paleos that don’t give the desired result. So let me get this straight: they are using models of climate that fit the LGM, H-optimum, and the 850-1850 period (they don’t dare call it the medieval warm period and little ice age), a period of no CO2 forcing, to constrain models for prediction of the future. If Mann has put his name on this paper and it reinstates the MWP and the LIA that he wiped out, how does this affect his court case?

FrankK
February 12, 2014 4:56 pm

Peter Miller says:
February 12, 2014 at 3:36 pm
…………………….
I for one would really like to know how much the world has warmed up over the past 150 years, 0.7 degrees C is currently my absolute maximum figure.
—————————————————————————————-
Well, according to the CET temps it’s 0.9 deg C over about 350 years! A linear trend since the LIA of 0.25 deg C per century.

Louis Hooffstetter
February 12, 2014 5:26 pm

Mosher says:
“The concept is pretty simple.
There are paleo climate runs for models.
You see which models do best on these runs by comparing the models to paleo recons.
The features they look at are relatively large scale features.. Like land ocean contrast.
Then you select those models.”
Steve, thanks for the explanation. I tried reading the paper but couldn’t make heads or tails of it. Let me ask you a personal question: Do you have faith in the models selected this way? Or to put the question another way: Would you invest your money using a stock market forecasting model selected this way?

David Ball
February 12, 2014 5:29 pm

The problem is not the models, but the modellers themselves. They understand computers, but they have no understanding of climatology.

February 12, 2014 6:27 pm

Mosher writes “The features they look at are relatively large scale features.. Like land ocean contrast. Then you select those models. pretty standard approach to model selection. It beats the current approach of model democracy.”
On the face of it, and to someone who doesn’t actually understand the issues, that seems reasonable, but it’s actually rubbish.
Firstly, there are no models capable of doing 10k-year hindcasts using actual physics. At best they use coarse approximations in parameterisation. Hell, even the relatively short 100-year forecasts use parameterisations, because we don’t have the physics (i.e. clouds) and they can’t do the physics (i.e. solutions to Navier–Stokes).
So they’re a fit. And even more so with 10k-year runs.
If they’re a fit, and you select them based on the result you’re measuring (i.e. temperature hockey sticks), then you’re doing no better than choosing tree rings based on the result you’re measuring.
Others (mainly Steve McIntyre but also Lucia) have shown the folly of that approach.

Billy Liar
February 12, 2014 6:28 pm

The basic answer doesn’t say much that is different; models that perform well in paleo have sensitivity between 1.7 and 4.6 or something close to that.
Well, that really narrows it down.
/sarc

Alcheson
February 12, 2014 7:42 pm

I’m interested in what data they are trying to fit. Are they trying to fit “hockey stick data” or more Lamb-like data, you know, data as it was pre-1991? If it is “hockey stick data”, or data that doesn’t show significant up AND down cycles, it’s just another scam-promoting BS exercise.

Pamela Gray
February 12, 2014 8:50 pm

Gee whiz!!!!! The list of authors these days is getting longer than the article they authored! What the heck! Is the night watchman on that list too? The babysitter? Drinking buddy?

John F. Hultquist
February 12, 2014 9:27 pm

The only thing I am impressed with regarding tree rings and paleo temperature reconstructions (climate is something else, perhaps better investigated via pollen) is the effort, resources, and money involved. All could be used elsewhere with the potential for useful results.
Regarding tree rings, a quote from a famous American comes to mind: “What difference, at this point, does it make?”

TomRude
February 12, 2014 11:08 pm

A quick perusal suggests they are trying to constrain climate sensitivity to CO2 to high values by finding models that fit their ad hoc proxies, rather than using direct measurements that are not really helping the Cause. If a model with high climate sensitivity describes fairly well a paleoclimatic event apprehended only through their interpretation of specific proxies, then all is well… Notice how the fishing net is cast wide enough that virtually any event will show up as desired, hence confirming their claim and, by default, CAGW.

GregK
February 12, 2014 11:25 pm

Weather forecasting has its problems..
http://en.wikipedia.org/wiki/Butterfly_effect
Climate even more so, though there are those who believe that it’s not chaotic responses that are the difficulty but inadequacies in the models……….. who would have thought..?

negrum
February 12, 2014 11:36 pm

Does a coin flip give better predictive/projective results than the models?

Stephen Richards
February 13, 2014 1:10 am

timetochooseagain says:
February 12, 2014 at 3:11 pm
Yup models that agree with junk data when forced by junk data fall within a large range of junk values. Very impressive.
Exactly, but Mosher thinks that’s exceptional science.

Bill Illis
February 13, 2014 3:47 am

They are going to use four different forcings to model the paleoclimate.
GHGs, volcanoes, Albedo, and, changing solar insolation due to the Milankovitch cycles.
There are various volcanoes over the period, and the models always overstate the impact but it is what it is. Changing solar insolation from Milankovitch is actually a very small impact by itself.
The two big ones are GHGs and Albedo.
I’m pretty sure they just “force” the albedo impact numbers so that they get 3.0C per doubling out of the GHG “forcing”. What was the Earth’s albedo during the last glacial maximum? They never say in these papers. It was certainly much higher than they have built in.
“Forcing” is just another word for “fudging”, is it not?

John
February 13, 2014 4:10 am

Sounds like a snipe hunt, where they’re looking for something to substantiate their claims that doesn’t exist.

ferdberple
February 13, 2014 6:49 am

The notion that you can calibrate tree rings as though they are thermometers is statistically invalid. It is called “selection on the dependent variable” and it is forbidden in statistics as it leads to false results. This is well known in other fields, but apparently the ‘hockey team’ skipped their statistics 101 classes.
http://www.nyu.edu/classes/nbeck/q2/geddes.pdf
“Most graduate students learn in the statistics courses forced upon them that selection on the dependent variable is forbidden, but few remember why, or what the implications of violating this taboo are for their own work.”

ferdberple
February 13, 2014 6:55 am

The problem with only selecting those few trees that correlate with temperature, while ignoring the large number of trees that do not correlate, is that the trees that do not correlate are telling you something. They are telling you that the few trees that do correlate are doing so by accident.
However, rather than listen, climate science assumes it knows better than the trees. So what you find is that, outside the calibration zone (the blade), the few trees that were selected do not correlate; they show random behavior, which when averaged gives a flat line (the shaft).
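The screening effect ferdberple describes is easy to simulate. The sketch below generates "proxies" that are pure random walks with no temperature signal at all, keeps only those that happen to correlate with a rising "instrumental" series over a recent window, and averages the survivors. All counts, lengths, and thresholds are arbitrary choices for illustration:

```python
# Screening-fallacy sketch: random-walk "proxies" selected by
# correlation with a recent warming trend produce, on average, a
# rise inside the screening window (the "blade") and near-zero
# behavior before it (the "shaft"), despite carrying no signal.
import random

random.seed(0)
N_PROXIES, LENGTH, CAL = 1000, 200, 30   # CAL = screening window
target = [i * 0.05 for i in range(CAL)]  # rising "instrumental" series

def corr(a, b):
    """Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def random_walk(n):
    """A driftless unit-step random walk of length n."""
    out, x = [], 0.0
    for _ in range(n):
        x += random.gauss(0, 1)
        out.append(x)
    return out

proxies = [random_walk(LENGTH) for _ in range(N_PROXIES)]
# Screen: keep proxies whose last CAL steps correlate with the target
kept = [p for p in proxies if corr(p[-CAL:], target) > 0.7]

# Average the survivors; compare rise inside vs outside the window
avg = [sum(p[i] for p in kept) / len(kept) for i in range(LENGTH)]
blade = avg[-1] - avg[-CAL]                          # rise in window
shaft = (avg[-CAL] - avg[0]) / (LENGTH - CAL) * CAL  # pre-window rate,
                                                     # same span
print(len(kept), round(blade, 2), round(shaft, 2))
```

A substantial fraction of the random walks passes the screen, the averaged "reconstruction" rises sharply inside the screening window, and the pre-window average is comparatively flat: blade and shaft from nothing but noise plus selection.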

ferdberple
February 13, 2014 7:03 am

Here is another reference, showing how the shaft of the hockey stick was created. The money quote is “tends to reduce the slope estimate produced by regression analysis”. By reducing the slope you get the shaft from the proxy data, while the thermometer data used for calibration (selection) produces the blade.
The Problem of Selection Bias:
Selection bias is commonly understood as occurring when the nonrandom selection of cases result in inferences, based on the resulting sample, that are not statistically representative of the population. The focus of the present discussion is on selection bias deriving from deliberate selection by the investigator. A common problem arising from such selection is that it may over-represent cases at one or the other end of the distribution on a key variable.
The statistical insight crucial to understanding the consequences of such selection is the observation that selecting cases – so as to constrain variation toward high or low values of the dependent variable – tends to reduce the slope estimate produced by regression analysis. This is the basis for warnings about the hazards of “selecting on the dependent variable”. This expression refers not only to the deliberate selection of cases according to their scores on this variable, but to any mode of selection correlated with the dependent variable (i.e., tending to select cases that have higher, or lower, values on that variable) once the effect of the explanatory variable is removed. If such a correlation exists, causal inference will tend to be biased (Collier, 1995, 461).
http://poli.haifa.ac.il/~levi/pitfalls.html
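The slope attenuation the quote describes can be demonstrated with synthetic data. In this sketch the true relation y = 2x, the noise level, and the cutoff are all arbitrary illustrative choices:

```python
# Selection on the dependent variable: fitting only cases constrained
# toward low values of y attenuates the estimated regression slope.
import random

random.seed(1)

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# True relation: y = 2x + noise
xs = [random.uniform(0, 10) for _ in range(5000)]
ys = [2 * x + random.gauss(0, 3) for x in xs]

full_slope = ols_slope(xs, ys)

# Select on the dependent variable: keep only cases with y below 10
sel = [(x, y) for x, y in zip(xs, ys) if y < 10]
sel_slope = ols_slope([x for x, _ in sel], [y for _, y in sel])

print(round(full_slope, 2), round(sel_slope, 2))
```

The full-sample fit recovers a slope near the true value of 2, while the selected subsample yields a visibly smaller slope: the "reduced slope estimate" the passage warns about, from nothing but the selection rule.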

Jeff
February 13, 2014 7:32 am

“And of course, the word “robust” is used in the abstract, which I think is Gavin’s favorite word.”
Maybe he has a new meaning for the word, as in an automaton, model, or RO-BOT gone wrong,
i.e. RO BUST…..
Robust(a) doesn’t even make good coffee anymore….
Way back when I was a kid, we’d get whacked with a hockey stick for spewing such nonsense….
and it wasn’t made out of cherry(picked) wood….

catweazle666
February 13, 2014 7:55 am

Still faffing about pretending that it is possible to model non-linear, feedback-driven chaotic systems (where we don’t even know the sign of some of the relatively small number of feedbacks we DO know) on Xbox-grade computer models?
Why does anyone with the remotest grasp of the science and mathematics involved still take these clowns seriously?

catweazle666
February 13, 2014 8:04 am

Following on from my last comment, we actually DO know that the most important feedback that is absolutely necessary for AGW and high climate sensitivity (by high I mean unity and above) – water vapour – isn’t happening, because there is no increase in atmospheric water vapour that correlates even remotely with the increase in CO2; vide Vonder Haar, Solomon and Humlum.
In fact, according to Solomon, the opposite has occurred: in the decade post-2000, water vapour declined by ~10%.

February 13, 2014 3:16 pm

I’ve never understood the “hide the decline” hoopla. Why would anyone want to use proxies to measure 20th century temperatures when we have thermometers for that? Why use indirect measurement when you have direct measurement?

February 13, 2014 4:06 pm

There are lots of good looking models out there, after enough make-up is applied. I don’t fantasize about what it would be like to be married to any of them. Or base my future on such fantasies. I prefer to be bound to the reality of the woman who agreed to put up with me for life.
Such is what CAGW climate model fantasies have to offer.

February 13, 2014 4:17 pm

claimsguy says:
February 13, 2014 at 3:16 pm
I’ve never understood the “hide the decline” hoopla. Why would anyone want to use proxies to measure 20th century temperatures when we have thermometers for that? Why use indirect measurement when you have direct measurement?

===================================================================
The proxies claimed to be “robust” in showing past temperatures should also have been “robust” against actual measured temperatures. They weren’t. They took a dive. To shore up the reliability of the proxies (no MWP or LIA in the Mannian proxies), they chose to “hide the decline”. If I’m not mistaken, this went on back when “The Hockey Stick” was still in vogue with the IPCC.
(I’m sure someone else can state that better than I did.)

David Ball
February 13, 2014 7:35 pm

claimsguy says:
February 13, 2014 at 3:16 pm
Very good question. Keep going.

Paul Blase
February 14, 2014 2:07 pm

claimsguy says:
“Why use indirect measurement when you have direct measurement?”
At least ostensibly in order to calibrate the proxies.

nick
February 14, 2014 3:28 pm

Stop me if you’ve heard it before. It looks like the CHIMP5 have taken over the Lab?