A new paper is out in a special issue of Climate of the Past called “Progress in paleoclimate modelling”.
It is from the ‘hockey team’ and titled “Using palaeo-climate comparisons to constrain future projections in CMIP5”. I loved this tweet from the LDEO Tree Ring Lab:
Models agreeing with palaeoclimate give different future results than others
Gosh, who woulda thought that model output in tune with palaeoclimate data would be different than output from others? Last I heard, tree rings, corals, and cave limestone deposits aren’t climate forcings, so from my perspective, the claim is meaningless. And of course, the word “robust” is used in the abstract, which I think is Gavin’s favorite word. On the plus side, the paper is open access, so we can examine why they claim that.
See the paper and the abstract:
Abstract. We present a selection of methodologies for using the palaeo-climate model component of the Coupled Model Intercomparison Project (Phase 5) (CMIP5) to attempt to constrain future climate projections using the same models. The constraints arise from measures of skill in hindcasting palaeo-climate changes from the present over three periods: the Last Glacial Maximum (LGM) (21 000 yr before present, ka), the mid-Holocene (MH) (6 ka) and the Last Millennium (LM) (850–1850 CE). The skill measures may be used to validate robust patterns of climate change across scenarios or to distinguish between models that have differing outcomes in future scenarios. We find that the multi-model ensemble of palaeo-simulations is adequate for addressing at least some of these issues. For example, selected benchmarks for the LGM and MH are correlated to the rank of future projections of precipitation/temperature or sea ice extent to indicate that models that produce the best agreement with palaeo-climate information give demonstrably different future results than the rest of the models. We also explore cases where comparisons are strongly dependent on uncertain forcing time series or show important non-stationarity, making direct inferences for the future problematic. Overall, we demonstrate that there is a strong potential for the palaeo-climate simulations to help inform the future projections and urge all the modelling groups to complete this subset of the CMIP5 runs.
Schmidt, G. A., Annan, J. D., Bartlein, P. J., Cook, B. I., Guilyardi, E., Hargreaves, J. C., Harrison, S. P., Kageyama, M., LeGrande, A. N., Konecky, B., Lovejoy, S., Mann, M. E., Masson-Delmotte, V., Risi, C., Thompson, D., Timmermann, A., Tremblay, L.-B., and Yiou, P.: Using palaeo-climate comparisons to constrain future projections in CMIP5, Clim. Past, 10, 221-250, doi:10.5194/cp-10-221-2014, 2014.
Paper PDF: http://www.clim-past.net/10/221/2014/cp-10-221-2014.pdf
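For what it’s worth, the key claim in the abstract amounts to a rank-correlation exercise: score each model’s skill at hindcasting the palaeo periods, then check whether that ranking lines up with where the model sits in the spread of future projections. A rough sketch of that kind of test is below; this is not the paper’s code, and the model scores and projections are invented purely for illustration.

```python
# Rough sketch only: not the paper's code, all numbers invented for illustration.
# Idea: score each model's palaeo hindcast against a reconstruction, then ask
# whether that skill ranking correlates with the spread of future projections.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical scores for five models
lgm_skill      = np.array([0.62, 0.45, 0.71, 0.38, 0.55])  # e.g. pattern correlation vs an LGM reconstruction
future_warming = np.array([3.1, 4.2, 2.8, 4.6, 3.5])       # e.g. projected warming by 2100, deg C

rho, p = spearmanr(lgm_skill, future_warming)
print(f"Spearman rank correlation: {rho:.2f} (p = {p:.2f})")
# A strong rho would mean the palaeo benchmark "constrains" the future spread;
# rho near zero would mean hindcast skill tells you nothing about the projection.
```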
From what I can make of the paper, it seems they are providing a road map to help modelers tune their model output to things like Mann’s hockey stick palaeo reconstructions. They apparently still believe in those, even though the palaeo tree ring data showed a dive after 1960, which is why the whole “hide the decline” issue came about.
Not only was the deletion of post-1960 values not reported by IPCC, as Gavin Schmidt implies, it is not all that easy to notice that the Briffa reconstruction ends around 1960. As the figure is drawn, the 1960 endpoint of the Briffa reconstruction is located underneath other series; even an attentive reader could easily miss the fact that no values are shown after 1960. The decline is not “hidden in plain view”; it is “hidden”, plain and simple.
Figure 3. Blowup of IPCC Figure 2-21.
Source: How “The Trick” was pulled off
I’m sure Steve McIntyre will weigh in on this new paper soon as he is best qualified to sort out what is “robust” in team science and what isn’t.

Could it be that Michael Mann’s tree rings were also acting like rain gauges?
http://wattsupwiththat.com/2008/03/19/treemometers-or-rain-gauges/
http://dx.doi.org/10.1038/ngeo2053
http://wattsupwiththat.com/2009/09/28/a-look-at-treemometers-and-tree-ring-growth/
They keep tuning and constraining and keep getting it wrong. We will have to wait and see, but their track record on climate projections / scenarios is not good.
“The climate models are bust” more accurately describes the situation.
The concept is pretty simple.
There are paleoclimate runs for the models.
You see which models do best on these runs by comparing the models to paleo recons.
The features they look at are relatively large-scale features, like land-ocean contrast.
Then you select those models.
Pretty standard approach to model selection. It beats the current approach of model democracy.
We’ve done the same thing with models that have runs going back to 1750.
Lucia has done something similar with hindcast scoring.
The basic answer doesn’t say much that is different: models that perform well in paleo have sensitivity between 1.7 and 4.6, or something close to that.
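To make it concrete, here is a toy sketch of skill-based selection next to model democracy. The model names, scores and sensitivities are made up purely for illustration; they are not CMIP5 numbers.

```python
# Toy example of skill-based model selection vs "model democracy".
# All names and numbers are hypothetical, for illustration only.
import numpy as np

models        = ["ModelA", "ModelB", "ModelC", "ModelD", "ModelE"]
rmse_vs_recon = np.array([0.8, 1.6, 0.9, 2.4, 1.1])  # error against a palaeo reconstruction (lower = better)
sensitivity   = np.array([2.9, 4.1, 3.2, 4.6, 1.7])  # each model's climate sensitivity, deg C

# Model democracy: every model counts equally
print("All-model mean sensitivity:", sensitivity.mean())

# Skill-based selection: keep only the models that beat a skill threshold
keep = rmse_vs_recon < 1.2
print("Selected models:", [m for m, k in zip(models, keep) if k])
print("Selected-model mean sensitivity:", round(sensitivity[keep].mean(), 2))
```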
OK, Steve Mosher.
Name those models that have been rejected based on ridiculous results, results which have proved 95% dead wrong in only 10 years of real-world temperatures, just 5 years after the model results were released.
Your funding and professional reputation in the community rely on these 23 models. If only 2 are in error (but close!), missing by only 5% after 17 years, which are you going to reject funding for?
Which have been revised in the past 17 years so that even ONE is now predicting accurately?
Yup, models that agree with junk data when forced by junk data fall within a large range of junk values. Very impressive.
It does not matter.
I am quite confident that if one tried to make astrology “scientific” by setting up an Astrological Model Intercomparison Project and tuned the hindcasting on past constellations and synchronous historical events, then substituting the real history with a false one would not decrease the predictive skill of the models at all.
The team should hire CGI to develop the models based on their good experience with government contracts.
Steve,
a range of 1.7 – 4.6 seems pretty useless, actually.
No need for more government-funded studies – they have achieved consensus! It’s global warming all the way down. We need to warm up now – cue the CO2 guys, pump more oil.
The ‘climate science’ troughs have allowed an explosion of models. Most/all have little resemblance to reality, as you simply: i) cannot model a chaotic system, and ii) cannot expect realistic results when the modeller is required to always produce a prediction of Thermageddon to acquire future additional funding.
Instead of modelling, it would be nice if climate scientists spent some time de-homogenising the pre-satellite temperature data in order to return it to the original actual figures. I for one would really like to know how much the world has warmed up over the past 150 years; 0.7 degrees C is currently my absolute maximum figure.
But give GISS a few more years and it will be >1.5 degrees C.
Now that gives me an idea. Why not do the same for the observed divergence of global surface temps vs. IPCC projections / scenarios? Let’s ONLY use those that came closest to observations. Is this a good idea? Can we use these for future projections / scenarios?
My advance apologies, as I am not a model or modeler, though many honeys have told me in the past that I coulda and shoulda been.
I can’t wait for Steve McIntyre to have a “Captain Cook” at this latest HS team effort.
I love models, they are so beautiful and nubile. As for climate models they are ugly, haggard, and full of garbage.
OK enough of the poking fun, though I can easily continue. I will let the rest of you chaps bang away at this hopeless case of climate modelling – past, present, future and back to the future.
I’m curious as to how the (very few) models that predicted the current temperature measurements differ from the majority that are running too hot.
Has anyone completed Mann’s original proxy set to show what ‘the hockey stick’ would look like today without the decline being hidden? Does anybody have a link? (Yes, I’m aware of Steve McIntyre’s yeoman work, I’m concerned with the issue of the data set being up (as close as possible) to today’s date, rather than the issues of principal component centring, CRU’s failure to include the full data set, or splicing thermometer records to the set.)
On a separate issue, comments in the believer sites are full of animosity, antagonism, and small-spiritedness. (Sometimes in the header articles too.) This makes them painful to read, and deters the casual reader from revisiting them. Let’s not do the same to Anthony’s site.
Keep in mind, if we remain civil, we will take and keep the moral high ground. This will impress the uncommitted, and really annoy the climate faithful.
Long past time to defund GISS.
But…
Where is Briffa?
Cynically, I can only paraphrase Willis: “The quality of the work is inversely proportional to the number of authors”.
I shall relax and donate to Climate Audit, secure in the knowledge there are gems of insanity buried throughout this government quality paper.
If and when he is ready, I am sure Steve will gently correct their errors.
Sorry I couldn’t resist. I promise this is the last unless…..
Wow, MEM is happy to tag along in 10th place or so in the “discipline” he essentially owned. They must need the expertise in culling out paleos that don’t give the desired result. So let me get this straight. They are using models of climate that fit the LGM, H-optimum, and the 850-1850 period (they don’t dare call it the medieval warm period and little ice age), a period of no CO2 forcing, to constrain models for prediction of the future. If Mann has put his name on this paper and it reinstates the MWP and the LIA that he wiped out, how does this affect his court case?
Peter Miller says:
February 12, 2014 at 3:36 pm
…………………….
I for one would really like to know how much the world has warmed up over the past 150 years, 0.7 degrees C is currently my absolute maximum figure.
—————————————————————————————-
Well, according to the CET temps it’s 0.9 deg C over about 350 years! A linear trend since the LIA of 0.25 deg C per century.
Mosher says:
“The concept is pretty simple.
There are paleoclimate runs for the models.
You see which models do best on these runs by comparing the models to paleo recons.
The features they look at are relatively large-scale features, like land-ocean contrast.
Then you select those models.”
Steve, thanks for the explanation. I tried reading the paper but couldn’t make heads or tails of it. Let me ask you a personal question: Do you have faith in the models selected this way? Or to put the question another way: Would you invest your money using a stock market forecasting model selected this way?
The problem is not the models, but the modellers themselves. They understand computers, but they have no understanding of climatology.
Mosher writes: “The features they look at are relatively large-scale features, like land-ocean contrast. Then you select those models. Pretty standard approach to model selection. It beats the current approach of model democracy.”
On the face of it, and to someone who doesn’t actually understand the issues, that seems reasonable, but it’s actually rubbish.
Firstly, there are no models that are capable of doing 10K-year hindcasts using actual physics. At best they use coarse approximations in parameterisation. Hell, even the relatively short 100-year forecasts use parameterisations, because we don’t have the physics (i.e. clouds) and they can’t do the physics (i.e. solutions to N-S).
So they’re a fit. And even more so with 10K-year runs.
If they’re a fit, and you select them based on the result you’re measuring (i.e. temperature hockey sticks), then you’re doing no better than choosing tree rings based on the result you’re measuring (i.e. temperature hockey sticks).
Others (mainly Steve McIntyre but also Lucia) have shown the folly of that approach.
“The basic answer doesn’t say much that is different; models that perform well in paleo have sensitivity between 1.7 and 4.6 or something close to that.”
Well, that really narrows it down.
/sarc
I’m interested in what data they are trying to fit. Are they trying to fit “hockey stick data” or more Lamb-like data, you know… data as it was pre-1991? If it is “hockey stick data”, or data that doesn’t show significant up AND down cycles, it’s totally just another scam-promoting BS exercise.
Gee whiz!!!!! The list of authors these days is getting longer than the article they authored! What the heck! Is the night watchman on that list too? The babysitter? Drinking buddy?