Climate scientists can restart the climate change debate & win: test the models!

By Larry Kummer, from the Fabius Maximus website

Summary: Public policy about climate change has become politicized and gridlocked after 26 years of large-scale advocacy. We cannot even prepare for a repeat of past extreme weather. We can whine and bicker about who to blame. Or we can find ways to restart the debate. Here is the next in a series about the latter path, for anyone interested in walking it. Climate scientists can take an easy and potentially powerful step to build public confidence: re-run the climate models from the first three IPCC reports with actual data (from their future) and see how well they predicted global temperatures.

“Trust can trump Uncertainty.”

— Presentation by Leonard A. Smith (Prof of Statistics, LSE), 6 February 2014.

The most important graph from the IPCC’s AR5


Figure 1.4 from p. 131 of AR5: the observed global surface temperature anomaly (relative to 1961–1990, in °C) compared with the range of projections from the previous IPCC assessments.

Why the most important graph doesn’t convince the public

Last week I posted What climate scientists did wrong and why the massive climate change campaign has failed. After 26 years, one of the largest and longest campaigns to influence public policy has failed to gain the support of Americans, with climate change ranking near the bottom of people’s concerns. That post described the obvious reason: climate scientists failed to meet the public’s expectations for the behavior of scientists warning about a global threat (i.e., a basic public relations mistake).

Let’s discuss what scientists can do to restart the debate. Let’s start with the big step: show that climate models have successfully predicted future global temperatures with reasonable accuracy.

This spaghetti graph — probably the most-cited data from the IPCC’s reports — illustrates one reason for the lack of sufficient public support in America. It shows the forecasts of models run for previous IPCC reports vs. actual subsequent temperatures, with the forecasts run under various emissions scenarios and their baselines updated. First, Edward Tufte (author of The Visual Display of Quantitative Information) probably would laugh at it: too much packed into one graph, the equivalent of a PowerPoint slide with 15 bullet points.

But there’s a more important weakness. We want to know how well the models work. That is, how well would each model have forecast temperatures if run with the correct scenario (i.e., actual future emissions, since the goal here is to predict temperatures, not emissions)?
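To make the question concrete, here is a minimal sketch (in Python) of the kind of scoring meant: compare a model’s projected temperature series against observations for the same years. Every number below is an invented stand-in, not actual IPCC model output or observational data.

```python
# Minimal sketch: score a projection against observations with simple
# error metrics. All numbers are hypothetical stand-ins.
import numpy as np

years = np.arange(1990, 2016)

# Invented series: projected anomalies (deg C vs. a 1961-1990 baseline)
# from a model run with the actual emissions scenario, and the observed
# anomalies for the same years.
projected = 0.25 + 0.025 * (years - 1990)   # stand-in: 0.25 C/decade trend
observed = 0.20 + 0.016 * (years - 1990)    # stand-in: 0.16 C/decade trend

rmse = np.sqrt(np.mean((projected - observed) ** 2))
proj_trend = np.polyfit(years, projected, 1)[0] * 10   # C per decade
obs_trend = np.polyfit(years, observed, 1)[0] * 10

print(f"RMSE: {rmse:.3f} C")
print(f"Trend: projected {proj_trend:.2f} vs observed {obs_trend:.2f} C/decade")
```

Real comparisons would add uncertainty ranges and multiple observational datasets, but the core test is this simple: one projected series, one observed series, and an agreed-upon error metric.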

The big step: prove climate models have made successful predictions

“A genuine expert can always foretell a thing that is 500 years away easier than he can a thing that’s only 500 seconds off.”

— From Mark Twain’s A Connecticut Yankee in King Arthur’s Court.

A massive body of research describes how to validate climate models (see below), most of it stating that validation must use “hindcasts” (predicting the past), because we do not know the temperatures of future decades. Few sensible people trust hindcasts, given their ability to be (even inadvertently) tuned to match the past (that’s why scientists use double-blind testing for drugs where possible).

But now we know the future — the future of models run in past IPCC reports — and can test their predictive ability.
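A toy illustration of why hindcast skill can mislead: the sketch below tunes a deliberately over-flexible statistical model to a synthetic “past” (1900–1989), then scores it on the held-out “future” (1990–2015). The in-sample fit looks excellent; the out-of-sample errors blow up. All series are synthetic, and nothing here represents a real climate model; it only shows the generic gap between fitting the past and predicting the future.

```python
# Sketch: a model tuned to fit the past can score well in-sample and
# badly on data from its "future". Synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2016)
x = (years - 1950) / 50.0          # centered/scaled years for a stable fit

# Synthetic "temperature record": gentle trend + cycle + noise.
temps = (0.005 * (years - 1900)
         + 0.1 * np.sin((years - 1900) / 8.0)
         + rng.normal(0.0, 0.08, years.size))

past = years < 1990                # period available for tuning (hindcast)
future = ~past                     # held-out "future" (1990-2015)

# Over-tuned model: a high-degree polynomial fit to the past only.
coeffs = np.polyfit(x[past], temps[past], deg=12)
fitted = np.polyval(coeffs, x)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print(f"hindcast RMSE (1900-1989): {rmse(fitted[past], temps[past]):.3f} C")
print(f"forecast RMSE (1990-2015): {rmse(fitted[future], temps[future]):.3f} C")
```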

Karl Popper believed that predictions were the gold standard for testing scientific theories. The public also believes this. Countless films and TV shows focus on the moment in which scientists test their theory to see if the result matches their prediction. Climate scientists can run such tests today for global surface temperatures. This could be evidence on a scale greater than anything else they’ve done.

Testing the climate models used by the IPCC

“Probably [scientists’] most deeply held values concern predictions: they should be accurate; quantitative predictions are preferable to qualitative ones; whatever the margin of permissible error, it should be consistently satisfied in a given field; and so on.”

— Thomas Kuhn in The Structure of Scientific Revolutions (1962).

The IPCC’s scientists run projections. AR5 describes these as “the simulated response of the climate system to a scenario of future emission or concentration of greenhouse gases and aerosols … distinguished from climate predictions by their dependence on the emission/concentration/radiative forcing scenario used…”. The models don’t predict CO2 emissions, which are an input to the models.
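To illustrate the distinction, here is a toy zero-dimensional energy-balance model: the forcing scenario is an input, and the temperature path is the output. The forcing series and parameter values are invented, and real coupled climate models are vastly more complex; the point is only that evaluating a projection requires feeding the model the scenario that actually occurred.

```python
# Toy illustration: a projection depends on its input scenario.
# Parameter values and forcing series are invented for this sketch.
import numpy as np

def project(forcing, sensitivity=0.8, heat_capacity=8.0, dt=1.0):
    """Toy energy balance: C * dT/dt = F - T / sensitivity.

    forcing      : radiative forcing per year (W/m^2), the scenario input
    sensitivity  : equilibrium response in K per W/m^2 (invented value)
    heat_capacity: effective heat capacity, W yr m^-2 K^-1 (invented value)
    """
    temps, t = [], 0.0
    for f in forcing:
        t += dt * (f - t / sensitivity) / heat_capacity
        temps.append(t)
    return np.array(temps)

years = np.arange(1990, 2016)
# Two invented scenarios: the forcing a 1990 report might have assumed
# vs. what "actually" happened. Same model, different inputs.
assumed = np.linspace(1.5, 3.5, years.size)
actual = np.linspace(1.5, 2.8, years.size)

print(f"2015 anomaly under assumed scenario: {project(assumed)[-1]:.2f} K")
print(f"2015 anomaly under actual scenario:  {project(actual)[-1]:.2f} K")
```

Run with the wrong scenario, even a perfect model looks wrong; that is why the test proposed below uses actual emissions as inputs.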

So they should run the models as they were when originally run for the IPCC’s First Assessment Report (FAR, 1990), Second (SAR, 1995), and Third (TAR, 2001). Run them using actual emissions as inputs, with no changes to the algorithms, baselines, etc. How accurately will the models’ output match actual global average surface temperatures?

Of course, the results would not be a simple pass/fail. Such a test would provide the basis for more sophisticated tests. Judith Curry (Prof Atmospheric Science, GA Inst Tech) explains here:

Comparing the model temperature anomalies with observed temperature anomalies, particularly over relatively short periods, is complicated by the acknowledgement that climate models do not simulate the timing of ENSO and other modes of natural internal variability; further the underlying trends might be different. Hence, it is difficult to make an objective choice for matching up the observations and model simulations. Different strategies have been tried… matching the models and observations in different ways can give different spins on the comparison.

On the other hand, we now have respectably long histories since publication of the early IPCC reports: 25, 20, and 15 years. These are not short periods, even for climate change. Models that cannot successfully predict over such periods require more trust than many people have when it comes to spending trillions of dollars — or even making drastic revisions to our economic system (as Naomi Klein and the Pope advocate).
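Curry’s point about matching choices can be seen in a few lines of code: re-baselining the same two anomaly series over different reference periods changes how far apart they appear at the end. Both series below are synthetic stand-ins, chosen only to make the effect visible.

```python
# Sketch: the choice of baseline period changes the apparent model-vs-
# observations gap. Both series are synthetic, for illustration only.
import numpy as np

years = np.arange(1980, 2016)
model = 0.030 * (years - 1980)                             # synthetic model run
obs = 0.020 * (years - 1980) + 0.1 * np.sin(years / 3.0)   # synthetic obs

def rebaseline(series, years, start, end):
    """Express a series as anomalies relative to its mean over [start, end]."""
    ref = series[(years >= start) & (years <= end)].mean()
    return series - ref

for start, end in [(1980, 1999), (1986, 2005), (2000, 2010)]:
    gap = (rebaseline(model, years, start, end)[-1]
           - rebaseline(obs, years, start, end)[-1])
    print(f"baseline {start}-{end}: model-minus-obs gap in 2015 = {gap:+.2f} C")
```

The underlying series never change, yet the headline gap does. Any test protocol would need to fix such choices in advance.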

Conclusion

Re-run the models. Post the results. More recent models presumably will do better, but firm knowledge about the performance of the older models will give us useful information for the public policy debate, no matter what the results show.

As the Romans might have said when faced with a problem like climate change: “Fiat scientia, ruat caelum.” (Let science be done though the heavens may fall.)

“In an age of spreading pseudoscience and anti-rationalism, it behooves those of us who believe in the good of science and engineering to be above reproach whenever possible.”

— P. J. Roache, Computing in Science and Engineering, Sept-Oct 2004 (gated).

Other posts in this series

These posts sum up my 330 posts about climate change.

  1. How we broke the climate change debates. Lessons learned for the future.
  2. A new response to climate change that can help the GOP win in 2016.
  3. The big step climate scientists can make to restart the climate change debate – & win.

For More Information

(a) Please like us on Facebook, follow us on Twitter, and post your comments — because we value your participation. For more information see The keys to understanding climate change and My posts about climate change. Also see these about models…

(b) I learned much, and got several of these quotes, from two 2014 presentations by Leonard A. Smith (Prof of Statistics, LSE): the abridged version “The User Made Me Do It” and the full version “Distinguishing Uncertainty, Diversity and Insight”. Also see “Uncertainty in science and its role in climate policy”, Leonard A. Smith and Nicholas Stern, Phil Trans A, 31 October 2011.

(c)  Introductions to climate modeling

These provide an introduction to the subject, and a deeper review of this frontier in climate science.

Judith Curry (Prof Atmospheric Science, GA Inst Tech) reviews the literature about the uses and limitations of climate models…

  1. What can we learn from climate models?
  2. Philosophical reflections on climate model projections.
  3. Spinning the climate model – observation comparison — Part I.
  4. Spinning the climate model – observation comparison: Part II.

(d)  Selections from the large literature about validation of climate models



Comments
September 26, 2015 1:23 pm

I have to wonder why it is so important to you that people who don’t even understand their own work triumph in their glorious struggle to ruin the lives of millions of people they’ve never met. And on the basis of graphical legerdemain no less.

Daryl S.
September 28, 2015 7:15 pm

The problem is that those models are vastly outdated. That’s always going to be the problem. Now, if you were to run CURRENT models and they replicated past history from then until now, you’d have something. Do they? Does anybody have links?

Reply to  Daryl S.
September 29, 2015 7:59 am

Daryl S.:
Unfortunately, regardless of age, currently existing models draw illogical conclusions from arguments.

Reply to  Daryl S.
September 29, 2015 1:50 pm

Daryl,
As mentioned in the post, hindcasting — predicting the past — is a first step to validate models, but only a weak one. Unless strict methodological protocols are followed, models tend to be “tuned” to match past results — deliberately or inadvertently.
Similar problems plague drug testing, hence their use of double-blind trials.
Predictions are the gold standard of testing. As you note, we can assume (but not know) that current models are better than older ones. But testing older models on new data (i.e., from their future) can give us confidence in newer ones — or reasons to be skeptical.
Either way, we’ll know more than we do today.

Reply to  Editor of the Fabius Maximus website
September 29, 2015 4:38 pm

Editor of the Fabius Maximus website:
You’ve drifted back into equivocating. You could avoid same by making your arguments in a disambiguated language such as the one that is developed at http://wmbriggs.com/post/7923/ . With use of this language or an equivalent it can be shown that a prediction is a kind of proposition and that a projection is not. That a prediction is a kind of proposition forges a tie between the associated study and logic. That a projection is not a kind of proposition breaks the tie between the study and logic.
Models predict but modèles project. Models are susceptible to validation but modèles are insusceptible to it. Modèles are susceptible to evaluation but models are insusceptible to it.
The IPCC’s “models” are modèles. Thus they are insusceptible to validation and the studies that produced them were divorced from logic.
It is impossible for a modèle to convey information to a policy maker about the outcomes of his policy decisions. It is possible for a model to convey this information to a policy maker. Thus, it is not currently possible for our climate to be regulated, but it would be possible were climatologists to switch from building modèles to building models.

October 2, 2015 8:55 am

This is to note for the record that richardscourtney has announced his retirement from debate over the issue of whether I am completely ignorant of logical principles. He has done so without providing the proof that I requested of his contention that I am completely ignorant of these principles. Thus, Courtney’s contention stands as an application of the fallacy of proof by assertion. As it attacks me personally, Courtney’s contention stands also as an application of the ad hominem fallacy. As it defames me, Courtney’s contention is illegal.

Joe Prins
Reply to  Terry Oldberg
October 2, 2015 9:09 am

[Identity thief strikes again. -mod]