Climate scientists can restart the climate change debate & win: test the models!

By Larry Kummer, from the Fabius Maximus website

Summary: Public policy about climate change has become politicized and gridlocked after 26 years of large-scale advocacy. We cannot even prepare for a repeat of past extreme weather. We can whine and bicker about whom to blame, or we can find ways to restart the debate. Here is the next in a series about the latter path, for anyone interested in walking it. Climate scientists can take an easy and potentially powerful step to build public confidence: re-run the climate models from the first three IPCC reports with actual data (from their future) and see how well they predicted global temperatures.

“Trust can trump Uncertainty.”

— Presentation by Leonard A. Smith (Prof of Statistics, LSE), 6 February 2014.

The most important graph from the IPCC’s AR5


Figure 1.4 from p131 of AR5: the observed global surface temperature anomaly relative to 1961–1990 in °C compared with the range of projections from the previous IPCC assessments. Click to enlarge.

Why the most important graph doesn’t convince the public

Last week I posted What climate scientists did wrong and why the massive climate change campaign has failed. After 26 years, one of the largest and longest campaigns to influence public policy has failed to gain the support of Americans, with climate change ranking near the bottom of people’s concerns. It described the obvious reason: climate scientists failed to meet the public’s expectations for the behavior of scientists warning about a global threat (i.e., a basic public relations mistake).

Let’s discuss what scientists can do to restart the debate. Let’s start with the big step: show that climate models have successfully predicted future global temperatures with reasonable accuracy.

This spaghetti graph — probably the most-cited data from the IPCC’s reports — illustrates one reason for the lack of sufficient public support in America. It shows the forecasts of models run in previous IPCC reports vs. actual subsequent temperatures, with the forecasts run under various scenarios of emissions and their baselines updated. First, Edward Tufte, author of The Visual Display of Quantitative Information, would probably laugh at this graph: too much packed into one image, the equivalent of a PowerPoint slide with 15 bullet points.

But there’s a more important weakness. We want to know how well the models work: that is, how well each would have forecast temperatures if run with the correct scenario (i.e., actual future emissions, since we’re uninterested here in predicting emissions, just temperatures).

The big step: prove climate models have made successful predictions

“A genuine expert can always foretell a thing that is 500 years away easier than he can a thing that’s only 500 seconds off.”

— From Mark Twain’s A Connecticut Yankee in King Arthur’s Court.

A massive body of research describes how to validate climate models (see below), most stating that they must use “hindcasts” (predicting the past) because we do not know the temperature of future decades. Few sensible people trust hindcasts, with their ability to be (even inadvertently) tuned to work (that’s why scientists use double-blind testing for drugs where possible).

But now we know the future — the future of models run in past IPCC reports — and can test their predictive ability.

Karl Popper believed that predictions were the gold standard for testing scientific theories. The public also believes this. Countless films and TV shows focus on the moment in which scientists test their theory to see if the result matches their prediction. Climate scientists can run such tests today for global surface temperatures. This could be evidence on a scale greater than anything else they’ve done.

Testing the climate models used by the IPCC

“Probably {scientists’} most deeply held values concern predictions: they should be accurate; quantitative predictions are preferable to qualitative ones; whatever the margin of permissible error, it should be consistently satisfied in a given field; and so on.”

— Thomas Kuhn in The Structure of Scientific Revolutions (1962).

The IPCC’s scientists run projections. AR5 describes these as “the simulated response of the climate system to a scenario of future emission or concentration of greenhouse gases and aerosols … distinguished from climate predictions by their dependence on the emission/concentration/radiative forcing scenario used…”. The models don’t predict CO2 emissions, which are an input to the models.

So they should run the models as they were when originally run for the IPCC in the First Assessment Report (FAR, 1990), in the Second (SAR, 1995), and the Third (TAR, 2001). Run them using actual emissions as inputs and with no changes of the algorithms, baselines, etc. How accurately will the models’ output match the actual global average surface temperatures?
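If such re-runs were done, scoring them would be straightforward. Here is a minimal sketch in Python of one way the comparison could work; the numbers are invented purely for illustration (real inputs would be the re-run model output and an observational series such as HadCRUT4, expressed on a common baseline):

```python
# A minimal sketch of the proposed test, with made-up numbers for
# illustration only. Real inputs would be the re-run model output and
# an observational record, as annual anomalies on a common baseline.

def linear_trend(series):
    """Ordinary least-squares slope per step (here: per year)."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def score(model, obs):
    """Root-mean-square error and trend difference (degC per decade)."""
    rmse = (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)) ** 0.5
    dtrend = (linear_trend(model) - linear_trend(obs)) * 10
    return rmse, dtrend

# Hypothetical annual anomalies, 25 years (degC vs. a common baseline):
model = [0.25 + 0.030 * t for t in range(25)]  # warms 0.30 degC/decade
obs   = [0.25 + 0.016 * t for t in range(25)]  # warms 0.16 degC/decade

rmse, dtrend = score(model, obs)
print(f"RMSE = {rmse:.2f} degC, trend gap = {dtrend:.2f} degC/decade")
```

The two summary numbers, RMSE and trend gap, are only one possible scoring choice; the point is that any such metric, agreed in advance, would turn the spaghetti graph into a concrete pass/fail discussion.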

Of course, the results would not be a simple pass/fail. Such a test would provide the basis for more sophisticated tests. Judith Curry (Prof Atmospheric Science, GA Inst Tech) explains here:

Comparing the model temperature anomalies with observed temperature anomalies, particularly over relatively short periods, is complicated by the acknowledgement that climate models do not simulate the timing of ENSO and other modes of natural internal variability; further the underlying trends might be different. Hence, it is difficult to make an objective choice for matching up the observations and model simulations. Different strategies have been tried… matching the models and observations in different ways can give different spins on the comparison.

On the other hand, we now have respectably long histories since publication of the early IPCC reports: 25, 20, and 15 years. These are not short periods, even for climate change. Models that cannot successfully predict over such periods require more trust than many people have when it comes to spending trillions of dollars — or even making drastic revisions to our economic system (as Naomi Klein and the Pope advocate).
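Curry's point about matching strategies is easy to demonstrate numerically. In this sketch (illustrative straight-line series, not real data), the same two series show a different apparent gap depending solely on which reference period is used to align them:

```python
# Two idealized anomaly series, 25 years long: a "model" warming at
# 0.30 degC/decade and "observations" warming at 0.16 degC/decade.
model = [0.030 * t for t in range(25)]
obs = [0.016 * t for t in range(25)]

def rebaseline(series, start, end):
    """Express a series as anomalies from the mean of series[start:end]."""
    ref = sum(series[start:end]) / (end - start)
    return [x - ref for x in series]

# Aligning on an early 5-year window vs. a 15-year window changes the
# apparent final-year model-observation gap, with no change in the data.
for start, end in [(0, 5), (0, 15)]:
    m = rebaseline(model, start, end)
    o = rebaseline(obs, start, end)
    print(f"baseline years {start}-{end}: final-year gap = {m[-1] - o[-1]:.3f} degC")
```

The underlying trends are identical in both cases; only the choice of alignment window moves the curves closer together or further apart, which is exactly the "different spins" Curry describes.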

Conclusion

Re-run the models. Post the results. More recent models presumably will do better, but firm knowledge about performance of the older models will give us useful information for the public policy debate. No matter what the results.

As the Romans might have said when faced with a problem like climate change: “Fiat scientia, ruat caelum.” (Let science be done though the heavens may fall.)

“In an age of spreading pseudoscience and anti-rationalism, it behooves those of us who believe in the good of science and engineering to be above reproach whenever possible.”

— P. J. Roach, Computing in Science and Engineering, Sept-Oct 2004 (gated).

Other posts in this series

These posts sum up my 330 posts about climate change.

  1. How we broke the climate change debates. Lessons learned for the future.
  2. A new response to climate change that can help the GOP win in 2016.
  3. The big step climate scientists can make to restart the climate change debate – & win.

For More Information

(a) Please like us on Facebook, follow us on Twitter, and post your comments — because we value your participation. For more information see The keys to understanding climate change and My posts about climate change. Also see these about models…

(b) I learned much, and got several of these quotes, from two 2014 presentations by Leonard A. Smith (Prof of Statistics, LSE): the abridged version “The User Made Me Do It” and the full version “Distinguishing Uncertainty, Diversity and Insight“. Also see “Uncertainty in science and its role in climate policy“, Leonard A. Smith and Nicholas Stern, Phil Trans A, 31 October 2011.

(c)  Introductions to climate modeling

These provide an introduction to the subject, and a deeper review of this frontier in climate science.

Judith Curry (Prof Atmospheric Science, GA Inst Tech) reviews the literature about the uses and limitation of climate models…

  1. What can we learn from climate models?
  2. Philosophical reflections on climate model projections.
  3. Spinning the climate model – observation comparison — Part I.
  4. Spinning the climate model – observation comparison: Part II.

(d)  Selections from the large literature about validation of climate models

Discover more from Watts Up With That?

201 Comments
Gus
September 25, 2015 7:27 am

The models are being tested all the time, e.g., by scientists in China and India, and every time they are found deficient. In nature, various atmospheric and ocean phenomena the models are meant to simulate, e.g., the monsoon, unfold in ways different from how they happen in the models. And it cannot be any other way: the models are physically and chemically incomplete, and their resolution is too poor to simulate correctly what today has to be called “subgrid physics,” e.g., convection and cloud formation/evolution; yet this subgrid physics is essential to how weather patterns and climate evolve.

Reply to  Gus
September 25, 2015 8:13 am

Gus,
Can you give us some pointers or cites to these tests by scientists in China and India?

Gus
Reply to  Editor of the Fabius Maximus website
September 25, 2015 12:01 pm

“>>> Can you give us some pointers… <<<”
With pleasure:
[1] doi:10.1175/JCLI-D-14-00740.1 (April 2015)
[2] doi:10.1007/s00382-014-2269-3 (July 2015)
[3] doi:10.1007/s00376-015-4157-0 (August 2015)
[4] doi:10.1175/JCLI-D-14-00475.1 (April 2015) Here the authors are Americans, but they find severe problems in both CMIP3 and CMIP5 models
[5] doi:10.1175/JCLI-D-14-00810.1 (April 2015)
[6] doi:10.1007/s00382-014-2398-8 (May 2015)
[7] doi:10.1007/s00704-014-1155-6 (April 2015)
[8] doi:10.1002/2014JD022239 (March 2015)
[9] doi:10.1007/s00382-014-2229-y (March 2015)
[10] doi:10.1175/JCLI-D-14-00405.1 (March 2015)

Reply to  Editor of the Fabius Maximus website
September 25, 2015 12:24 pm

Gus,
Thank you for the citations!
These (the first 3, at least) evaluate climate models’ ability to simulate weather phenomena (e.g., monsoons, Hadley circulation). I believe (from memory) that the IPCC reports acknowledge that.
But this kind of criticism has not — and I believe will not — break the logjam. The question is about the key factor: the ability of models to forecast global atmospheric temperatures. IMO that has to be the focus — on the core, not peripheral issues.

Richard M
September 25, 2015 7:39 am

There are some folks who think all the changes in climate are driven by ENSO. We know the models can’t do ENSO, which means they would fail right from the start. Now add in ocean oscillations and it gets even worse. In my opinion, until the models can clearly predict ENSO, the PDO, and the AMO (at a minimum), they are completely worthless.

Richard M
Reply to  Richard M
September 25, 2015 7:46 am

I always liked this comparison.

Reply to  Richard M
September 25, 2015 8:20 am

+1

dp
September 25, 2015 7:40 am

If the consensus would like to convince people of anything they should abandon all the models and use only observed unfiddled data. Tell the truth, in other words.

September 25, 2015 8:42 am

Why don’t the projections of each report start at observed current conditions? It is as if each model run doesn’t recognize the present, but considers some theoretical past to be more correct — indeed, the “real” situation — from which the future builds.
Some of these projections, then, say that this year is not what we measure, but actually 0.6°C warmer than measured.

Reply to  douglasproctor
September 25, 2015 10:44 am

Easy, Douglas. The observed temperatures on the graphs that show a divergence from models are satellite-based. True believers use the globally averaged surface temperatures, which are “adjusted” (notionally: to correct for various sources of error) but (how convenient!!) more or less track the models.
Because the models have what looks suspiciously like an exponential trend, this fix may not work for much longer (unless the earth cooperates and really does heat up, which appears a bit unlikely to this observer).

Dermot O'Logical
September 25, 2015 9:38 am

Are there truly _no_ climate models which are in the public domain, and hence available for public testing?
If there’s even just one, can’t we start there and publish the results ourselves?
This is just software, after all. Run on the right hardware and with the true emissions data, the model’s accuracy or otherwise will become clear.

Billy Liar
Reply to  Dermot O'Logical
September 26, 2015 3:02 pm

Here you can download GISS Model E:
http://www.giss.nasa.gov/tools/modelE/
The current incarnation of the GISS series of coupled atmosphere-ocean models is now available. Called ModelE, it provides the ability to simulate many different configurations of Earth System Models – including interactive atmospheric chemistry, aerosols, carbon cycle and other tracers, as well as the standard atmosphere, ocean, sea ice and land surface components.

MarkW
September 25, 2015 9:41 am

The assumption that newer models will perform better than older models is not one that I am prepared to make.

Reply to  MarkW
September 25, 2015 10:14 am

Mark,
I can sympathize with your view. However, let’s be generous at this point. When (if) we see the results from the first three assessment reports, then is the time to discuss the latest models.
But today I don’t see why people consider climate models as sufficient basis for massive public policy changes — even assuming (as I do) that AR5’s WGI is mostly correct. Their major finding, operationally for public policy, is that anthropogenic greenhouse gases are responsible for more than half of the warming since 1950. This describes the past, not the future — and is only given at the 90% confidence level (below the 95% level usually required for science and public policy).
The case for large bold action rests on the models. Let’s test the models, as a next step.

Bob Weber
Reply to  Editor of the Fabius Maximus website
September 25, 2015 9:17 pm

“is that anthropogenic greenhouse gases are responsible for more than half of the warming since 1950.”
Then please explain how HadSST3 temps dropped from the 1950’s to about 1976 as CO2 was rising.
And while you’re at it, please explain how SSTs dropped significantly in 2008, and then rebounded.
It wasn’t CO2. What is the stated reason for the “other” half of the warming since 1950?

Reply to  Editor of the Fabius Maximus website
September 26, 2015 7:23 am

You say “It wasn’t CO2”. Justify your claim, please.

Reply to  Editor of the Fabius Maximus website
September 26, 2015 9:51 am

warrenlb says:
You say “It wasn’t CO2. Justify your claim, please.”
How can someone be so completely deluded about how real science works??
Explaining the basics to warrenlb is like trying to teach a dog trigonometry. He is incapable of learning.
For rational readers, here is how it works: the one making the conjecture or hypothesis has the onus of convincingly supporting it. But warrenlb is trying to re-frame the method in order to make skeptics prove a negative (prove that “It wasn’t CO2”).
Skeptics do not have the onus to prove what “it wasn’t”. It is up to the alarmist misinformers to show that CO2 is causing the current global warming. But so far, all they have for an argument is their endless ‘appeal to authority’ logical fallacy, and their measurement-free assertions. And of course, global warming stopped many years ago.
Dishonest and illogical word games like that appeal to the less bright, who tend to congregate on the alarmist side. But for the others, this is the correct statement:
“You have made the conjecture that CO2 is the primary cause of global warming, and that it will cause runaway global warming if emissions continue. Justify your claim, please.”
But they can’t; they’ve never even been able to produce a measurement of AGW — despite many decades of searching. They are convinced that CO2 emissions have a major effect. But they are completely incapable of finding the required evidence. Thus, they fall back on their baseless assertions that ‘it must be because of human CO2 emissions’.
warrenlb cannot think straight. But for readers who can, that abject failure to find even a single measurement quantifying the fraction of AGW out of all global warming means one of two things:
1. Either AGW does not exist, or
2. AGW is such a tiny part of all global warming, which includes the natural recovery of the planet from the Little Ice Age, ocean and solar events, etc., that it is far too minuscule to measure. Since AGW is too tiny to measure, it can be completely disregarded as a non-problem.
I think #2 is correct. But like everyone else’s opinion, that is not based on quantifiable measurements, because there are no such measurements.
So dishonest propagandists on the alarmist side try to turn the burden upside down, and place the onus on scientific skeptics — the only honest kind of scientists. That leaves out warrenlb, as we see from his comment. That leaves out the UN/IPCC, too, which also has never been able to measure AGW.
And that explains why alarmist scientists like Michael Mann will not engage in fair, moderated debates any more. The alarmist scientists have lost every debate held in a neutral venue. Skeptics easily demolished their arguments. That is to be expected, when they use warrenlb’s illogical attempts to convince people that skeptics must prove a negative. Wrong.
So now alarmist scientists tuck tail and run from debates, relying on their mendacious, anti-science “consensus” arguments instead. We hear anything but honest science from the alarmist contingent, because they lack even the simplest measurements of what they insist must be happening, and Planet Earth is decisively falsifying their claims.

Alx
September 25, 2015 9:46 am

In grade B science fiction movies, when the scientist asked the computer a ridiculous question the computer would answer, “Cannot compute – Insufficient Data”. If only climate models were as sophisticated as computers in grade B sci-fi movies.

Reply to  Alx
September 25, 2015 10:50 am

“Forty two”

rogerknights
September 25, 2015 10:15 am

It’s a pity that IPCC authors weren’t made to individually submit their predictions for the next 5 / 10 / 15 / 20 / 25 / 30 / 40 / 50 years, so their subsequent

rogerknights
Reply to  rogerknights
September 25, 2015 10:27 am

(Oops–I got cut off.)
… subsequent predictions and warnings could be put in context.
I think it would be a savvy political move for us contrarians to demand that this be done going forward, for IPCC authors of the next AR. In the interim, past IPCC authors, and other big-name warmists, should be challenged to “put your cards on the table”. Too bad there’s no betting market any more, where they could be challenged to “put up or shut up.”
I repeat: I appeal to the merchants of doubt, in their lair in Skull Mountain, to get with this program and make “put your cards on the table” our mantra.

Caligula Jones
Reply to  rogerknights
September 25, 2015 12:24 pm

With the recent developments of our betters calling for the arrest of skeptics, and the Vatican appearing to be revving up the Inquisition-style rhetoric against “non-believers” climate change-wise, IPCC authors should be happy we aren’t going back to the days of killing people for wrong predictions.
Although these guys seem to be “do as we say, not as we do”, so I can imagine they wouldn’t be upset if this was used against skeptics.

Hugs
Reply to  rogerknights
September 26, 2015 12:38 pm

I strongly agree with this. Had Gore set up a bet that the Arctic would melt by 2018, we would all be much happier. Even the loser of the bet might be happier: Gore that he was wrong, and me for new opportunities for oil production.
And the money could go to charity.

Billy Liar
Reply to  rogerknights
September 26, 2015 3:36 pm

Roger Knights,
The UKMO did this in 2009. Watch the following video and have a good laugh.

There’s also a website dedicated to spreading the message:
http://ukclimateprojections.metoffice.gov.uk/
It contains such gems as: by 2080, in the high emissions scenario, summer mean temperature in the south of England (London) is very unlikely to be greater than the 2009 summer mean temperature + 10°C. Current Jun/Jul/Aug mean is ~19°C, so 2080 likely to be less than 29°C (about the same as New Orleans now). Oddly, for a place next to the sea, in the same timeframe it is very unlikely to be wetter than it is now. Obviously, the boiling hot North Sea is not expected to generate extra-tropical storms and fire them in the direction of the city. What’s not to like? Warmer weather with no downside.

BrianK
September 25, 2015 10:40 am

We should have enough accurate data now to back load temperatures and known natural phenomena against the models and get a realistic estimate of sensitivity. Why hasn’t that been done yet?

Neil Jordan
September 25, 2015 10:51 am

First we had global warming, then climate change, then weather weirding. Now Greenwire reports we have “climate momentum”. The article gets one thing correct: it is filed under politics.
http://www.eenews.net/tv/2015/09/25
“POLITICS:
“Greenwire’s Chemnick talks climate momentum following papal address
“The Cutting Edge: Friday, September 25, 2015
“As Pope Francis continues his U.S. tour, will his remarks on climate affect the tone of discussions in Congress and the momentum heading into this year’s Paris talks? On today’s The Cutting Edge, Greenwire reporter Jean Chemnick discusses the power of the pope following his historic address. She also talks about the growing momentum surrounding this year’s international climate talks in Paris.”

Neil Jordan
September 25, 2015 10:58 am

Excuse the Wiki reference for the quote, but it deserves to be repeated on this thread: “All models are wrong, but some are useful”.
https://en.wikipedia.org/wiki/George_E._P._Box
“His name is associated with results in statistics such as Box–Jenkins models, Box–Cox transformations, Box–Behnken designs, and others. Box wrote that “essentially, all models are wrong, but some are useful” in his book on response surface methodology with Norman R. Draper.”
https://en.wikipedia.org/wiki/All_models_are_wrong
“Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.”

September 25, 2015 11:15 am

Before we challenge the climate change proponents to this test, we need to do a couple things:
(1) Require that the model runs be compared to actual data, not the “adjusted”, “corrected”, “homogenized” data; and
(2) Define before any tests are run the criteria (and their values) that will constitute verification.
If you don’t do at least these two things first, before ever starting a run, the entire effort may well be a waste of money.
BTW, lest anyone question my credentials, I spent about 20 years of my career verifying (or validating, since the words are so often used interchangeably) moderately complex models of physical systems, based on documented physics and algorithm derivations.
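The commenter's point (2), fixing acceptance criteria before any run, can be sketched in a few lines. The thresholds below are invented placeholders for illustration, not proposed standards:

```python
# Sketch of pre-registered verification: define the pass/fail criteria
# before the model runs, then apply them mechanically afterward.
# The threshold values here are hypothetical placeholders.

CRITERIA = {
    "max_trend_error_degC_per_decade": 0.10,  # placeholder tolerance
    "max_rmse_degC": 0.15,                    # placeholder tolerance
}

def verify(trend_error, rmse, criteria=CRITERIA):
    """Pass only if every pre-registered criterion is met."""
    return (abs(trend_error) <= criteria["max_trend_error_degC_per_decade"]
            and rmse <= criteria["max_rmse_degC"])

print(verify(trend_error=0.05, rmse=0.12))  # within both tolerances
print(verify(trend_error=0.14, rmse=0.12))  # trend error too large
```

The substance is not the code but the ordering: with the criteria frozen first, neither side can redefine "verified" after seeing the results.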

Lady Gaiagaia
September 25, 2015 11:19 am

This has effectively already been done, since the models were run under a “business as usual” scenario, in which CO2 was projected to continue growing at more or less its pace 25, 20 and 15 years ago.
So the result is known, and the BAU scenario produced temperature forecasts for this decade laughably too high.

Reply to  Lady Gaiagaia
September 25, 2015 12:07 pm

Lady G,
“the models were run under a “business as usual” scenario”
That’s not really correct.
(1) The model runs shown are run under multiple scenarios. For example, AR5 used 4 scenarios. Which of those lines on the graph came from the “business as usual” scenario for each assessment report?
(2) Don’t assume that “business as usual” means a continuation of current emission trends. For example, the RCP8.5 scenario in AR5 is often described as the “business as usual” scenario. That’s not remotely true. For details see “Is our certain fate a coal-burning climate apocalypse? No!
In fact none of the 5 RCPs used in AR5 represents a “business as usual” scenario.

Reply to  Editor of the Fabius Maximus website
September 25, 2015 12:18 pm

Correction: last sentence should read “In fact none of the 5 RCP’s used in AR5 represents a “business as usual” scenario.”

Lady Gaiagaia
Reply to  Editor of the Fabius Maximus website
September 25, 2015 12:54 pm

Correct me if wrong, but in at least those models I’ve studied, three scenarios are run, one of which is without any curbs on emissions, ie BAU. That scenario roughly replicates present levels of CO2, but of course is always way too hot. The other two scenarios assume different levels of CO2 reduction, which hasn’t happened, but even those scenarios still overshoot observed temperatures.

Reply to  Editor of the Fabius Maximus website
September 25, 2015 12:58 pm

Lady G,
As I showed, you are incorrect with respect to AR5. There were 4 scenarios, none showing business as usual.
If you have cites supporting your view about the first three assessment reports, I’d like to see them!

Lady Gaiagaia
Reply to  Editor of the Fabius Maximus website
September 25, 2015 12:58 pm

For instance for the 2001 models, that would be SRES scenario A1FI.

Lady Gaiagaia
Reply to  Editor of the Fabius Maximus website
September 25, 2015 1:01 pm

For why the earlier (1992) emissions scenarios were changed in the 2001 IPCC report:
http://www.ipcc.ch/ipccreports/sres/emission/index.php?idp=27#anc1

Reply to  Editor of the Fabius Maximus website
September 25, 2015 1:59 pm

Lady G,
You still have not shown which — if any — of the scenarios used in the first 3 IPCC assessment reports has tracked actual emissions through 2015. The IPCC’s “Emissions Scenarios” report published in 2000 doesn’t help.
You still have not shown which — if any — of the lines on AR5’s spaghetti graph correspond to models run using actual emissions over the last 25, 20, or 15 years.
It’s not clear to me what you are attempting to say.

Richard Petschauer
Reply to  Editor of the Fabius Maximus website
September 25, 2015 2:25 pm

Do the models still use 1% annual growth in CO2 ppm for the business-as-usual case? Twenty-year data show growth of only about 0.55%.
Also, exponential CO2 growth (a constant annual percentage) of any rate, combined with a forcing based on the log of CO2 content, will give a linear temperature increase with time, not an accelerating one as most models show.
With so much variation between models, the entire basic common approach of these is wrong. The biggest concern is CO2. Why not just work on its effect, with feedbacks, on average global temperatures, using simple energy balance models that include balance at the surface?
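The commenter's algebraic point checks out: if concentration grows by a constant fraction r each year, C(t) = C0(1+r)^t, then the commonly used simplified forcing expression F = 5.35 ln(C/C0) W/m² reduces to F(t) = 5.35 t ln(1+r), which is linear in t. A quick numerical confirmation (the 0.55%/yr growth rate is the commenter's figure; the starting concentration is an illustrative value):

```python
import math

C0, r = 360.0, 0.0055  # ppm (illustrative) and ~0.55%/yr growth

def forcing(t):
    """Standard simplified CO2 forcing, F = 5.35 * ln(C/C0) W/m^2."""
    C = C0 * (1 + r) ** t
    return 5.35 * math.log(C / C0)

# With exponential concentration growth the annual forcing increment
# is the same every year, i.e., the forcing rises linearly with time.
step = [forcing(t + 1) - forcing(t) for t in range(40)]
print(f"annual increment: {step[0]:.4f} W/m^2, "
      f"constant: {max(step) - min(step) < 1e-9}")
```

Note this shows the *forcing* is linear in time under these assumptions; translating forcing into temperature additionally involves climate sensitivity and ocean lags, which is where the models diverge.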

September 25, 2015 11:57 am

Here is another paper for your list.
http://www.researchgate.net/publication/266908993_Evaluation_of_CMIP5_Simulated_Clouds_and_TOA_Radiation_Budgets_Using_NASA_Satellite_Observations
They show that climate models have underestimated the cooling effect of clouds by enough to offset any anthropogenic warming.

Reply to  gyan1
September 25, 2015 12:16 pm

Gyan,
Thanks for the cite!
There is a vast literature assessing — and attempting to validate — climate models. To keep the list at a reasonable size I included only those that directly test their atmosphere temperature forecasts.

Reply to  Editor of the Fabius Maximus website
September 25, 2015 12:33 pm

Editor,
You are welcome!
The paper identifies a major source of model error using empirical evidence. I doubt if models will be adjusted as a result because CAGW would cease to exist.

Alcheson
September 25, 2015 1:18 pm

I have a better idea. Rather than allowing the climate scientists to rerun their models, adjusting parameters, and present us with the NEW results, let’s just identify the FIVE previous model runs that most closely matched actual emissions and see how well those predictions turned out. There would be NO redo and no chance for additional “under-the-hood” tampering. There is no need to rerun the models; numerous runs have undoubtedly already been performed that come close to matching actual emissions. More than likely this will NOT be done, because those models likely all predict temperatures near the upper end of the spaghetti graph.

September 25, 2015 1:20 pm

The climate models are a complete waste of money and time.
Models are not data.
Without data, there is no science.
Real climate science is done by geologists, and other scientists, who work with real data — objects on the Earth that tell a tale of Earth’s past — not silly computer games making inaccurate predictions of the future.
The factors that change the climate are not understood.
Some factors are probably still unknown.
Climate history studies have identified only two main climate conditions:
1) Repeating mild warming / cooling cycles, and
2) Ice sheets growing until they cover much of the Earth, and then melting.
These climate conditions have only one suspected correlation with CO2, to the best of scientists’ ability to estimate past temperature and CO2 levels:
(1) It appears that natural (unknown) factors that warm the oceans cause them to release CO2 into the air with a 500 to 1,000-year lag.
There is no known correlation where rising CO2 leads, or is simultaneous with, global warming, and there is evidence of high CO2 levels in the past with no runaway greenhouse warming.
Therefore, models based on the assumption that CO2 is “the climate controller” are WRONG, and even if they appeared to be accurate for a decade or two, or even for five decades, that would be nothing but a coincidence — not good science.
You can never prove today’s climate model “predictions” wrong in your lifetime — you’d have to wait 100 years to “prove” them wrong.
The climate change “debate” skipped the first step on the assumption ladder by assuming global warming is bad news.
We’ve had global warming since 1850.
It has been GOOD news for humans and green plants.
Another degree or two F. of warming would be even better news.
Unfortunately you are completely wrong about re-running the models.
— The models belong in the garbage can.
— The “scientists” who run them belong on the unemployment line.
Humans have caused a lot of damage to the environment, and are still causing a lot of damage in Asia.
But adding CO2 to the air does no damage:
– It improves plant growth = good news
– It may cause a small amount of warming = good news
The only bad news concerning CO2 is smarmy people demonizing the gas, in an effort to
(1) halt economic growth (hurts the poor the most),
(2) halt the use of cheap sources of energy (hurts the poor the most),
(3) halt population growth (affects the poor the most), and
(4) some want to redistribute wealth as “climate change reparations”, maybe to compensate for damage caused by (1), (2) and (3).
Free climate blog for non-scientists:
No ads
No money for me.
A public service:
http://www.elOnionBloggle.blogspot.com

Christopher Hanley
September 25, 2015 3:42 pm

“… re-run the climate models from the first 3 IPCC reports with actual data (from their future): how well did they predict global temperatures? …”.
=================================
Why?
As the anonymous writer says: “… few sensible people trust hindcasts, with their ability to be (even inadvertently) tuned to work …”.
Unless they change their basic false assumptions about feedbacks etc. the resulting graphs would look the same with ‘armageddon’ postponed a couple of decades.
The only purpose would be as a face-saving operation.

Christopher Hanley
Reply to  Christopher Hanley
September 25, 2015 3:48 pm

Sorry, I missed the writer’s name at the head.

Reply to  Christopher Hanley
September 25, 2015 3:49 pm

Christopher,
“Few sensible people trust hindcasts”
Running the models from the first three ARs with data from after those reports’ publication is not “hindcasting” in the usual sense. It’s a fair test of their predictive ability since the models cannot be tuned to their future. (It is a “hindcast” in the technical sense, as it uses data from our past.)
“Unless they change …”
You are confident you know the result. Probably the people from Naked Capitalism reading this post (it was in their daily news today) are equally sure, with the opposite view. Breaking this logjam requires more than both sides shouting confidently at each other.
Run the models. The answer will put the debate on a new foundation. We can only guess at what will happen then.
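The test being proposed is simple in outline. A minimal sketch (all numbers below are invented for illustration, not actual IPCC or model output) of scoring a previously published forecast against temperatures observed only after publication:

```python
# Sketch: score a model's published forecast against temperatures
# observed AFTER publication -- an out-of-sample test, since the model
# could not have been tuned to data from its own future.
# All numbers are illustrative, not actual IPCC output.

def rmse(forecast, observed):
    """Root-mean-square error between forecast and observed anomalies."""
    assert len(forecast) == len(observed)
    n = len(forecast)
    return (sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n) ** 0.5

# Hypothetical decadal-mean anomalies (deg C) published in an old report...
forecast = [0.20, 0.35, 0.50]
# ...versus what was actually observed in the decades that followed.
observed = [0.18, 0.30, 0.40]

print(f"out-of-sample RMSE: {rmse(forecast, observed):.3f} deg C")
```

The point of the exercise is exactly this separation: the forecast is frozen at publication time, so the comparison data cannot have leaked into the tuning.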

Clyde Spencer
September 25, 2015 4:37 pm

Larry
I like your proposal of demonstrating the veracity and potential improvement over time of GCMs. I would think that in an ideal world the model creators would be eager to show off their handiwork. However, I don’t think it is going to happen. For the same reason that a person who owns a car that is ‘all show and no go’ would be reluctant to take it to a drag strip and suffer the embarrassment of demonstrating what everyone suspects, I’m afraid the modelers will not submit themselves to such scrutiny. Also, most climatologists seem unwilling to debate those who critique their work. Therefore, they will deny that there is any need for such oversight. My suspicion is that those most intimately familiar with the models are all too aware of their shortcomings, and to air their laundry in public could endanger their future funding. In most contracts that are awarded, there are ‘milestones’ and performance specifications written into the contract. That isn’t the case for grant awards. Thus, it is in their best interest to avoid any kind of robust evaluation and to continue making promises for which they will not be held accountable.

Reply to  Clyde Spencer
September 25, 2015 8:04 pm

Clyde,
You go to the very heart of this debate, the interaction of the public with scientists.
My proposal is directed at the public as much as at climate scientists. Should they not do this test, they can be asked about this in the many forums for debate — and especially in Congress. We will learn much from their reluctance to test their models, perhaps as much as we might from a test of their models.
We are not passengers in America, critiquing the performances of the crew — like the audience in the cheap seats at a baseball game. We are the crew.

Tom Harley
September 25, 2015 6:34 pm

Dr David Evans has found a fault with the original climate model on which all the others are based. Over at JoNova’s, this is the first of a series of posts (3 to date) explaining how the basic model works; in subsequent posts he will explain where the physics went wrong. http://joannenova.com.au/2015/09/new-science-1-pushing-the-edge-of-climate-research-back-to-the-new-old-way-of-doing-science/
Starting from scratch, the second and third posts explain in detail what the climate model is, with full mathematics and equations, before future posts dissect the physics error he has found after 2 years of research.
Well worth a look. http://joannenova.com.au/2015/09/new-science-2-the-conventional-basic-climate-model-the-engine-of-certain-warming/
http://joannenova.com.au/2015/09/new-science-3-the-conventional-basic-climate-model-in-full/

Lady Gaiagaia
Reply to  Tom Harley
September 26, 2015 12:17 pm

Thanks!

Bob Weber
September 25, 2015 9:48 pm

Who can seriously defend these models as worthwhile? Junk is junk. They are no good for their purpose, as they are all wrong! Their propaganda value, however, keeps them alive.
I wouldn’t trust the people from the same outfits that make the old models with any new models, as they are already prejudiced, unobjective, operating on false premises, and under political pressure to conform to the CO2 paradigm at all costs.
Their models have already been reality tested as failures. No one in the private sector could get away with such abject failure for so long. Time to run away from those failures.
It’s time for some competition.

Berényi Péter
September 26, 2015 4:49 am

Conclusion
Re-run the models. Post the results. More recent models presumably will do better, but firm knowledge about performance of the older models will give us useful information for the public policy debate. No matter what the results.

I do not know whether old computer code is properly archived or not. Is it?
If it is not, the project is unimplementable.
If it is, a pointer to an online code repository would be appreciated, complete with timestamps and digital signatures to ensure it was never tampered with in the meantime.
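The integrity check Berényi Péter asks for is straightforward in principle. A sketch using only Python’s standard library, fingerprinting an archived source file so later tampering is detectable (the file contents below are invented; a real archive would also need trusted timestamps and digital signatures, since a bare hash only proves the bytes are unchanged since fingerprinting):

```python
# Sketch: detect tampering in archived model code by comparing a
# stored SHA-256 fingerprint against the file's current contents.
# The archived bytes here are a made-up stand-in for real model source.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of the archived code."""
    return hashlib.sha256(data).hexdigest()

archived = b"PROGRAM GCM  ! Fortran source as archived in 1990\n"
stored_digest = fingerprint(archived)          # recorded at archive time

# Years later: recompute and compare.
assert fingerprint(archived) == stored_digest  # unchanged -> verifies
tampered = archived + b"C tweak the aerosol forcing\n"
assert fingerprint(tampered) != stored_digest  # any edit -> detected
print("archive verified")
```

Publishing the digests at archive time (e.g. in a journal or public repository) is what makes the later comparison meaningful: anyone can recompute the hash and confirm the code is the code.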

Science or Fiction
September 26, 2015 6:44 am

In my profession, there is no way we would let product owners – suppliers – test their own products and trust the results they report. Simply because:
– it is likely that the product owner will optimize and adjust their product for the test conditions
– it is likely that product owners would only report results favorable to their product
– It is unlikely that the product owner would report all results
Hence results reported by product owners are in general regarded trustworthy. The reported results are not regarded to be sufficient to evaluate capabilities, uncertainties and systematic errors for the products.
This is why independent test laboratories are used to test and report the results from such testing. This is also why there are international standards for the independent laboratories like the standard:
“ISO/IEC 17025 “General requirements for the competence of testing and calibration laboratories” is the main ISO standard used by testing and calibration laboratories. In most major countries, ISO/IEC 17025 is the standard for which most labs must hold accreditation in order to be deemed technically competent. In many cases, suppliers and regulatory authorities will not accept test or calibration results from a lab that is not accredited.” (Ref. Wikipedia)
The Intergovernmental Panel on Climate Change is very far from meeting the requirements and guidelines of this standard. Hence, the IPCC falls far short of the requirements that authorities would impose on an independent party in industry before accepting that party’s results in important matters. The IPCC is nowhere near being qualified for, or in a position to become, an accredited test laboratory. This is overwhelmingly clear just from the principles governing the IPCC.
To be trustworthy, the models need to be tested against trustworthy empirical data under conditions they have not been adjusted to match. Such tests need to be performed by an independent party: a party which has no interest whatsoever in the test results, and which is accredited in accordance with international standards for the accreditation of testing laboratories.

Science or Fiction
Reply to  Science or Fiction
September 26, 2015 6:47 am

“Hence results reported by product owners are in general regarded trustworthy.”
Should have been:
“Hence results reported by product owners are in general not regarded trustworthy.”

Reply to  Science or Fiction
September 26, 2015 10:51 am

Science,
“In my profession, there is no way we would let product owners – suppliers – test their own products and trust the results they report.”
I agree. Validation by outside experts is an essential precaution for any high-stakes project. It’s the hard-won wisdom of the ages. “Trust but verify.” “Always cut the cards.”

Simon
Reply to  Editor of the Fabius Maximus website
September 26, 2015 11:47 am

EFM says “Validation by outside experts is an essential precaution for any high-stakes project.”
Which is why we are all waiting with baited breath for the International Temperature Data Review Project (from the Global Warming Policy Foundation) to let us know what their review results are. Is anyone else looking forward to this?

Reply to  Editor of the Fabius Maximus website
September 26, 2015 11:59 am

Simon, I like the GWPF because it is honest, while the UN/IPCC is not.
And it’s “bated” breath. Unless you’re fishing for a periodontist.

Simon
Reply to  Editor of the Fabius Maximus website
September 26, 2015 12:14 pm

DB
Thanks for the correction. I’m a fisherman through and through. So does anyone know when their report will be released? The website (http://www.tempdatareview.org/) gives very little info.

rgbatduke
September 26, 2015 7:35 am

A few comments.
First, I’m pretty sure this figure is the one that was in the draft report that was “released” early, and then was removed when the inevitable furor occurred and replaced with one that was less obviously failing. Perhaps I’m wrong, and don’t want to slog through my copy of AR5 to find out, but that’s my recollection.
Second, the fundamental problem is that THIS figure, even broken down, isn’t right. Each of the lines drawn in the spaghetti graph isn’t “the” output from one of the climate models; it is the average over many runs, with slightly perturbed initial conditions, from each model. The number of runs being averaged is not even controlled model to model. The independence of the models is not assessed: there are something like 7 GISS models out of the 36 that (obviously) share substantial parts of their code, but all this does is weight GISS’s contribution to the final averages and envelopes disproportionately. There are far fewer than 36 independent models represented.
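The averaging rgb describes can be made concrete with a toy sketch. The model names and trend numbers below are invented; the point is only that each spaghetti line is an ensemble mean over perturbed runs, and that counting near-duplicate models as independent shifts the multi-model mean toward their shared behavior:

```python
# Toy sketch (invented numbers): each spaghetti line is the mean of
# several perturbed-initial-condition runs from one model, and
# near-duplicate models drag the multi-model mean toward their
# shared behavior when treated as independent.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical per-model warming trends (deg C/decade) across runs.
model_runs = {
    "giss_a": [0.30, 0.32, 0.28],   # giss_a and giss_b share code...
    "giss_b": [0.31, 0.29, 0.30],
    "other":  [0.15, 0.17, 0.16],   # ...an independent model runs cooler.
}
# One "spaghetti line" per model: the ensemble mean of its runs.
ensemble_means = {name: mean(runs) for name, runs in model_runs.items()}

# Treating the two code-sharing variants as independent weights their
# shared behavior 2:1 against the genuinely independent model.
naive = mean(list(ensemble_means.values()))
print(f"naive multi-model mean: {naive:.3f} deg C/decade")
```

With the duplicates merged first, the shared code would count once instead of twice, and the multi-model mean would land noticeably lower.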
Third, it isn’t really possible to run the models “with the actual numbers” as they weren’t run in the first place with actual numbers. We don’t have actual numbers to run them WITH. We have no idea what the state of the Earth’s fluid dynamical system is at any given instant in time because our measurements of it are incredibly sparse, even with ARGO and many surface stations. Sampling it in depth in the atmosphere and down to the bottom of the ocean in anything like a uniform or random grid of the whole planetary surface is simply not available and will not be available in the foreseeable future. Finally, what sampling we have omits far too much information. We do not have (for example) CO2 distributions, in depth. We do not have aerosol distributions, in depth, and recent evidence strongly suggests that the contributions of aerosols to cooling have been largely overestimated (and inconsistently estimated) in the GCMs to boot — we could rerun the models with aerosols reduced to something like their current most probable values but this has already been done for selected models and the result was that they produced something like half of the warming after they were adjusted to fit the reference period.
Total climate sensitivity dropped to pretty much the no-feedback estimate of 1.5 C per doubling, utterly non-catastrophic in outlook, especially given that both ITER and Lockheed-Martin are now claiming that they have licked fusion, with LM promising a working fusion plant in (now) around four years, and ITER claiming that they are going to build a 500 MW facility starting immediately. If either or both of these claims are true, we MIGHT reach 450 ppm or even 500 ppm before electricity made from coal is as silly as whale oil and gaslights. Even if the Bern model is correct — which is still more than a bit contentious and dubious — and the residence time of CO2 in the atmosphere is centuries, that will only be a good thing as we will have restored a healthy amount of carbon dioxide to the atmosphere of a planet balanced on the edge of a cold catastrophe due to CO2 starvation. The low-water mark of CO2 in the Wisconsin glaciation was around 180-190 ppm, just over the point of mass extinction of broad species of plants. At 450 ppm CO2, temperatures would stabilize right about where they are now or perhaps a hair warmer (if a non-stationary process could be said to “stabilize”) and agriculture and the biosphere would retain the substantially boosted growth rate for C3 and some C4 plants, especially temperate zone trees and certain staple food crops.
rgb

Reply to  rgbatduke
September 26, 2015 10:44 am

rgbatduke
“I’m pretty sure this figure is the one that was in the draft report that was “released” early, ”
As the caption says, this is “Figure 1.4 from p131 of AR5”.

Hugs
Reply to  rgbatduke
September 26, 2015 12:44 pm

Sometimes I think I could read this site for just rgb’s comments, which are always insightful. But no, the blog entry was good as well.

September 26, 2015 9:32 am

Mr. Kummer implies that if the models were to be run on the recorded atmospheric CO2 concentration of the past, this would produce “predictions.” This implication is inaccurate and misleading.
A model that makes predictions has a logical structure that results from the status of a prediction as a kind of proposition. For science, logic is probabilistic. Thus every proposition has a probability of being true. It follows that every prediction has a probability of being true. A model that makes predictions assigns a numerical value to each of these probabilities.
Science has theoretical and empirical sides. Probabilities belong to the theoretical side. Their empirical counterparts are called “relative frequencies.” Values are assigned to relative frequencies by counting concrete objects called “sampling units” in a sample that is drawn from the population underlying the model. These values provide a check on the values that are assigned to the corresponding probabilities by the model. A model that passes this test in a sample that was not used in the construction of the model is said to be “validated.” Otherwise, it is said to be “falsified.”
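The validation procedure described here can be sketched in a few lines. The probability, outcomes, and tolerance below are all hypothetical; the sketch only shows the comparison being described, between a claimed probability and the relative frequency counted in a sample not used to build the model:

```python
# Sketch: check a probabilistic prediction against the relative
# frequency observed in a held-out sample (one the model was not
# tuned to). All numbers are illustrative.

def relative_frequency(outcomes):
    """Fraction of sampling units in which the predicted event occurred."""
    return sum(outcomes) / len(outcomes)

predicted_probability = 0.70        # model claims the event occurs with p = 0.7
holdout = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # held-out outcomes, 1 = occurred

freq = relative_frequency(holdout)
tolerance = 0.15                    # acceptance band for this toy check
validated = abs(freq - predicted_probability) <= tolerance
print(f"observed frequency {freq:.2f} vs predicted {predicted_probability:.2f}:",
      "validated" if validated else "falsified")
```

A real validation would use a proper statistical test on a much larger sample rather than a fixed tolerance, but the logical structure is the same: probability on the theoretical side, relative frequency on the empirical side, compared out of sample.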
The climate models of yesterday and today possess none of the logic-related attributes that I have described. A consequence is that they are not susceptible to being validated. In the parlance of the IPCC they are “evaluated.” The result of the exercise proposed by Mr. Kummer would be an “evaluation”, but while this word sounds like “validation” it refers to a process that is non-logical.
In a logical context a “prediction” is a kind of proposition. If we conform to logic in naming concepts, Mr. Kummer’s “predictions” are not predictions at all.

Reply to  Terry Oldberg
September 26, 2015 10:48 am

Terry O.,
“Mr. Kummer implies that if the models were to be run on the recorded atmospheric CO2 concentration of the past, this would produce “predictions.” This implication is inaccurate and misleading.”
I give and use the IPCC definitions of these terms. I believe that’s the best route to clear communication with the public.
“Mr. Kummer’s “predictions” are not predictions at all.”
These are predictions of climate scientists, not mine. I’m recommending a specific use of them as a test to restart the public debate about climate change.

Reply to  Editor of the Fabius Maximus website
September 26, 2015 1:39 pm

Editor of the Fabius Maximus website:
Thanks for taking the time to reply.
“Prediction” is usually used in reference to a logical concept in which a “prediction” is an example of a proposition. When the same word is also used in reference to an illogical concept, in which a “prediction” is not an example of a proposition, the result is to lead many people to mistake equivocations for syllogisms, thus drawing false or unproved conclusions from global warming arguments. Drawing false or unproved conclusions from such arguments leads to logically unfounded public policy.
Rather than fostering this state of affairs we can and should oppose it by reserving “prediction” for use in reference to a kind of proposition. Under this usage there are no circumstances in which the global warming models of today make predictions.