90 climate model projectons versus reality

Reality wins, it seems. Dr Roy Spencer writes:

As seen in the following graphic, over the period of the satellite record (1979-2012), both the surface and satellite observations produce linear temperature trends which are below 87 of the 90 climate models used in the comparison.

[Figure: CMIP5 90-model global surface temperature projections vs. observations]

More here: http://www.drroyspencer.com/2013/10/maybe-that-ipcc-95-certainty-was-correct-after-all/

richardscourtney
October 14, 2013 4:21 pm

Friends:
Some here seem to think rejection of the models which are clearly wrong would leave models which are right, or which have some property that provides forecasting skill and therefore merits investigation. Not so. To understand why this idea is an error, look up the Texas sharpshooter fallacy.
Models which have failed to correctly forecast are observed to be inadequate at forecasting. Those two (or three?) which remain are not known to be able to forecast the future from now. One or more of them may be able to do that but it cannot be known if this is true.
Richard

Latitude
October 14, 2013 4:26 pm

Jimbo says:
October 14, 2013 at 4:05 pm
My simple problem with these dozens of climate models is this:
===
Jim, my problem is just looking at them…you know they are garbage
Prior to ~1998..the hindcast…they show Judith’s waves clearly…
After ~1998…all they predict is a linear line going up to infinity
…anyone should know you’re not going to have ups and downs forever…and then a straight line
and to top it all off….a straight line at the same time CO2 has the least effect
and here we are…moving up and down…doing exactly what they can’t predict

Gcapologist
October 14, 2013 4:39 pm

In my biz we’d call this a high bias … to protect the public health.
Can anyone tell me why half a degree is harmful to the public health? (That question is largely rhetorical.)
The important question …. Why are the models so wrong?

Latitude
October 14, 2013 4:42 pm

Why are the models so wrong?
===
because we’re really not that smart

October 14, 2013 4:47 pm

I’m still confused why we care about HADCRUT. I thought WUWT demonstrated that half the “warming” came from bad ground stations and other fudge factors. I also thought another article on WUWT demonstrated that HADCRUT takes advantage of some bad mathematics to suppress temperatures earlier than 1960, especially the high temps of the 1940s.

Jimbo
October 14, 2013 4:49 pm

Guardian – 23 September 2013
Dana Nuccitelli [Tetra Tech & Co.]
The problem for climate contrarians is that our existing climate framework is very solid. We understand the fundamentals about how the climate operates well enough to accurately reproduce the observed changes, based on solid, well-understood physical mechanisms like the increased greenhouse effect.
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2013/sep/23/climate-science-magical-thinking-debunked-by-science#comment-27256471

I see “observed changes” – would that be past or present?

richardscourtney
October 14, 2013 4:50 pm

Gcapologist:
At October 14, 2013 at 4:39 pm you ask

The important question …. Why are the models so wrong?

I answer, because they do not model the climate system of the real Earth.
To explain that answer it seems I need to post the following yet again, and I ask all who have seen it to skip it and to forgive my posting it yet again.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1. the assumed degree of forcings resulting from human activity that produce warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS, ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’, Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, ‘Twentieth century climate model response and climate sensitivity’, GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

Figure 2. Total anthropogenic forcing (Wm2) versus aerosol forcing (Wm2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
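Kiehl’s compensation point can be illustrated with a toy zero-dimensional energy balance. This is only a sketch of the argument, not any actual GCM; the `warming` function and every number in it (the GHG forcing, the two sensitivities, the two aerosol values) are made-up illustrative assumptions:

```python
# Toy illustration of the Kiehl (2007) compensation: two "models" with
# very different climate sensitivities hindcast the same 20th-century
# warming once each is paired with its own assumed aerosol forcing.
# All numbers here are invented for illustration, not taken from any
# real model.

def warming(sensitivity, ghg_forcing, aerosol_forcing):
    """Equilibrium warming (deg C) for a net forcing, with sensitivity
    expressed in deg C per W/m^2."""
    return sensitivity * (ghg_forcing + aerosol_forcing)

GHG = 2.6  # assumed anthropogenic GHG forcing, W/m^2 (illustrative)

# A high-sensitivity model needs strong assumed aerosol cooling...
high_sens = warming(0.9, GHG, -1.82)   # 0.9 * (2.6 - 1.82)
# ...while a low-sensitivity model needs much weaker cooling.
low_sens = warming(0.35, GHG, -0.60)   # 0.35 * (2.6 - 0.60)

# Both hindcast roughly the same ~0.7 deg C of warming.
print(round(high_sens, 2), round(low_sens, 2))
```

Different (sensitivity, aerosol) pairs along this trade-off line produce the same hindcast, which is the pattern Kiehl’s Figure 2 shows.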
Richard

October 14, 2013 4:54 pm

“We understand the fundamentals about how the climate operates well enough to accurately reproduce the observed changes, based on solid, well-understood physical mechanisms like the increased greenhouse effect.
— Dana Nuccitelli

Sure, they understand that — a grade 11 student could, and in pretty great detail — the problem is everything else like the water cycle and the astrophysical variables.
If their best answer to the pause is, “The missing warmth must be in the ocean!” there’s a lot they don’t understand.

Theo Goodwin
October 14, 2013 4:57 pm

richardscourtney says:
October 14, 2013 at 4:50 pm
You are one fine educator. Thanks again for your valuable work.

Latitude
October 14, 2013 5:02 pm

Richard, I see it as simply not willing to admit that CO2 isn’t as powerful as they want it to be….
covering it up and justifying it with “aerosols” etc….
That way they can still blame it all on CO2

Zeke
October 14, 2013 5:08 pm

“Projectons vs Reality”
The Fifth Sequel

Gcapologist
October 14, 2013 5:18 pm

Richardscourtney
I would agree. The models do not adequately replicate the ways the earth’s systems work.
I doubt that co2 sensitivity is constant, and I’m sure aerosol formation (hence forcing) is.
When the powers that be rely on incomplete models, how do we advance the conversation?

October 14, 2013 5:26 pm

Reblogged this on Power To The People and commented:
Wonder if David Suzuki, Michael Mann, Tom Steyer, Al Gore or President Obama will ever admit that when reality does not agree with their Catastrophic Climate Change Theory, reality is not what’s false. Hat tip: Vahrenholt

Gcapologist
October 14, 2013 5:34 pm

Typo? I’m sure aerosol formation is not constant – so forcing shouldn’t be.

John Whitman
October 14, 2013 5:39 pm

Roy W. Spencer, Ph. D. wrote,
“. . .
So, about 95% (actually, 96.7%) of the climate models warm faster than the observations. While they said they were 95% certain that most of the warming since the 1950s was due to human greenhouse gas emissions, what they meant to say was that they are 95% sure their climate models are warming too much.
Honest mistake. Don’t you think? Maybe?”

– – – – – – – –
Roy,
Thanks for your droll humor. It cheers the heart.
If their models came with a money back guarantee, the modelers would be in the red, honestly. No maybe about it . : )
John

richardscourtney
October 14, 2013 5:45 pm

Latitude:
Thank you for your reply to me at October 14, 2013 at 5:02 pm, which says in total

Richard, I see it as simply not willing to admit that CO2 isn’t as powerful as they want it to be….
covering it up and justifying it with “aerosols” etc….
That way they can still blame it all on CO2

I understand your suggestion but I disagree. My understanding of what has happened is as follows.
The models were each constructed to represent the understandings of climate which were possessed by each modelling team that produced a model.
Firstly, they assumed that water vapour (i.e. the main greenhouse gas) only changed its concentration in the atmosphere as a feedback on temperature. Greenhouse gas (GHG) forcing thus was dominated by other GHGs of which CO2 is the major one (being responsible for about half of greenhouse forcing) and – for modeling simplicity – their forcing was aggregated into a single forcing value of CO2 equivalence.
Then the modelers parametrised (i.e. applied their best guesses) of effects which were not adequately understood and/or that the model’s resolution was insufficient to model (e.g. clouds, storms, etc.).
The parametrisations varied between the models because the modeling teams each had different opinions on the parametrisation values and methods to apply in their models.
But each model ‘ran hot’; see my post you are answering
http://wattsupwiththat.com/2013/10/14/90-climate-model-projectons-versus-reality/#comment-1447979
This (as my post explained) was compensated by inclusion of a completely arbitrary input of aerosol cooling effect in each model. However, the rise in global temperature was not uniform over the twentieth century; e.g. global temperature did not rise between ~1940 and ~1970. The degree of ‘ran hot’ in each model was an output so could not be adjusted. But a balance between the warming effect of GHGs (i.e. ECS) and the cooling effect of aerosols could be adjusted, so the modelers were able to get a ‘best fit’ for each model. And this is why each model has a unique value of ECS and effect of aerosol cooling.
Of course, they could have admitted that the ‘ran hot’ behaviour was evidence a model was inadequate and abandoned the model, but much time, money and effort had been expended on each model so this was not a politically available option. Or they could have altered parametrisations in each model, and to some degree they did, but adjusting ECS and aerosol cooling was the simplest option and each modeling team adopted it.
Hence, each model is a curve fitting exercise and, therefore, it is not surprising that Willis Eschenbach discovered he could emulate the models’ outputs with a curve fitting exercise.
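That kind of emulation can be sketched as a simple lagged (‘one-box’) response to forcing. This is an illustrative toy with invented parameters and an invented forcing series, not Willis Eschenbach’s actual fit:

```python
# One-box lagged response: at each step the temperature relaxes toward
# the equilibrium value lam * F with time constant tau. The parameters
# lam, tau and the forcing series are invented for illustration.

def one_box(forcing, lam=0.5, tau=8.0):
    """Return the lagged temperature response to a forcing series."""
    temps = [0.0]
    for f in forcing:
        t_prev = temps[-1]
        temps.append(t_prev + (lam * f - t_prev) / tau)
    return temps[1:]

# Hypothetical forcing rising linearly from 0 to 3 W/m^2 over 100 steps
forcing = [3.0 * i / 99 for i in range(100)]
response = one_box(forcing)
# The output is a smooth, lagged rise of the kind a curve fit to a
# GCM ensemble mean would also produce.
```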
In summation, I agree with you that failure to reject the models is politically driven. However, I don’t agree that it was so “they can still blame it all on CO2”: that was merely a convenient (for some) result of the failure to reject the models. And that is my understanding of how we have ended up with dozens of models which are all different but not one of which emulates the climate system of the real Earth.
Richard

richardscourtney
October 14, 2013 5:50 pm

Gcapologist:
Thank you for your reply to me at October 14, 2013 at 5:18 pm. Unfortunately it is nearly 2 am here and I need to retire for the night. Please be assured that I have not ignored your post, which I shall answer in the morning, and I hope you will forgive me for this.
Richard

Werner Brozek
October 14, 2013 6:11 pm

By comparing the models to HadCRUT4 and UAH, you picked some of the worst data sets to prove your point. RSS and HadCRUT3 would have worked better. See the 4 graphs below, which I have zeroed so they all start at the same point at around 1985. Note how they diverge at the present time.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1979/mean:60/offset:-0.01/plot/hadcrut3vgl/from:1979/mean:60/plot/rss/from:1979/mean:60/offset:0.18/plot/uah/from:1979/mean:60/offset:0.28
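For reference, the trend comparison behind such plots is just an ordinary least-squares slope. A minimal sketch using synthetic data (a stand-in for the real HadCRUT/RSS/UAH series, which would have to be downloaded):

```python
# Ordinary least-squares slope, reported as deg C per decade.
# The input series below is synthetic, built to have an exact
# 0.13 C/decade trend; it is not real temperature data.

def trend_per_decade(years, temps):
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(temps) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, temps))
             / sum((x - mean_x) ** 2 for x in years))
    return slope * 10.0  # per-year slope -> per-decade

years = list(range(1979, 2013))
temps = [0.013 * (y - 1979) for y in years]  # synthetic linear series
print(round(trend_per_decade(years, temps), 3))  # → 0.13
```

On real monthly data one would first average or otherwise smooth the series, as the woodfortrees plots above do with their 60-month means.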

Layman
October 14, 2013 6:43 pm

By the models’ logic, aerosols are the answer to combating AGW. Cutting CO2 emissions would cost trillions and take at best a century to show any effect, whereas increasing aerosol emissions acts immediately and costs nothing to promote except a relaxation of regulations.
(jk)

jorgekafkazar
October 14, 2013 7:14 pm

Lewis P Buckingham says: “It is clear that the hypothetical models have a governor, as they all fit within a fairly tight band.”
What are you talking about? That plate of spaghetti is a tight band? You must be thinking of Motley Crue or Aerosmith.

jorgekafkazar
October 14, 2013 7:16 pm

Jimbo says: “I wouldn’t be surprised if the 90 climate models…increases to 150….”
I was thinking the same thing. We need more climate models. Then one of them might accidentally get the fit right.

jorgekafkazar
October 14, 2013 7:18 pm

“Projectons” are morons who project their inner mental problems on everybody who disagrees with them.

October 14, 2013 7:24 pm

Jquip says:
“By eyeball it seems you could model the multi-model ensemble reasonably well by simply drawing a pair of lines between {1992, 1993} and 1998”
I’m quite sure that’s what they did. Hansen had his epiphany, they looked at the temperature over a few years and declared it would rise at that rate forever. It’s like if it rained two inches one day so you just assume it will continue at that rate for 100 years.

October 14, 2013 7:45 pm

I think this post and Roy Spencer’s could use a little more metadata. It’s not clear to me when these model runs were made. Were they tuned to that pre-98 “W” or did they predict it?

Jeff F.
October 14, 2013 8:11 pm

I do my best at following all this; everyone needs to be absolutely sure on the data/statements. Why are there only two lines under the observed data when the graph states three? And why not 97.8 percent?