90 climate model projectons versus reality

Reality wins, it seems. Dr Roy Spencer writes:

As seen in the following graphic, over the period of the satellite record (1979-2012), both the surface and satellite observations produce linear temperature trends which are below 87 of the 90 climate models used in the comparison.

[Figure: CMIP5 90-model global surface temperature (Tsfc) projections vs. observations]

more here: http://www.drroyspencer.com/2013/10/maybe-that-ipcc-95-certainty-was-correct-after-all/
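
For readers who want to reproduce the observational side of the comparison, here is a minimal sketch of the ordinary least-squares linear trend Spencer describes, expressed in C per decade. The anomaly series below is synthetic and purely illustrative; swap in real annual HadCRUT4 or UAH anomalies to get actual numbers.

```python
# Minimal sketch: OLS linear trend over 1979-2012 in C per decade.
# The anomaly series is synthetic; substitute real annual data.
import numpy as np

years = np.arange(1979, 2013)
rng = np.random.default_rng(3)
anomalies = 0.014 * (years - 1979) + rng.normal(0, 0.1, len(years))

slope_per_year = np.polyfit(years, anomalies, 1)[0]   # degree-1 OLS fit
print(f"trend: {10 * slope_per_year:+.3f} C/decade")
```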


Clearly that simply means that the satellites and surface records need further adjustment to conform to reality.

Gary

Is this the 97% consensus? Sorry, couldn’t resist. Another juicy jewel to provide to my growing circle of layman skeptics.

JustAnotherPoster

As RJB would say, the really clever work now would be to bin all the models that are failures and investigate why the two at the bottom have matched reality, and what they have assumed compared with the failed ones.
That's the potential published-paper work: what have the model winners assumed that the model failures haven't?

steven

The typo in the title gave me an idea. “Projectons” must be the hypothetical quantum particle simulated in the models to give them a little lift.

Jeff Mitchell

I worry that at some point they start fudging the measuring devices. I like the new “Like this” feature.

Jeff Mitchell

Does the “Like this” work? I didn’t see any evidence that my clicks got counted. I refreshed after clicking them, and nothing changed.

Jumped over to Dr Spencer’s blog and the top ad was for “Death Records online!” Maybe that was a Freudian coincidence??

Bryan A

It appears that 97% of climate models CAN in fact be wrong.
Nah
It indicates that the current measurements still fall within Modeled Ranges
Nah
It really indicates that 97% of climate models really are wrong

Resourceguy

How many of the models have the Mann math term in them?

Are you alright Sun?

Latitude

LOL…
but I only see two…not three….and those two were invalidated around 1998

Mark X

Notice that the model runs do indicate pauses and short cooling periods. They don’t agree on when these occur after about 2000. That’s why in the average you don’t see pauses. But in the individual runs you do.
The models likely do not include changes in solar irradiance, volcanic activity, China's increase in SO2 emissions, the reduction in CFC emissions or the Great Recession. That we don't have good ways of modeling the El Niño/La Niña cycle is unfortunate. But if AGW is false, shouldn't we have had some cooling periods this decade like in the 90s? Where did the cooling go? Isn't it a travesty that you cannot find the cooling? It cannot be hiding in the oceans because they have still been warming.
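
Mark X's point about averaging is easy to demonstrate. A minimal sketch (illustrative numbers only, not CMIP5 output): give every synthetic run the same forced trend plus an internal oscillation with a random phase, and the pauses visible in each individual run cancel out of the ensemble mean.

```python
# Minimal sketch: each synthetic "run" is the same forced trend plus an
# internal oscillation with a random phase, so every run has pauses,
# but at different times. Averaging cancels the out-of-phase wiggles
# and leaves only the smooth trend. Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2013)
trend = 0.02 * (years - years[0])          # 2 C/century forced warming

runs = []
for _ in range(90):
    phase = rng.uniform(0, 2 * np.pi)      # each run pauses at a different time
    wiggle = 0.15 * np.sin(2 * np.pi * (years - years[0]) / 15 + phase)
    runs.append(trend + wiggle + rng.normal(0, 0.05, len(years)))

ensemble_mean = np.mean(runs, axis=0)
print("wiggle amplitude in one run:      %.3f" % np.std(runs[0] - trend))
print("wiggle amplitude in the ensemble: %.3f" % np.std(ensemble_mean - trend))
```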

Tilo Reber

Just by eyeball, it looks like the lowest-trending model is running about 1C per century. So it actually has a chance of being close. The highest-trending models are already falsified. But they are kept to increase the average.

Alan

This just makes me laugh, especially when this article appeared in the news today.
http://news.ca.msn.com/local/calgary/world-temperatures-go-off-the-chart-by-2047-study-says-3

Mike Smith

Yeah, and 97% of climate-related public policy decisions are still being based on the 97% consensus which, according to the best hard data we have, is 97% wrong.
This is 97% messed up!

Eyvind Dk

Maybe someone else came up with that idea before Dr. Roy Spencer 😉

magicjava

Just a quick question. Why are temperatures given as a 5 year mean? Why not plot the actual temperature?
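
For what it's worth, a multi-year mean is presumably used to suppress year-to-year noise such as ENSO so that multi-decade trends are easier to compare. A minimal sketch of a centered 5-year running mean (an assumption about the method; the graphic does not state it):

```python
# Minimal sketch of a centered 5-year running mean. This is an assumed
# method; the exact smoothing used in the graphic is not stated here.
import numpy as np

def running_mean(x, window=5):
    """Centered moving average; two points on each end are lost."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

annual = np.array([0.10, 0.35, 0.05, 0.20, 0.15, 0.40, 0.25])  # toy anomalies
print(running_mean(annual))  # three values: means of the three 5-year windows
```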

Jimbo

There are 3 words that Warmists hate to see in the same paragraph. These 3 words can cause intolerable mental conflict.
*Projections, *observations, *comparisons.
At one of the IPCC insiders' meetings they knew full well that there was a problem. Some bright spark must have suggested that they simply pluck a new confidence number out of thin air, otherwise they would be doomed (and shamed). Desperate times call for desperate measures. Just look at the graph. You won't see this kind of behavior in any other science.

TRM

“JustAnotherPoster says: October 14, 2013 at 2:15 pm
The really clever work now would be to bin all the models that are failures and investigate why the two at the bottom have matched reality and what they have assumed compared with the failed ones.”
While the bottom ones do model it better, I don't see how they have matched it. Their long-term 1 degree C per century appears to be correct, but they still fall short.
The one prediction from 1979 appears to be most accurate in that it called the end of the cold in the mid-80s, warmth until 2000, and a flatline ….. you don't want to know what they say is next. Trust me.
http://wattsupwiththat.com/2011/06/01/old-prediction-may-fit-the-present-pattern/
Get yer wool socks out folks. Still the most accurate prediction over the longest period. I wonder if either scientist is still active in the field or retired? Would be a great interview.

I would say 100% are wrong. The fact that 2 or 3 haven’t overshot the temperature doesn’t make them right. They aren’t following the observed temperature. It’s clear that none of them simulate the actual climate.

Jimbo

My simple problem with these dozens of climate models is this: they are used to make so many projections that even when they fail, Warmists point to the lower bounds to claim they came very close and are more accurate than previously thought. This is what I am seeing in the graph. Too many throws of the dice. If there is a solid understanding of how the climate works and the physical mechanisms that cause it to change, then why so many models? Why not 9 at most?

The Guardian – Posted by Dana Nuccitelli, John Abraham, Scott Mandia – Monday 23 September 2013
“One of the most important concepts to understand when trying to grasp how the Earth’s climate works, is that every climate change must have a physical cause. This principle was the basis of our new paper, Nuccitelli et al. (2013). Over the past century, climate scientists have developed a solid understanding about how the climate works and the physical mechanisms that cause it to change. By building that knowledge into complex climate models, scientists have been able to accurately reproduce past observed global surface temperature changes. “

PS Is Dana still being paid by the fossil fuel company Tetra Tech? I hear Tetra Tech acquired a fracking outfit in the last few years.

Jquip

shenanigans24 “They aren’t following the observed temperature. ”
By eyeball it seems you could model the multi-model ensemble reasonably well by simply drawing a pair of lines between {1992, 1993} and 1998.

Is Watts Up With That going to talk about Dr. Nicola Scafetta's climate model that he detailed in a recent paper in Earth-Science Reviews? Jo Nova posted on it, as did Tallbloke, but I linked his version, as Jo unfortunately made an initial error in describing it which makes her post harder to read.
Scafetta writes:

“Herein I propose a semi-empirical climate model made of six specific astronomical oscillations as constructors of the natural climate variability spanning from the decadal to the millennial scales plus a 50% attenuated radiative warming component deduced from the GCM mean simulation as a measure of the anthropogenic and volcano contributions to climatic changes.”

It looks interesting. Perhaps it was posted about earlier and I missed it, or there are objections to it that make it not worth a post? If so, I’d be interested in those.

sophocles

steven says:
October 14, 2013 at 2:22 pm
The typo in the title gave me an idea. “Projectons” must be the hypothetical quantum particle simulated in the models to give them a little lift.
========================================================
ROTFL. Love it!
We know they have “spin.” What other properties can be deduced from the observed behaviour?
Spin orientation (up? down? around?)
Charm?
Color?
(In)stability?
Half-life?

Jimbo

I wouldn't be surprised if the number of climate models in use increases from 90 to 150. It will give them greater scope and more rat holes. It may not be 150, but I 'project' that the number will increase due to the necessity of keeping this charade going while the jig is up.

richardscourtney

Friends:
Some here seem to think rejection of the models which are clearly wrong would leave models which are right, or which have some property that provides forecasting skill and therefore merits investigation. Not so. To understand why this idea is an error, google the Texas sharpshooter fallacy.
Models which have failed to correctly forecast are observed to be inadequate at forecasting. Those two (or three?) which remain are not known to be able to forecast the future from now. One or more of them may be able to do that but it cannot be known if this is true.
Richard
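
Richard's Texas sharpshooter point can be demonstrated with a toy simulation: generate "models" that share a forced trend but differ only in random noise, keep the few that happen to fit the first half of the record best, and their error on the second half is no better than the ensemble's. A minimal sketch with purely illustrative numbers:

```python
# Minimal sketch of the Texas sharpshooter point: models whose hindcast
# fit is good by chance show no extra skill in the forecast period.
import numpy as np

rng = np.random.default_rng(42)
n_models, n_years = 90, 60
forced = 0.01 * np.arange(n_years)                    # common underlying trend
target = forced + rng.normal(0, 0.1, n_years)         # "observations"
models = forced + rng.normal(0, 0.1, (n_models, n_years))

half = n_years // 2
hindcast_err = np.abs(models[:, :half] - target[:half]).mean(axis=1)
survivors = np.argsort(hindcast_err)[:3]              # best 3 in hindcast

forecast_err = np.abs(models[:, half:] - target[half:]).mean(axis=1)
print("survivors' mean forecast error:", forecast_err[survivors].mean().round(3))
print("all models' mean forecast error:", forecast_err.mean().round(3))
```

The survivors' early fit was luck, so selecting them after the fact establishes no forecasting skill.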

Latitude

Jimbo says:
October 14, 2013 at 4:05 pm
My simple problem with these dozens of climate models is this:
===
Jim, my problem is just looking at them…you know they are garbage
Prior to ~1998..the hindcast…they show Judith’s waves clearly…
After ~1998…all they predict is a linear line going up to infinity
…anyone should know you’re not going to have ups and downs forever…and then a straight line
and to top it all off….a straight line at the same time CO2 has the least effect
and here we are…moving up and down…doing exactly what they can’t predict

Gcapologist

In my biz we'd call this a high bias … to protect the public health.
Can anyone tell me why half a degree is harmful to the public health? (That question is largely rhetorical.)
The important question …. Why are the models so wrong?

Latitude

Why are the models so wrong?
===
because we’re really not that smart

I'm still confused why we care about HADCRUT. I thought WUWT demonstrated that half the "warming" came from bad ground stations and other fudge factors. I also thought another article on WUWT demonstrated that HADCRUT takes advantage of some bad mathematics to suppress temperatures earlier than 1960, especially the high temps of the 1940s.

Jimbo

Guardian – 23 September 2013
Dana Nuccitelli [Tetra Tech & Co.]
The problem for climate contrarians is that our existing climate framework is very solid. We understand the fundamentals about how the climate operates well enough to accurately reproduce the observed changes, based on solid, well-understood physical mechanisms like the increased greenhouse effect.
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2013/sep/23/climate-science-magical-thinking-debunked-by-science#comment-27256471

I see “observed changes” – would that be past or present?

richardscourtney

Gcapologist:
At October 14, 2013 at 4:39 pm you ask

The important question …. Why are the models so wrong?

I answer, because they do not model the climate system of the real Earth.
To explain that answer it seems I need to post the following yet again, and I ask all who have seen it to skip it and to forgive my posting it yet again.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1.
the assumed degree of forcings resulting from human activity that produce warming
and
2.
the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed nine GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

Figure 2. Total anthropogenic forcing (W/m^2) versus aerosol forcing (W/m^2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
Richard
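
The compensation Richard describes from Kiehl (2007) can be shown with a toy zero-dimensional calculation: if modelled warming is dT = L * (F_ghg + F_aerosol), then hitting a fixed observed dT with a larger sensitivity parameter L requires a more negative assumed aerosol forcing. The numbers below are round illustrative assumptions (ocean heat uptake and lags are deliberately ignored), but the resulting aerosol range is similar in spirit to the -1.42 to -0.60 W/m^2 spread quoted above.

```python
# Toy zero-dimensional illustration of the Kiehl (2007) compensation:
# for a fixed observed warming, a higher-sensitivity model must assume
# a stronger (more negative) aerosol forcing to close the fit.
# All values are illustrative assumptions, not from any actual GCM.
dT_obs = 0.8    # C, observed 20th-century warming (round number)
F_ghg = 2.5     # W/m^2, assumed greenhouse-gas forcing

for L in (0.4, 0.5, 0.6, 0.7, 0.8):          # sensitivity, C per W/m^2
    F_aerosol = dT_obs / L - F_ghg            # aerosol forcing needed
    print(f"sensitivity {L:.1f} C/(W/m^2) -> required aerosol forcing "
          f"{F_aerosol:+.2f} W/m^2")
```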

“We understand the fundamentals about how the climate operates well enough to accurately reproduce the observed changes, based on solid, well-understood physical mechanisms like the increased greenhouse effect.”
— Dana Nuccitelli

Sure, they understand that — a grade 11 student could, and in pretty great detail — the problem is everything else like the water cycle and the astrophysical variables.
If their best answer to the pause is, “The missing warmth must be in the ocean!” there’s a lot they don’t understand.

Theo Goodwin

richardscourtney says:
October 14, 2013 at 4:50 pm
You are one fine educator. Thanks again for your valuable work.

Latitude

Richard, I see it as simply not willing to admit that CO2 isn’t as powerful as they want it to be….
covering it up and justifying it with “aerosols” etc….
That way they can still blame it all on CO2

“Projectons vs Reality”
The Fifth Sequel

Gcapologist

Richardscourtney
I would agree. The models do not adequately replicate the ways the earth’s systems work.
I doubt that CO2 sensitivity is constant, and I'm sure aerosol formation (hence forcing) is.
When the powers that be rely on incomplete models, how do we advance the conversation?

Reblogged this on Power To The People and commented:
Wonder if David Suzuki, Michael Mann, Tom Steyer, Al Gore or President Obama will ever admit that when reality does not agree with their Catastrophic Climate Change Theory, reality is not what's false. (hat tip Vahrenholt)

Gcapologist

Typo? I’m sure aerosol formation is not constant – so forcing shouldn’t be.

Roy W. Spencer, Ph. D. wrote,
“. . .
So, about 95% (actually, 96.7%) of the climate models warm faster than the observations. While they said they were 95% certain that most of the warming since the 1950s was due to human greenhouse gas emissions, what they meant to say was that they are 95% sure their climate models are warming too much.
Honest mistake. Don’t you think? Maybe?”

– – – – – – – –
Roy,
Thanks for your droll humor. It cheers the heart.
If their models came with a money-back guarantee, the modelers would be in the red, honestly. No maybe about it. : )
John

richardscourtney

Latitude:
Thankyou for your reply to me at October 14, 2013 at 5:02 pm which says in total

Richard, I see it as simply not willing to admit that CO2 isn’t as powerful as they want it to be….
covering it up and justifying it with “aerosols” etc….
That way they can still blame it all on CO2

I understand your suggestion but I disagree. My understanding of what has happened is as follows.
The models were each constructed to represent the understandings of climate which were possessed by each modelling team that produced a model.
Firstly, they assumed that water vapour (i.e. the main greenhouse gas) only changed its concentration in the atmosphere as a feedback on temperature. Greenhouse gas (GHG) forcing thus was dominated by other GHGs of which CO2 is the major one (being responsible for about half of greenhouse forcing) and – for modeling simplicity – their forcing was aggregated into a single forcing value of CO2 equivalence.
Then the modelers parametrised (i.e. applied their best guesses) of effects which were not adequately understood and/or that the model’s resolution was insufficient to model (e.g. clouds, storms, etc.).
The parametrisations varied between the models because the modeling teams each had different opinions on the parametrisation values and methods to apply in their models.
But each model ‘ran hot’; see my post you are answering
http://wattsupwiththat.com/2013/10/14/90-climate-model-projectons-versus-reality/#comment-1447979
This (as my post explained) was compensated by inclusion of a completely arbitrary input of aerosol cooling effect in each model. However, the rise in global temperature was not uniform over the twentieth century; e.g. global temperature did not rise between ~1940 and ~1970. The degree of ‘ran hot’ in each model was an output so could not be adjusted. But a balance between the warming effect of GHGs (i.e. ECS) and the cooling effect of aerosols could be adjusted, so the modelers were able to get a ‘best fit’ for each model. And this is why each model has a unique value of ECS and effect of aerosol cooling.
Of course, they could have admitted the 'ran hot' was evidence that a model was inadequate and abandoned the model, but much time, money and effort had been expended on each model, so this was not a politically available option. Or they could have altered parametrisations in each model, and to some degree they did, but the adjustment of ECS and aerosol cooling was the simplest option and each modeling team adopted it.
Hence, each model is a curve fitting exercise and, therefore, it is not surprising that Willis Eschenbach discovered he could emulate the models’ outputs with a curve fitting exercise.
In summation, I agree with you that failure to reject the models is politically driven. However, I don’t agree that it was so “they can still blame it all on CO2”: that was merely a convenient (for some) result of the failure to reject the models. And that is my understanding of how we have ended up with dozens of models which are all different but not one of which emulates the climate system of the real Earth.
Richard
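
Richard mentions Willis Eschenbach's finding that the models' outputs can be emulated by a curve fit. A minimal sketch of such an emulation, a one-box linear response with a fitted scale and lag; the "GCM output" below is synthetic, so this illustrates the technique rather than reproducing any real model.

```python
# Minimal sketch: emulate a global-mean temperature series with a
# one-box linear response T[t+1] = T[t] + (s*F[t] - T[t])/tau, fitting
# only a scale s and lag tau. The "GCM output" here is synthetic.
import numpy as np
from scipy.optimize import curve_fit

years = np.arange(1900, 2001)
F = 0.03 * (years - 1900)                    # toy forcing ramp, W/m^2

def one_box(F, s, tau):
    T = np.zeros(len(F))
    for t in range(len(F) - 1):
        T[t + 1] = T[t] + (s * F[t] - T[t]) / tau
    return T

rng = np.random.default_rng(1)
gcm_output = one_box(F, 0.6, 8.0) + rng.normal(0, 0.02, len(F))

(s_fit, tau_fit), _ = curve_fit(one_box, F, gcm_output, p0=(0.5, 5.0),
                                bounds=([0.0, 1.0], [2.0, 50.0]))
print(f"fitted scale {s_fit:.2f}, fitted lag {tau_fit:.1f} years")
```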

richardscourtney

Gcapologist:
Thankyou for your reply to me at October 14, 2013 at 5:18 pm. Unfortunately it is nearly 2 am here and I need to retire for the night. Please be assured that I have not ignored your post which I shall answer in the morning and I hope you will forgive me for this.
Richard

Werner Brozek

By comparing the models to HadCRUT4 and UAH, you picked some of the worst data sets to prove your point. RSS and HadCRUT3 would have worked better. See the 4 graphs below, which I have zeroed so they all start at the same point around 1985. Note how they diverge at the present time.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1979/mean:60/offset:-0.01/plot/hadcrut3vgl/from:1979/mean:60/plot/rss/from:1979/mean:60/offset:0.18/plot/uah/from:1979/mean:60/offset:0.28
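
The zeroing Werner describes is just shifting each anomaly series by a constant so they agree over a common reference window, since the data sets use different baseline periods. A minimal sketch with synthetic stand-in series (the 1983-1987 window is an assumption, chosen to match his "around 1985"):

```python
# Minimal sketch: align anomaly series with different baselines by
# subtracting each series' mean over a common reference window.
# The series here are synthetic stand-ins, not HadCRUT/RSS/UAH data.
import numpy as np

years = np.arange(1979, 2013)
rng = np.random.default_rng(7)
series = {name: 0.015 * (years - 1979) + off + rng.normal(0, 0.05, len(years))
          for name, off in [("A", 0.00), ("B", 0.18), ("C", -0.10)]}

ref = (years >= 1983) & (years <= 1987)      # common reference window
aligned = {name: x - x[ref].mean() for name, x in series.items()}
for name, x in aligned.items():
    print(f"{name}: last value after alignment {x[-1]:+.2f} C")
```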

Layman

By the models' logic, aerosol is the answer to combating AGW. Cutting down CO2 emissions would cost trillions and take at best a century to show any effect, while increasing aerosol emissions has an immediate effect and costs nothing to promote except relaxation of regulations.
(jk)

jorgekafkazar

Lewis P Buckingham says: “It is clear that the hypothetical models have a governor, as they all fit within a fairly tight band.”
What are you talking about? That plate of spaghetti is a tight band? You must be thinking of Motley Crue or Aerosmith.

jorgekafkazar

Jimbo says: “I wouldn’t be surprised if the 90 climate models…increases to 150….”
I was thinking the same thing. We need more climate models. Then one of them might accidentally get the fit right.

jorgekafkazar

“Projectons” are morons who project their inner mental problems on everybody who disagrees with them.

Jquip says:
“By eyeball it seems you could model the multi-model ensemble reasonably well by simply drawing a pair of lines between {1992, 1993} and 1998”
I'm quite sure that's what they did. Hansen had his epiphany, they looked at the temperature over a few years and declared it would rise at that rate forever. It's as if it rained two inches one day, so you just assume it will continue at that rate for 100 years.

I think this post and Roy Spencer's could use a little more metadata. It's not clear to me when these model runs were made. Were they tuned to that pre-98 "W" or did they predict it?

Jeff F.

I do my best at following all this; everyone needs to be absolutely sure on the data/statements. Why are there only two lines under the observed data when the graph states three? Why not 97.8 percent?