90 climate model projectons versus reality

Reality wins, it seems. Dr Roy Spencer writes:

As seen in the following graphic, over the period of the satellite record (1979-2012), both the surface and satellite observations produce linear temperature trends which are below 87 of the 90 climate models used in the comparison.

[Figure: CMIP5 90-model global surface temperature projections versus observations]

more here: http://www.drroyspencer.com/2013/10/maybe-that-ipcc-95-certainty-was-correct-after-all/

October 14, 2013 2:10 pm

Clearly that simply means that the satellites and surface records need further adjustment to conform to reality.

Gary
October 14, 2013 2:13 pm

Is this the 97% consensus? Sorry, couldn’t resist. Another juicy jewel to provide to my growing circle of layman skeptics.

JustAnotherPoster
October 14, 2013 2:15 pm

As RJB would state, the really clever work now would be to bin all the models that are failures and investigate why the two at the bottom have matched reality and what they have assumed compared with the failed ones.
That’s the potential published-paper work: what have the model winners assumed that the model failures haven’t?

steven
October 14, 2013 2:22 pm

The typo in the title gave me an idea. “Projectons” must be the hypothetical quantum particle simulated in the models to give them a little lift.

Jeff Mitchell
October 14, 2013 2:26 pm

I worry that at some point they start fudging the measuring devices. I like the new “Like this” feature.

Jeff Mitchell
October 14, 2013 2:29 pm

Does the “Like this” work? I didn’t see any evidence that my clicks got counted. I refreshed after clicking them, and nothing changed.

October 14, 2013 2:29 pm

Jumped over to Dr Spencer’s blog and the top ad was for “Death Records online!” Maybe that was a Freudian coincidence??

Bryan A
October 14, 2013 2:41 pm

It appears that 97% of climate models CAN in fact be wrong.
Nah
It indicates that the current measurements still fall within Modeled Ranges
Nah
It really indicates that 97% of climate models really are wrong

Resourceguy
October 14, 2013 2:42 pm

How many of the models are using the Mann math term in them?

Zeke
October 14, 2013 2:53 pm

Are you alright Sun?

Latitude
October 14, 2013 2:56 pm

LOL…
but I only see two…not three….and those two were invalidated around 1998

Mark X
October 14, 2013 3:09 pm

Notice that the model runs do indicate pauses and short cooling periods. They don’t agree on when these occur after about 2000. That’s why in the average you don’t see pauses. But in the individual runs you do.
The models likely do not include changes in solar irradiation, volcanic activity, China’s increase in SO2 emissions, the reduction in CFC emissions or the Great Recession. That we don’t have good ways of modeling the El Niño/La Niña cycle is unfortunate. But if AGW is false, shouldn’t we have had some cooling periods this decade like in the 90s? Where did the cooling go? Isn’t it a travesty that you cannot find the cooling? It cannot be hiding in the oceans because they have still been warming.

Tilo Reber
October 14, 2013 3:14 pm

Just by eyeball, it looks like the lowest trending model is running about 1 °C per century. So it actually has a chance of being close. The highest trending models are already falsified. But they are kept to increase the average.

Alan
October 14, 2013 3:23 pm

This just makes me laugh, especially when this article appeared in the news today.
http://news.ca.msn.com/local/calgary/world-temperatures-go-off-the-chart-by-2047-study-says-3

Mike Smith
October 14, 2013 3:30 pm

Yeah, and 97% of climate-related public policy decisions are still being based on the 97% consensus which, according to the best hard data we have, is 97% wrong.
This is 97% messed up!

Eyvind Dk
October 14, 2013 3:40 pm

Maybe another came up with that idea before Dr. Roy Spencer 😉

magicjava
October 14, 2013 3:42 pm

Just a quick question. Why are temperatures given as a 5 year mean? Why not plot the actual temperature?

Jimbo
October 14, 2013 3:43 pm

There are 3 words that Warmists hate to see in the same paragraph. These 3 words can cause intolerable mental conflict.
*Projections, *observations, *comparisons.
At one of the IPCC insiders’ meetings they knew full well that there was a problem. Some bright spark must have suggested that they simply pluck a new confidence number out of thin air, otherwise they would be doomed (and shamed). Desperate times call for desperate measures. Just look at the graph. You won’t see this kind of behavior in any other science.

TRM
October 14, 2013 3:51 pm

“JustAnotherPoster says: October 14, 2013 at 2:15 pm
The really clever work now would be to bin all the models that are failures and investigate why the two at the bottom have matched reality and what they have assumed compared with the failed ones.”
While the bottom ones do model it better, I don’t see how they have matched it. Their long-term 1 °C per century appears to be correct but still falls short.
The one prediction from 1979 appears to be most accurate in that it called the end of the cold in the mid-80s, warmth until 2000 and then a flatline ….. you don’t want to know what they say is next. Trust me.
http://wattsupwiththat.com/2011/06/01/old-prediction-may-fit-the-present-pattern/
Get yer wool socks out folks. Still the most accurate prediction over the longest period. I wonder if either scientist is still active in the field or retired? Would be a great interview.

October 14, 2013 4:02 pm

I would say 100% are wrong. The fact that 2 or 3 haven’t overshot the temperature doesn’t make them right. They aren’t following the observed temperature. It’s clear that none of them simulate the actual climate.

Jimbo
October 14, 2013 4:05 pm

My simple problem with these dozens of climate models is this: they are used to make so many projections that even when they fail, Warmists point to the lower bounds to claim they came very close and are more accurate than previously thought. This is what I am seeing in the graph. Too many throws of the dice. If there is a solid understanding of how the climate works and the physical mechanisms that cause it to change, then why so many models? Why not 9 at most?

The Guardian – Posted by Dana Nuccitelli, John Abraham, Scott Mandia – Monday 23 September 2013
“One of the most important concepts to understand when trying to grasp how the Earth’s climate works, is that every climate change must have a physical cause. This principle was the basis of our new paper, Nuccitelli et al. (2013). Over the past century, climate scientists have developed a solid understanding about how the climate works and the physical mechanisms that cause it to change. By building that knowledge into complex climate models, scientists have been able to accurately reproduce past observed global surface temperature changes. “

PS Is Dana still being paid by the fossil fuel company Tetra Tech? I hear Tetra Tech acquired a fracking outfit in the last few years.

Jquip
October 14, 2013 4:09 pm

shenanigans24 “They aren’t following the observed temperature. ”
By eyeball it seems you could model the multi-model ensemble reasonably well by simply drawing a pair of lines between {1992, 1993} and 1998.

October 14, 2013 4:14 pm

Is Watts Up With That going to talk about Dr. Nicola Scafetta’s climate model that he detailed in a recent paper in Earth-Science Reviews? Jo Nova posted on it, as did Tallbloke, but I linked his version as Jo unfortunately made an initial error in describing it which makes her post harder to read.
Scafetta writes:

“Herein I propose a semi-empirical climate model made of six specific astronomical oscillations as constructors of the natural climate variability spanning from the decadal to the millennial scales plus a 50% attenuated radiative warming component deduced from the GCM mean simulation as a measure of the anthropogenic and volcano contributions to climatic changes.”

It looks interesting. Perhaps it was posted about earlier and I missed it, or there are objections to it that make it not worth a post? If so, I’d be interested in those.

sophocles
October 14, 2013 4:16 pm

steven says:
October 14, 2013 at 2:22 pm
The typo in the title gave me an idea. “Projectons” must be the hypothetical quantum particle simulated in the models to give them a little lift.
========================================================
ROTFL. Love it!
We know they have “spin.” What other properties can be deduced from the observed behaviour?
Spin orientation (up? down? around?)
Charm?
Color?
(In)stability?
Half life?

Jimbo
October 14, 2013 4:17 pm

I wouldn’t be surprised if the number of climate models in use increases from 90 to 150. It will give them greater scope and rat holes. It may not be 150, but I ‘project’ that the number will increase due to the necessity of keeping this charade going even though the jig is up.

richardscourtney
October 14, 2013 4:21 pm

Friends:
Some here seem to think rejection of the models which are clearly wrong would leave models which are right or have some property which provides forecasting skill and, therefore, merits investigation. Not so. To understand why this idea is an error google for Texas Sharp Shooter fallacy.
Models which have failed to correctly forecast are observed to be inadequate at forecasting. Those two (or three?) which remain are not known to be able to forecast the future from now. One or more of them may be able to do that but it cannot be known if this is true.
Richard

Latitude
October 14, 2013 4:26 pm

Jimbo says:
October 14, 2013 at 4:05 pm
My simple problem with these dozens of climate models is this:
===
Jim, my problem is just looking at them…you know they are garbage
Prior to ~1998..the hindcast…they show Judith’s waves clearly…
After ~1998…all they predict is a linear line going up to infinity
…anyone should know you’re not going to have ups and downs forever…and then a straight line
and to top it all off….a straight line at the same time CO2 has the least effect
and here we are…moving up and down…doing exactly what they can’t predict

Gcapologist
October 14, 2013 4:39 pm

In my biz we’d call this a high bias …… To protect the public health.
Can anyone tell me why a half a degree is harmful to the public health? (That question is largely rhetorical.)
The important question …. Why are the models so wrong?

Latitude
October 14, 2013 4:42 pm

Why are the models so wrong?
===
because we’re really not that smart

October 14, 2013 4:47 pm

I’m still confused why we care about HADCRUT. I thought WUWT demonstrated that half the “warming” came from bad ground stations and other fudge factors. I also thought another article on WUWT demonstrated that HADCRUT takes advantage of some bad mathematics to suppress temperatures earlier than 1960, especially the high temps of the 1940s.

Jimbo
October 14, 2013 4:49 pm

Guardian – 23 September 2013
Dana Nuccitelli [Tetra Tech & Co.]
The problem for climate contrarians is that our existing climate framework is very solid. We understand the fundamentals about how the climate operates well enough to accurately reproduce the observed changes, based on solid, well-understood physical mechanisms like the increased greenhouse effect.
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2013/sep/23/climate-science-magical-thinking-debunked-by-science#comment-27256471

I see “observed changes” – would that be past or present?

richardscourtney
October 14, 2013 4:50 pm

Gcapologist:
At October 14, 2013 at 4:39 pm you ask

The important question …. Why are the models so wrong?

I answer, because they do not model the climate system of the real Earth.
To explain that answer it seems I need to post the following yet again, and I ask all who have seen it to skip it and to forgive my posting it yet again.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1.
the assumed degree of forcings resulting from human activity that produce warming
and
2.
the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS, ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’, Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, ‘Twentieth century climate model response and climate sensitivity’, GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

Figure 2. Total anthropogenic forcing (W/m²) versus aerosol forcing (W/m²) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m² to 2.02 W/m²
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m² to -0.60 W/m².
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
Richard
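
To make the compensation Kiehl describes concrete, here is a toy back-of-envelope sketch (the 0.7 K of warming, the 2.6 W/m² of non-aerosol forcing and the sensitivity values are assumptions chosen for illustration, not numbers from Kiehl’s or Courtney’s papers):

```python
# Toy sketch: if 20th-century warming is roughly dT ~ lambda_eff * F_net, then models
# with very different effective sensitivities can all "hindcast" the same observed
# warming, provided each pairs its sensitivity with a suitable aerosol forcing.

OBSERVED_DT = 0.7   # K, approximate 20th-century warming (assumed for illustration)
F_GHG = 2.6         # W/m^2, assumed non-aerosol anthropogenic forcing (illustration)

for lam in (0.3, 0.5, 0.8):                     # K per (W/m^2), illustrative sensitivities
    f_net_required = OBSERVED_DT / lam          # net forcing needed to match observations
    f_aerosol = f_net_required - F_GHG          # the aerosol 'fiddle factor' that delivers it
    print(f"sensitivity {lam:.1f} K/(W/m^2): net forcing {f_net_required:.2f} W/m^2, "
          f"aerosol forcing {f_aerosol:+.2f} W/m^2")
```

Each hypothetical ‘model’ reproduces the same warming, but only by pairing a higher sensitivity with a stronger assumed aerosol cooling, which is the trade-off Kiehl’s Figure 2 displays for the real GCMs.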

October 14, 2013 4:54 pm

“We understand the fundamentals about how the climate operates well enough to accurately reproduce the observed changes, based on solid, well-understood physical mechanisms like the increased greenhouse effect.
— Dana Nuccitelli

Sure, they understand that — a grade 11 student could, and in pretty great detail — the problem is everything else like the water cycle and the astrophysical variables.
If their best answer to the pause is, “The missing warmth must be in the ocean!” there’s a lot they don’t understand.

Theo Goodwin
October 14, 2013 4:57 pm

richardscourtney says:
October 14, 2013 at 4:50 pm
You are one fine educator. Thanks again for your valuable work.

Latitude
October 14, 2013 5:02 pm

Richard, I see it as simply not willing to admit that CO2 isn’t as powerful as they want it to be….
covering it up and justifying it with “aerosols” etc….
That way they can still blame it all on CO2

Zeke
October 14, 2013 5:08 pm

“Projectons vs Reality”
The Fifth Sequel

Gcapologist
October 14, 2013 5:18 pm

Richardscourtney
I would agree. The models do not adequately replicate the ways the earth’s systems work.
I doubt that co2 sensitivity is constant, and I’m sure aerosol formation (hence forcing) is.
When the powers that be rely on incomplete models, how do we advance the conversation?

October 14, 2013 5:26 pm

Reblogged this on Power To The People and commented:
Wonder if David Suzuki, Michael Mann, Tom Steyer, Al Gore or President Obama will ever admit that when reality does not agree with their Catastrophic Climate Change Theory, reality is not what’s false. Hat tip: Vahrenholt

Gcapologist
October 14, 2013 5:34 pm

Typo? I’m sure aerosol formation is not constant – so forcing shouldn’t be.

John Whitman
October 14, 2013 5:39 pm

Roy W. Spencer, Ph. D. wrote,
“. . .
So, about 95% (actually, 96.7%) of the climate models warm faster than the observations. While they said they were 95% certain that most of the warming since the 1950s was due to human greenhouse gas emissions, what they meant to say was that they are 95% sure their climate models are warming too much.
Honest mistake. Don’t you think? Maybe?”

– – – – – – – –
Roy,
Thanks for your droll humor. It cheers the heart.
If their models came with a money back guarantee, the modelers would be in the red, honestly. No maybe about it. :)
John

richardscourtney
October 14, 2013 5:45 pm

Latitude:
Thankyou for your reply to me at October 14, 2013 at 5:02 pm which says in total

Richard, I see it as simply not willing to admit that CO2 isn’t as powerful as they want it to be….
covering it up and justifying it with “aerosols” etc….
That way they can still blame it all on CO2

I understand your suggestion but I disagree. My understanding of what has happened is as follows.
The models were each constructed to represent the understandings of climate which were possessed by each modelling team that produced a model.
Firstly, they assumed that water vapour (i.e. the main greenhouse gas) only changed its concentration in the atmosphere as a feedback on temperature. Greenhouse gas (GHG) forcing thus was dominated by other GHGs of which CO2 is the major one (being responsible for about half of greenhouse forcing) and – for modeling simplicity – their forcing was aggregated into a single forcing value of CO2 equivalence.
Then the modelers parametrised (i.e. applied their best guesses for) effects which were not adequately understood and/or which the model’s resolution was insufficient to model (e.g. clouds, storms, etc.).
The parametrisations varied between the models because the modeling teams each had different opinions on the parametrisation values and methods to apply in their models.
But each model ‘ran hot’; see my post you are answering
http://wattsupwiththat.com/2013/10/14/90-climate-model-projectons-versus-reality/#comment-1447979
This (as my post explained) was compensated by inclusion of a completely arbitrary input of aerosol cooling effect in each model. However, the rise in global temperature was not uniform over the twentieth century; e.g. global temperature did not rise between ~1940 and ~1970. The degree of ‘ran hot’ in each model was an output so could not be adjusted. But a balance between the warming effect of GHGs (i.e. ECS) and the cooling effect of aerosols could be adjusted, so the modelers were able to get a ‘best fit’ for each model. And this is why each model has a unique value of ECS and effect of aerosol cooling.
Of course, they could have admitted the ‘ran hot’ was evidence that a model was inadequate and abandoned the model, but much time, money and effort had been expended on each model so this was not a politically available option. Or they could have altered parametrisations in each model, and to some degree they did, but the adjustment of ECS and aerosol cooling was the simplest option and each modeling team adopted it.
Hence, each model is a curve fitting exercise and, therefore, it is not surprising that Willis Eschenbach discovered he could emulate the models’ outputs with a curve fitting exercise.
In summation, I agree with you that failure to reject the models is politically driven. However, I don’t agree that it was so “they can still blame it all on CO2”: that was merely a convenient (for some) result of the failure to reject the models. And that is my understanding of how we have ended up with dozens of models which are all different but not one of which emulates the climate system of the real Earth.
Richard
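
For readers who want to see the curve-fitting point in code, here is a minimal sketch using a one-box energy-balance toy with made-up forcing histories. It illustrates the degeneracy described above (several quite different sensitivities each get an aerosol scaling that fits the same temperature history about equally well); it is not a reproduction of any actual GCM tuning.

```python
import numpy as np

# Toy one-box energy balance:  C * dT/dt = F(t) - T / lam  (illustration only)

t = np.arange(0, 101)                      # years since 1900
f_ghg = 4e-6 * t**3                        # made-up, slowly accelerating GHG forcing (W/m^2)
f_aer_shape = -2e-6 * t**3                 # made-up aerosol forcing shape (W/m^2)

def hindcast(lam, a, heat_cap=8.0):
    """Forward-Euler integration of the one-box model with aerosol scale factor a."""
    T, out = 0.0, []
    for f in f_ghg + a * f_aer_shape:
        T += (f - T / lam) / heat_cap
        out.append(T)
    return np.array(out)

obs = hindcast(lam=0.7, a=1.0)             # synthetic "observed" record from one pair

for lam in (0.5, 0.7, 1.0, 1.5):           # very different sensitivities (K per W/m^2)
    scales = np.linspace(-2.0, 4.0, 601)
    errs = [np.mean((hindcast(lam, a) - obs) ** 2) for a in scales]
    best = scales[int(np.argmin(errs))]
    print(f"lam={lam:.1f}: best-fit aerosol scale {best:+.2f}, "
          f"rms mismatch {np.sqrt(min(errs)):.3f} K")
```

Every sensitivity achieves a respectable fit once its own aerosol scaling is chosen, which is why a good hindcast by itself says little about which sensitivity, if any, is right.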

richardscourtney
October 14, 2013 5:50 pm

Gcapologist:
Thankyou for your reply to me at October 14, 2013 at 5:18 pm. Unfortunately it is nearly 2 am here and I need to retire for the night. Please be assured that I have not ignored your post which I shall answer in the morning and I hope you will forgive me for this.
Richard

Werner Brozek
October 14, 2013 6:11 pm

By comparing the models to HadCRUT4 and UAH, you picked some of the worst data sets to prove your point. RSS and HadCRUT3 would have worked better. See the 4 graphs below that I have zeroed so they all start at the same point in around 1985. Note how they diverge at the present time.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1979/mean:60/offset:-0.01/plot/hadcrut3vgl/from:1979/mean:60/plot/rss/from:1979/mean:60/offset:0.18/plot/uah/from:1979/mean:60/offset:0.28
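
For anyone who wants to reproduce that kind of comparison offline, here is a rough sketch of the mean:60 smoothing and the re-baselining (loading of the actual HadCRUT/RSS/UAH series is left out; the function names and the 1983-1987 reference window are assumptions for the example):

```python
import numpy as np

def five_year_mean(anomalies, window=60):
    """60-month running mean; each output value is the mean of the preceding 60 months
    (the equivalent of the mean:60 option in the woodfortrees plots)."""
    return np.convolve(anomalies, np.ones(window) / window, mode="valid")

def align_to_baseline(dates, anomalies, ref=(1983.0, 1987.0), window=60):
    """Smooth a monthly series and offset it so the smoothed values average zero over
    a common reference window (~1985), so several datasets start from the same point."""
    smoothed = five_year_mean(anomalies, window)
    trimmed = dates[window - 1:]                        # stamp each smoothed value with
    mask = (trimmed >= ref[0]) & (trimmed < ref[1])     # the last month of its window
    return trimmed, smoothed - smoothed[mask].mean()

# Illustrative use with made-up monthly data (real use would load the datasets above):
dates = 1979 + np.arange(12 * 34) / 12.0
fake = 0.015 * (dates - 1979) + 0.1 * np.sin(2 * np.pi * dates / 3.6)
d, a = align_to_baseline(dates, fake)
print(d[0], round(float(a[0]), 3))
```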

Layman
October 14, 2013 6:43 pm

By the models’ logic, aerosols are the answer to combating AGW. Cutting CO2 emissions would cost trillions and take at best a century to show any effect, while increasing aerosol emissions has an immediate effect and costs nothing to promote except a relaxation of regulations.
(jk)

jorgekafkazar
October 14, 2013 7:14 pm

Lewis P Buckingham says: “It is clear that the hypothetical models have a governor, as they all fit within a fairly tight band.”
What are you talking about? That plate of spaghetti is a tight band? You must be thinking of Motley Crue or Aerosmith.

jorgekafkazar
October 14, 2013 7:16 pm

Jimbo says: “I wouldn’t be surprised if the 90 climate models…increases to 150….”
I was thinking the same thing. We need more climate models. Then one of them might accidentally get the fit right.

jorgekafkazar
October 14, 2013 7:18 pm

“Projectons” are morons who project their inner mental problems on everybody who disagrees with them.

October 14, 2013 7:24 pm

Jquip says:
“By eyeball it seems you could model the multi-model ensemble reasonably well by simply drawing a pair of lines between {1992, 1993} and 1998”
I’m quite sure that’s what they did. Hansen had his epiphany, they looked at the temperature over a few years and declared it would rise at that rate forever. It’s like if it rained two inches one day so you just assume it will continue at that rate for 100 years.

October 14, 2013 7:45 pm

I think this post and Roy Spencer’s could use a little more metadata. It’s not clear to me when these model runs were made. Were they tuned to that pre-98 “W” or did they predict it?

Jeff F.
October 14, 2013 8:11 pm

I do my best at following all this; everyone needs to be absolutely sure on the data/statements. Why are there only two lines under the observed data when the graph states three; why not 97.8 percent?

Hoser
October 14, 2013 8:36 pm

sophocles says:
October 14, 2013 at 4:16 pm

This projecton possesses 3 different types of quirks. Quirks have several states that define them. As others have noted, the projecton in question has very strong odd spin. This remarkable spin makes it appear to have a TRUTH quirk, but skillful examination reveals it clearly possesses the SHAM quirk. Further evidence indicates the projecton has an excited IN quirk. However, this excited state is expected to decay soon, leaving the projecton with an OUT quirk after emitting an amazing amount of electromagnetic energy. Finally, its third quirk rapidly alternates state between FLIM and FLAM, effectively remaining in a hybrid dual state. These quirks clearly place this projecton among the Bogons, and this one may in fact be the ubiquitous Gore’s Bogon, a very massive particle suspected to mediate several processes of decay, release of energy, and increase in entropy. In effect, Bogons lead to the depletion of stored forms of energy, producing heat with little work. Bogons also make it difficult to replenish stored energy. Their effects are very costly and produce instability and a progressive increase in the mass of governmentium.

Jquip
October 14, 2013 9:33 pm

Canman: “Were they tuned to that pre-98 “W” or did they predict it?”
I’d assume they were tuned on it. Everything thereafter being what happens when you get off the far end of a Taylor series.
Assume that’s not the case: then the models are actually damn good, within various limitations. But necessarily they require real measurables plugged in for them to reach damn good, or they wouldn’t fit nicely back then and go ‘derp’ in the recent now and beyond. As such, they cannot predict climate, not because they are broken as such, but because they rely on an unpredictable measurable. For example, see the recent postings about failing to predict the AMO.
One case leaves hope open. Both cases toss out any idea that there’s predictive validity in them.

Jquip
October 14, 2013 9:39 pm

Canman: Forgot to mention. As far as I know the models are all open-ended equations rather than a notion of deflection and tension from a baseline, e.g. grey body estimates from insolation. So in that latter case with unpredictable measurables, there’s every reason to expect the values to diverge to an infinity of either sign rather than to settle down towards a physically known limit or baseline.

george e. smith
October 14, 2013 9:54 pm

Well not to worry Dr Roy, it seems that past numbers are always too high, and GISS and others seem to like to “adjust” the past observed and recorded data downwards, to keep it in general agreement with present day model requirements.
So pay no attention to all that high data observed in the year 2028, because by 2078, all that bloated stuff will be suitably corrected down to the then-prevailing teraflop computer models.

george e. smith
October 14, 2013 10:09 pm

I don’t winkle, titter, mutter, tinkle, bleat, inkle, or dweet, so no need to keep repeating any of those, for me to click on.
I have discovered that more and more “sites” no longer have e-mail response capabilities, but have little blue birds, and other juvenilia, for people who can’t write a complete English language sentence.
I’m quite a fan of atto-second phenomena; which is about the amount of time, I’m prepared to devote to web sites that lack open e-mail response capabilities. I’m actually an enthusiastic student of archeo-physics, which is the study of all the really interesting things, that happened in the first 10^-43 seconds after the big bang. After that, it gets kind of boring, which is why I like to have e-mail to keep me busy.

J Martin
October 14, 2013 11:56 pm

@ Werner. You make a good point. I would have thought that Hadcrut 4 is suspect and its use a poor choice. Perhaps Roy Spencer is in a position where he needs to make some concessions to whatever the prevailing political correctness is. Perhaps he hasn’t compared hadcrut 3 and 4. Perhaps he feels that he can better make his point against the warmists by using their own (flawed) data sets.

Chris Schoneveld
October 15, 2013 12:18 am

So 3 out of 90 models did a good job. Interesting then to find out what exactly made them stand out and what they did right (or not, in case they were right for the wrong reasons).

Dermot O'Logical
October 15, 2013 12:26 am

I can’t see The Pause in the observational data. Why?

Chris Schoneveld
October 15, 2013 12:46 am

Werner, by putting the sampling at 60, you have created graphs that are unrecognisable. What is your justification?

Chris Schoneveld
October 15, 2013 12:51 am

Ah, I see now, we are talking about a 5 year running mean. Sorry for asking the obvious.

richardscourtney
October 15, 2013 12:53 am

Gcapologist:
This is the reply I promised to provide to your post addressed to me at October 14, 2013 at 5:18 pm. It said in total

Richardscourtney
I would agree. The models do not adequately replicate the ways the earth’s systems work.
I doubt that co2 sensitivity is constant, and I’m sure aerosol formation (hence forcing) is.
When the powers that be rely on incomplete models, how do we advance the conversation?

And I note that at October 14, 2013 at 5:34 pm you provided this addendum

Typo? I’m sure aerosol formation is not constant – so forcing shouldn’t be.

Taking your latter point first, we cannot “advance the conversation” with “the powers that be” because they are fulfilling an agenda and are only conversing with those they employ to justify the agenda. I explained this in another WUWT thread, and this link jumps to that explanation
http://wattsupwiththat.com/2013/10/12/tail-wagging-the-dog-ipcc-to-rework-ar5-to-be-consistent-with-the-spm/#comment-1445687
Hence, if we want to influence “the powers that be” then we need to mobilise public opinion behind the truth of the climate models; i.e.
the climate models have the same demonstrated forecasting skill as the casting of chicken bones for determining the future.
The amounts of CO2 and aerosols in the atmosphere vary from year to year. A climate model provides an output for a specific time by being input with the start temperatures and the amounts of CO2 and aerosols in the atmosphere at that time. And the model is run for time increments until a series of runs provides ‘projections’ through the present and for future times. Aerosols wash out of the atmosphere within days so their concentrations are input as being greatest near the sources of their emissions. CO2 is modeled as being ‘well mixed’ in the atmosphere.
It is assumed that the forcing potentials of CO2 and aerosols are constants. Indeed, as my first post to you explained, the models are ‘tuned’ using these assumptions and would ‘run hot’ without that tuning.
It is an interesting question as to whether those forcing potentials are constants in reality. Of course, the radiative properties of their molecules are constants but that does not mean their forcing potentials are constants in the real atmosphere where responses to radiative changes (i.e. feedbacks) may vary depending on circumstances.
I hope these answers are clear and what you wanted from me.
Richard

October 15, 2013 1:28 am

The Beatles knew a thing or two about GW (TM)
http://youtu.be/Bj1AesMfIf8

Manfred
October 15, 2013 1:29 am

And that despite the fact that, for most of the time, there was additional warming from the PDO warm half cycle.
Around 2030, after the cold half cycle, temperatures may then end up well below 1/3 of model predictions, equating to a sensitivity of below 1 deg.

richardscourtney
October 15, 2013 2:23 am

Chris Schoneveld:
Your post at October 15, 2013 at 12:18 am says

So 3 out of 90 models did a good job. Interesting then to find out what exactly made them stand out and what they did right (or not, in case they were right for the wrong reasons).

Oh dear! So many mistaken assumptions in so few words.
Clearly, you did not read my above post at October 14, 2013 at 4:21 pm which said

Some here seem to think rejection of the models which are clearly wrong would leave models which are right or have some property which provides forecasting skill and, therefore, merits investigation. Not so. To understand why this idea is an error google for Texas Sharp Shooter fallacy.
Models which have failed to correctly forecast are observed to be inadequate at forecasting. Those two (or three?) which remain are not known to be able to forecast the future from now. One or more of them may be able to do that but it cannot be known if this is true.

So, I spell out the matter as follows.
There are 90 models which are each different so it would be surprising if some did not provide an output something like the reality of the past decade.
But that does NOT indicate that the “3 out of 90 models did a good job”.
And it does NOT indicate that there was anything “they did right”.
And it does NOT indicate “they were right” for any reason (be it right or wrong).
Therefore, the fact that the output of those 3 out of 90 models coincidentally approximated reality over the last decade does NOT suggest they are likely to approximate reality over the next decade.
This is a link to the Wikipedia explanation of the Texas Sharp Shooter fallacy
http://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy
Rejecting all except 3 of the models is drawing the target around the remaining 3 models after the shot was fired.
Richard

Chris Schoneveld
October 15, 2013 3:44 am

Richard,
No I didn’t read your post. They do often make a lot of sense but now and then they have a strong element of pettifoggery, like in this case.
You write:
“But that does NOT indicate that the “3 out of 90 models did a good job”.
And it does NOT indicate that there was anything “they did right”.
And it does NOT indicate “they were right” for any reason (be it right or wrong).”
You make a big deal of a cursory question (you are a bit of a self-righteous p…k, I have noted, as if you are the moderator or the CEO on this site. I am on my guard when your posts start with “Friends”; I am not your friend). I am not saying or indicating that what they do is intrinsically right. It was not for nothing that I added that they may well be right (meaning that the outcome is more or less in line with observations, because that’s what I meant by “right” or “a good job”) for the wrong reasons.
I am just curious to know which input parameters have made these 3 models end up close to what was observed in terms of surface temperatures; indeed, in what way do they differ from the other 87 definitely wrong models. Just handwaving and assuming, as you do, that these models “coincidentally approximated reality” does not satisfy my curiosity. It is as simple as that.

October 15, 2013 4:27 am

The graph itself is an interesting image, sort of like a rising column of hot air…

Chris Schoneveld
October 15, 2013 4:40 am

Richardscourtney,
No I didn’t read your post. They do often make a lot of sense but now and then they have a strong element of pettifoggery, like in this case.
You write:
“But that does NOT indicate that the “3 out of 90 models did a good job”.
And it does NOT indicate that there was anything “they did right”.
And it does NOT indicate “they were right” for any reason (be it right or wrong).”
You make a big deal of a cursory question (you have a tendency to be self-righteous, I have noted. I am on my guard when your posts start with the pretentious “Friends”. Yes guru, we are listening to your wise words). I am not saying or indicating that what they do is intrinsically right. It was not for nothing that I added that they may well be right (meaning that the outcome is more or less in line with observations, because that’s what I meant by “right” or “a good job”) for the wrong reasons.
I am just curious to know which input parameters have made these 3 models end up close to what was observed in terms of surface temperatures; indeed, in what way do they differ from the other 87 definitely wrong models. Just handwaving and assuming, as you do, that these models “coincidentally approximated reality” does not satisfy my curiosity. It is as simple as that.

David L.
October 15, 2013 4:45 am

87 is 96.6% (97%) of 90. Why does 97% keep popping up in the AGW hoax?

John Whitman
October 15, 2013 5:19 am

Dr Roy Spencer wrote,
“Reality wins, it seems.”

– – – – – – – –
Roy,
Reality.
What of the intellectual leadership of the IPCC Bureau’s ideology whose concept of science allows the models to be counter-observational and still be valid? I suggest some at first just ghostly turned a whiter shade of pale***.
*** Procol Harum
John

October 15, 2013 5:34 am

http://wattsupwiththat.com/2013/10/08/the-taxonomy-of-climate-opinion/#comment-1441009
Steven Mosher says:
“Model answers fall within the range established by observations.”
Steve, did you really say that?
I suggest, with due respect, that the climate models cited by the IPCC are crap (please see Engineering Handbook for technical definition of “crap”).
Regards, Allan
Here is the evidence:
http://wattsupwiththat.com/2013/09/28/models-fail-land-versus-sea-surface-warming-rates/#comment-1432696

Fred
October 15, 2013 6:56 am

Could be worse. The Global Warmistas could have gone into medical research, in which case there would be mountains of dead people who received incorrect treatments based on their theoretical understanding of science.
All they have really done is torqued public policies and squandered something just south of $2 trillion of public money on useless green schemes, scams, cons and gifts.
That money would have purchased a lot of research for a cancer cure, better schools, upgrading infrastructure and helping developing nations. You know, useful stuff.
But nooooooooooooo, the Warmistas decided they were smarter than everyone else, smarter than Mother Nature and that their kindergarten level theories were infallible.
Quite the legacy they will leave, a legacy of blind adherence to their quasi-religious Eco Greenie ideas, of eliminating conflicting data, of operating personal smear campaigns to cover up their scientific malfeasance.
History will be very unkind to The Team et al. Mikey Mann could be the poster boy for courses taught to freshman science students about what not to do, and about being aware of pride getting between the scientific process and personal political and religious beliefs.
I would suggest the Warmista problem be summarized in the future as Mann Falling Sky Syndrome.

October 15, 2013 7:12 am

richardscourtney says:
October 15, 2013 at 2:23 am
Therefore, the fact that the output of those 3 out of 90 models coincidentally approximated reality over the last decade does NOT suggest they are likely to approximate reality over the next decade.
============
The results do indicate, however, that the 87 other models should be rejected. That these 3 remaining models were right by accident is quite possible, given that there were 90 “guesses” to start with. You would expect some to be right just by accident.
However, one cannot assume they were simply right by accident, even if it is likely. It would be of interest to know the assumptions in these models – to see how they were different – if at all – from the other models.
The most telling result of the models is that a single model, if run multiple times with very small changes to the inputs/assumptions, gives very different results. Each model itself generates spaghetti, and what we are seeing is simply the average for each of the 90 models.
However, you cannot average all the possible throws of a pair of dice and arrive at what will actually happen. You get a number between 2 and 12. The average, 7, might be the most likely result, but when you add up all the possibilities, 7 is less likely than something other than 7. Of the 36 possible results, only 6 will yield a 7 (the average). 30 other times you get something other than the average.
This is the fallacy of modelling the future. Contrary to Einstein’s famous quote, God does play dice with the universe. Quantum mechanics has established this at a fundamental level, and until we develop some new theory to replace quantum mechanics, this limits our ability to predict the future at a fundamental level.
Climate modellers are trying to tell us that quantum mechanics does not apply to climate, that the future is given by an average, but this is not what the models themselves are saying. The models show quite clearly that for a given scenario, a near infinite number of futures are possible. And while the average might be the most likely, something other than the average is much more likely.
Hawking covers this in a lecture:
Thus it seems Einstein was doubly wrong when he said, God does not play dice. Not only does God definitely play dice, but He sometimes confuses us by throwing them where they can’t be seen….Thus, the future of the universe is not completely determined by the laws of science, and its present state, as Laplace thought. God still has a few tricks up his sleeve.
http://www.hawking.org.uk/does-god-play-dice.html
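
The dice arithmetic in the comment above is easy to check with a two-line enumeration:

```python
from itertools import product

rolls = [a + b for a, b in product(range(1, 7), repeat=2)]
print(rolls.count(7), "of", len(rolls), "outcomes give the average of 7;",
      len(rolls) - rolls.count(7), "give something else")
# -> 6 of 36 outcomes give the average of 7; 30 give something else
```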

passingstatistican
October 15, 2013 7:15 am

richardscourtney
Never heard of the Texas Sharpshooter fallacy, but you are quite right to discount the wider significance of 2-3 reasonable fits out of 90 tries. It’s a familiar problem which arises in many applications of statistics, sometimes called ‘data-mining’ or other names. Researchers carry out a number of experiments or whatever, looking for the result they want. When they succeed on, e.g., the 90th try, they attach the same statistical significance to the outcome as if it had come from a single experiment. Of course, the probability of at least one head in 90 spins of a coin is way greater than the probability of a head on one spin. So success on 2-3 out of 90 is most unlikely to be of any significance without a lot of other evidence.

steveta_uk
October 15, 2013 7:26 am

Chris Schoneveld, it seems to me that two or three models have not been shown by the data to be wrong. Whatever some sharpshooter in Texas might think, discarding the models that have been shown to be wrong, and examining the remaining models to see if they are also wrong, seems simply an obvious thing to do.
Throwing out all models because most have been shown to be wrong is exactly like throwing the toys out of the pram.

richardscourtney
October 15, 2013 7:41 am

Chris Schoneveld:
I did read your post at October 15, 2013 at 4:40 am.
It says you don’t have a clue what you are talking about and try to hide it with ad hom. and bluster.
To quote a narcissist, it is as simple as that.
Richard

richardscourtney
October 15, 2013 7:53 am

ferd berple and steveta_uk:
re your posts at October 15, 2013 at 7:12 am and October 15, 2013 at 7:26 am, respectively.
Yes, the 87 models that did not emulate the last decade are known to be wrong. But it does not follow from this that the other three models have any merit. If you really think examination of the models will reveal why they have and have not matched the last decade then there is no more reason to retain the three than to throw out the 87. Either the examination may reveal something so keep and examine them all, or it won’t so junk them all.
Please read the post from passingstatistican at October 15, 2013 at 7:15 am. It explains the matter and this link jumps to it
http://wattsupwiththat.com/2013/10/14/90-climate-model-projectons-versus-reality/#comment-1448619
Richard

steveta_uk
October 15, 2013 7:58 am

Richard, I don’t think either Ferd or I suggested that the 2 or 3 models are right. Simply that these 2 or 3 have not been shown to be wrong.
As I implied before, rejecting them solely on the grounds that other models are wrong is foolish.

October 15, 2013 8:01 am

As RJB would state, the really clever work now would be to bin all the models that are failures and investigate why the two at the bottom have matched reality and what they have assumed compared with the failed ones.
That’s the potential published-paper work: what have the model winners assumed that the model failures haven’t?

As rGb does in fact state, the right thing to do is to go through the 90 models, apply a hypothesis test to each model independently, and reject all of the models whose entire envelope of variation is outside the actual climate, all models whose envelope of variation spends over 95% of its time outside of the actual climate, all models that have autocorrelation or variance that is egregiously incorrect compared to the autocorrelation or variance of the actual climate. Eyeballing the data, one could eliminate at least 2/3 of the models in the figure above on the basis of these general rejection criteria for HADCRUT, a lot more on the basis of UAH LTT. The ones that survive the cut would, of course, have a MUCH lower (and still high-biased) mean, corresponding to a vastly reduced climate sensitivity.
We are not finished, in other words, with the plunge in climate sensitivity that is underway. Almost all of the 90 models above clearly have too much positive feedback as is evident BOTH from the growing divergence from the actual climate AND from the fact that their variation is both too broad and too rapid compared to the data. The models are not correctly reproducing the “forces” that maintain approximate climate stability given the “inertia” of the climate relative to all forcings, in other words. To borrow words from the IPCC’s own report it is very likely that the effective restoring forces have “spring constants” in some mean field treatment that are too large, have a positive feedback bias on top of this, and underrepresent the heat capacity of the system. And that’s just from a glance at the data. A systematic analysis of the data (one model at a time) would reveal much more, per model, and might even provide one with enough information to indicate how to “fix” the models to better represent reality without breaking their physics content.
rgb
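
A rough sketch of the kind of per-model screen described above (an illustration of the general idea only, not rgb’s actual procedure; the thresholds and the array layout are assumptions):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D anomaly series."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def reject_model(model_runs, obs, coverage=0.05, ratio_tol=3.0):
    """Crude screen: model_runs is an (n_runs, n_times) array of one model's
    realizations, obs the observed series on the same time axis.  Reject if the
    observations sit inside the model's run envelope less than `coverage` of the
    time, or if the model's variance or lag-1 autocorrelation is wildly off."""
    lo, hi = model_runs.min(axis=0), model_runs.max(axis=0)
    inside_frac = np.mean((obs >= lo) & (obs <= hi))
    if inside_frac < coverage:
        return True, f"observations inside envelope only {inside_frac:.0%} of the time"

    var_ratio = model_runs.var(axis=1).mean() / obs.var()
    if not (1 / ratio_tol < var_ratio < ratio_tol):
        return True, f"variance ratio {var_ratio:.2f} outside tolerance"

    ac_model = np.mean([lag1_autocorr(r) for r in model_runs])
    ac_obs = lag1_autocorr(obs)
    if abs(ac_model - ac_obs) > 0.5:          # crude threshold, purely illustrative
        return True, f"autocorrelation {ac_model:.2f} vs observed {ac_obs:.2f}"

    return False, "survives this (very rough) screen"
```

Applied model by model, a screen like this is the hypothesis-test-per-model approach, as opposed to averaging 90 incompatible models into one ensemble mean.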

Russ R.
October 15, 2013 8:06 am

Before drawing any conclusions from this chart, I would ask the following questions:
1. Which RCP scenario is being fed into the models?
2. How closely does the chosen scenario resemble the actual observed concentrations to date?
3. What are the OLS trends for each of the model projections and the observations, along with their 95% confidence ranges (after correcting for autocorrelation)? (A sketch of one common correction follows after this list.)
4. To what degree do the confidence ranges overlap?
5. How robust is the analysis to different start years?
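
On point 3, here is a minimal sketch of one common way to widen an OLS trend’s confidence interval for autocorrelation (the lag-1 effective-sample-size adjustment; the series below is made up, and a real analysis would use the actual model and observational data):

```python
import numpy as np

def trend_with_ci(t, y, z=1.96):
    """OLS trend of y(t) with an approximate 95% confidence interval, inflating the
    standard error for lag-1 autocorrelation of the residuals via an effective
    sample size n_eff = n*(1-r1)/(1+r1).  A common rough correction, not the only one."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)

    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]          # lag-1 autocorrelation
    n_eff = max(n * (1 - r1) / (1 + r1), 3)                # effective sample size

    se = np.sqrt(np.sum(resid**2) / (n_eff - 2) / np.sum((t - t.mean())**2))
    return slope, slope - z * se, slope + z * se

# Illustrative use on made-up monthly anomalies:
t = np.arange(1979, 2013, 1 / 12)
y = 0.015 * (t - 1979) + np.random.default_rng(0).normal(0, 0.1, t.size)
print(trend_with_ci(t, y))   # slope in K/yr with its ~95% range
```

How far the observed trend’s range overlaps each model’s range (points 3 and 4) is then a straightforward comparison once both are computed this way.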

more soylent green!
October 15, 2013 8:07 am

We’re dealing with an alien worldview that places theory over empirical data and beliefs/values over results.

richardscourtney
October 15, 2013 8:14 am

Robert Brown:
Thankyou for your post at October 15, 2013 at 8:01 am. This link jumps to it.
http://wattsupwiththat.com/2013/10/14/90-climate-model-projectons-versus-reality/#comment-1448653
I agree, and I am especially grateful for your writing

As rGb does in fact state, the right thing to do is to go through the 90 models, apply a hypothesis test to each model independently, and reject all of the models whose entire envelope of variation is outside the actual climate, all models whose envelope of variation spends over 95% of its time outside of the actual climate, all models that have autocorrelation or variance that is egregiously incorrect compared to the autocorrelation or variance of the actual climate.

That is precisely what I meant when I wrote at October 15, 2013 at 7:53 am

Yes, the 87 models that did not emulate the last decade are known to be wrong. But it does not follow from this that the other three models have any merit. If you really think examination of the models will reveal why they have and have not matched the last decade then there is no more reason to retain the three than to throw out the 87. Either the examination may reveal something so keep and examine them all, or it won’t so junk them all.

Richard

richardscourtney
October 15, 2013 8:25 am

steveta_uk:
Thankyou for your post to me at October 15, 2013 at 7:58 am.
I think we may be talking at cross purposes. Please see my post addressed to Robert Brown at October 15, 2013 at 8:14 am. This link jumps to it
http://wattsupwiththat.com/2013/10/14/90-climate-model-projectons-versus-reality/#comment-1448663
Also, please read his post (I link to it from my post to him) because his post expands on exactly what I was saying, so your reading his post may unblock the impasse between you and me.
Richard

Pamela Gray
October 15, 2013 8:41 am

CMIP5 is a modeler’s smorgasbord of every parameter under the Sun, including the Sun, and is primarily a couple-able and tweakable menu of models with high resolution such that micro-climate regions can be targeted for experiment. I assume that a super computer somewhere has this ready to go with a long waiting list of groups wanting to use it. With this computer design, just about everything can be tweaked, including the component code for each individual parameter. So it is possible (note I did not say probable) that someone may have come close to a fairly good model of how the real climate works. Do we have the name of the two that shadowed observations? Here is a link to a useful page on CMIP5. Scroll down to find the link of all the “official” model names using CMIP5.
http://cmip-pcmdi.llnl.gov/cmip5/guide_to_cmip5.html
Thinking out loud: Note how many groups using CMIP5 are from China. Are they attempting to use CMIP5 to refute the contention that they are the ones pumping anthropogenic CO2 into the air or that it is harmful if they are? CMIP5 certainly can be used by skeptics (you don’t have to couple the AGW-CO2 model component into your selection but you can also dial down greenhouse gas effects) and I wonder who among the list are bent that way. China would be a country that would parse out its study results only if it benefits them, given that its researchers are shackled to government control to a far greater degree than elsewhere. And if it behooves them to continue a ruse, they would do so. If someone were to demonstrate that the world will not burn up, we would have no reason to ship our manufacturing to China if we can do it right here at home (yeh, I know, gotta have cheap labor too). Developing countries willing to manufacture our stuff would be shooting themselves in the foot if they were to show there is no runaway anthropogenic CO2 effect.

Gary Pearse
October 15, 2013 9:02 am

richardscourtney says:
October 15, 2013 at 8:14 am
“Robert Brown:
Thankyou for your post at October 15, 2013 at 8:01 am. This link jumps to it.
http://wattsupwiththat.com/2013/10/14/90-climate-model-projectons-versus-reality/#comment-1448653
I agree,…”
I have been hoping that someone like Robert Brown or Richard Courtney would take a crack at models that DO better match the temperature record. The total failure of all models based only on a CO2 cause has been known for some time, and although CAGW proponents have tried to shoe horn their averages into the envelope, they, too are increasingly unhappy about the performance. I know the skeptics’ only job is to attempt to falsify the science, but because of the cast-in-stone stance of the “proponents”, who are shaving the ECS down a bit and beginning to appeal to aerosols, ENSO, AMO/PDO and even the much denigrated quiet sun to shore up the central CO2 cause, it might be time for someone outside the proponent fraternity to finally do the job for them. I think that otherwise, the obvious open door to better models as a face-saving or renovation of crumbling reputations will invite the brighter ones remaining among them to step through and throw the rest of them under the bus. It would be nice to have a skeptic finally do this and leave them to their phlogiston.

Gary Pearse
October 15, 2013 9:12 am

One step before the modelling, however, would be to do them on UAH temps or Hadcrut 3 at least. Hadcrut 4 is simply jacked up <0.2C in the more recent record to try to nip the bottom of the model ensemble and to shorten the period of the flat temps after UEA's P. Jones had remarked on the 15 year hiatus a few years ago. Ultimately we have to reconstruct temps using only well-sited thermometers because if you get the physics right in the new model, it will be judged not fully adequate because of the fiddled higher trend of the temperature record.

October 15, 2013 9:31 am

The problem with most data sets (and models based on it) is that they are not properly balanced:
1) balanced SH/NH, i.e. the number of weather stations in the SH equals the number in the NH
2) balanced by latitude, i.e. the sum must be zero or close to zero
3) balanced 70% at sea and 30% inland
4) all continents included
look at global minima to rubbish AGW (minima are not pushing up the average temps)
look at global maxima if you want to predict the future (maxima are an independent proxy for energy coming through the atmosphere)
if you follow these simple procedures you will or you should get the same results as I got
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/

Pamela Gray
October 15, 2013 9:46 am

From my understanding: 1. Modelers get to decide what to use to “drive” the coupled models. (models are initially driven with input data and then let run on their own). The data input driver can be temperature observations from the past as well as input forcings such as SST or CO2 ghg forcing scenarios. You can even put in white noise false data in place of real observations. 2. You then get to decide how you will “tune” the models to match your particular set of observations before setting the computer to “run”. 3. The models are then allowed to run through several “years” worth of a myriad of coupled model calculations designed to mimic how the Earth’s various systems respond to your set of input drivers which you then compare to new observations.
Obviously it takes a while to be able to publish your results. You want the model setup you put together to match observations so I am guessing you need at least 5 years worth of new observations to compare with your modeled output. This is probably why researchers started using the term “projected scenarios”. They could publish right away because scenarios don’t have to be compared to new observations. However, the risk of being a flash in the pan is great. 5 years later your published research could be trashed by mother nature.
Given that so many CMIP5 models (and more importantly, the way they were driven) were indeed trashed by mother nature, a new set of model components and runs will be put together for AR6 in the unending attempt to live on research grants strictly used to study human-forced anthropogenic warming. If I had a bookie, I would bet the farm that most of the next generation will not match new observations either. The gravy train will continue until the last voting person standing decides to keep a hand on the old wallet.

Pamela Gray
October 15, 2013 10:19 am

I went looking for the next set of model construction parameters the IPCC wants for AR6. This is something they have done in the past. But I could only find this:
http://www.ipcc.ch/apps/eventmanager/documents/5/030920131000-INF_1_p37.pdf
It is a synopsis of how a selection of countries responded to questions about the future of the IPCC. Several said stop with the AR reports: you are done with the research; now tell us how to force people to change.
Kenya was a standout and said to work on the models, bringing up the fact that the regional Kenyan model included a sea-ice coupling parameter. Yes, they stated the obvious. There is no sea-ice in Kenya.

Keitho
Editor
October 15, 2013 10:56 am

I wonder what it would look like if the models had the CO2 knob set to 400 ppm today and used that as the trajectory. My guess is that reality would be below them all.

Amber
October 15, 2013 4:53 pm

What other enterprise can consistently make projections that are 200% wrong and keep any credibility, let alone suggest that trillions of dollars be spent on its highly inaccurate projections of doom?
It’s time the adults returned to the room.

October 15, 2013 5:05 pm

This shows that Mother Nature has been wrong. The modelers should sue Mother Nature in a world court. Lots of luck getting Mother Nature to pay damages.

Werner Brozek
October 16, 2013 6:06 am

The ENSO forecasts look very much like the above graph. The predictions are almost all above 0 for this time but the actual numbers dropped from 0.00 to -0.28 over the last two weeks. Is this a coincidence?

Doug Proctor
October 16, 2013 8:03 am

Based on the trend of the graphic, the projections meet the observations about 5 kilometers underground.
The problem is now apparent: the IPCC hired geophysicists instead of climatologists by mistake.

John Whitman
October 16, 2013 8:09 am

Robert Brown on October 15, 2013 at 8:01 am said,
. . . To borrow words from the IPCC’s own report it is very likely that the effective restoring forces have “spring constants” in some mean field treatment that are too large, have a positive feedback bias on top of this, and underrepresent the heat capacity of the system. . . .”
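To make the "spring constant" language concrete, here is a zero-dimensional energy-balance caricature in Python (purely illustrative, with invented numbers, and not any GCM's actual equations), in which the effective restoring coefficient and the heat capacity set both the size and the pace of the simulated warming:

# Zero-dimensional energy balance caricature:  C dT/dt = F - lam * T
# lam = effective restoring "spring constant" (W/m^2 per K); C = heat capacity (J/m^2 per K)
# All numbers are invented for illustration.
import numpy as np

def step_response(F=3.7, lam=1.2, C=3.0e8, years=200):
    dt = 3.15e7                                  # roughly one year, in seconds
    T = np.zeros(years)
    for i in range(1, years):
        T[i] = T[i-1] + dt * (F - lam * T[i-1]) / C
    return T

# The warming tends to F/lam with an e-folding time of about C/lam, so errors in the
# restoring term (which net feedbacks modify) and in the heat capacity change both how
# much and how fast the toy climate warms, which is the kind of bias the quote describes.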

– – – – – – –
Robert Brown,
I am pleased you started in this thread after you were mentioned by JustAnotherPoster on October 14, 2013 at 2:15 pm.
The IPCC itself recognizes, as you pointed out in the above quote, that the models used in its report are rather limited in their capability to correctly reproduce future climate behavior.
To me this means they consider the models still a development work in progress. It implies serious efforts are still needed to overcome what they do incorrectly.
If that is a reasonable statement of the IPCC's view of the models included in their report, then the certainty of the future calculated by the models in the SPM does appear overstated, as many have pointed out. What occurs to me is that the IPCC can just say, in the face of criticism, something like (my words) => 'we are being reasonably precautious on the safe side in showing more future warming until, in the indefinite future, we finally get the models right.'
Just looking for where the IPCC's CAGW hockey puck is going to be come January 2014.
Your comment?
PERSONAL REQUEST => rgb, what is the status of your book? You have mentioned in previous comments over the past year or so that you are working on (IIRC) a book on epistemic subjects.
John

rgbatduke
October 16, 2013 9:42 am

If that is a reasonable statement of the IPCC's view of the models included in their report, then the certainty of the future calculated by the models in the SPM does appear overstated, as many have pointed out. What occurs to me is that the IPCC can just say, in the face of criticism, something like (my words) => 'we are being reasonably precautious on the safe side in showing more future warming until, in the indefinite future, we finally get the models right.'
Just looking for where the IPCC's CAGW hockey puck is going to be come January 2014.
Your comment?
PERSONAL REQUEST => rgb, what is the status of your book? You have mentioned in previous comments over the past year or so that you are working on (IIRC) a book on epistemic subjects.

Yes, the IPCC could indeed say something like this. If the authors of its reports wanted to be brought before congress and charged with contempt of congress as the preferable and civilized alternative to being attacked by an angry mob armed with pitchforks and torches.
This would be basically saying “We’ve been lying to you from the beginning, but it is for your own good, maybe, because we could have turned out to be right.”
At times like these, I like to trot out a few lines from Feynman’s Cargo Cult address:
Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
The removal of the lines clearly stating reasonable doubt from AR5’s SPM — is that the mark of good, honest science? Is failing to point out that the GCMs’ GASTA predictions alone are already in poor agreement with facts, let alone all the other parts of this quintessentially complex theory that don’t fit, the mark of good, honest science?
I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I am not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.
For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of his work were. “Well,” I said, “there aren’t any.” He said, “Yes, but then we won’t get support for more research of this kind.” I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing– and if they don’t support you under those circumstances, then that’s their decision.

I would think that the same principle would apply to people who claim that their research is going to “save the world” to guarantee the continuation of what has grown to become one of the world’s fattest funding trees — provided, of course, that your proposed work is looking into anthropogenic global warming (that is, provided that you’ve already begged the question that AGW exists). Is the vast research infrastructure that has been built to study the climate and predict its future capable of surviving a “never mind, sorry, we got it wrong, there probably won’t be any catastrophic AGW after all” moment? Is it capable of the scientific honesty required to commit public seppuku, to literally spill its guts in expiation of the hundreds of billions of dollars misspent and the millions of lives being lost per year all due to the artificial inflation of carbon based energy prices?
Even if it were, will it be given the chance? For a scientist you are right — saying “I was wrong” is a part of honest science. For a politician who supported the incorrect scientific conclusion and wasted our hard earned money and quite possibly contributed to the recent depression and near-collapse of the Euro, there are no second chances. Expect the tail to wag the dog, because the tail is in control of everything from funding streams to an entire network of media devoted to controlling public opinion and perception. Why do you think that they rewrote AR5’s SPM, the same way that they rewrote AR4’s SPM, after the actual scientists were done with it? Because if the SPM honestly stated the uncertainties, the IPCC would never have been more than a tiny, nearly irrelevant UN structure devoted to predicting and ameliorating things like the southeast asian monsoon, and the world’s poorest people would have far cheaper energy. Even the energy companies benefit from the panic that has been created. It has “forced” them to raise their prices, and their profits are margins on those prices. They don’t lose money because of CAGW, they make it!
One example of the principle is this: If you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish BOTH kinds of results.
I say that’s also important in giving certain types of government advice. Supposing a senator asked you for advice about whether drilling a hole should be done in his state; and you decide it would be better in some other state. If you don’t publish such a result, it seems to me you’re not giving scientific advice. You’re being used. If your answer happens to come out in the direction the government or the politicians like, they can use it as an argument in their favor; if it comes out the other way, they don’t publish at all. That’s not giving scientific advice.

Where is the evidence that the people running the GCMs have ever “tested their theories”? When I glance at figure 1.4 of AR5’s SPM, can I pick out model results that nobody sane would consider not to have been falsified by the actual data? I can, easily. There are model results at the very top of the spaghetti envelope that are never anywhere close to the data. Why are they still there in the first place, contributing to the “meaningless mean” of all of the model results? Instead of openly acknowledging that these models, at least, have failed and throwing them out, they are included for the sole reason that they lift the meaningless mean of many GCMs, indeed, lift it a LOT as outliers.
A lowered mean would be in better agreement with observation (and still would be meaningless as the average of many models is not a predictor of anything other than the average of many models according to the theory of statistics) but it would weaken all of the political arguments for expensive and pointless measures such as “Carbon Taxes” that bring great profit to selected individuals and will not, even according to their promoters, solve the climate problem by ameliorating CO_2 in the foreseeable future.
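A toy calculation (with made-up per-decade trends, not the actual CMIP5 numbers) shows how a couple of never-close outliers drag a multi-model mean well away from the cluster that stays near the observations:

# Hypothetical warming trends in deg C per decade, for illustration only.
import numpy as np

cluster = np.array([0.10, 0.12, 0.14, 0.15, 0.16])     # models staying near the observed trend
outliers = np.array([0.35, 0.40])                      # models that never track the data

print(round(np.mean(cluster), 3))                              # 0.134
print(round(np.mean(np.concatenate([cluster, outliers])), 3))  # 0.203, lifted by the outliers

In this toy case, dropping the falsified runs lowers the headline number by roughly a third, which is the point being made above.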
We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. And it’s this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.
What more can one say? AR5 has now “officially” bet the farm on its SPM. Everybody knows that the draft openly acknowledged the fact that the models are not working and contained a now-infamous figure that allowed any non-technical reader to see this for themselves. Everybody knows that this acknowledgement was removed in the official release, and that the figure in 1.4 was replaced by a figure that fairly obviously obscured the obvious conclusion — shifting and renormalizing the axes so that the data divergence was less obvious, replacing colored ranges with a plate full of incomprehensible spaghetti so that one can see that some colored strands spend some of their time as low as the actual climate.
At this point they are at the absolute mercy of Nature. In two years, in five years, in ten years, either Nature will cause GASTA to shoot back up by 0.5C or so all at once so that it starts to correspond with the GCM predictions, or it won't. If it doesn't, and in the worst case for them, if GASTA remains constant or actually descends (and there are some halfway decent reasons to think that it might well descend even without the use of GCMs at all; they are not unaware of this, and there are signs that the climate community is starting to break ranks on it), then they are done. The temporary fame and excitement that brought Michael Mann to the foreground as the cover story of many books will be replaced by ignominy, congressional investigations, and yes, pitchforks and torches. They cannot back out of the latter, because the changes in AR5's SPM will be damning proof that climate science has been good old-fashioned cargo cult science for two decades now, benefitting nobody but the high priests and politicians leading the cult.
IMO this is unfortunate. Not all climate science has been dishonest. The actual scientific reports from the working groups have been a lot more open about uncertainties (although they too have suffered from political rewriting after the fact to eliminate some of this before the reports were allowed to go public). And I'm certain that a lot of research has been done in the best of faith. But when you are funded to do research on, and report on, how CAGW is going to affect the migratory behavior of species, you aren't going to return an answer of "it isn't" or an answer qualified by "IF AGW turns out to be a correct hypothesis"; you're going to return an answer of "here are the expected effects given an assumed warming of X". Bayes might as well never have lived.
Finally, as regards my book Axioms, it is still being written, unfortunately. I’ve finished maybe half of it (and am pretty happy with that half) but the second half is the “messy” part of analyzing things like religion and ethics and I tend to rant too much and write too long every time I dig into it. I’m also insanely busy, and Axioms is just one of a dozen things on the back burner as I’m teaching a large class in physics, trying to fix up and improve my textbooks, get a startup company to take off so I can earn enough wealth in the process to be able to do whatever I like for the rest of my professional career, and get kids through college and launched. But it is near and dear to my heart. You can always go and grab the last image I uploaded before I quit working on it at:
http://www.phy.duke.edu/~rgb/axioms.pdf
This part does a fair job of working through elementary axiomatic metaphysics to where one has a defensibly “best” basis for epistemology and ontology, for a worldview, but one that is flexible enough to accommodate both some personal choice in what to believe and to accommodate the imperfect and incomplete and constantly changing description of “probable best belief” concerning propositions about the real world.
Enjoy, at least so far.
rgb

richardscourtney
October 16, 2013 10:07 am

Anthony:
I have deliberately addressed this to you using the correct spelling of your first name in hope of ensuring that you read it.
I write to request that you ask Robert Brown to ‘tidy’ the parts of his post at October 16, 2013 at 9:42 am which pertain to GCMs and Feynman’s ‘Cargo Cult Science’ with a view to the resulting version being a Guest Essay for WUWT. This link is to his post to which I refer
http://wattsupwiththat.com/2013/10/14/90-climate-model-projectons-versus-reality/#comment-1449916
It really is superb stuff and is far, far too good to be lost as a post near the end of a long thread. In my opinion, it deserves to be a ‘sticky’.
Richard

John Whitman
October 16, 2013 12:02 pm

rgbatduke on October 16, 2013 at 9:42 am

– – – – – – – – –
rgbatduke,
Thank you for your reply. It is going to become a classic WUWT inline comment, I think.
When Feynman was alive I never met him personally or saw his talks live. My loss.
I will look at what you’ve written so far of ‘Axioms’.
Over the last year or so in comments you have several times recommended E. T. Jaynes' book 'Probability Theory: The Logic of Science'. I have located a copy at a local university library to which I have access . . . that will be slow reading . . . I expect a lot of heavy statistical lifting in it, even with my engineering-focused statistics education. : )
John

rgbatduke
October 16, 2013 1:29 pm

Over the last year or so in comments you have several times recommended E. T. Jaynes' book 'Probability Theory: The Logic of Science'. I have located a copy at a local university library to which I have access . . . that will be slow reading . . . I expect a lot of heavy statistical lifting in it, even with my engineering-focused statistics education. : )
You can find a copy online for free here:
http://omega.albany.edu:8008/JaynesBook.html
Less some editing added posthumously. This book was knocking around for well over a decade before Jaynes died; I’ve had a privately circulated copy since maybe the late 80s or early 90s. You should also grab the online PDF of his “Mobil Lecture” — I can’t really give the link because they haven’t set it up with its own page, but it can be found on the WUSTL site with his publications on it. It is the original base for the book.
You should also purchase Richard Cox's monograph on the law of probable inference. Cox is arguably the originator of this in a 1940-something paper that narrowly preceded Shannon's paper on information theory, which turned out to derive the same thing a different way. The Cox axioms are the basis of probability theory AS the logic of science and, by inheritance, of what it is reasonable to believe in an entire (reasonably) internally consistent (but unprovable) worldview, and they are very simple axioms indeed. You'll have no trouble reading either the Cox monograph, the Mobil lectures, or at least the first few chapters of Jaynes' book derived from the Mobil lectures. They are accessible to anyone with a bit of skill with algebra and knowledge of the IDEAS of probability theory.
There is more reading out there — George Boole's book on the laws of thought in the 19th century brilliantly anticipates Cox/Jaynes without the axiomatic foundation, and John Maynard Keynes put down a lot of the axiomatic foundation in his book on probability theory without realizing its broader implications in physics and epistemology. Overall, this work completely replaces logical positivism, Popper's falsificationism, and other efforts by e.g. the Munich school to come up with a sound basis for a theory of knowledge. It perfectly interpolates the extremes — one cannot generally empirically "prove" an assertion, but positive evidence has some weight; one cannot generally empirically "falsify" an assertion, but negative evidence has some de facto weight. Best of all, there is a lot of reason to think that this is how our brains actually function at the neural level, as a book by David MacKay makes rather brilliantly clear.
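The interpolation described in that last paragraph can be written down in one line. In the Cox/Jaynes framework, evidence E updates the plausibility of a hypothesis H through Bayes' rule in odds form (stated here from memory as a reminder, not quoted from any of the books above):

\[
\frac{P(H\mid E)}{P(\lnot H\mid E)}
  \;=\; \frac{P(E\mid H)}{P(E\mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}
\]

Evidence that is more probable under H than under its negation raises the odds on H without ever "proving" it, and evidence that is less probable under H lowers them without outright "falsifying" it, which is precisely the middle ground between positivism and falsificationism described above.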
As I said, enjoy.
rgb

John Whitman
October 16, 2013 2:00 pm

rgbatduke on October 16, 2013 at 1:29 pm

– – – – – – – –
rgbatduke,
I think there is now no doubt you are a university educator, given all the homework you have assigned me! : )
Especially thanks for the free online link to Jaynes’ book. It’s much more convenient than hardcover.
John

rgbatduke
October 17, 2013 1:27 pm

Sure. MacKay’s book is also free online here:
http://www.inference.phy.cam.ac.uk/mackay/itprnn/
But it is going to be tough going for you unless you are at least moderately knowledgeable about computer programming and information theory. OTOH, if you DO manage to slog through it, it is amazing.
rgb