Reposted from The GWPF by Dr. David Whitehouse
There has been some discussion about a paper in Nature Climate Change by Gleckler et al that says they detect “a positive identification (at the 1% level) of an anthropogenic fingerprint in the observed upper-ocean temperature changes, thereby substantially strengthening existing detection and attribution evidence.” What they’ve done is collect datasets on volume-averaged temperatures for the upper 700 metres of the ocean.
But Yeager and Large, writing in the Journal of Climate and looking at the same layer of ocean, come to a different view. They conclude that it is natural variability, rather than long-term climate change, that dominates the sea surface temperature and heat flux changes over the 23-year period (1984–2006). They say the increase in sea surface temperatures is not driven by radiative forcing. It’s a good example of how two groups of scientists can look at the same data and come to differing conclusions. Guess which paper the media picked up?
Whilst the IPCC AR4 report says that between 1961 and 2003 the upper 700 metres increased in temperature by 0.1 deg C, some researchers think that this estimate is an artifact of too much interpolation of sparse data (Harrison and Carson 2007). Their analysis found no significant temperature trends over the past 50 years at the 90% level, although this is a minority opinion.
The interesting thing about Gleckler et al is that their unambiguous detection of a human fingerprint in ocean warming comes from what they say are “results from a large multimodel archive of externally forced and unforced simulations.” To you and me this means with and without anthropogenic carbon dioxide. What they have done is to look at the average of a variety of computer models.
What Does A Multimodel Mean?
But what is meant by a multimodel mean, and how is one to know when the ensemble of models used to calculate the mean is large enough to provide meaningful results? Another pertinent question is if averaging multiple models is a safe thing to do in the first place?
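By way of illustration only, here is a minimal Python sketch of what the averaging amounts to, using invented numbers rather than anything from the CMIP archive: each “model” supplies a time series, and the multimodel mean is simply the average across models at each time step, with the inter-model spread usually quoted alongside it.

import numpy as np

# Hypothetical example: five "models", each producing a 20-year series of
# upper-ocean temperature anomalies (deg C). The numbers are invented for
# illustration; they are not CMIP3 output.
years = np.arange(1987, 2007)
rng = np.random.default_rng(0)
models = [0.01 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)
          for _ in range(5)]

ensemble = np.vstack(models)             # shape: (n_models, n_years)
multimodel_mean = ensemble.mean(axis=0)  # average across models, year by year
spread = ensemble.std(axis=0)            # inter-model spread

print(multimodel_mean.round(3))
print(spread.round(3))

Nothing in that arithmetic tells you whether five models, or fifty, is “enough”, which is precisely the question.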
Tweek this or that parameter, change a numerical calculation and a different output from a computer model will be obtained. In some quarters these are described as experiments, which is technically true given the definition of the word experiment. But in my view they are not on a par with physical experiments. Experiments in the real world are questions asked of nature with a direct reply. Experiments in computer models are internal questions about a man-made world, not the natural one. That is not to say there is not useful insight here. One just has to be careful not to get carried away.
For some there is insight in diversity. For example, CMIP3 is an ensemble of twenty major climate models, and while many of them are related in terms of code and philosophy, many are not. Advocates of the multi-model approach say this is a good thing: if models produced in different ways agree, it provides confidence that we have in some way understood what is going on.
But the key point, philosophically and statistically, is that the various outputs of computer models are not independent samples in the same way that repeated measurements of a physical parameter could be. They are not independent measurements centred on what is the “truth” or reality.
Given this, does the addition of more models and “experiments” force the mean of a multimodel ensemble to converge on reality? Some, such as Professor Reto Knutti, believe it doesn’t. I agree, and think it is a precarious step to decide that reality and models are drawn from the same population. How can uncertainty in the parameterisation of climatic variables and in numerical calculations reproduce uncertainty in the climate system? The spread of models is not necessarily related to uncertainty in climate predictions.
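A toy simulation, of my own construction and not taken from any of the papers discussed, makes the distinction concrete: averaging independent, unbiased measurements of a quantity converges on its true value, whereas averaging “models” that share a common bias converges on the bias, however many of them you add.

import numpy as np

rng = np.random.default_rng(1)
truth = 0.10   # the "real" value we are trying to estimate (made up)

# Case 1: independent, unbiased measurements of the same quantity.
measurements = truth + rng.normal(0.0, 0.05, size=1000)

# Case 2: "models" that all share a common bias (say, a shared
# parameterisation choice) on top of their individual scatter.
common_bias = 0.04
models = truth + common_bias + rng.normal(0.0, 0.05, size=1000)

for n in (5, 50, 500):
    print(n, round(measurements[:n].mean(), 3), round(models[:n].mean(), 3))
# The first column closes in on 0.10; the second settles near 0.14.
# Adding more non-independent models does not remove the shared error.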
When one averages climate models one has to be clear about what the averaging process actually does. It does not legitimise the spread of climate model output, compensating for each model’s errors and biases, as if an average of widely different predictions were somehow the ‘correct’ representation of reality. Averaging computer models does not necessarily make things clearer. Indeed, it results in a loss of signal and throws away what most models are saying.
There are researchers who point out that the average of an ensemble of models actually reproduces real-world climate data better than any individual model. To my mind this is not a great achievement in their favour, but something about which we should be suspicious. It smacks of selection effects, bias and begging the question.
When climate models are used to make predictions some scientists refer to the past as a “training period” meaning that if the model reproduces the past it will reproduce the future. Perhaps it will, but it is not certain that it will and we cannot prove it, especially when the “training period” is shorter than long-term semi-cyclic climatic influences.
My overall impression is that computer climate models, useful as they can be, have been oversold, and that they have often been used without their results being interpreted in terms of known processes and linked to observations – the recent standstill in annual average global temperatures is an example.
Modeling the climate is not like modeling a pendulum, for which all relevant information is available to forecast its future movement until chaos takes over. General climate models are an approximation of the complex physical, chemical and biological processes that happen on Earth. We have incomplete knowledge of what goes on, limited computational abilities, and sparse real-world observations of many parameters. All these are reasons to be wary of individual models, let alone an average of an ensemble of them.
Feedback: david.whitehouse@thegwpf.org
=====================================================
In case you don’t get the pun in the title, see this, or the video below.

G. Karst says:
June 17, 2012 at 7:32 pm
Everyone is familiar with the adage:
Even a stopped clock indicates the correct time 2x per day
100 stopped clocks will indicate the correct time 200x per day… but I still don’t see how this facilitates determining the actual correct time?? GK
====================================================
You just need an accurate clock to know when to look at the stopped ones and you’d need something to know which of the stopped clocks to look at the right time to know which one was right and you’d need ………
(I think I just confused myself.)
Miss Grundy says, June 17, 2012 at 8:41 pm
Probably because he’s chock-full of chalk.
Eric Simpson says,
“And there’s now a reason to do it. It’s called climate change. We need to change our evil, or, unnatural, ways, now! Or else, yes, the next thing you know, because of terrible climate change, we are going to find ourselves living in huts. Mark my word.”
We need to change our evil ways? Who is “we”? If you are doing those things then you need to change your ways; the rest of us are sensible about what we do, addressing real pollutants without trying to demonise CO2, a naturally occurring trace gas essential for life on this planet.
As for “Climate Change”, remember it started out as “global warming”, but since that has not happened for 12 to 15 years the name has conveniently changed to “Climate Change”, the ultimate non-falsifiable theory. Too hot, too cold, too wet, too dry: unable to be falsified, that is not science. How much longer will it take, without warming, for people to realise that the theory is falsified? Jones of the UEA is reported to have said by 2020. That is only 7½ years away.
Many reputable and award-winning scientists already believe the theory is falsified, but then they are not on the AGW gravy train.
statistics is inductive. no deductions can be made from it.
probability is a very wishful speculation about possibilities, not factual, logical or representative of the ordered universe in which we live.
the gambler’s fallacy takes many forms.
Eric Simpson says,
Nice attempt to redirect the discussion. Be aware that scientists are single-minded and that most have had to teach undergrads whose brains were heavily affected by Brownian movement.
You forgot the /SARC tag at the end. Unless of course you were trying to reel some in.
You should also be aware that scientists take things at face value and need help with sarcasm. If you don’t, you will come off as a garden variety fool and become the butt of your own joke.
Meanwhile, back to the article. The modelers should be encouraged to apply their multi-model technique to the stock market and to invest all of their retirement money according to what the multi-models output. They will either become very wealthy or old and homeless. Most likely the latter.
There that’s better.
Haste makes waste.
@Ostar. As far as a /sarc goes, I thought the bolded preface was sufficient: pretending to be a guilt-ridden white warmist. So I was fully covered, but though I did not intend to “reel them in,” that has been interesting, and speaks perhaps to the authenticity of the voice I portrayed.
One thing, I’m not sure what you are talking about as far as the past, if you want to expand, great!
@AJB. I think I’m 100% in agreement with you. Yes, the warmists at their heart, with insanely radical plans to cut CO2 by 83% by 2050 (this 83% cut passed the U.S. House, so it’s not a fiction!), are wishing for a beautiful pleasant pastoral stone age, but they would get an ugly hungry cold violent version of the Dark Ages instead. Finally, when you say “what your ilk has wrought” I’m not sure who that’s addressed to.
[Refresh] Ok, thanks Miss Grundy!
@eyesonu. Somehow I knew there was some inside lol joke to the van by the river! You could say: “You must be an old white guy… not that there’s anything wrong with that.”
@David Dodd: “the cause of that problem resides in the Big White House.” YES! They failed (and when I say they, I mean O and his coterie of radicals) to implement their true dream, the 83% CO2 cuts in the cap & trade bill that would have wrought havoc and thrown a wrecking ball into organized civilization. But they’re doing their best to win 2nd Prize in their war on energy. And it’s that simple, it is a war on energy. They want society slowed down; yes, and if possible, no exaggeration, taken down. They fight drilling not just in Alaska, but everywhere. They’ve derailed the Keystone Pipeline, fought shale and fracking, killed the construction of new coal plants, and laid the groundwork to shutter all existing coal plants, and they still dream of the day that electricity rates will skyrocket, perhaps to $2 per kilowatt hour (avg cost today about 12 cents). As it is, O’s Secretary of Energy (anti-Energy) Chu is on record advocating gas at $8 to $10 a gallon, but he’d probably go gaga over $15 a gallon, and O’s climate change pushing “Science” Czar Holdren calls for the “de-development of the United States.”
@Firey. I agree with you. Notice that my statement was a parody.
Heh, when you have Al Gore blaming Global Warming on cigarette smokers it can sometimes be a bit hard to differentiate satire from reality. :>
Re: “a positive identification (at the 1% level) of an anthropogenic fingerprint in the observed upper-ocean temperature changes, ”
A “positive identification of an anthropogenic fingerprint” means absolutely NOTHING. It’s like the “studies” that scream alarmist messages because we now have the technology to detect picograms of various poisons from various undesirable activities and can say, “Oh No! Your baby is getting 173% more of that nasty thing than the baby in the nice ‘correct’ house next door!”
I’m quite sure that with the proper instrumentation I could easily detect greater amounts of CO in the air outside of Wichita Kansas than I could in the middle of the Australian Outback. Does that mean that living outside of Wichita Kansas is “dangerous”? Am I “poisoning” my children by living there?
Likewise, I wouldn’t be at all surprised to find that proper modeling and instrumentation could detect some microscopic “anthropogenic fingerprint” on our world’s atmosphere and climate. The question, however, is NOT whether such a thing can be detected, but whether the extent of that fingerprint has any real possibility of causing harm.
– MJM
Re Eric Simpson, DNFTT.
But what is meant by a multimodel mean, and how is one to know when the ensemble of models used to calculate the mean is large enough to provide meaningful results?
Another pertinent question is if averaging multiple models is a safe thing to do in the first place?
Good questions.
The number of model runs is irrelevant because you are not sampling a real population.
Probably the best way to view a model run is that it’s a numerical quantification of the views, opinions, assumptions and biases of the modellers. So an ensemble (from different models) is the average of these from several groups of modellers.
If you are looking at an ensemble from a single model, the situation is somewhat different.
Gavin Schmidt says,
“Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will be uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”
What Gavin doesn’t tell us is that the random component is artificially introduced into climate models, in a rather hopeless (IMO) attempt to simulate natural variability.
Again IMO, they should take out the artificial randomness, because it’s just a cheap sleight of hand to try and convince people that the models simulate the real climate.
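For what it is worth, the arithmetic behind Schmidt’s statement is easy to reproduce in a toy sketch (my own construction, not a GCM): if each realisation really is an identical forced signal plus independent noise, then averaging N realisations shrinks the noise roughly as 1/sqrt(N) and leaves the forced part. Whether the models’ internal variability actually behaves like independent noise is, of course, exactly the point in dispute.

import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100)
forced = 0.01 * t   # an imposed "forced signal" (invented for illustration)

def realisation():
    # one toy "run": the forced signal plus independent noise standing in
    # for internal variability
    return forced + rng.normal(0.0, 0.3, t.size)

for n in (1, 10, 100):
    runs = np.vstack([realisation() for _ in range(n)])
    residual = runs.mean(axis=0) - forced
    print(n, round(residual.std(), 3))  # falls roughly as 1/sqrt(n)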
You should read Bob Tisdale’s excellent,
Do Observations and Climate Models Confirm Or Contradict The Hypothesis of Anthropogenic Global Warming?
https://bobtisdale.wordpress.com/2011/12/12/part-2-do-observations-and-climate-models-confirm-or-contradict-the-hypothesis-of-anthropogenic-global-warming/
Gleckler is just more omitted variable fraud. They leave out all explanatory variables besides CO2 and say that the warming must have been caused by CO2. Leif provides a short list of what they ignore: UV and GCR effects of solar activity. They are just looking at the IPCC models, which only include a single solar variable, TSI, which is parameterized in these models as having 1/40th the forcing effect of CO2 over the last 150 yrs. (In AR4 the ratio was 14 to 1, but AR5 downgrades the variation in TSI.)
Insufficient spatial coverage!!!
We have no idea as to the average temperature of the oceans, still less throughout the 0 to 700m depth.
Given the sparsity of measurements and lack of spatial coverage, the notion that we can assess changes to a fraction of a degree is not science, it is farce!
We do not have quality data capable of supporting what these so-called scientists claim to have discovered/evaluated.
Re: Eric Simpson; I thought the “we have to start living in huts now or else we’ll wind up living in huts” was the giveaway (and piercingly funny).
Re: Models and stopped clocks; who says a model has to be right even once? The average of 100 wrong models is the wrong average.
The hammer/nail meme: when all you’ve got is a grant application, everything looks like a man-made global warming signature…
Eric Simpson says:
June 17, 2012 at 6:45 pm
” Now, we need to save our resources, and share.”
As Tonto said to the Lone Ranger, “What do you mean “we”, white man!”
Eric;
hilarious. Did RS print it? Did it garner much praise and agreement? I can’t imagine that anyone twigged it was a parody. Poe’s Law, and all o’that.
To those here who didn’t read or comprehend the intro — Booo!
_______________
GrammarNasty twofer:
1, Another pertinent question is if averaging multiple models is a safe thing to do in the first place?
2, Tweek this or that parameter,
1. This is not a question, but a statement describing a question. No question mark.
2. Tweak. Tweek is not a word.
I sense a G&S comeback.
The “money quote”:
“But the key point, philosophically and statistically, is that the various outputs of computer models are not independent samples in the same way that repeated measurements of a physical parameter could be. They are not independent measurements centred on what is the “truth” or reality.”
Right on !!
RayG says:
June 17, 2012 at 10:21 pm
Re Eric Simpson, DNFTT.
_________________
RayG… you aren’t paying attention!
Eric, next time remember to use the /SARC tag. OK. Or else I’ll have some of what you’re smoking. 😉
Eric Simpson says, June 17, 2012 at 9:42 pm
Eric, the ilk in question consists of modern major generals of the dark depressive kind, already replete with enough indoctrinated conscripts in pursuit of the great global crankfest to sustain their rabid power-grabbing machinations indefinitely. It transcends all political divides. Try this YouTube channel and realize there’s a whole other world out here beyond the back garden of the big chalk-white house, some of it already paying close to a ‘sustainable’ $8 a gallon for fuel.
Best viewed seated on a portable leather upholstered toilet seat while talking to plants to enquire of mother earth’s inner concerns, homeopathic anti-depressants in hand. Alternatively you can just enjoy a frivolous game of “Logical Fallacy Jackpot” at the same time; no popcorn required:
OK Eric, now try this instead. No sanity preserving diversions needed 🙂
Richard Verney, I think we do have an idea of the average temperature of the oceans. It’s around 2 Celsius. Not very precise, but then, hey, this is climate science; who needs exact data?
How very strange, and possibly very telling – I, with my ‘not good enough for Uni’ education, immediately ‘got’ that Eric was parodying the AGW proponents.
This possibly points to a solution – before entering higher academia ALL prospective students should be required to do at least a year at a (preferably very low-level) real job:
Checkout person, shelf restocker, MCD burgermaker … anything that will have them entering the rarefied realms of academia with at least some clue as to what real life is like?
Both my girls had (and have) jobs as they pursue their academic quals and are much the better for it.
Aidan
I was wondering if some of the spankings we have seen on Wall Street lately are the results of market modeling. The models probably made a lot of money when the market was predictable and have lost a lot of money when it became unpredictable. When the clever traders armed with real data applied their skills to the problem, they probably cleaned out some accounts such as those of the London Whale aka JPM.
I wonder also if the trend toward modeling in “climate science” is the result of the politicos not wanting to spend money but wanting to claim they are doing something. Paying some modelers to do the work: ~$1 million. Mounting a drilling or coring expedition: $5–20 million or more after all is said and done. A politico would have no problem giving the money to the modelers; their backside would be covered, plus there would be the added bonus of getting the desired results for their other agendas.