Reality Leaves A Lot To The Imagination

Guest Post by Willis Eschenbach

On an average day you’ll find lots of people, including NASA folks like Gavin Schmidt and James Hansen, evaluating how well the climate models compare to reality. As I showed here, models often don’t do well when matched up with real-world observations. However, they are still held up as being accurate by the IPCC, which uses climate model results throughout its reports despite the models’ lack of rigorous testing.

XKCD, of course.

But if you ask me, that evaluation of the models by comparing them with reality is not possible. I think that the current uncertainties in the total solar irradiance (TSI) and aerosol forcings are so large that it is useless to compare climate model results with observed global temperature changes.

Why do I make the unsubstantiated claim that the current uncertainties in TSI and aerosols are that large? And even if they are that large, why do I make the even more outlandish claim that the size of the uncertainties precludes model testing by comparison with global temperature observations?

Well … actually, I’m not the one who made that claim. It was the boffins at NASA, in particular the good folks at GISS, including James Hansen et al., who said so (emphasis mine) …

Total solar irradiance (TSI) is the dominant driver of global climate, whereas both natural and anthropogenic aerosols are climatically important constituents of the atmosphere also affecting global temperature. Although the climate effects of solar variability and aerosols are believed to be nearly comparable to those of the greenhouse gases (GHGs; such as carbon dioxide and methane), they remain poorly quantified and may represent the largest uncertainty regarding climate change. …

The analysis by Hansen et al. (2005), as well as other recent studies (see, e.g., the reviews by Ramaswamy et al. 2001; Kopp et al. 2005b; Lean et al. 2005; Loeb and Manalo-Smith 2005; Lohmann and Feichter 2005; Pilewskie et al. 2005; Bates et al. 2006; Penner et al. 2006), indicates that the current uncertainties in the TSI and aerosol forcings are so large that they preclude meaningful climate model evaluation by comparison with observed global temperature change. These uncertainties must be reduced significantly for uncertainty in climate sensitivity to be adequately constrained (Schwartz 2004).

“Preclude meaningful climate model evaluation” … hmmm. Of course, they don’t make that admission all the time. They only say things like that when they want to get money for a new satellite. The rest of the time, they claim that their models are accurate to the nearest 0.15°C …
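To see what that means in practice, here is a minimal sketch of my own; the forcing and sensitivity numbers below are purely illustrative and are not taken from Hansen’s paper. A simple zero-dimensional energy-balance calculation shows that a high-sensitivity model offset by strong aerosol cooling and a low-sensitivity model offset by weak aerosol cooling produce almost the same century of warming, so the temperature record alone can’t tell them apart.

```python
# Minimal zero-dimensional energy-balance sketch (illustrative numbers only).
# Two hypothetical "models" -- one with high climate sensitivity and strong
# aerosol cooling, one with low sensitivity and weak aerosol cooling -- are
# driven by the same GHG forcing ramp and end up with nearly the same
# century-scale warming.

YEARS = 100                       # 1900-2000, annual steps
C = 13.3                          # effective heat capacity, W*yr/m^2/K (~100 m mixed layer)
F_GHG = 2.5                       # assumed GHG forcing reached by year 100, W/m^2

def run(ecs, f_aerosol):
    """Integrate dT/dt = (F(t) - lam*T)/C for a linear forcing ramp."""
    lam = 3.7 / ecs               # feedback parameter from sensitivity per CO2 doubling
    T = 0.0
    for yr in range(1, YEARS + 1):
        F = (F_GHG + f_aerosol) * yr / YEARS   # ramp of net (GHG + aerosol) forcing
        T += (F - lam * T) / C                 # Euler step, 1-year increments
    return T

# Hypothetical parameter pairs (aerosol forcing is negative = cooling):
warm_A = run(ecs=4.5, f_aerosol=-1.5)   # sensitive model, strong aerosol offset
warm_B = run(ecs=2.0, f_aerosol=-0.5)   # insensitive model, weak aerosol offset

print(f"High-sensitivity / strong-aerosol model: {warm_A:.2f} C of warming")
print(f"Low-sensitivity  / weak-aerosol model:   {warm_B:.2f} C of warming")
```

It’s only a toy, but it captures the compensation the GISS quote is pointing at: while the aerosol forcing remains this uncertain, very different sensitivities can fit the same observed record.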

Now, the satellite that the NASA GISS folks (very reasonably) wanted to get money for, the very satellite that the aforementioned study was written to promote, was the Glory Mission … which was one of NASA’s more unfortunate failures.

NASA’s Glory Satellite Fails To Reach Orbit

WASHINGTON — NASA’s Glory mission launched from Vandenberg Air Force Base in California Friday at 5:09:45 a.m. EST failed to reach orbit.

Telemetry indicated the fairing, the protective shell atop the Taurus XL rocket, did not separate as expected about three minutes after launch.

So … does this mean that the evaluation of models by comparison with observed global temperature change is precluded until we get another Glory satellite?

Just askin’ … but it does make it clear that at this point the models are not suitable for use as the basis for billion dollar decisions.

w.

77 Comments

Tom Harley
April 30, 2011 8:42 pm

Hello Willis, you may like to have a look through this
http://pindanpost.com/2011/05/01/greenhouse2011-pt-1-comments/
list of presentations from ‘Greenhouse 2011’, there are plenty of models by the look of it, and probably need fact checking by WUWT commenters.

Lew Skannen
April 30, 2011 9:00 pm

Excellent analysis.
I really believe that the models are what we need to attack relentlessly. To people who know anything about modelling it is clear that these things are absolutely incapable of producing accurate results for the kind of decisions our deluded politicians are making, but to the average Joe Public it is not so clear. They take on faith a lot of what these charlatans tell them.
If we can demonstrate to the public what sorcery these models are we will win the war.

Lawrie Ayres
April 30, 2011 9:01 pm

Thanks Willis for another reason to doubt the claims made by the warmists. It seems part of the reason the warmist scientists and their government sponsors have been doing so well for so long is the ignorance of many voters. A small survey in Perth showed a frightening lack of basic knowledge of both carbon and its oxide. Some respondents were so freaked by “carbon pollution” that they wanted to eliminate carbon from their diet. Anyone that stupid should not be allowed to vote. Anyway, Jo Nova has more
http://joannenova.com.au/2011/04/carbon-demonized-by-climate-propaganda/#more-14606

Beth Cooper
April 30, 2011 9:04 pm

‘I’m in love with
My computer.’

Paul Brassey
April 30, 2011 9:17 pm

That’s multi-trillion-dollar decisions.

Brian H
April 30, 2011 9:19 pm

It’s just that the uncertainty is too great to reject the T-Null Hypothesis (AGW explains everything unless proven otherwise).

Mike Bromley the Kurd
April 30, 2011 9:29 pm

Statement:
1) Not for public consumption
2) Not for Bursor’s consumption
3) Not for Peer-review
4) Not sure if we uttered it
5) Tempered to fit agenda
6) Fail, spit out and bandage foot
7) All of the above

Mac the Knife
April 30, 2011 9:50 pm

Wi

Dagfinn
April 30, 2011 9:52 pm

Very much related: “Fewer than 3 or 4 percent [of surveyed climate scientists] said they “strongly agree” that computer models produce reliable predictions of future temperatures, precipitation, or other weather events.” http://pajamasmedia.com/blog/bast-understanding-the-global-warming-delusion-pjm-exclusive/

April 30, 2011 9:53 pm

I want to see them start trying to put big emissions capture cans full of Blue Def on top of volcanoes to clean the plumes. Heh!

April 30, 2011 10:00 pm

You speak clearly and fluently, Willis, and put things in an easy-to-understand manner. Thanks for that. The reputation for honesty of Hansen and his cronies, the whole AGW crowd, has reached such a low level with me that even if they said the sun was coming up tomorrow I would look for the catch, the lie involved. I guess I should be thankful in a way; I now examine every statement from everyone with a very skeptical eye.

April 30, 2011 10:06 pm

I agree, and what amazes me is that the calculated forcing is so small compared to the incoming flux at about 340 W/m2. So to say that they can determine the effects of the “modelled forcing” of 1 to 2 W/m2 on a background of 340 W/m2 to any degree of precision is a pretty long bow to draw, which is why I remain a sceptic.
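For scale, a quick back-of-the-envelope check (my arithmetic, not the commenter’s) of how small that forcing is relative to the incoming flux:

```python
# Quick scale check: how big is a 1-2 W/m^2 forcing relative to the ~340 W/m^2
# global-mean incoming solar flux? (Illustrative arithmetic only.)
incoming = 340.0                      # W/m^2, global-mean top-of-atmosphere insolation
for forcing in (1.0, 2.0):
    print(f"{forcing:.0f} W/m^2 is {100 * forcing / incoming:.2f}% of the incoming flux")
```

That is roughly three to six parts per thousand of the flux against which the whole energy budget has to be closed.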

Alan S. Blue
April 30, 2011 10:09 pm

Note that the error limits for the pre-satellite ground data are not calibrated for the intended use either. The instrumental/observational error (0.1C or whatever’s marked on the instrument) doesn’t actually have much to do with how accurately an individual point-source thermometer measures the daily average integrated gridcell temperature. And yet that’s what propagates by assumption in the anomaly method.
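A toy illustration of that distinction (my construction, not the commenter’s): simulate one grid cell with a fixed spatial pattern plus daily weather, read a single station inside it, and compare the station-minus-cell-mean error against the nameplate 0.1 C instrument error.

```python
# Toy illustration: how far a single fixed station can sit from the true
# area-average of its grid cell, compared with the ~0.1 C instrument error
# printed on the thermometer. All fields are synthetic.

import numpy as np

rng = np.random.default_rng(0)

DAYS = 365
N = 50                                   # 50 x 50 sample points across the cell

# Synthetic daily temperature field for one grid cell:
#   common weather signal + a fixed spatial pattern (terrain, coast, land use)
#   + small day-to-day local noise.
weather = 15 + 10 * np.sin(2 * np.pi * np.arange(DAYS) / 365)      # seasonal cycle
spatial = rng.normal(0.0, 1.5, size=(N, N))                        # fixed local offsets, K
field = (weather[:, None, None] + spatial[None, :, :]
         + rng.normal(0.0, 0.5, size=(DAYS, N, N)))                # local daily noise

cell_mean = field.mean(axis=(1, 2))       # the quantity the gridded product wants
station = field[:, 10, 37]                # one fixed thermometer inside the cell

sampling_error = station - cell_mean
print("instrument error (nameplate):         ~0.10 C")
print(f"station-minus-cell-mean, std dev:     {sampling_error.std():.2f} C")
print(f"station-minus-cell-mean, mean offset: {sampling_error.mean():+.2f} C")
```

Taking anomalies removes the station’s constant offset, but the day-to-day sampling scatter remains, and in this toy it is already several times the instrument error printed on the thermometer.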

ferd berple
April 30, 2011 10:15 pm

Here is how you create a false result in machine learning models, including climate models. Say you want to achieve a CO2 sensitivity of 3C per doubling of CO2:
What you do is take historical temperatures for the past 150 years, then splice a synthetic (artificial) data set onto this, covering the next 150 years into the future, with temperature going up at 3C per doubling of CO2.
You then train the model over the entire 300 years, so that the weights give you a reasonable fit over the entire period. Then you remove the artificial (future) dataset. The resulting model will then recreate the past accurately, and continue to predict the future that you have trained it to predict, giving the level of CO2 sensitivity you built into the future data.
The key to making this approach work is to allow for lots of input parameters in your model. Each parameter will have its weighting, and very small changes in the weighting of one parameter versus another can have dramatic effects over time. By introducing a very small increase in the error rate in the first 150 years of data, virtually at the level of the noise, you can custom-build just about any model prediction you want.
The classic example of this is linear programming models, where very small machine round-off errors lead to huge errors in the final result. To solve these models you need to back-iterate through the models to reduce the errors and converge on the solution.
Creating “artificial” models works this technique in reverse. By introducing very small errors into the weights of the model, you can create almost any answer you desire in the future. By keeping the errors small enough, spread over a large number of input parameters, the technique is virtually impossible to detect.
Now you might argue that no reputable climate modeler would do this. However, this is exactly what happens when people build models, just in a more subtle fashion. They run the model numerous times, adjusting the weights until the model delivers the answer they expect. This is the model they then keep.
The effect is that they have trained the model to predict what they expect, exactly as though they had used an artificial (future) dataset for the training in the manner I’ve laid out. The difference is that they are (perhaps) not aware of what is happening, while I’ve laid it out so you can recognize how the cheat takes place.
This “experimenter-expectancy effect” is widely recognized in animal intelligence and learning tests. It should not be surprising that machine learning suffers from the same problem.
“Clever Hans”, the horse that could perform arithmetic, was one of the most famous examples. “Climate models”, the machines that can predict climate 100 years into the future, are the latest example. Climate models are not predicting future climate. They are predicting the current expectations of climate modelers about future climate. Those models that provide the expected answer survive. Those that do not provide the expected answer are “corrected”.
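A toy version of that splice-and-train procedure, to make the mechanics concrete: everything below is synthetic, and the “model” is plain least squares with a pile of extra parameters standing in for a model’s tunable internals, not any actual GCM.

```python
# Sketch of the splice-and-train trick described above (entirely synthetic data;
# the point is only to show how a target baked into an artificial future leaks
# back out as a "prediction").

import numpy as np

rng = np.random.default_rng(1)

n_hist, n_fut = 150, 150
t = np.arange(n_hist + n_fut)

# CO2 rising ~0.4%/yr for 300 years; the predictor is log2(CO2/280).
x = np.log2(280.0 * 1.004 ** t / 280.0)

# "Observed" 150-year history: weak response (1 C per doubling) plus weather noise.
hist = 1.0 * x[:n_hist] + rng.normal(0, 0.15, n_hist)

# Artificial future spliced on with the desired answer built in: 3 C per doubling,
# joined continuously to the last historical value.
fut = hist[-1] + 3.0 * (x[n_hist:] - x[n_hist - 1])
target = np.concatenate([hist, fut])

# "Many-parameter model": log2-CO2 plus flexible time terms (standing in for the
# many tunable internals of a real model) plus irrelevant noise inputs.
tt = t / t.max()
features = np.column_stack([
    np.ones_like(x), x, tt**2, tt**3,
    rng.normal(size=(len(t), 20)),
])
weights, *_ = np.linalg.lstsq(features, target, rcond=None)
fitted = features @ weights

hist_rmse = np.sqrt(np.mean((fitted[:n_hist] - hist) ** 2))
fut_slope = np.polyfit(x[n_hist:], fitted[n_hist:], 1)[0]

print(f"hindcast RMSE over the real 150-year history:   {hist_rmse:.2f} C")
print(f"apparent sensitivity over the 'forecast' years: {fut_slope:.1f} C per doubling")
```

The fitted model matches the genuine 150-year history about as well as the noise allows, while its “forecast” half hands back approximately the 3C per doubling that was written into the spliced data; skill on the past tells you nothing about where the future half came from.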

Brian H
April 30, 2011 10:27 pm

ferd:
A simpler way of saying what you iterate above is: Climate Models illustrate the opinions of the experts and programmers who wrote them.
Time for Feynman: “Science is belief in the ignorance of experts.”

Mac the Knife
April 30, 2011 10:33 pm

Willis,
“As I showed here, models often don’t do well when matched up with real-world observations. ”
I wholeheartedly agree. You can’t assess the accuracy of a fiction or a faith-based assertion versus reality. Fiction and faith are artifices of the human mind. Reality is fact and physics based. The human guesses of values for essential variables needed to make climate models representative of reality are just that – guesses, or fiction. Guided by human intuition? Perhaps, but still just guesses.
I say this as one who has a strong faith in The Creator, based only on the myriad of improbabilities that make this world and universe possible. Such grandeur, such magnificent micro and macro order in a universe ruled by entropy must have a talented designer beyond human comprehension, I believe. But my beliefs are sure as hell not sufficient for any government to base its nation’s energy or pollution control policies on!
Keep up the good work… and Thanks!

J.Hansford
April 30, 2011 10:58 pm

Lawrie Ayres said…
“Some respondents were so freaked by “carbon pollution” that they wanted to eliminate carbon from their diet. Anyone that stupid should not be allowed to vote.”
Maaaate!…. Anyone that stupid shouldn’t be allowed to breed…. 🙁

Dagfinn
April 30, 2011 11:12 pm

There’s also the contrast between the acknowledged uncertainty and Hansen’s penchant for expressions such as “99 per cent certain” or “dead certain”. This is from his book: “If we also burn the tar sands and tar shale, I believe the Venus syndrome is a dead certainty.” He’s artfully vague about when this will happen, but all the ice has to melt first, so I suppose he might admit that it would take a thousand years or more. That means one of the many things he has to be dead certain about is the lifetime of CO2 in the atmosphere.

Richard111
April 30, 2011 11:14 pm

All these well reasoned arguments about temperature.
Where are the discussions that show the physics of the greenhouse effect and carbon dioxide are even remotely responsible for any change in the energy balance of the atmosphere?

Dr. Dave
April 30, 2011 11:41 pm

I think everyone could agree that pharmacology is a far more mature, well developed and well defined science than climatology. With our current understanding of pharmacology, biochemistry, pharmacokinetics, pharmacodynamics and structure-activity relationships we can create “virtual” new drugs on a computer. In fact, this is done all the time. The lab grunts have to figure out how to synthesize the damn things.
The computer can predict much of the expected pharmacological activity. Care to venture a guess how often they’re right? Seldom … in fact, almost never. Empiric testing is mandatory. What the computer predicts and what happens in real life very rarely match. We might have incredible understanding of how things work, but our predictive ability is entirely inadequate. New drugs are designed on computers all the time. Then they’re synthesized and then they’re tested. Most of the time they fail.
Pharmacology is a reasonably well-defined discipline and yet computer modelling fails most of the time. Climatology is like the wild west in terms of being understood and well-defined. Yet we’re expected to accept THOSE computer models as gospel and base trillion dollar public policy decisions on their output? Would you take an untested drug designed on a computer?

pat
April 30, 2011 11:43 pm

One of the easiest things to model would be the political beliefs of the modelers.
Their mental state would be more problematic.

Andy G55
May 1, 2011 12:10 am

@J.Hansford
I love eating carbon based food, I’m just not at all sure I would like to eat too much carbon (as itself)… yucky gritty black stuff….
ie.. I’m not sure how I would have answered that question in the survey.

Noel Hodges
May 1, 2011 12:55 am

I think the biggest weakness of the models is that they all have different predictions.
Which model is the one that contains the “settled” science? If the science is settled then there should only be one prediction, and obviously this is not the case.
This question needs to be asked whenever claims are made that the science is settled. These models rest on assumptions, not settled science, that the net effect of all the other forcings and feedbacks on the minor warming from additional carbon dioxide is strongly positive. High positive feedbacks have not shown up in the satellite temperature data to date. We shouldn’t use any of the model predictions until there is much more certainty about whether positive feedbacks from additional carbon dioxide are being counterbalanced by negative feedbacks.

SSam
May 1, 2011 1:39 am

“Just askin’ … but it does make it clear that at this point the models are not suitable for use as the basis for billion dollar decisions.”
What? And miss out on yet another AP-ICBM (Anti-Penguin Intercontinental Ballistic Missile)?
Glory was the second in the series…

kwik
May 1, 2011 1:47 am

On Michael Crichton’s (rest in peace) official homepage we could read this a couple of years ago;
http://www.crichton-official.com/speechourenvironmentalfuture.html
From the IPCC’s “Third Assessment Report”:
“In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long term prediction of future climate states is not possible.”
Unfortunately all his climate stuff is now removed from his homepage.
I think most scientists realize that this is a basic fact.
