Reality Leaves A Lot To The Imagination

Guest Post by Willis Eschenbach

On an average day you’ll find lots of people, including NASA folks like Gavin Schmidt and James Hansen, evaluating how well the climate models compare to reality. As I showed here, models often don’t do well when matched up with real-world observations. However, they are still held up as accurate by the IPCC, which uses climate models throughout its reports despite their lack of rigorous testing.

XKCD, of course.

But if you ask me, evaluating the models by comparing them with reality is not even possible. I think that the current uncertainties in the total solar irradiance (TSI) and aerosol forcings are so large that it is useless to compare climate model results with observed global temperature changes.

Why do I make the unsubstantiated claim that the current uncertainties in TSI and aerosols are that large? And even if they are that large, why do I make the even more outlandish claim that the size of the uncertainties precludes model testing by comparison with global temperature observations?

Well … actually, I’m not the one who made that claim. It was the boffins at NASA, in particular the good folks at GISS, including James Hansen et al., who said so (emphasis mine) …

Total solar irradiance (TSI) is the dominant driver of global climate, whereas both natural and anthropogenic aerosols are climatically important constituents of the atmosphere also affecting global temperature. Although the climate effects of solar variability and aerosols are believed to be nearly comparable to those of the greenhouse gases (GHGs; such as carbon dioxide and methane), they remain poorly quantified and may represent the largest uncertainty regarding climate change. …

The analysis by Hansen et al. (2005), as well as other recent studies (see, e.g., the reviews by Ramaswamy et al. 2001; Kopp et al. 2005b; Lean et al. 2005; Loeb and Manalo-Smith 2005; Lohmann and Feichter 2005; Pilewskie et al. 2005; Bates et al. 2006; Penner et al. 2006), indicates that the current uncertainties in the TSI and aerosol forcings are so large that they preclude meaningful climate model evaluation by comparison with observed global temperature change. These uncertainties must be reduced significantly for uncertainty in climate sensitivity to be adequately constrained (Schwartz 2004).

“Preclude meaningful climate model evaluation” … hmmm. Of course, they don’t make that admission all the time. They only say things like that when they want to get money for a new satellite. The rest of the time, they claim that their models are accurate to the nearest 0.15°C …

Now, the satellite that the NASA GISS folks (very reasonably) wanted to get money for, the very satellite that the aforementioned study was written to promote, was the Glory Mission … which was one of NASA’s more unfortunate failures.

NASA’s Glory Satellite Fails To Reach Orbit

WASHINGTON — NASA’s Glory mission launched from Vandenberg Air Force Base in California Friday at 5:09:45 a.m. EST failed to reach orbit.

Telemetry indicated the fairing, the protective shell atop the Taurus XL rocket, did not separate as expected about three minutes after launch.

So … does this mean that the evaluation of models by comparison with observed global temperature change is precluded until we get another Glory satellite?

Just askin’ … but it does make it clear that at this point the models are not suitable for use as the basis for billion dollar decisions.

w.

Tom Harley
April 30, 2011 8:42 pm

Hello Willis, you may like to have a look through this
http://pindanpost.com/2011/05/01/greenhouse2011-pt-1-comments/
list of presentations from ‘Greenhouse 2011’. There are plenty of models by the look of it, and they probably need fact-checking by WUWT commenters.

Lew Skannen
April 30, 2011 9:00 pm

Excellent analysis.
I really believe that the models are what we need to attack relentlessly. To people who know anything about modelling, it is clear that these things are absolutely incapable of producing accurate results for the kind of decisions our deluded politicians are making, but to the average Joe Public it is not so clear. They take on faith a lot of what these charlatans tell them.
If we can demonstrate to the public what sorcery these models are we will win the war.

Lawrie Ayres
April 30, 2011 9:01 pm

Thanks Willis for another reason to doubt the claims made by the warmists. It seems part of the reason the warmist scientists and their government sponsors have been doing so well for so long is the ignorance of many voters. A small survey in Perth showed a frightening lack of basic knowledge of both carbon and its oxide. Some respondents were so freaked by “carbon pollution” that they wanted to eliminate carbon from their diet. Anyone that stupid should not be allowed to vote. Anyway, Jo Nova has more:
http://joannenova.com.au/2011/04/carbon-demonized-by-climate-propaganda/#more-14606

Beth Cooper
April 30, 2011 9:04 pm

‘I’m in love with
My computer.’

Paul Brassey
April 30, 2011 9:17 pm

That’s multi-trillion-dollar decisions.

Brian H
April 30, 2011 9:19 pm

It’s just that the uncertainty is too great to reject the T-Null Hypothesis (AGW explains everything unless proven otherwise).

Mike Bromley the Kurd
April 30, 2011 9:29 pm

Statement:
1) Not for public consumption
2) Not for Bursor’s consumption
3) Not for Peer-review
4) Not sure if we uttered it
5) Tempered to fit agenda
6) Fail, spit out and bandage foot
7) All of the above

Mac the Knife
April 30, 2011 9:50 pm

Wi

Dagfinn
April 30, 2011 9:52 pm

Very much related: “Fewer than 3 or 4 percent [of surveyed climate scientists] said they “strongly agree” that computer models produce reliable predictions of future temperatures, precipitation, or other weather events.” http://pajamasmedia.com/blog/bast-understanding-the-global-warming-delusion-pjm-exclusive/

April 30, 2011 9:53 pm

I want to see them start trying to put big emissions capture cans full of Blue Def on top of volcanoes to clean the plumes. Heh!

April 30, 2011 10:00 pm

You speak clearly and fluently, Willis, and put things in an easy-to-understand manner. Thanks for that. The reputation for honesty of Hansen and his cronies, the whole AGW crowd, has reached such a low level with me that even if they said the sun was coming up tomorrow I would look for the catch, the lie involved. I guess I should be thankful in a way; I now examine every statement from everyone with a very skeptical eye.

Terry
April 30, 2011 10:06 pm

I agree, and what amazes me is that the calculated forcing is so small compared to the incoming flux at about 340 W/m2. So to say that they can determine the effects of the “modelled forcing” of 1 to 2 W/m2 on a background of 340 W/m2 to any degree of precision is a pretty long bow to draw, which is why I remain a sceptic.

Alan S. Blue
April 30, 2011 10:09 pm

Note that the error limits for the pre-satellite ground data are not calibrated for the intended use either. The instrumental/observational error (0.1 C or whatever’s marked on the instrument) doesn’t actually have much to do with how accurately an individual point-source thermometer measures the daily average integrated gridcell temperature. And yet that’s what propagates by assumption in the anomaly method.
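A toy simulation makes the distinction concrete. All numbers below are invented (this is not any real gridcell or station); the point is only that a perfectly calibrated single station can still sit a degree or so away from the true gridcell average, which is a very different error from the 0.1 C printed on the instrument.

import numpy as np

# Toy simulation: instrument precision vs. representativeness error.
# Every number here is invented; this is not any real gridcell or station.
rng = np.random.default_rng(42)
n_days, n_sites = 365, 400

# Daily gridcell field: a shared regional cycle plus a fixed local offset
# per site (terrain, altitude, coast) plus day-to-day weather noise.
regional = 15.0 + 10.0 * np.sin(2 * np.pi * np.arange(n_days) / 365.0)
site_offsets = 2.0 * rng.standard_normal(n_sites)
weather = 1.0 * rng.standard_normal((n_days, n_sites))
field = regional[:, None] + site_offsets[None, :] + weather

true_cell_mean = field.mean(axis=1)                        # what we want to know
station = field[:, 0] + 0.1 * rng.standard_normal(n_days)  # one thermometer, 0.1 C instrument error

err = station - true_cell_mean
print("instrument precision:        0.10 C")
print("RMS station-vs-cell error:   %.2f C" % np.sqrt(np.mean(err ** 2)))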

ferd berple
April 30, 2011 10:15 pm

Here is how you create a false result in machine learning models, including climate models. Say you want to achieve a CO2 sensitivity of 3C per doubling of CO2:
What you do is take historical temperatures for the past 150 years, then splice a synthetic (artificial) data set to this, for the next 150 years into the future, with temperature going up at 3C per doubling of CO2.
You then train the model for the entire 300 years, so that the weights give you a reasonable fit over the entire period. Then you remove the artificial (future) dataset. The resulting model will then recreate the past accurately, and continue to predict the future that you have trained it to predict, giving the level of CO2 sensitivity you built into the future data.
The key to making this approach work is to allow for lots of input parameters in your model. Each parameter will have its weighting, and very small changes in the weighting of one parameter versus another can have dramatic effects over time. By introducing a very small increase in the error rate in the first 150 years of data, virtually at the level of the noise, you can custom-build just about any model prediction you want.
The classic example of this is linear programming models, where very small machine round-off errors lead to huge errors in the final result. To solve these models you need to back-iterate through the models to reduce the errors and converge on the solution.
Creating “artificial” models works this technique in reverse. By introducing very small errors into the weights of the model, you can create almost any answer you desire in the future. By keeping the errors small enough, spread over a large number of input parameters, the technique is virtually impossible to detect.
Now you might argue that no reputable climate modeler would do this. However, this is exactly what happens when people build models, just in a more subtle fashion. They run the model numerous times, adjusting the weights until the model delivers the answer they expect. This is the model they then keep.
The effect is that they have trained the model to predict what they expect, exactly as though they had used an artificial (future) dataset for the training in the manner I’ve laid out. The difference is that they are (perhaps) not aware of what is happening, while I’ve laid it out so you can recognize how the cheat takes place.
This “experimenter-expectancy effect” is widely recognized in animal intelligence and learning tests. It should not be surprising that machine learning suffers from the same problem.
“Clever Hans” the horse that could perform arithmetic was one of the most famous examples. “Climate Models”; the machines that could predict climate 100 years in the future are the latest example. Climate models are not predicting future climate. They are predicting the current expectations of climate modelers about future climate. Those models that provide the expected answer survive. Those that do not provide the expected answer are “corrected”.
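In rough Python, the recipe looks something like this. Everything here is invented for illustration (the CO2 series, the noise, and the polynomial “model” are all made up, and no real GCM is this simple); the point is only that a model trained across a spliced-on synthetic future will hindcast the real past and then “project” the sensitivity that was baked into the synthetic segment.

import numpy as np

# Toy illustration only: invented data, invented model, not any real GCM.
rng = np.random.default_rng(0)
years = np.arange(1850, 2000)
co2_hist = 280.0 * 1.002 ** (years - 1850)                 # made-up CO2 history
temp_hist = 0.4 * np.log2(co2_hist / 280.0) + 0.1 * rng.standard_normal(150)

# Step 1: splice on a synthetic future rising at 3 C per doubling of CO2.
future = np.arange(2000, 2150)
co2_fut = co2_hist[-1] * 1.005 ** (future - 2000)
temp_fut = temp_hist[-1] + 3.0 * np.log2(co2_fut / co2_hist[-1])

years_all = np.concatenate([years, future])
co2_all = np.concatenate([co2_hist, co2_fut])
temp_all = np.concatenate([temp_hist, temp_fut])

# Step 2: train a many-parameter model on the whole 300 years.
X = np.column_stack([np.log2(co2_all / 280.0) ** k for k in range(1, 6)] +
                    [((years_all - 1850) / 300.0) ** k for k in range(1, 6)])
w, *_ = np.linalg.lstsq(X, temp_all, rcond=None)

# Step 3: throw away the synthetic future and keep the trained weights.
# The model still hindcasts the real 150 years reasonably well ...
hind = X[:150] @ w
print("hindcast RMS error: %.2f C" % np.sqrt(np.mean((hind - temp_hist) ** 2)))

# ... and its projection for 2100 reflects the sensitivity it was trained on.
print("projected anomaly for 2100: %.2f C" % (X[years_all == 2100] @ w).item())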

Brian H
April 30, 2011 10:27 pm

ferd,
A simpler way of saying what you lay out above: Climate Models illustrate the opinions of the experts and programmers who wrote them.
Time for Feynman: “Science is belief in the ignorance of experts.”

Mac the Knife
April 30, 2011 10:33 pm

Willis,
“As I showed here, models often don’t do well when matched up with real-world observations. ”
I wholeheartedly agree. You can’t assess the accuracy of a fiction or a faith-based assertion versus reality. Fiction and faith are artifices of the human mind. Reality is fact and physics based. The human guesses of values for essential variables needed to make climate models representative of reality are just that – guesses, or fiction. Guided by human intuition? Perhaps, but still just guesses.
I say this as one who has a strong faith in The Creator, based only on the myriad of improbabilities that make this world and universe possible. Such grandeur, such magnificent micro and macro order in a universe ruled by entropy must have a talented designer beyond human comprehension, I believe. But my beliefs are sure as hell not sufficient for any government to base its nation’s energy or pollution control policies on!
Keep up the good work… and Thanks!

J.Hansford
April 30, 2011 10:58 pm

Lawrie Ayres said…
“Some respondents were so freaked by “carbon pollution” that they wanted to eliminate carbon from their diet. Anyone that stupid should not be allowed to vote.”
Maaaate!…. Anyone that stupid shouldn’t be allowed to breed…. 🙁

Dagfinn
April 30, 2011 11:12 pm

There’s also the contrast between the acknowledged uncertainty and Hansen’s penchant for expressions such as “99 per cent certain” or “dead certain”. This is from his book: “If we also burn the tar sands and tar shale, I believe the Venus syndrome is a dead certainty.” He’s artfully vague about when this will happen, but all the ice has to melt first, so I suppose he might admit that it would take a thousand years or more. That means one of the many things he has to be dead certain about is the lifetime of CO2 in the atmosphere.

Richard111
April 30, 2011 11:14 pm

All these well reasoned arguments about temperature.
Where are the discussions that show the physics of the greenhouse effect and carbon dioxide are even remotely responsible for any change in the energy balance of the atmosphere?

Dr. Dave
April 30, 2011 11:41 pm

I think everyone could agree that pharmacology is a far more mature, well developed and well defined science than climatology. With our current understanding of pharmacology, biochemistry, pharmacokinetics, pharmacodynamics and structure-activity relationships we can create “virtual” new drugs on a computer. In fact, this is done all the time. The lab grunts have to figure out how to synthesize the damn things.
The computer can predict much of the expected pharmacological activity. Care to venture a guess how often they’re right? Seldom…in fact, almost never. Empiric testing is mandatory. What the computer predicts and what happens in real life very rarely match. We might have incredible understanding of how things work, but our predictive ability is entirely inadequate. New drugs are designed on computers all the time. Then they’re synthesized and then they’re tested. Most of the time they fail.
Pharmacology is a reasonably well-defined discipline and yet computer modelling fails most of the time. Climatology is like the wild west in terms of being understood and well-defined. Yet we’re expected to accept THOSE computer models as gospel and base trillion dollar public policy decisions on their output? Would you take an untested drug designed on a computer?

pat
April 30, 2011 11:43 pm

One of the easiest things to model would be the political beliefs of the modelers.
Their mental state would be more problematic.

Andy G55
May 1, 2011 12:10 am

@J.Hansford
I love eating carbon-based food; I’m just not at all sure I would like to eat too much carbon (as itself)… yucky gritty black stuff…
i.e. I’m not sure how I would have answered that question in the survey.

Noel Hodges
May 1, 2011 12:55 am

I think the biggest weakness of the models is that they all have different predictions.
Which model is the one which contains the “settled” science? If the science is settled, then there should be only one prediction, and obviously this is not the case.
This question needs to be asked whenever claims are made that the science is settled. These models rest on assumptions, not settled science, that the net effect of all the feedbacks on the small additional forcing from extra carbon dioxide is strongly positive. High positive feedbacks have not shown up in the satellite temperature data to date. We shouldn’t use any of the model predictions until there is much more certainty about whether positive feedbacks from additional carbon dioxide are being counterbalanced by negative feedbacks.

SSam
May 1, 2011 1:39 am

“Just askin’ … but it does make it clear that at this point the models are not suitable for use as the basis for billion dollar decisions.”
What? And miss out on yet another AP-ICBM (Anti-Penguin Intercontinental Ballistic Missile)?
Glory was the second in the series…

kwik
May 1, 2011 1:47 am

On Michael Crichton’s (rest in peace) official homepage we could read this a couple of years ago:
http://www.crichton-official.com/speechourenvironmentalfuture.html
From the IPCC’s ”Third Assessment Report”:
“In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
Unfortunately all his climate stuff is now removed from his homepage.
I think most scientists realize that this is a basic fact.

Martin Brumby
May 1, 2011 1:58 am

Bishop Hill / Josh has something almost designed for Willis’ thread:
http://bishophill.squarespace.com/blog/2011/5/1/climate-no-science-josh-96.html#comments

Peter Miller
May 1, 2011 2:09 am

Climate ‘science’ sinks to a new low here. As the facts are becoming increasingly inconvenient for the Team and the AGW cult, they have now found something much more reliable than facts and data, namely people’s long-ago memories!
Well, I suppose just about anything is more accurate than an IPCC climate model, but this may be pushing the envelope out just a little too far.
http://www.bbc.co.uk/blogs/thereporters/richardblack/

Peter Miller
May 1, 2011 2:20 am

Ferd Berple’s comments here about climate models are some of the best I have seen.
Start with pre-conceived ideas and then make a computer model to fit those pre-conceived ideas by careful weighting of certain, hard to detect, input parameters to match the past and your expectations of the future – and voila, it becomes reliable fact accepted by the Team and the IPCC.
No wonder climate ‘scientists’ never want to release their code and interpretation methodology to the outside world.

Another Ian
May 1, 2011 2:57 am

Re Peter Miller says:
May 1, 2011 at 2:09 am
Peter, now this is interesting. I have to deal with a branch of post-normal science in which anything like this is dismissed as “anecdotal”!
They will eventually find that good anecdotal evidence beats junk science hands down.

Alexander K
May 1, 2011 3:33 am

Thanks Willis. Right on the button, as usual, but never boring!

old construction worker
May 1, 2011 4:03 am

‘Dagfinn says:
April 30, 2011 at 11:12 pm
Hansen’s “99 per cent certain”…“If we also burn the tar sands and tar shale, I believe the Venus syndrome is a dead certainty.” ‘
As far as I know, Hansen’s “Venus Syndrome” is still just a hypothesis with few facts to support it.

B. Kindseth
May 1, 2011 4:14 am

A company that I worked for has been making the same product since the 1940s, so they know every detail of the product intimately. Computer modeling was used from the earliest mainframe computers to supercomputers. Development testing of the product was very expensive, costing millions of dollars. To bring a new product online cost in the neighborhood of a billion dollars. To develop a product faster and at a lower cost, management pushed for more computer modeling and less product testing. This replaced the old-timers’ philosophy of “build them and bust them.” Also, “You don’t advance the technology unless you have a few failures,” and “If you don’t have a failure, your design isn’t aggressive enough.” Computer models were indeed a benefit, but Mother Nature can indeed be cruel. Models did not eliminate test failures, which led to costly delays and expensive redesigns. In this business, unlike climate science, it was not possible to tweak the experimental data, especially when a product explodes, to agree with the model. Models can be valuable “tools” to understand the theory, but they need to be constantly matched against empirical data. Events such as the Pinatubo eruption were used to advance understanding. Conversely, there seems to be a lack of effort to model the cooling since 1998 to understand what is going on. There are, however, efforts to match the data to the models.

1DandyTroll
May 1, 2011 4:17 am

@kwik
“On Michael Crichton’s (rest in peace) official homepage we could read this a couple of years ago:
http://www.crichton-official.com/speechourenvironmentalfuture.html
From the IPCC’s ”Third Assessment Report”:
“In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
Unfortunately all his climate stuff is now removed from his homepage.
I think most scientists realize that this is a basic fact.”
I believe you might find all his writings in the archives:
http://web.archive.org/20050101000000*/http://www.crichton-official.com

gingoro
May 1, 2011 4:48 am

When the models don’t agree with reality how do they know if the problem is in the physics model itself or whether they have a bug in their programming?
Dave W

Garry
May 1, 2011 5:05 am

The errors in computer modeling – and the struggle by warmists to hide those errors – are reflected in the Orwellian evolution of their terminology.
We’ve gone from “global warming” to “climate change” to “climate disruption” to “extreme weather” to the more recent “reframing” (inspired by George Lakoff) of “climate pollution” and currently the more nebulous “greenhouse gas pollution.”
All to describe the exact same speculative hypothesis (CO2 forcing) that I’ve found in New York Times archives dating back to the mid-1970s. The only thing that’s advanced in climate “science” is its evasiveness in both terminology and practice.

Jessie
May 1, 2011 5:39 am

I was doing some other background research on another subject today and had seen these data/phenomena papers and downloaded them to read later. So here they are, if of any use to your work, Willis. I have not read them as yet. Theo G spoke of Kuhn some time ago, which to my mind was interesting.
1. Harris T (2002) Data Models and the Acquisition and Manipulation of Data Philosophy of Science 70 (5) Proceedings of the 2002 Biennial Meeting of The Philosophy of Science Association
This paper offers an account of data manipulation in scientific experiments. It will be shown that in many cases raw, unprocessed data is not produced, but rather a form of processed data that will be referred to as a data model. The language of data models will be used to provide a framework within which to understand a recent debate about the status of data and data manipulation. It will be seen that a description in terms of data models allows one to understand cases in which data acquisition and data manipulation cannot be separated into two independent activities.
References of this paper are below, as could be gleaned from the web.
2. Bogen J & Woodward J (1988) Saving the Phenomena The Philosophical Review 97 (3) July p303-52
Our general thesis, then, is that we need to distinguish what theories explain (phenomena or facts about phenomena) from what is uncontroversially observable (data). Traditional accounts of the role of observation in science blur this distinction and, because of this, neglect or misdescribe the details of the procedures
by which scientists move from claims about data to claims about phenomena. In doing so, such accounts also overlook a number of considerations which bear on the reliability of scientific knowledge.
p314
3. Woodward J (1989) Data and Phenomena Synthese 79(3) June p393-472
4. Nagel E, Suppes P & Tarski A (1962) Data in Models Logic, Methodology and Philosophy of Science Stanford Uni, eds. Proceedings of 1960 International Conference p252-61
http://suppes-corpus.stanford.edu/articles/mpm/41.pdf
5. Hunting for another author and this popped up – Podnieks (2010) The Limits of Modeling University of Latvia
http://philsci-archive.pitt.edu/5475/1/Podnieks_Limits_of_Modeling.pdf
(2009) Is Scientific Modelling an Indirect Methodology
http://www.ltn.lv/~podnieks/
ferd berple says: April 30, 2011 at 10:15 pm
Illuminating.
I had understood they used various teams of researchers from a multitude of professions and thus [experts in their chosen] variables. Seemingly unconnected, until someone comes up with the grand narrative (usually when the science is questioned) to achieve this result.

Don K
May 1, 2011 6:02 am

Y’know. Putting proper error bars on projections is quite difficult. Nonetheless, I think that if “climate scientists” made an honest attempt to do so, we’d all learn a lot. Why don’t they? The obvious answers are a) it’s hard to do. b) It would probably reveal that some of their past claims may not have been very realistic. And c) that future funding has at least as high a priority in their minds as searching for truth.
I sort of think that models with realistic error estimates might actually be good for something. Particularly when/if climate science grows up and the error bands shrink.

DocMartyn
May 1, 2011 6:05 am

“ferd berple says:
What you do is take historical temperatures for the past 150 years, then splice a synthetic (artificial) data set to this, for the next 150 years into the future, with temperature going up at 3C per doubling of CO2.
You then train the model for the entire 300 years, so that the weights give you a reasonable fit over the entire period. Then you remove the artificial (future) dataset. The resulting model will then recreate the past accurately, and continue to predict the future that you have trained it to predict, giving the level of CO2 sensitivity you built into the future data.”
That is a fit, not a model. A model has to have some basis in reality and each of the constants used has to have a known elasticity.
To work out the elasticity of each input parameter you have to change it and keep all the other constants the same. If altering a constant by, say, 0.05% causes a 50% change in output, then you have an over-reliance on a single input.
Each model should show the output, and the outputs for each input, when changed by +/-5%.
Whenever you give curve-fitting programs to young graduates, they fall in love with polynomial fits; you can tell them that these fits are only good for performing calculus, but they love them anyway.
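A minimal sketch of that elasticity test, with an invented toy function standing in for the model (the constants below are placeholders, not values from any published model): perturb each input by +/-5%, hold the rest fixed, and see how far the output moves.

# Toy elasticity check; the function and its constants are invented placeholders.
def toy_model(solar=1361.0, co2=400.0, albedo=0.30, aerosol=-1.0):
    """Return a made-up 'temperature anomaly' from four inputs."""
    return (0.1 * (solar - 1361.0) + 3.0 * (co2 / 280.0 - 1.0)
            - 5.0 * (albedo - 0.30) + 0.5 * aerosol)

baseline = toy_model()
params = {"solar": 1361.0, "co2": 400.0, "albedo": 0.30, "aerosol": -1.0}

for name, value in params.items():
    for frac in (+0.05, -0.05):
        perturbed = dict(params, **{name: value * (1.0 + frac)})
        change = toy_model(**perturbed) - baseline
        print("%s %+.0f%%: output changes by %+.3f" % (name, frac * 100, change))

In this toy case the solar term dwarfs everything else, which is exactly the kind of over-reliance on a single input that the +/-5% sweep is meant to expose.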

Garry
May 1, 2011 6:06 am

B. Kindseth at 4:14 am: This replaced the old-timers’ philosophy of “build them and bust them.” Also, “You don’t advance the technology unless you have a few failures,” and “If you don’t have a failure, your design isn’t aggressive enough.”
I watched a fascinating documentary recently about the development and deployment of the $1+ billion Hughes Glomar Explorer and its successful “black ops” attempt to recover a sunken Russian sub in the 1970s (it’s called “Azorian: The Raising of the K-129”).
It’s a fascinating engineering story, and several project and engineering managers are interviewed at length. Naturally, they make comments about the engineering tests, failures, and delays, which are all explained as necessary elements of the project.

HankHenry
May 1, 2011 7:38 am

My question is: What’s the difference between a model and a picture? It seems that if one “parameterizes” enough, what people call a model is actually just a picture. For example – if a modeler says to himself, “clouds are too difficult,” and then just assigns, based on observed probabilities, whether a cloud appears in the cell of a general circulation model, is something being modeled or is it being pictured?

Pamela Gray
May 1, 2011 7:43 am

I continue to see the models missing a simpler reason why they say we should be lots hotter but we aren’t. The tropospheric hot spot is hard to find, and based on greenhouse theory, it is not there like it should be. In fact, it should be easy to find. And it gets the gang of four’s (or three Stooges’, to give one of them a pass) knickers in a twist because it is not behaving as it should. Instead, one day it’s there, the next day it has disappeared from that spot. It (they?) moves around like an army of ghosts fading in and out in random fashion.
I think the problems with the models lie at the elementary level, and that is why we get, for example, freeze warnings instead of an early warm spring while CO2 is still increasing. Radiational cooling is one of those simple processes that I think the models get wrong. Another would be pressure-differential-driven winds moving things up, down, and sideways, or stagnating things over land and water.
The Earth is not encased in a firmament. It is surrounded by layers of filmy, ethereal gases that expand and contract, and are filled with holes that open and close in random fashion, which is exactly what the upper troposphere is doing, and why it is not heating up at a steadily rising pace. The warmth is escaping in bits and pieces here and there.
You can add and subtract aerosols, clouds, and greenhouse gases, including water vapor, and all it does is prompt the wind and ghostly layers to adjust their movement, random holes behavior and translucency. Sure, major additions of these things and we would see real climate change. But this stuff we are getting our knickers in a twist over is easily handled by I think, the simple things that bring about short and long term weather pattern variations.

ferd berple
May 1, 2011 7:52 am

DocMartyn says:
May 1, 2011 at 6:05 am
That is a fit, not a model.
All machine learning programs – which include climate models – involve curve/surface fitting of some form during the training process. It may not be called curve fitting. It might be called neural nets for example, but it is a form of curve fitting.
Here is a very simple example:
If I have temperature data for 1850, 1851, 1852 … 2011, then I can fit the correct function (curve) to this data such that:
Function(1850) = Temperature(1850)
Function(1851) = Temperature(1851)

Function(2011) = Temperature(2011)
Then, if you want to know the temperature for 2100, for example, I just plug the year into my function and it will calculate the temperature in 2100:
Temperature(2100) = ?
Temperature(2100) = Function(2100)
How is this done in climate science?
Say you have forcings such as solar, CO2, aerosols, albedo, land-use, etc. How much does each of these contribute to the global temperature? The answer is that no one knows. There are guestimates at best.
So, to build a model of climate for temperature, climate science assumes everything is linear and uses a formula like this:
Temperature = (X% solar) + (Y% CO2) + (Z% albedo) + (etc) + (etc)
Then, by plugging in “actual” numbers by year for temperature, solar, albedo, etc., you solve for X, Y, Z to give you the best fit with historical data.
This then gives you a formula, and by plugging in your estimates for future solar, CO2, albedo, etc., in say 2100, the formula will calculate future temperature in 2100.
So, the key then becomes the values you choose for X,Y,Z. Even a small change can have a large effect going forward, because temperature is cumulative. If the earth heats up in one year, that heat is still there at the start of the next year.
So, this allows you to make small adjustments to X, Y, Z today, which can greatly change the forecast of your model in 2100. This allows you to forecast just about anything you want by selecting the correct X, Y, Z. Having not just 3 weights but 150, as found in climate models, gives you a whole lot of wiggle room to make adjustments.
Which is what happens in climate science. Those models with values of X,Y,Z that don’t give the answer that climate scientists expect are discarded. Those with the values of X,Y,Z that deliver the answer they expect are retained. Thus, the models are not forecasting climate, they are forecasting the expectations of the climate modelers.
Had climate models been done correctly, using the double-blind techniques used in animal training studies, they would have predicted that temperatures would level off around 2000, because they had done this before, in a 60-year cycle. However, the climate scientists discarded those models, because they didn’t expect temperatures to level off. They expected them to continue rising unchanged, which is reflected in their models.
The fault in climate science models is that they do not recognize that any training study, be it animal training or machine training, is subject to contamination by the observer’s own unconscious actions. Thus you must isolate the model from the expectations of the scientists involved in creating the model, which is extremely hard to do.
To be accurate, the model builder cannot see the results of the model before the model is finalized. Otherwise the model builder can and will unconsciously use this information to modify the model to meet his/her expectations, which contaminates the result. Equally, the model builder cannot choose to select one model over another based on the result. This is cherry picking, which also contaminates the result.
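A toy version of that fit, in Python, with every series invented (real models are vastly more elaborate, but the fit-the-weights-then-extrapolate logic is the same in spirit):

import numpy as np

# Toy linear-weights fit; every series below is invented for illustration.
rng = np.random.default_rng(1)
years = np.arange(1900, 2011)
solar = 0.1 * np.sin(2 * np.pi * (years - 1900) / 11.0)       # pretend solar cycle
co2 = np.log2(1.004 ** (years - 1900))                        # pretend CO2 forcing
albedo = 0.05 * rng.standard_normal(len(years))               # pretend albedo wiggle
temperature = (0.2 * solar + 0.8 * co2 + 0.1 * albedo
               + 0.1 * rng.standard_normal(len(years)))       # "observed" record

# Solve Temperature ~ X*solar + Y*CO2 + Z*albedo for the weights X, Y, Z.
A = np.column_stack([solar, co2, albedo])
weights, *_ = np.linalg.lstsq(A, temperature, rcond=None)
print("fitted weights X, Y, Z:", np.round(weights, 2))

# Plug in assumed 2100 forcings (CO2 doubled, solar and albedo unchanged)
# and the same weights hand you a 'projection'.
forcings_2100 = np.array([0.0, np.log2(560.0 / 280.0), 0.0])
print("projected 2100 temperature: %.2f C" % (weights @ forcings_2100))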

Olen
May 1, 2011 7:54 am

Climate models are being used like the reading of tea leaves, and climate scientists are acting like the Oracle of Delphi, dragging science from the modern world back to antiquity.

mondo
May 1, 2011 8:07 am

Willis. Your 1 May 3:50 am comment on the “A Prediction Market For Climate Outcomes” thread over at Climate Etc deserves wider coverage here. Perhaps a separate post. Succinct and penetrating, as usual.

Richard M
May 1, 2011 8:11 am

One of the main problems with models is they contain an “interpretation” of the physics, not physics itself. As such they are prone to human bias.
If they tried to get at the physics, the models would run for decades just to model a few seconds. That is, it can’t be done. So, what we end up with is researcher bias and nothing even remotely resembling a real physical model.

ferd berple
May 1, 2011 8:13 am

Keep in mind that those models that do not meet the expectations of the model builder are rejected by the model builder in a process similar to “selection of the fittest”.
For example: A climate model that predicts a doubling of CO2 will result in no change in temperature will be rejected by the model builder and “fixed”. A model that predicts a doubling of CO2 will result in a 10C increase in temperature will be rejected by the model builder and “fixed”.
How is a model “fixed”? Not by changing the physical laws within the model, but rather by changing the weightings.
For example, how important is water vapor, or aerosols, or evaporation rates, or convection, or land use? No one knows the true answer to these questions, so the models assign weights to these, and the weights are adjusted so that the models will hindcast AND deliver a future projection that is within the model builder’s range of expectations.
This last point is the critical issue because it gives rise to the “experimenter-expectancy effect” that is well recognized in animal studies. We know from animal studies that you need to isolate any organism capable of learning from the experimenter through double-blind controls.
What climate science has missed is that climate models are machine learning programs. These programs are subject to the “experimenter-expectancy effect” if proper controls are not used during the model training process.
This means double blind controls. That the model builder cannot have access to the output of the model during the training process. Only after the model is fully trained and no more adjustments will be made can the model builder see the results of the model.
This isolation is not done in climate science, which results in models that are not predicting future climate. Rather, they are predicting what will match the model builder’s expectations of what will happen. This generates a feedback loop between the model and the model builder which reinforces the ego of the model builder, to the point of an addiction-dependency. The model becomes more real to the model builder than reality.

Richard M
May 1, 2011 8:17 am

BTW, does anyone know if a model has been built to model one cubic centimeter of air? It seems like one might be able to model the physical activity and see what happens as the content is modified to add more CO2. If a cubic cm is too much make it even smaller. At some point one should be able to model every molecule and its interaction with others.
Seems like a good PhD thesis topic.

ferd berple
May 1, 2011 8:36 am

The climate models become unstable as you increase the resolution, unlike reality, where the accuracy of the answer improves when you increase the resolution.

DocMartyn
May 1, 2011 9:12 am

Ferd “To be accurate, the model builder cannot see the results of the model before the model is finalized.”
When one makes a model, as opposed to a fit, one uses data sets to independently arrive at the range for a constant. You examine data sets where only one variable is changed, or a close approximation to it.
The example you state:
Temperature = (X% solar) + (Y% CO2) + (Z% albedo) + (etc) + (etc)
is not a model, as it is not made up of components that are individually testable. It is just a mathematical fitting function where the inputs have been given more complex names than is normal.
In models, there are constraints. Want to measure the effects of aerosols? Go to the desert and at 11 o’clock fly two planes, one of which is dumping SO2 and one which isn’t. Measure the difference in temperature, spectrum and SO2 density with respect to time.
This is just science.

Jessie
May 1, 2011 9:19 am

Dagfinn says: April 30, 2011 at 11:12 pm
and Olen says: May 1, 2011 at 7:54 am

I had been thinking of Pygmalion (Rousseau’s Galatea) after reading about Hans the Horse.
The Earthly Paradise: Pygmalion and the image, William Morris (1868)
http://www.victorianweb.org/authors/morris/poems/pygmalion.html
Thank you Pamela and ferd (x2) for your informative postings.

RACookPE1978
Editor
May 1, 2011 9:22 am

What are the cell sizes in the latest (bestest?) climate models’ finite element analysis routines?
How do they “model” the differences when a (real-world) coastline crosses a model-specific artificial modeled “cube” that doesn’t match the coastline’s odd shape?
Do the models actually “create” the macroscopic-wide area climate activities we know from observation are present: That is, if you run a model for 100 years, do you see the Gulf Stream and North Japanese ocean currents actually flow, do you see tropical doldrums, polar jet streams and cold fronts and hurricanes and cyclones being created, rolling to the west, and curving up to colder latitudes?

May 1, 2011 9:57 am

These IPCC climate models are totally useless for one very simple reason: they attempt to calculate warming caused by carbon dioxide greenhouse effect. That warming is non-existent as Ferenc Miskolczi has proved. Using NOAA database of weather balloon observations that goes back to 1948 he determined that the transparency of the atmosphere in the infrared where carbon dioxide absorbs has not changed at all for the last 61 years. During that same period the amount of carbon dioxide in the air increased by 21.6 percent. This means that the greenhouse absorption signature of this added carbon dioxide is missing entirely. And it is this added carbon dioxide that is supposed to create the dangerous greenhouse warming these models predict. This absence of IR absorption is an empirical observation of nature, not derived from any theoretical calculation, and it overrides any calculations from theory. If a theory cannot accurately predict observed features of the natural world it has to be either modified or discarded. Specifically, the theory that Arrhenius proposed more than a hundred years ago is clearly not working as the data from these weather balloons indicates. It needs to be re-evaluated in the light of our current knowledge of IR absorption by the atmosphere. It is time for the warming establishment to take a note of this. They should be held accountable for ignoring the observed properties of greenhouse gases revealed by observations of nature. Standing pat on Arrhenius will not do. You can’t just brush it off by saying that Arrhenius knows best. Miskolczi’s result has been out now for over a year but so far no peer-reviewed criticism has appeared. This month he presented it to the European Geosciences Union meeting in Vienna. The title of his presentation was: “The stable stationary value of the Earth’s IR optical thickness.” You take it from there.

Frank
May 1, 2011 10:00 am

The problem with the IPCC’s climate models is that they were not designed and selected to represent the full range of possibilities that is compatible with the IPCC’s understanding of the climate. This problem can be illustrated by a simple calculation: When one multiplies the estimated climate sensitivity of 1.5-4.5 degC for a forcing of 3.7 W/m2 from 2X CO2 (90% confidence interval) by the estimated 20th century anthropogenic forcing of 0.6-2.4 W/m2 (95% ci), one gets a temperature rise of 1.3 degC +/- 1.2 degC (95% ci). Nevertheless, the IPCC’s models give a much narrower range of results.
When scientists want a better satellite, they cite the serious uncertainties in climate sensitivity and radiative forcings and say useful predictions are impossible without it. When they want to defend the projections, they cite the modest differences between GCMs and imply there is no need for better information.
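The arithmetic is easy to check. The sketch below just multiplies the interval endpoints quoted above, which overstates the combined uncertainty somewhat compared with proper error propagation, so treat it only as a rough sanity check.

# Rough check of the numbers quoted above; endpoint multiplication only,
# not proper propagation of the two uncertainty distributions.
sens_low, sens_high = 1.5, 4.5          # degC per 3.7 W/m2 (2xCO2)
forcing_low, forcing_high = 0.6, 2.4    # W/m2, 20th-century anthropogenic

per_wm2_low, per_wm2_high = sens_low / 3.7, sens_high / 3.7
print("sensitivity: %.2f to %.2f degC per W/m2" % (per_wm2_low, per_wm2_high))

warming_low = per_wm2_low * forcing_low
warming_high = per_wm2_high * forcing_high
print("implied 20th-century warming: %.2f to %.2f degC" % (warming_low, warming_high))
# Central values (3.0 degC per 3.7 W/m2, 1.5 W/m2 forcing) give about 1.2 degC,
# in the same ballpark as the 1.3 +/- 1.2 degC range quoted above.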

May 1, 2011 10:35 am

I’ve poked around on the Crichton site and the Archives.
It is, alas, painfully OBVIOUS that the inheritors of the work to keep M.C.’s memory alive have an agenda, and they are actively CENSORING (or trying to censor) his work.
Anthony, and Willis, I’d suggest that we keep track of this: when the AGWiots are so dense as to try to “cover their tracks”, they are actually waving a red flag, asking the bull to CHARGE.
Let’s not be afraid to CHARGE!
Max

rbateman
May 1, 2011 10:35 am

If they (climate change scientists) are not sure because of large uncertainties in the climate drivers, they are even less certain because of programming bugs when it comes to model input/output functions.
Take, for example, ENSO and the model forecasts for the next year of the same:
It looks like a fan spray. Why? Because there is no certainty.
Linearity breeds trends, and nature is anything but straight lines and trends.

Septic Matthew
May 1, 2011 10:47 am

Dr. Dave wrote: “With our current understanding of pharmacology, biochemistry, pharmacokinetics, pharmacodynamics and structure-activity relationships we can create “virtual” new drugs on a computer. In fact, this is done all the time. The lab grunts have to figure out how to synthesize the damn things.
The computer can predict much of the expected pharmacological activity. Care to venture a guess how often they’re right? Seldom…in fact, almost never. Empiric testing is mandatory.”
This example should be employed more often in discussions of modeling climate, and the lessons hammered home. Even the models that work, such as well-tested pharmacokinetic models, have substantial inaccuracies in particular patients, so that doses have to be titrated to effects.
There’s a great diversity in modeling, and some models are demonstrably accurate (the models used for guidance and course-correction in interplanetary exploration), whereas others (the models used in climate forecasting) have no demonstrable record of accurate prediction.

Arno Arrak
May 1, 2011 11:58 am

I was following up your example of models that don’t do well and landed in your “Prediction is hard…” post. You go to a considerable amount of effort and show temperature charts about the alleged Pinatubo cooling that Hansen et al. think they have explained. Well, they haven’t, and you have fallen into the same trap of thinking that there was such a thing as a Pinatubo cooling. There was none, and the cooling attributed to it is just a La Nina cooling, part of ENSO. They and many others think that El Ninos are something imposed upon the regular temperature curve which they think can be revealed by removing the El Nino influence in their charts. This is of course nonsense. El Ninos are and have been part of our climate since the Isthmus of Panama rose from the sea.
The error of assigning the La Nina of 1992 to Pinatubo cooling goes back to Self et al. who published it in “Fire and Mud,” the big Pinatubo book. According to them “Pinatubo climate forcing was stronger than the opposite warming effects of either the El Nino event or anthropogenic greenhouse gases in the period 1991-1993.” Dead wrong. When you look at a high-res satellite record of global temperatures you can see that the Pinatubo eruption coincided with the peak of an El Nino warming, and the La Nina cooling which followed this El Nino was simply appropriated by them as volcanic cooling because of accidental timing. They of course did not understand the influence of ENSO upon global temperature because no comprehensible theory existed before mine. They also show stratospheric temperatures according to which the first two years after the eruption were taken up by warming, and stratospheric cooling did not begin until 1993. It is clear that the influence of Pinatubo stayed in the stratosphere and never descended to ground level.
But they are clueless and start to wonder why it is that El Chichon was not followed by any cooling like that after Pinatubo. It is easy to understand this from the same satellite temperature record. By chance El Chichon erupted exactly when a La Nina warming had just bottomed out and the strong El Nino of 1983 was beginning. Now there was a chance for this volcano to overcome an El Nino as Self et al. hypothecate, but it just could not make it. You might want to look at figures 8 to 10 in my book to understand it better.
The elaborate models built by Hansen et al. are obviously nonsense because they do not know what they are talking about. As they wrote their paper the Soviet Union was collapsing but they are oblivious and pontificate: “We estimate the predicted global cooling on such practical matters as the severity of the coming Soviet winter and the dates of cherry blossoming next spring…” Deserving to be quoted in the last issue of the Collective Farmers’ Almanac.

Theo Goodwin
May 1, 2011 12:01 pm

DocMartyn says:
May 1, 2011 at 6:05 am
“ferd berple says:
“That is a fit, not a model. A model has to have some basis in reality and each of the constants used has to have a known elasticity.”
What are you suggesting, that models can do the work of hypotheses? If you had the physical hypotheses, you would not need the models. No doubt there are models that are more interesting than linear programming models, but they cannot do the work of prediction that is done by physical hypotheses. This point is very easy to prove. In the case of physical hypotheses, if you predict an event that does not occur and the non-occurrence withstands intense investigation, then at least one of your hypotheses must be recognized as false. In a computer model, what is recognized as false? There is nothing to be recognized as false. There is just more jiggering to do.

Jimbo
May 1, 2011 12:16 pm

“………………..the current uncertainties in the TSI and aerosol forcings are so large that they preclude meaningful climate model evaluation by comparison with observed global temperature change. “

Phew, I was getting worried about clouds for a moment. What about all the unknown unknowns???? They could be the monkey throwing a spanner in the works.

Jimbo
May 1, 2011 12:23 pm

Perhaps the climate scientists are putting up a joint-front-style brave face – otherwise we may fail to act.

Dr. James Lovelock
“The great climate science centres around the world are more than well aware how weak their science is. If you talk to them privately they’re scared stiff of the fact that they don’t really know what the clouds and the aerosols are doing. They could be absolutely running the show. We haven’t got the physics worked out yet. One of the chiefs once said to me that he agreed that they should include the biology in their models, but he said they hadn’t got the physics right yet and it would be five years before they do. “

How useful are the climate models I ask?

May 1, 2011 12:46 pm

Willis may be overstating his case. I think that GCMs are incontrovertible proof of the existence of computer programmers. 🙂

Theo Goodwin
May 1, 2011 1:34 pm

Arno Arrak says:
May 1, 2011 at 11:58 am
“There was none, and the cooling attributed to it is just a la Nina cooling, part of ENSO. They and many others think that El Ninos are something imposed upon the regular temperature curve which they think can be revealed by removing the El Nino influence in their charts. This is of course nonsense.”
Wonderful post. What you point out reveals just how shallow the Warmista understanding of climate really is. They cannot recognize a physical process such as ENSO. They do not think in terms of physical processes. There are two reasons for this. Number one, they care only for computer models. Number two, if they faced the fact that they must understand the physical processes then they would have to admit that we are talking decades before climate science achieves some kind of maturity.

bob
May 1, 2011 1:54 pm

Willis says ” models are not suitable for use as the basis for billion dollar decisions.”
Willis can ruin a free lunch. As a matter of fact, he probably has done that more than once.

Cherry Pick
May 1, 2011 2:05 pm

You forgot one important point of view: the data. Models should be verified by detailed and accurate data about temperature, pressure, clouds, albedo, humidity, ice, compositions of air, land and seas, behavior of mankind, biology, carbon cycle, and so on. By detailed I mean a measured data point for each grid cell of a model.
What do you think about a model that matches fabricated data?
Is matching land surface temperatures enough for a projection?

Crispin in Waterloo
May 1, 2011 2:38 pm

@Garry:
“I watched a fascinating documentary recently about the development and deployment of the $1+ billion Hughes Glomar Explorer and its successful “black ops” attempt to recover a sunken Russian sub in the 1970s (it’s called “Azorian: The Raising of the K-129”).”
Did the program mention what is probably the real reason the US spent so much time and energy trying to recover that sub? It was not just a lark.
This is what I heard: That sub tried to launch an ‘unauthorised’ SLBM attack on the US mainland, and there was a device on the sub that, should a captain try to do that without permission/instructions, detonated the rocket in some manner to prevent a Dr Strangelove situation. That safety mechanism worked, and it sank the sub when the missile was launched.
A very good reason to retrieve it and look at the logs and correspondence, not to mention the captain’s state of mind, was for the US to confirm what they were in all probability being told by the politicians on the other end of the Red Phone, that it was a renegade submarine captain acting on his own.
You will note that ‘the most important bit’ of the submarine ‘broke off and was not retrieved’ just as it came to the surface. Ri-ight… And the most important bit was ‘not worth retrieving’ in a second grab while they were there on site with a purpose-built crane. Ri-ight…
We are never going to know what they got out of that sub.

CDJacobs
May 1, 2011 4:33 pm

Crispin, I can tell you as a former submariner that we wanted that boat because the intelligence to be gained from having the actual hardware in hand would be a SPECTACULAR coup. It’s just as simple as that.
Lots of hardware was closely observed and gathered, despite the hull breaking up. (Which, BTW, it actually did. If you think about the design of a submarine structure, where loads would normally be located and how they would be reacted, it’s not hard to see that a flooded vessel with catastrophic damage might not survive this lifting process.)
There were numerous SECRET/NOFORN INTEL briefings in the years that followed. The bases of many Soviet tactics were revealed in weapon/sensor characteristics that we learned from the Glomar mission. Honestly, it’s marvelously interesting without all the conspiracy theory tacked on.

May 1, 2011 7:33 pm

I remember very well the stories in TIME magazine about Howard Hughes’ Glomar Explorer, supposedly built to harvest the manganese nodules that were said to be lying around on the sea floor for the taking. But as it turned out, TIME was completely bamboozled by the crafty and patriotic Hughes.
The Glomar Explorer was built for one purpose: to lift a sunken Russian submarine from the sea bed, with its invaluable code books and technology.
TIME finally became aware of the ruse after the fact, as is clear from their wildly speculative article here.
Ah, the good old days of the Cold War. Certainly much preferred to today’s civil war between Americans and the Left.

ferd berple
May 1, 2011 11:39 pm

DocMartyn says:
May 1, 2011 at 9:12 am
Ferd “To be accurate, the model builder cannot see the results of the model before the model is finalized.”
That is not correct. There is no agreed standard for the relative contributions of the various forcings – the weightings. These are chosen by the model builder. Those choices that hindcast well, and meet the model builders expectations for the future are retained. Those that do not are modified.
This is curve fitting. As soon as you assign weightings to the various forcings and give the model builder a say in choosing the weightings, the model is prone to the experimenter-expectancy effect.
No model builder is going to publish a model that hindcasts well but predicts what they consider an unreasonable forecast for the future. They will assume the model is broken and fix it. Similarly, a model that does not hindcast well will not be published. The model builder will assume the model is broken and fix it. In both cases they will typically adjust the weightings, though they may also adjust forcings that are not well established.
This process is similar to genetic algorithms that converge on the answer through trial and error. Again it is curve fitting by cherry picking the model that gives the “best looking” answer.
The process is flawed because it ignores what we have learned from animal training studies. Unless carefully designed, the experimenter becomes part of the model feedback loop. Otherwise we end up with “Clever Hans”, the horse that convinced a great many people that it could do arithmetic. What it could actually do was detect body language and stress levels below the level of human perception.
A great many people are similarly convinced that models can predict the future. What the models are actually predicting is what answers climate scientists will find most plausible – today. In effect, the model is detecting the unconscious desires and expectations of the model builder, to deliver an answer most believable to the model builder.
This is a very much simpler problem than predicting the future, as Yogi pointed out. Ten years from now, today’s models will all be discarded and replaced with new models that future climate scientists will find even more plausible.

ferd berple
May 2, 2011 12:20 am

“There’s a great diversity in modeling, and some models are demonstrably accurate (the models used for guidance and course-correction in interplanetary exploration), whereas others (the models used in climate forecasting) have no demonstrable record of accurate prediction.”
The models used for interplanetary exploration do not work like climate models. There is no discussion in physics over whether gravity contributes 30 or 40% to the orbit and magnetism 20 or 30%. So, when we calculate an orbit, we know within very precise limits what to expect.
However in climate science that is not the case at all. We have a large number of factors such as the sun, clouds, land-use, CO2, natural carbon sinks, evaporation, precipitation, solar wind, magnetic fields, orbital mechanics, gravity, etc. etc, and we don’t know how much each contributes to the average temperature for example. All we have are educated guesses as to the ranges.
So, by slight variations in the relative contributions of each of the various factors, we can achieve wide ranges of values in our climate models. By trial and error, selecting the right relative contributions, we can come up with a model that hindcasts well and produces a future prediction that matches expectations.
We can also by trial and error come up with lots of models that hindcast well and predict much different futures. There is the problem. Very small changes in the weightings give large changes in the results. Thus, very small errors in the weightings will give large errors in the results – and we don’t know the weightings with any degree of certainty.
Just as would happen when we launch a spacecraft: even a small error in the direction or speed of launch will yield a large error when the craft arrives at its destination.
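A toy calculation shows the problem in miniature. Everything below is invented: the two “forcings” are made to track each other over the historical period and then diverge, so two very different weightings hindcast identically yet project very different temperatures for 2100.

import numpy as np

# Toy illustration: identical hindcasts, very different projections.
# All series and weights are invented for this sketch.
years = np.arange(1900, 2101)
hist = years <= 2010

co2 = np.log2(1.004 ** (years - 1900))            # keeps rising after 2010
other = np.where(hist, co2, co2[hist][-1])        # tracks CO2, then flattens
observed = 2.0 * co2[hist]                        # pretend historical record

def hindcast_rms(w_co2, w_other):
    fit = w_co2 * co2[hist] + w_other * other[hist]
    return np.sqrt(np.mean((fit - observed) ** 2))

def projection_2100(w_co2, w_other):
    return w_co2 * co2[-1] + w_other * other[-1]

# Two different splits of the same total weight: indistinguishable over the
# historical period, far apart by 2100.
for w_co2, w_other in [(2.0, 0.0), (0.5, 1.5)]:
    print("weights (%.1f, %.1f): hindcast RMS %.3f C, 2100 projection %.2f C"
          % (w_co2, w_other, hindcast_rms(w_co2, w_other),
             projection_2100(w_co2, w_other)))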

Anders Valland
May 2, 2011 12:32 am

Willis, you say that James Hansen et al. said this, but I cannot seem to find his name among the authors. I can only see references to his work. What am I missing?

ferd berple
May 2, 2011 12:42 am

“What do you think about a model that matches fabricated data?”
That is a very good question. A model that hindcasts well to an inaccurate temperature record, for example, cannot hope to forecast accurately except by accident.
It is similar to being poorly trained in a subject, learning the wrong answers to questions, then being asked to take an exam. You’ve never learned the right answers, so how can you supply them except by a lucky guess?

Chas
May 2, 2011 9:57 am

Anders, the full author list is truncated in the text version that Willis linked to.
If you look at the pdf, below, it says the same thing and has Hansen as an author:
http://journals.ametsoc.org/doi/pdf/10.1175/BAMS-88-5-677

Garry
May 2, 2011 11:07 am

Crispin in Waterloo says May 1, 2011 at 2:38 pm: “Did the program mention what is probably the real reason the US spent so much time and energy trying to recover that sub? … That sub tried to launch an ‘unauthorised’ SLBM attack on the US mainland”
I do not believe that angle was mentioned, unless I missed it at the very beginning of the documentary.
Interesting idea though.

Bruce Stewart
May 2, 2011 12:32 pm

The article by Schwartz is worth a look. In it one may notice that the case for reducing uncertainty around aerosol forcing depends very much on assuming that natural variability is small. (Schwartz bases his low estimate of natural variability on – wait for it – models and paleoclimate proxy reconstructions.) If he had considered the possibility of larger natural variability, his case for understanding aerosols might disappear. The paper Schwartz should have written would tend to support what Willis is saying, although I would expand “uncertainties in TSI” to encompass unknown mechanisms for solar forcing (GCR, UV) as well as unforced internal variability of the natural climate system.

Jessie
May 2, 2011 8:39 pm

Theo Goodwin says: May 1, 2011 at 1:34 pm
Wonderful post. What you point out reveals just how shallow the Warmista understanding of climate really is. They cannot recognize a physical process such as ENSO. They do not think in terms of physical processes. There are two reasons for this. Number one, they care only for computer models. Number two, if they faced the fact that they must understand the physical processes then they would have to admit that we are talking decades before climate science achieves some kind of maturity.

ferd berple’s comment on double-blind experiments prompted my question to you below, Theo. I had long considered that AGW and the subsequent changes in language (coal, carbon, pollution, warming, AGW, etc.) were a faux revolution premised on poor behaviour in science. Most Freirians (followers of Paulo Freire’s education cult) would have understood and continued to promote the ‘paradigmatic revolution’, with the long-observed consequence of continuing the very real suffering of humans, I should add.
Q: Is this not what Kuhn realised (viz thinking in terms of physical processes) when he was asked to lecture undergrads majoring in the humanities?
Kuhn used the history of science as the context to teach. This teaching led him to discover that the disparity between Aristotelian physics and Newtonian physics was not a difference of degrees but a difference of kind. That the physics of Newton had not developed from the physics of Aristotle.

Don B
May 3, 2011 11:53 am

Willis, did you see this one?
Organic Fuel:
http://xkcd.com/282/
