Reality Leaves A Lot To The Imagination

Guest Post by Willis Eschenbach

On an average day you’ll find lots of people, including NASA folks like Gavin Schmidt and James Hansen, evaluating how well the climate models compare to reality. As I showed here, models often don’t do well when matched up with real-world observations. However, they are still held up as accurate by the IPCC, which uses climate models throughout its reports despite their lack of rigorous testing.

XKCD, of course.

But if you ask me, that evaluation of the models by comparison with reality is not possible. I think that the current uncertainties in the total solar irradiance (TSI) and aerosol forcings are so large that it is useless to compare climate model results with observed global temperature changes.

Why do I make the unsubstantiated claim that the current uncertainties in TSI and aerosols are that large? And even if they are that large, why do I make the even more outlandish claim that the size of the uncertainties precludes model testing by comparison with global temperature observations?

Well … actually, I’m not the one who made that claim. It was the boffins at NASA, in particular the good folks at GISS, including James Hansen et al., who said so (emphasis mine) …

Total solar irradiance (TSI) is the dominant driver of global climate, whereas both natural and anthropogenic aerosols are climatically important constituents of the atmosphere also affecting global temperature. Although the climate effects of solar variability and aerosols are believed to be nearly comparable to those of the greenhouse gases (GHGs; such as carbon dioxide and methane), they remain poorly quantified and may represent the largest uncertainty regarding climate change. …

The analysis by Hansen et al. (2005), as well as other recent studies (see, e.g., the reviews by Ramaswamy et al. 2001; Kopp et al. 2005b; Lean et al. 2005; Loeb and Manalo-Smith 2005; Lohmann and Feichter 2005; Pilewskie et al. 2005; Bates et al. 2006; Penner et al. 2006), indicates that the current uncertainties in the TSI and aerosol forcings are so large that they preclude meaningful climate model evaluation by comparison with observed global temperature change. These uncertainties must be reduced significantly for uncertainty in climate sensitivity to be adequately constrained (Schwartz 2004).

“Preclude meaningful climate model evaluation” … hmmm. Of course, they don’t make that admission all the time. They only say things like that when they want to get money for a new satellite. The rest of the time, they claim that their models are accurate to the nearest 0.15°C …

Now, the satellite that the NASA GISS folks (very reasonably) wanted to get money for, the very satellite that the aforementioned study was written to promote, was the Glory Mission … which was one of NASA’s more unfortunate failures.

NASA’s Glory Satellite Fails To Reach Orbit

WASHINGTON — NASA’s Glory mission launched from Vandenberg Air Force Base in California Friday at 5:09:45 a.m. EST failed to reach orbit.

Telemetry indicated the fairing, the protective shell atop the Taurus XL rocket, did not separate as expected about three minutes after launch.

So … does this mean that the evaluation of models by comparison with observed global temperature change is precluded until we get another Glory satellite?

Just askin’ … but it does make it clear that at this point the models are not suitable for use as the basis for billion dollar decisions.

w.

77 Comments
RACookPE1978
Editor
May 1, 2011 9:22 am

What are the cell sizes in the latest (bestest?) climate models’ finite element analysis routines?
How do they “model” the differences when a (real-world) coastline crosses a model-specific artificial modeled “cube” that doesn’t match the coastline’s odd shape?
Do the models actually “create” the macroscopic, wide-area climate features we know from observation are present? That is, if you run a model for 100 years, do you see the Gulf Stream and North Japanese ocean currents actually flow? Do you see tropical doldrums, polar jet streams, cold fronts, hurricanes and cyclones being created, rolling to the west, and curving up to colder latitudes?

May 1, 2011 9:57 am

These IPCC climate models are totally useless for one very simple reason: they attempt to calculate warming caused by the carbon dioxide greenhouse effect. That warming is non-existent, as Ferenc Miskolczi has proved. Using the NOAA database of weather balloon observations that goes back to 1948, he determined that the transparency of the atmosphere in the infrared, where carbon dioxide absorbs, has not changed at all for the last 61 years. During that same period the amount of carbon dioxide in the air increased by 21.6 percent. This means that the greenhouse absorption signature of this added carbon dioxide is missing entirely. And it is this added carbon dioxide that is supposed to create the dangerous greenhouse warming these models predict.

This absence of IR absorption is an empirical observation of nature, not derived from any theoretical calculation, and it overrides any calculations from theory. If a theory cannot accurately predict observed features of the natural world, it has to be either modified or discarded. Specifically, the theory that Arrhenius proposed more than a hundred years ago is clearly not working, as the data from these weather balloons indicates. It needs to be re-evaluated in the light of our current knowledge of IR absorption by the atmosphere.

It is time for the warming establishment to take note of this. They should be held accountable for ignoring the observed properties of greenhouse gases revealed by observations of nature. Standing pat on Arrhenius will not do. You can’t just brush it off by saying that Arrhenius knows best. Miskolczi’s result has been out now for over a year, but so far no peer-reviewed criticism has appeared. This month he presented it to the European Geosciences Union meeting in Vienna. The title of his presentation was: “The stable stationary value of the Earth’s IR optical thickness.” You take it from there.

Frank
May 1, 2011 10:00 am

The problem with the IPCC’s climate models is that they were not designed and selected to represent the full range of possibilities that is compatible with the IPCC’s understanding of the climate. This problem can be illustrated by a simple calculation: when one multiplies the estimated climate sensitivity of 1.5-4.5 degC for a forcing of 3.7 W/m2 from 2X CO2 (90% confidence interval) by the estimated 20th century anthropogenic forcing of 0.6-2.4 W/m2 (95% ci), one gets a temperature rise of 1.3 degC +/- 1.2 degC (95% ci). Nevertheless, the IPCC’s models give a much narrower range of results.
When scientists cite the serious uncertainties in climate sensitivity and radiative forcings, the message is that we desperately need better satellites to make useful predictions. When they cite the modest differences between GCM projections, the message is that there is no need for better information.
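A minimal sketch of that back-of-envelope multiplication (assuming, purely for illustration, uniform spreads over the quoted ranges; the quoted confidence intervals would imply somewhat different distributions):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative assumption: uniform spreads over the ranges quoted above.
sensitivity = rng.uniform(1.5, 4.5, n)   # degC per 3.7 W/m2 of forcing (2x CO2)
forcing = rng.uniform(0.6, 2.4, n)       # 20th-century anthropogenic forcing, W/m2

warming = sensitivity / 3.7 * forcing    # implied 20th-century warming, degC

lo, mid, hi = np.percentile(warming, [2.5, 50, 97.5])
print(f"median {mid:.2f} degC, 95% range {lo:.2f} to {hi:.2f} degC")

Whatever the exact distributional assumptions, the implied spread is far wider than the spread of the IPCC model runs, which is Frank’s point.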

May 1, 2011 10:35 am

I’ve poked around on the Crichton site and the Archives.
It is, alas, painfully OBVIOUS that the inheritors of the work to keep M.C.’s memory alive have an agenda, and they are actively CENSORING (or trying to) his work.
Anthony and Willis, I’d suggest we keep track of these: when the AGWiots are so dense as to try to “cover their tracks,” they are actually waving a red flag, asking the bull to CHARGE.
Let’s not be afraid to CHARGE!
Max

rbateman
May 1, 2011 10:35 am

If they (climate change scientists) are not sure because of large uncertainties in the climate drivers, they are even less certain because of programming bugs when it comes to model input/output functions.
Take, for example, ENSO and the model forecasts for it over the next year:
It looks like a fan spray. Why? Because there is no certainty.
Linearity breeds trends, and nature is anything but straight lines and trends.

Septic Matthew
May 1, 2011 10:47 am

Dr. Dave wrote: “With our current understanding of pharmacology, biochemistry, pharmacokinetics, pharmacodynamics and structure-activity relationships we can create “virtual” new drugs on a computer. In fact, this is done all the time. The lab grunts have to figure out how to synthesize the damn things.
The computer can predict much of the expected pharmacological activity. Care to venture a guess how often they’re right? Seldom… in fact, almost never. Empiric testing is mandatory.”
This example should be employed more often in discussions of modeling climate, and the lessons hammered home. Even the models that work, such as well-tested pharmacokinetic models, have substantial inaccuracies in particular patients, so that doses have to be titrated to effects.
There’s a great diversity in modeling, and some models are demonstrably accurate (the models used for guidance and course-correction in interplanetary exploration), whereas others (the models used in climate forecasting) have no demonstrable record of accurate prediction.

May 1, 2011 11:58 am

I was following up your example of models that don’t do well and landed in your “Prediction is hard…” post. You take a considerable amount of effort and show temperature charts about the alleged Pinatubo cooling that Hansen et al. think they have explained. Well, they haven’t, and you have fallen into the same trap of thinking that there was such a thing as a Pinatubo cooling. There was none, and the cooling attributed to it is just a La Nina cooling, part of ENSO. They and many others think that El Ninos are something imposed upon the regular temperature curve, which they think can be revealed by removing the El Nino influence in their charts. This is of course nonsense. El Ninos are and have been part of our climate since the Isthmus of Panama rose from the sea.

The error of assigning the La Nina of 1992 to Pinatubo cooling goes back to Self et al., who published it in “Fire and Mud,” the big Pinatubo book. According to them, “Pinatubo climate forcing was stronger than the opposite warming effects of either the El Nino event or anthropogenic greenhouse gases in the period 1991-1993.” Dead wrong. When you look at a high-res satellite record of global temperatures you can see that the Pinatubo eruption coincided with the peak of an El Nino warming, and the La Nina cooling which followed this El Nino was simply appropriated by them as volcanic cooling because of the accidental timing. They of course did not understand the influence of ENSO upon global temperature because no comprehensible theory existed before mine. They also show stratospheric temperatures according to which the first two years after the eruption were taken up by warming, and stratospheric cooling did not begin until 1993. It is clear that the influence of Pinatubo stayed in the stratosphere and never descended to ground level.

But they are clueless and start to wonder why it is that El Chichon was not followed by any cooling like that after Pinatubo. It is easy to understand this from the same satellite temperature record. By chance El Chichon erupted exactly when a La Nina cooling had just bottomed out and the strong El Nino of 1983 was beginning. Now there was a chance for this volcano to overcome an El Nino, as Self et al. hypothesize, but it just could not make it. You might want to look at figures 8 to 10 in my book to understand it better.

The elaborate models built by Hansen et al. are obviously nonsense because they do not know what they are talking about. As they wrote their paper the Soviet Union was collapsing, but they are oblivious and pontificate: “We estimate the predicted global cooling on such practical matters as the severity of the coming Soviet winter and the dates of cherry blossoming next spring…” Deserving to be quoted in the last issue of the Collective Farmers’ Almanac.

Theo Goodwin
May 1, 2011 12:01 pm

DocMartyn says:
May 1, 2011 at 6:05 am
“ferd berple says:
“That is a fit, not a model. A model has to have some basis in reality and each of the constants used have to have a known elasticity.”
What are you suggesting, that models can do the work of hypotheses? If you had the physical hypotheses, you would not need the models. No doubt there are models that are more interesting than linear programming models, but they cannot do the work of prediction that is done by physical hypotheses. This point is very easy to prove. In the case of physical hypotheses, if you predict an event that does not occur and the non-occurrence withstands intense investigation, then at least one of your hypotheses must be recognized as false. In a computer model, what is recognized as false? There is nothing to be recognized as false. There is just more jiggering to do.

Jimbo
May 1, 2011 12:16 pm

“… the current uncertainties in the TSI and aerosol forcings are so large that they preclude meaningful climate model evaluation by comparison with observed global temperature change.”

Phew, I was getting worried about clouds for a moment. What about all the unknown unknowns???? They could be the monkey throwing a spanner in the works.

Jimbo
May 1, 2011 12:23 pm

Perhaps the climate scientists are putting up a united-front-style brave face – otherwise we may fail to act.

Dr. James Lovelock
“The great climate science centres around the world are more than well aware how weak their science is. If you talk to them privately they’re scared stiff of the fact that they don’t really know what the clouds and the aerosols are doing. They could be absolutely running the show. We haven’t got the physics worked out yet. One of the chiefs once said to me that he agreed that they should include the biology in their models, but he said they hadn’t got the physics right yet and it would be five years before they do.”

How useful are the climate models I ask?

Larry Fields
May 1, 2011 12:46 pm

Willis may be overstating his case. I think that GCMs are incontrovertible proof of the existence of computer programmers. 🙂

Theo Goodwin
May 1, 2011 1:34 pm

Arno Arrak says:
May 1, 2011 at 11:58 am
“There was none, and the cooling attributed to it is just a la Nina cooling, part of ENSO. They and many others think that El Ninos are something imposed upon the regular temperature curve which they think can be revealed by removing the El Nino influence in their charts. This is of course nonsense.”
Wonderful post. What you point out reveals just how shallow the Warmista understanding of climate really is. They cannot recognize a physical process such as ENSO. They do not think in terms of physical processes. There are two reasons for this. Number one, they care only for computer models. Number two, if they faced the fact that they must understand the physical processes then they would have to admit that we are talking decades before climate science achieves some kind of maturity.

bob
May 1, 2011 1:54 pm

Willis says ” models are not suitable for use as the basis for billion dollar decisions.”
Willis can ruin a free lunch. As a matter of fact, he probably has done that more than once.

Cherry Pick
May 1, 2011 2:05 pm

You forgot one important point of view: the data. Models should be verified by detailed and accurate data about temperature, pressure, clouds, albedo, humidity, ice, compositions of air, land and seas, behavior of mankind, biology, the carbon cycle, and so on. By detailed I mean a measured data point for each grid cell of a model.
What do you think about a model that matches fabricated data?
Is matching land surface temperatures enough for a projection?

Crispin in Waterloo
May 1, 2011 2:38 pm

@Garry:
“I watched a fascinating documentary recently about the development and deployment of the $1+ billion Hughes Glomar Explorer and its successful “black ops” attempt to recover a sunken Russian sub in the 1970s (it’s called “Azorian: The Raising of the K-129”).”
Did the program mention what is probably the real reason the US spent so much time and energy trying to recover that sub? It was not just a lark.
This is what I heard: That sub tried to launch an ‘unauthorised’ SLBM attack on the US mainland, and there was a device on the sub that, should a captain try to do that without permission/instructions, would detonate the rocket in some manner to prevent a Dr Strangelove situation. That safety mechanism worked, and it sank the sub when the missile was launched.
A very good reason to retrieve it and look at the logs and correspondence, not to mention the captain’s state of mind, was for the US to confirm what they were in all probability being told by the politicians on the other end of the Red Phone: that it was a renegade submarine captain acting on his own.
You will note that ‘the most important bit’ of the submarine ‘broke off and was not retrieved’ just as it came to the surface. Ri-ight… And the most important bit was ‘not worth retrieving’ in a second grab while they were there on site with a purpose-built crane. Ri-ight…
We are never going to know what they got out of that sub.

CDJacobs
May 1, 2011 4:33 pm

Crispin, I can tell you as a former submariner that we wanted that boat because the intelligence to be gained from having the actual hardware in hand would be a SPECTACULAR coup. It’s just as simple as that.
Lots of hardware was closely observed and gathered, despite the hull breaking up. (Which, BTW, it actually did. If you think about the design of a submarine structure, where loads would normally be located and how they would be reacted, it’s not hard to see that a flooded vessel with catastrophic damage might not survive this lifting process.)
There were numerous SECRET/NOFORN INTEL briefings in the years that followed. The bases of many Soviet tactics were revealed in weapon/sensor characteristics that we learned from the Glomar mission. Honestly, it’s marvelously interesting without all the conspiracy theory tacked on.

May 1, 2011 7:33 pm

I remember very well the stories in TIME magazine about Howard Hughes’ Glomar Explorer, supposedly built to harvest the manganese nodules that were said to be lying around on the sea floor for the taking. But as it turned out, TIME was completely bamboozled by the crafty and patriotic Hughes.
The Glomar Explorer was built for one purpose: to lift a sunken Russian submarine from the sea bed, with its invaluable code books and technology.
TIME finally became aware of the ruse after the fact, as is clear from their wildly speculative article here.
Ah, the good old days of the Cold War. Certainly much preferred to today’s civil war between Americans and the Left.

ferd berple
May 1, 2011 11:39 pm

DocMartyn says:
May 1, 2011 at 9:12 am
Ferd “To be accurate, the model builder cannot see the results of the model before the model is finalized.”
That is not correct. There is no agreed standard for the relative contributions of the various forcings – the weightings. These are chosen by the model builder. Those choices that hindcast well, and meet the model builder’s expectations for the future, are retained. Those that do not are modified.
This is curve fitting. As soon as you assign weightings to the various forcings and give the model builder a say in choosing the weightings, the model is prone to the experimenter-expectation effect.
No model builder is going to publish a model that hindcasts well but predicts what they consider an unreasonable forecast for the future. They will assume the model is broken and fix it. Similarly, a model that does not hindcast well will not be published. The model builder will assume the model is broken and fix it. In both cases they will typically adjust the weightings, though they may also adjust forcings that are not well established.
This process is similar to genetic algorithms that converge on the answer through trial and error. Again it is curve fitting by cherry picking the model that gives the “best looking” answer.
The process is flawed because it ignores what we have learned from animal training studies. Unless the experiment is carefully designed, the experimenter becomes part of the model feedback loop. Otherwise we end up with “Clever Hans,” the horse that convinced a great many people that it could do arithmetic. What it could actually do was detect body language and stress levels below the level of human perception.
A great many people are similarly convinced that models can predict the future. What the models are actually predicting is which answers climate scientists will find most plausible – today. In effect, the model is detecting the unconscious desires and expectations of the model builder, to deliver an answer most believable to the model builder.
This is a very much simpler problem than predicting the future, as Yogi pointed out. Ten years from now today’s models will all be discarded and replaced with new models that future climate scientists will find even more plausible.
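The curve-fitting point can be made concrete with a toy sketch (entirely synthetic data and made-up “forcings”, not any actual GCM): two weightings tuned only on the hindcast period can fit it almost equally well and still diverge once the fitted period ends.

import numpy as np

rng = np.random.default_rng(1)
t = np.arange(60)                       # 60 "years": first 40 hindcast, last 20 forecast
hind, fore = t < 40, t >= 40

# Three made-up "forcings". The first two are identical over the hindcast
# period but diverge afterwards.
f1 = 0.02 * t
f2 = 0.02 * t + np.where(fore, 0.03 * (t - 40), 0.0)
f3 = np.sin(2 * np.pi * t / 11.0)       # a cyclical term

obs = 0.5 * f1 + 0.1 * f3 + rng.normal(0.0, 0.05, t.size)   # synthetic "observations"

# Two different weightings, each fitted by least squares on the hindcast only.
X = np.column_stack([f1, f2, f3])
w_a = np.linalg.lstsq(X[hind][:, [0, 2]], obs[hind], rcond=None)[0]   # weights f1 and f3
w_b = np.linalg.lstsq(X[hind][:, [1, 2]], obs[hind], rcond=None)[0]   # weights f2 and f3

fit_a = X[:, [0, 2]] @ w_a
fit_b = X[:, [1, 2]] @ w_b

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("hindcast RMSE of each weighting:", rmse(fit_a[hind], obs[hind]), rmse(fit_b[hind], obs[hind]))
print("divergence between the two forecasts:", rmse(fit_a[fore], fit_b[fore]))

Both weightings “hindcast well”; nothing in the fit itself says which forecast, if either, should be believed.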

ferd berple
May 2, 2011 12:20 am

“There’s a great diversity in modeling, and some models are demonstrably accurate (the models used for guidance and course-correction in interplanetary exploration), whereas others (the models used in climate forecasting) have no demonstrable record of accurate prediction.”
The models used for interplanetary exploration do not work like climate models. There is no discussion in physics over whether gravity contributes 30 or 40% to the orbit and magnetism 20 or 30%. So, when we calculate an orbit, we know within very precise limits what to expect.
However, in climate science that is not the case at all. We have a large number of factors such as the sun, clouds, land-use, CO2, natural carbon sinks, evaporation, precipitation, solar wind, magnetic fields, orbital mechanics, gravity, etc., etc., and we don’t know how much each contributes to the average temperature, for example. All we have are educated guesses as to the ranges.
So, by slight variations in the relative contributions of each of the various factors, we can achieve wide ranges of values in our climate models. By trial and error, selecting the right relative contributions, we can come up with a model that hindcasts well and produces a future prediction that matches expectations.
We can also, by trial and error, come up with lots of models that hindcast well and predict much different futures. There is the problem. Very small changes in the weightings give large changes in the results. Thus, very small errors in the weightings will give large errors in the results – and we don’t know the weightings with any degree of certainty.
Just as would happen when we launch a spacecraft: even a small error in the direction or speed of launch will yield a large error when the craft arrives at its destination.
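A rough worked number for the spacecraft analogy (small-angle geometry only, assuming a one-way distance of roughly 2 x 10^8 km; real missions, of course, correct course en route):

import math

distance_km = 2.0e8          # assumed order-of-magnitude interplanetary transfer distance
pointing_error_deg = 0.1     # a "small" error in launch direction

miss_km = distance_km * math.sin(math.radians(pointing_error_deg))
print(f"miss distance ~ {miss_km:,.0f} km")   # roughly 350,000 km, comparable to the Earth-Moon distance

A tenth of a degree at launch becomes a miss on the order of the Earth-Moon distance at arrival; the same amplification is the worry with small weighting errors over a century-long model run.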

Anders Valland
May 2, 2011 12:32 am

Willis, you say that James Hansen et al. said this, but I cannot seem to find his name among the authors. I can only see references to his work. What am I missing?

ferd berple
May 2, 2011 12:42 am

“What do you think about a model that matches fabricated data?”
That is a very good question. A model that hindcasts well against an inaccurate temperature record, for example, cannot hope to forecast accurately except by accident.
It is similar to being poorly trained in a subject, learning the wrong answers to questions, then being asked to take an exam. You’ve never learned the right answers, so how can you supply them except by a lucky guess?

May 2, 2011 9:57 am

Anders, the full author list is truncated in the text version that Willis linked to.
If you look at the pdf, below, it says the same thing and has Hansen as an author:
http://journals.ametsoc.org/doi/pdf/10.1175/BAMS-88-5-677

Garry
May 2, 2011 11:07 am

Crispin in Waterloo says May 1, 2011 at 2:38 pm: “Did the program mention what is probably the real reason the US spent so much time and energy trying to recover that sub? … That sub tried to launch an ‘unauthorised’ SLBM attack on the US mainland”
I do not believe that angle was mentioned, unless I missed it at the very beginning of the documentary.
Interesting idea though.

Bruce Stewart
May 2, 2011 12:32 pm

The article by Schwartz is worth a look. In it one may notice that the case for reducing uncertainty around aerosol forcing depends very much on assuming that natural variability is small. (Schwartz bases his low estimate of natural variability on – wait for it – models and paleoclimate proxy reconstructions.) If he had considered the possibility of larger natural variability, his case for understanding aerosols might disappear. The paper Schwartz should have written would tend to support what Willis is saying, although I would expand “uncertainties in TSI” to encompass unknown mechanisms for solar forcing (GCR, UV) as well as unforced internal variability of the natural climate system.