Model Charged with Excessive Use of Forcing

Guest Post by Willis Eschenbach

The GISS Model E is the workhorse of NASA’s climate models. I got interested in the GISSE hindcasts of the 20th century because of an interesting posting by Lucia over at the Blackboard. She built a simple model (which she calls “Lumpy”) that does a pretty good job of emulating the GISS model results using only the forcings and a time lag. Stephen Mosher points out how to access the NASA data here (with a good discussion), so I went to the NASA site he indicated and downloaded the GISSE results. I plotted them against the GISS version of the global surface air temperature record in Figure 1.

Figure 1. GISSE Global Circulation Model (GCM or “global climate model”) hindcast 1880-2003, and GISS Global Temperature (GISSTemp) Data. Photo shows the new NASA 15,000-processor “Discover” supercomputer. Top speed is 160 trillion floating point operations per second (a unit known by the lovely name of “teraflops”). What it does in a day would take my desktop computer seventeen years.

Now, that all looks impressive. The model hindcast temperatures are a reasonable match to the observed temperatures, both by eyeball and mathematically (R^2 = 0.60). True, it misses the early 20th century warming (1920-1940) entirely, but overall it’s a pretty close fit. And the supercomputer does 160 teraflops. So what could go wrong?

To try to understand the GISSE model, I got the forcings used for the GISSE simulation. I took the total forcing, which is given as yearly averages, and compared it to the yearly results of the GISSE model. Figure 2 shows a comparison of the GISSE model hindcast temperatures and a linear regression of those temperatures on the total forcings.

Figure 2. A comparison of the GISSE annual model results with a linear regression of those results on the total forcing. (A “linear regression” estimates the best fit of the forcings to the model results). Total forcing is the sum of all forcings used by the GISSE model, including volcanos, solar, GHGs, aerosols, and the like. Deep drops in the forcings (and in the model results) are the result of stratospheric aerosols from volcanic eruptions.

Now to my untutored eye, Fig. 2 has all the hallmarks of a linear model that is missing a constant trend of unknown origin. (The hallmarks are the obvious similarity in shape combined with differing trends and a low R^2.) To see if that was the case I redid my analysis, this time including a constant trend. As is my custom, I simply included the year of each observation in the analysis to get that trend. That gave me Figure 3.

Figure 3. A comparison of the GISSE annual model results with a regression of those results on the total forcing, including a constant annual trend. Note the very large increase in R^2 compared to Fig. 2, and the near-perfect match of the two datasets.
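For anyone who wants to reproduce this, here is a rough sketch of the two regressions behind Figures 2 and 3. It is not my actual worksheet (that is linked at the end of the post), and the file name and column names are just placeholders:

```python
# Sketch of the Fig. 2 and Fig. 3 regressions. The CSV name and column names
# ("year", "forcing" = total forcing in W/m2, "model" = GISSE hindcast anomaly
# in deg C) are placeholders, not the actual GISS file layout.
import numpy as np

data = np.genfromtxt("giss_forcings_and_model.csv", delimiter=",", names=True)
year, forcing, model = data["year"], data["forcing"], data["model"]

def fit_and_r2(X, y):
    """Least-squares fit of y on the columns of X plus an intercept."""
    A = np.column_stack([X, np.ones_like(y)])
    coef, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1.0 - (y - A @ coef).var() / y.var()
    return coef, r2

# Fig. 2: model output regressed on total forcing alone
coef2, r2_2 = fit_and_r2(forcing[:, None], model)

# Fig. 3: the same regression with the year included as a constant-trend term
coef3, r2_3 = fit_and_r2(np.column_stack([forcing, year]), model)

print(f"forcing only:    {coef2[0]:.2f} C per W/m2, R^2 = {r2_2:.2f}")
print(f"forcing + trend: {coef3[0]:.2f} C per W/m2, "
      f"{coef3[1] * 100:.2f} C per century, R^2 = {r2_3:.2f}")
```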

There are several surprising things in Figure 3, and I’m not sure I see all of the implications of those things yet. The first surprise was how close the model results are to a bozo simple linear response to the forcings plus the passage of time (R^2 = 0.91, average error less than a tenth of a degree). Foolish me, I had the idea that somehow the models were producing some kind of more sophisticated, complex, lagged, non-linear response to the forcings than that.

This almost completely linear response of the GISSE model makes it trivially easy to create IPCC style “scenarios” of the next hundred years of the climate. We just use our magic GISSE formula, that future temperature change is equal to 0.13 times the forcing change plus a quarter of a degree per century, and we can forecast the temperature change corresponding to any combination of projected future forcings …
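In code, that entire “emulator” is one line. The 0.13 and the quarter of a degree per century are simply the regression coefficients from Figure 3; the forcing change in the example is made up:

```python
def giss_emulator(delta_forcing_wm2, years):
    """Temperature change (deg C) from the linear fit to the GISSE hindcast:
    0.13 C per W/m2 of forcing change plus ~0.25 C per century of built-in trend."""
    return 0.13 * delta_forcing_wm2 + 0.25 * (years / 100.0)

# e.g. a made-up scenario: +3 W/m2 of forcing over the next century
print(giss_emulator(3.0, 100))   # about 0.64 C
```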

Second, this analysis strongly suggests that in the absence of any change in forcing, the GISSE model still warms. This is in agreement with the results of the control runs of the GISSE and other models that I discussed at the end of my post here. The GISSE control runs also showed warming when there was no change in forcing. This is a most unsettling result, particularly since other models showed similar (and in some cases larger) warming in the control runs.

Third, the climate sensitivity shown by the analysis is only 0.13°C per W/m2 (0.5°C per doubling of CO2). This is far below the official NASA estimate of the response of the GISSE model to the forcings. They put the climate sensitivity from the GISSE model at about 0.7°C per W/m2 (2.7°C per doubling of CO2). I do not know why their official number is so different.

I thought the difference in calculated sensitivities might be because they have not taken account of the underlying warming trend of the model itself. However, when the analysis is done leaving out the warming trend of the model (Fig. 2), I get a sensitivity of 0.34°C per W/m2 (1.3°C per doubling, Fig. 2). So that doesn’t solve the puzzle either. Unless I’ve made a foolish mathematical mistake (always a possibility for anyone, check my work), the sensitivity calculated from the GISSE results is half a degree of warming per doubling of CO2 …
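For those checking my arithmetic, converting from °C per W/m2 to °C per doubling of CO2 just means multiplying by the forcing for a doubling of CO2, which I’ve taken to be roughly 3.7 W/m2 (the commonly quoted value, not a number from the GISS dataset):

```python
# Convert sensitivity from C per W/m2 to C per doubling of CO2, using the
# commonly quoted forcing of roughly 3.7 W/m2 per doubling of CO2.
F_2XCO2 = 3.7  # W/m2 per doubling

for label, s in [("regression with trend term (Fig. 3)", 0.13),
                 ("regression without trend term (Fig. 2)", 0.34),
                 ("official GISSE figure", 0.7)]:
    print(f"{label}: {s} C per W/m2 -> {s * F_2XCO2:.1f} C per doubling")
```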

Troubled by that analysis, I looked further. The forcing is close to the model results, but not exact. Since I was using the sum of the forcings, obviously in their model some forcings make more difference than others. So I decided to remove the volcano forcing, to get a better idea of what else was in the forcing mix. The volcanos are the only forcing that makes such large changes on a short timescale (months). Removing the volcanos allowed me to regress each of the other forcings against the volcano-removed model results, to see how well each one did. Figure 4 shows that result:

Figure 4. All other forcings regressed against GISSE hindcast temperature results after the volcano effect is removed. Forcing abbreviations (used in original dataset): W-M_GHGs = Well Mixed Greenhouse Gases; O3 = Ozone; StratH2O = Stratospheric Water Vapor; Solar = Energy From The Sun; LandUse = Changes in Land Use and Land Cover; SnowAlb = Albedo from Changes in Snow Cover; StratAer = Stratospheric Aerosols from volcanos; BC = Black Carbon; ReflAer = Reflective Aerosols; AIE = Aerosol Indirect Effect. Numbers in parentheses show how well the various forcings explain the remaining model results, with 1.0 being a perfect score. (The number is called R squared, usually written R^2.) Photo Source

Now, this is again interesting. Once the effect of the volcanos is removed, there is very little difference in how well the other forcings explain the remainder. With the obvious exception of solar, the R^2 values of most of the forcings are quite similar. The only two that outperform a simple straight line are stratospheric water vapor and GHGs, and that is only by 0.01.
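Here is a sketch of that procedure, using the same placeholder file as above with one column per individual forcing; treat the column names as stand-ins for whatever the forcing file actually calls them, and treat the code as an illustration of the method rather than my exact calculation:

```python
# Sketch of the Figure 4 comparison: remove the volcanic (StratAer) signal from
# the model output by regression, then see how well each remaining forcing
# (or a plain straight line) explains what is left. Column names are placeholders.
import numpy as np

data = np.genfromtxt("giss_forcings_and_model.csv", delimiter=",", names=True)
model, year = data["model"], data["year"]
others = ["WM_GHGs", "O3", "StratH2O", "Solar", "LandUse",
          "SnowAlb", "BC", "ReflAer", "AIE"]

def residual_after(x, y):
    """Return y with its least-squares fit on x (plus an intercept) removed."""
    A = np.column_stack([x, np.ones_like(y)])
    coef, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coef

def r_squared(x, y):
    return 1.0 - residual_after(x, y).var() / y.var()

# strip the volcanic signal out of the model results
model_no_volc = residual_after(data["StratAer"], model)

# how well does each remaining forcing (or just the passage of time) explain the rest?
print(f"straight line (year): R^2 = {r_squared(year, model_no_volc):.2f}")
for name in others:
    print(f"{name:9s}: R^2 = {r_squared(data[name], model_no_volc):.2f}")
```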

I wanted to look at the shape of the forcings to see if I could understand this better. Figure 5 has NASA GISS’s view of the forcings, shown at their actual sizes:

Figure 5: The radiative forcings used by the GISSE model as shown by GISS. SOURCE

Well, that didn’t tell me a lot (not GISS’s fault, just the wrong chart for my purpose), so I took the forcing data and standardized it, putting the forcings into a form in which their shapes could actually be seen. It turns out the reason they all fit so well lies in the shape of the forcings. All of them increase slowly (either negatively or positively) until 1950, and more quickly after that. Standardizing the forcings gives them all the same size, so these shapes can be compared. Figure 6 shows what the forcings used by the model look like after standardization:

Figure 6. Forcings for the GISSE model hindcast 1880-2003. Forcings have been “standardized” (set to a standard deviation of 1.0) and set to start at zero as in Figure 4.
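The “standardization” is nothing fancy. Each forcing is divided by its own standard deviation and then shifted to start at zero, like this sketch (the example forcing is made up):

```python
# Sketch of the standardization used for Figure 6: scale each forcing series to
# a standard deviation of 1.0, then shift it to start at zero.
import numpy as np

def standardize(series):
    scaled = series / series.std()
    return scaled - scaled[0]

# example with a made-up, ramp-shaped forcing for 1880-2003
fake_ghg = np.linspace(0.0, 2.5, 124)   # W/m2
print(standardize(fake_ghg)[:3], standardize(fake_ghg).std())
```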

There are several oddities about their forcings. First, I had assumed that the forcings used were based at least loosely on reality. To make this true, I need to radically redefine “loosely”. You’ll note that by some strange coincidence, many of the forcings go flat from 1990 onwards … loose. Does anyone believe that all those forcings (O3, Landuse, Aerosol Indirect, Aerosol Reflective, Snow Albedo, Black Carbon) really stopped changing in 1990? (It is possible that this is a typographical or other error in the dataset. This idea is supported by the slight post-1990 divergence of the model results from the forcings as seen in Fig. 3)

Next, take a look at the curves for snow albedo and black carbon. It’s hard to see the snow albedo curve, because it is behind the black carbon curve. Why should the shapes of those two curves be nearly identical? … loose.

Next, in many cases the “curves” for the forcings are made up of a few straight lines. Whatever the forcings might or might not be, they are not straight lines.

Next, with the exception of solar and volcanoes, the shape of all of the remaining forcings is very similar. They are all highly correlated, and none of them (including CO2) is much different from a straight line.

Where did these very strange forcings come from? The answer is neatly encompassed in “Twentieth century climate model response and climate sensitivity”, Kiehl, GRL 2007 (emphasis mine):

A large number of climate modeling groups have carried out simulations of the 20th century. These simulations employed a number of forcing agents in the simulations. Although there are established data for the time evolution of well-mixed greenhouse gases [and solar and volcanos although Kiehl doesn’t mention them], there are no established standard datasets for ozone, aerosols or natural forcing factors.

Lest you think that there is at least some factual basis to the GISSE forcings, let’s look again at black carbon and snow albedo forcing. Black carbon is known to melt snow, and this is an issue in the Arctic, so there is a plausible mechanism to connect the two. This is likely why the shapes of the two are similar in the GISSE forcings. But what about that shape, increasing over the period of analysis? Here’s one of the few actual records of black carbon in the 20th century, from 20th-Century Industrial Black Carbon Emissions Altered Arctic Climate Forcing, Science Magazine (paywall):

Figure 7. An ice core record from the Greenland cap showing the amount of black carbon trapped in the ice, year by year. Spikes in the summer are large forest fires.

Note that rather than increasing over the century as GISSE claims, the observed black carbon levels peaked in about 1910-1920, and have been generally decreasing since then.

So in addition to the dozens of parameters that they can tune in the climate models, the GISS folks and the other modelers got to make up some of their own forcings out of whole cloth … and then they get to tell us proudly that their model hindcasts do well at fitting the historical record.

To close, Figure 8 shows the best part, the final part of the game:

Figure 8. ORIGINAL IPCC CAPTION (emphasis mine). A climate model can be used to simulate the temperature changes that occur from both natural and anthropogenic causes. The simulations in a) were done with only natural forcings: solar variation and volcanic activity. In b) only anthropogenic forcings are included: greenhouse gases and sulfate aerosols. In c) both natural and anthropogenic forcings are included. The best match is obtained when both forcings are combined, as in c). Natural forcing alone cannot explain the global warming over the last 50 years. Source

Here is the sting in the tale. They have designed the perfect forcings, and adjusted the model parameters carefully, to match the historical observations. Having done so, the modelers then claim that the fact that their model no longer matches historical observations when you take out some of their forcings means that “natural forcing alone cannot explain” recent warming … what, what?

You mean that if you tune a model with certain inputs, then remove one or more of the inputs used in the tuning, your results are not as good as with all of the inputs included? I’m shocked, I tell you. Who would have guessed?

The IPCC actually says that because the tuned models don’t work well with part of their input removed, this shows that humans are the cause of the warming … not sure what I can say about that.

What I Learned

1. To a very close approximation (R^2 = 0.91, average error less than a tenth of a degree C) the GISS model output can be replicated by a simple linear transformation of the total forcing and the elapsed time. Since the climate is known to be a non-linear, chaotic system, this does not bode well for the use of GISSE or other similar models.

2. The GISSE model illustrates that when hindcasting the 20th century, the modelers were free to design their own forcings. This explains why, despite having climate sensitivities ranging from 1.8 to 4.2, the various climate models all provide hindcasts which are very close to the historical records. The models are tuned, and the forcings are chosen, to do just that.

3. The GISSE model results show a climate sensitivity of half a degree per doubling of CO2, far below the IPCC value.

4. Most of the assumed GISS forcings vary little from a straight line (except for some of them going flat in 1990).

5. The modelers truly must believe that the future evolution of the climate can be calculated using a simple linear function of the forcings. Me, I misdoubts that …

In closing, let me try to anticipate some objections that people will likely have to this analysis.

1. But that’s not what the GISSE computer is actually doing! It’s doing a whole bunch of really really complicated mathematical stuff that represents the real climate and requires 160 teraflops to calculate, not some simple equation. This is true. However, since their model results can be replicated so exactly by this simple linear model, we can say that considered as black boxes the two models are certainly equivalent, and explore the implications of that equivalence.

2. That’s not a new finding, everyone already knew the models were linear. I also thought the models were linear, but I have never been able to establish this mathematically. I also did not realize how rigid the linearity was.

3. Is there really an inherent linear warming trend built into the model? I don’t know … but there is something in the model that acts just like a built-in inherent linear warming. So in practice, whether the linear warming trend is built-in, or the model just acts as though it is built-in, the outcome is the same. (As a side note, although the high R^2 of 0.91 argues against the possibility of things improving a whole lot by including a simple lagging term, Lucia’s model is worth exploring further; a sketch of such a lagged model appears after this list.)

4. Is this all a result of bad faith or intentional deception on the part of the modelers? I doubt it very much. I suspect that the choice of forcings and the other parts of the model “jes’ growed”, as Topsy said. My best guess is that this is the result of hundreds of small, incremental decisions and changes made over decades in the forcings, the model code, and the parameters.

5. If what you say is true, why has no one been able to successfully model the system without including anthropogenic forcing?

Glad you asked. Since the GISS model can be represented as a simple linear model, we can use the same model with only natural forcings. Here’s a first cut at that (a code sketch of this fit also appears after this list):

Figure 9. Model of the climate using only natural forcings (top panel). The all-forcings model from Figure 3 is included in the lower panel for comparison. Yes, the R^2 with only natural forcings is smaller, but it is still a pretty reasonable model.

6. But, but … you can’t just include a 0.42 degree warming like that! For all practical purposes, GISSE does the same thing only with different numbers, so you’ll have to take that up with them. See the US Supreme Court ruling in the case of Sauce For The Goose vs. Sauce For The Gander.

7. The model inherent warming trend doesn’t matter, because the final results for the IPCC scenarios show the change from model control runs, not absolute values. As a result, the warming trend cancels out, and we are left with the variation due to forcings. While this sounds eminently reasonable, consider that if you use their recommended procedure (cancel out the 0.25°C per century constant inherent warming trend) for their 20th century hindcast shown above, it gives an incorrect answer … so that argument doesn’t make sense.
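Two quick sketches relating to points 3 and 5 above. First, a minimal one-box lagged response in the spirit of Lucia’s “Lumpy”; the sensitivity and time constant below are made-up illustrative values, not Lucia’s and not GISSE’s:

```python
# A minimal one-box, lagged response for comparison with the purely linear fit.
# lam is a sensitivity in C per W/m2 and tau a lag time constant in years;
# both values below are illustrative, not taken from Lumpy or from GISSE.
import numpy as np

def lagged_response(forcing, lam=0.3, tau=10.0, dt=1.0):
    """Temperature relaxes toward lam * F with e-folding time tau:
    dT/dt = (lam * F - T) / tau, stepped forward one year at a time."""
    temps = np.zeros_like(forcing, dtype=float)
    for i in range(1, len(forcing)):
        temps[i] = temps[i - 1] + (lam * forcing[i - 1] - temps[i - 1]) * dt / tau
    return temps

# example: response to a made-up linear forcing ramp, 1880-2003
ramp = np.linspace(0.0, 3.0, 124)
print(lagged_response(ramp)[-1])
```

And second, the natural-forcings-only fit of Figure 9, done the same way as Figure 3 but using only solar plus volcanic forcing (same placeholder file and column names as the earlier sketches):

```python
# Sketch of the natural-forcings-only fit: regress the model output on solar
# plus volcanic forcing and a constant annual trend. File and column names
# are placeholders.
import numpy as np

data = np.genfromtxt("giss_forcings_and_model.csv", delimiter=",", names=True)
natural = data["Solar"] + data["StratAer"]          # natural forcings only
A = np.column_stack([natural, data["year"], np.ones_like(natural)])
coef, _, _, _ = np.linalg.lstsq(A, data["model"], rcond=None)
resid = data["model"] - A @ coef
print(f"R^2 (natural forcings only) = {1.0 - resid.var() / data['model'].var():.2f}")
```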

To simplify access to the data, I have put the forcings, the model response, and the GISS temperature datasets online here as an Excel worksheet. The worksheet also contains the calculations used to produce Figure 3.

And as always, the scientific work of a thousand hands continues.

Regards,

w.

 

[UPDATE: This discussion continues at Where Did I Put That Energy.]


155 Comments
John from CA
December 20, 2010 7:29 am

Great post, you have an amazing ability to explain the complex.
“Natural forcing alone cannot explain the global warming over the last 50 years.” Figure 8 doesn’t make any sense to me.
Question: looking only at IPCC “Observations” red lines in figure 8, if “Natural Forcing” and “Anthropogenic Forcing” roughly end in the same C range (a) and (b), is the scale incorrect in (c), as a+b ≈ c in the model results?

Alexander K
December 20, 2010 7:32 am

Thanks, Willis, for another great post that makes me feel vastly more comfortable with labelling GISS’s climate science and their modelling dubious at best. My original suspicions that ‘the science’ was and is being grossly manipulated to fit a particular scenario have been reconfirmed.
As a fellow artist and educator, I enjoy your graph’s background visuals and believe they generally stimulate thinking and discernment. But us humans are a varied bunch and those suffering from some forms of dyslexia might find them a bit confusing.
I am thankful that the modellers under discussion are not moonlighting as aircraft designers as any aircraft from such modelling would fail in short order and usually in similar ways!

Enneagram
December 20, 2010 8:43 am

Money forcings and “Tips” that lead to “Tipping points” and instead of showing “climate disruption” reveal a wide ethical disruption.

jack morrow
December 20, 2010 9:06 am

Geoff Sherrington says:
Thanks Geoff, I really loved that and passed it along. My cheeks still hurt!

JPeden
December 20, 2010 9:13 am

Here is the sting in the tale. They have designed the perfect forcings, and adjusted the model parameters carefully, to match the historical observations. Having done so, the modelers then claim that the fact that their model no longer matches historical observations when you take out some of their forcings means that “natural forcing alone cannot explain” recent warming … what, what?
I’ve been almost certain they were doing that for quite some time – rigging the system or begging the question by adjusting the other parameters and “forcings” as needed – simply from seeing the way the “Climate Scientists” were doing their “science” otherwise: in effect no falsification possible, attempting to erase the MWP using extremely isolated populations of wild tree rings, insisting that fossil fuel CO2 concentrations must now be controlling an atmospheric “Global Mean Temperature”, refusing to the death to publish the actual “materials and methods” science behind their conclusions, claiming “peer review” by a few select peers would ensure the “given truth” of whatever they had reviewed, and about a million other very telltale practices.
So I always thought it was hilarious when they’d say things like, “We can’t explain the temperature record without using CO2 concentrations.” Well, of course you can’t – because you are either totally inept or else knowingly operating in a completely propagandistic, video game Fantasyland.
It seems they can’t explain the past record without CO2, but they can’t make any successful predictions with CO2.

S Matthews
December 20, 2010 9:14 am

‘Here’s one possible explanation:
The amount of carbon we add to the atmosphere can be estimated with reasonable accuracy, as can the actual increase. There is a discrepancy where about 50% of the added carbon is missing …… absorbed by the biosphere and oceans.’
Alternatively a big chunk of the so-called ‘missing sink’ is merely an artefact of assuming a longer residence time for CO2 than is actually the case.

Jim D
December 20, 2010 9:19 am

Bill Illis, I am fairly sure the radiative feedback (labeled as a response) is just the Planck response, which is due to global warming. This is what has to balance the forcing and feedbacks on the long term.

Richard S Courtney
December 20, 2010 10:34 am

Baa Humbug:
You are right when at December 19, 2010 at 4:09 pm you say;
“This validates what Richard S Courtney has been saying all along, I’m sure he’ll be along soon to verify.”
Over a decade ago I published a refereed paper that showed the explanation suggested by Chris Wright (at December 20, 2010 at 3:28 am) is correct for the Hadley Centre model.
More recently, in 2007, Kiehl reported the same for a variety of models, and Willis cites and quotes from that paper in his article (above).
Richard

George E. Smith
December 20, 2010 11:10 am

So Willis, this is probably a dumb question; but I’m going to ask it anyway.
Your fig 3 graph of the GISSE model (the grey-blue line): so they take the known laws of physics, programmed into their 160 Terraflop; and they take the present global Temperature anomaly condition; presumably from the last data point that Dr James Hansen plotted on his GISSTemp graph; and then they hit the RUN (back) button on Terraflop, and it computes this blue-grey graph all the way back to 1880 ??
Do I have this correct; just exactly what are they modelling in GISSE; and why would they not simply graph the actual Temperatures themselves; rather than the anomalies. With 160 Terraflops, they should certainly be able to replicate the actual Temperatures at each one of their GISSTEMP weather Stations; so why the anomalies, rather than real global Temperatures; since they do have 160 Terraflops to play with. That’s almost as much climate computing power as Mother Gaia has in my front yard.
Do they just have a new faster way to generate nonsense ?

eadler
December 20, 2010 11:17 am

Willis Eschenbach wrote:
“One of the most surprising findings to me, which no one has commented on, is the sensitivity. Depending on whether we include a linear trend term or not, the sensitivity of the GISSE model is either half a degree C or 1.3°C per doubling of CO2. Regardless of the merits of my analysis, that much is indisputable, it’s just simple math.
But both those numbers are way below both the canonical IPCC value (2° – 4.5°C per doubling) and the value given by the GISSE modelers for their model (2.7°C per doubling). The larger value from the analysis is less than half what GISS says the sensitivity of the model is.
Wouldn’t it be nice if someone from the GISSE modeling team would comment on this, or explain to me where I’m wrong? Or say anything?
But I suppose they’re at the AGU conference learning about how to communicate the holy writ of science to us plebians …
Anyone with any insights on that question about sensitivity?
w.”
I don’t have the time to go through your mathematics in detail, but it seems to me there is a flaw in your logical analysis. The basic physics says that the temperature change due to a radiative imbalance will proceed until radiative balance is restored, at which point there is said to be an equilibrium condition, with emission of radiation balancing absorption of radiation by the earth/atmosphere system. The ultimate temperature change, at which the equilibrium condition is reached, is the climate sensitivity.
The time for this imbalance to be corrected is quite long, because the heat capacity of the earth is large. There are a number of different components in the heat capacity and vastly different time constants between components, some of which are not well understood. The longest time constant is associated with the transmission of heat from the ocean’s surface into the depths of the ocean. So the ultimate temperature change associated with the forcing will take a long time to develop. It is this ultimate temperature change that is the climate sensitivity.
Because of the time lag, it doesn’t make sense to do a simple linear correlation between the temperature and instantaneous forcing, and claim the result should equal the climate sensitivity obtained by the climate modelers.
It seems that you have become so enmeshed in the mechanics of the mathematics involved in the linear correlation, that you have lost sight of the important basic ideas involved in the theory of global warming.

stumpy
December 20, 2010 11:34 am

In the world of modelling we call this a “fudge”. When made-up data sets are used to create a fit with observed data whilst also accommodating a theory we believe in, it’s nothing but a “look, it could work, assuming this this this and this happened like this this and this”. If there are no records for many of the forcings, then the models are nothing but a fudged model with no skill. But I always knew that ;0)

Benjamin P.
December 20, 2010 12:27 pm

I would humbly suggest you get rid of all that crap in your graphs. Give the data, you don’t need the pictures.

Paul
December 20, 2010 12:49 pm

The real sleight of hand here is convincing people that fitting the “temperature anomaly” is a sign of understanding rather than producing a temperature map of the earth. A temperature map is a genuine physical quantity that can be directly compared to how the climate behaves. The anomaly is not physical and has no direct meaning for anything that happens in the real world. The plants in your backyard don’t respond to some global average, they only respond to local temperature.
There was a short discussion of this on Lucia’s Blackboard once, and the models created rather unconvincing maps.

wayne
December 20, 2010 1:00 pm

“Willis Eschenbach says:
December 20, 2010 at 12:35 pm
Guesses?”
Basically none?
See that little dip at 1975? That’s the year HP came out with their first HP-65 programmable calculator for scientists for $800. I bought one immediately but most people in science no longer had to think, the program would do it for you, and the temperature record has done nothing but rise linearly ever since, (per GISS that is).
Hansen’s RPN GCM must still be running, for as they say, NEVER re-write logic in code that works. Save your brain, just use it. ☺

dr.bill
December 20, 2010 1:05 pm

My “cumulative forcing” guess: Close to zero.
(and I also hate that “forcing” term)
/dr.bill

George E. Smith
December 20, 2010 1:22 pm

“”””” wayne says:
December 20, 2010 at 1:00 pm
“Willis Eschenbach says:
December 20, 2010 at 12:35 pm
Guesses?”
Basically none?
See that little dip at 1975? That’s the year HP came out with their first HP-65 programmable calculator for scientists for $800. I bought one immediately but most people in science no longer had to think, the program would do it for you, and the temperature record has done nothing but rise linearly ever since, (per GISS that is).
Hansen’s RPN GCM must still be running, for as they say, NEVER re-write logic in code that works. Save your brain, just use it. ☺ “””””
Well actually, that would have been a model 35 Calculator; and they never were anything like $800. I believe that HP employees could have got one for $350 and I think normal retail was around $400.
The much later model 65 had a magnetic card reader; and it was not $800 either. I have a model 65 in mint condition that works perfectly; and an older model 35 that has a shot on/off switch. A cheap slide switch that simply couldn’t take the on/off usage. The Battery packs and chargers were among the weak links in what otherwise was a landmark product line.
It was Litronix with a $60 simple four function plus square root calculator that first introduced a hand held calculator with a key stroke on/off function that solved the on-off switch problem; and vastly improved battery life. (and automatic time out shutoff.)
A typical calculation went thusly:- ON, 4.56, x, 7.29, =, 33.2424, OFF
Well that was the Litronix one; the HP models of course use reverse Polish Notation.

December 20, 2010 1:55 pm

Nice overview Willis!
About your question of sensitivity:
Anyone with any insights on that question about sensitivity?
This was discussed in the early days of RC, before the censor devil made any serious discussion impossible there.
The main point in all current climate models is that they expect one sensitivity for all kinds of forcings: 1 W/m2 increase in insolation has the same effect (+/- 10%) as 1 W/m2 more downward IR from more CO2. Which is quite questionable.
Solar has its main effects in the tropics, as well as in the stratosphere (ozone, poleward shift of jet stream positions, rain patterns) and in the upper few hundred meters of the oceans. And there is an inverse correlation with cloud cover. CO2 has its main effect more widespread over the globe, mainly in the troposphere; IR is captured in the upper fraction of a mm of the oceans (more reflection, more evaporation?) and has no clear effect on ocean heating or cloud cover.
That models don’t reflect cloud cover can be found here:
http://www.nerc-essc.ac.uk/~rpa/PAPERS/olr_grl.pdf
The attribution of different sensitivities was tested for the HadCM3 model:
http://climate.envsci.rutgers.edu/pdf/StottEtAl.pdf which shows (within the constraints of the model) that solar probably is underestimated.
The difference in sensitivity (or not) between natural and anthro forcings was discussed at RC:
http://www.realclimate.org/index.php/archives/2005/12/natural-variability-and-climate-sensitivity/
with my comment at #24, #31 and #36 and several more further, and more interesting comments of others at #26 and further…
The main differences in basic sensitivity between the models, resulting in the wide range of sensitivities is from (sulfate) aerosols: if there is a huge forcing/sensitivity for cooling aerosols, then the sensitivity for CO2 must be high and opposite, to explain the 1945-1975 cooling trend. That was discussed some years ago at RC:
http://www.realclimate.org/?comments_popup=245 with my comment at #6 and further discussion.
The graphs at the introduction of another RC discussion shows the interdependence of aerosols and GHG sensitivity:
http://www.realclimate.org/index.php/archives/2005/07/climate-sensitivity-and-aerosol-forcings/
The moment you use different sensitivities for different forcings, you can attribute any set of forcing x sensitivity and match the past temperature with better and better R^2, where the (mathematical! not necessarily the real) optimum may show a very low sensitivity for CO2, as Wayne calculated: December 19, 2010 at 11:36 pm
Further, the multi-million dollar GCMs don’t perform better in hindcasting the temperature trend: your (and others’) simple EBMs (energy balance models), based only on the forcings, do as well as or better than the very expensive GCMs.
That was discussed by Kaufmann and Stern, their work is not anymore online, but it was discussed here:
http://climateaudit.org/2005/12/21/kaufmann-and-stern-2005-on-gcms/
From that link:
These results indicate that the GCM temperature reconstruction does not add significantly to the explanatory power provided by the radiative forcing aggregate that is used to simulate the GCM

George E. Smith
December 20, 2010 2:08 pm

This story (of the GISSE Terraflop) sounds much like a minor event that occurred sometime in 1961, in an Electronics trade journal, named Electronic Design, in their column; “Ideas For Design.”
Now recall this was in the days of IBM mainframes (and Control Data) and computer timesharing.
So in this “Idea”, the author (a reader) had been donated one hour of computing time on some IBM mainframe timesharing system; and he had been tasked with designing a simple two transistor amplifier. The requirement was for an amplifier with a Voltage Gain of 10.0 +/-1.0; and the designer “claimed” that he had done a “Worst-Case” design, based on 5% tolerance resistors, and some reasonable production spread in transistor beta (common emitter current gain).
So our hero proposed to use his hour of expensive IBM time to do a Monte Carlo analysis of this design and find out what the production yields might turn out to be.
The circuit design consisted of a common emitter gain stage, with the base biassed up on a resistive divider across the (ten Volt) power supply, with a collector load resistor, and a small emitter degenerating resistor. A second identical transistor was connected to the load resistor as an emitter follower (common collector) stage, to provide the final output.
So he runs the MC analysis on his worst case designed circuit, and plotted his results. Holy Cow !!! Howdat happen ?
The IBM MC said that the gain was NOT 10.0 +/- 1.0 but was more like 9.6 +/- 1.2; but all was not lost; because that multiflopping IBM monster (1103 I think) told him that the emitter degenerating resistor was the most critical component in the design, and the collector load resistor was the second most critical; and the Transistor Beta was the third most critical design parameter.
But the computer said it could fix his circuit, and it recommended changing the collector load resistor to the next highest 5% resistor value; and he would get the right gain of ten pretty much, but he would still have some fallouts, beyond that +/- 1.0 gain spread. Computer couldn’t think of any way round that; just basic laws of Physics; Tough S*** !!
Now this design genius figured that Monte Carlo was a great thing, if you ever got donated a free hour on an IBM 1103.
Did I explain that this chap had already done a “WORST CASE” design, that said his wonder circuit would do the job; so how the hell did MC find some examples that lay outside the worst case boundaries.
But the neat part was that the computer could tell him that the next resistor value up from his 4.7 kOhm load resistor would be 5.1 kOhm; problem solved.
Well his idea for design got rave reviews; and lots of folks wondered how to scrounge some number crunching time.
Doesn’t this sound like this NASA GISSE Terraflop situation; it sure does to me.
Here’s some of the things the IBM machine didn’t, and couldn’t have told our hero.
Hey Dummy ! If you are going to do a circuit design; don’t set the load resistor to 5kOhms; unless you pay real money for a precision component you can’t buy such a thing; so you slapped in a 4.7 kOhm, which is actually a 10% tolerance list value; that you probably had on hand when you breadboarded the prototype; and then you simply called out a 5% tolerance for it; when you found that 10% wouldn’t fly with your WC design (and I do mean it was a WC design; and fit for any Loo !)
The real crime was that this designer didn’t realize that this was a totally brain dead circuit architecture to begin with.
He could have used those two transistors; both as CE Voltage gain stages, to create a much higher gain that 10.0; and then he could have applied overall negative Voltage feedback which would have let him set his gain quite accurately as the ratio of just two resistors; and the gain would have been largely independent of any ordinary range of transistor beta spread as well.
He wasted a whole hour of valuable flops on what was a shitty circuit to begin with; and if he had used a decent architecture; he could have done the WC design in his head.
Well this is about how I see this GISSE story. What is the good of all that computer power; if the damn model is a WC design to begin with.
Mother Gaia models this problem (planet earth climate) on her “ANALOG COMPUTER” and she does it in real time; and she has more computing power in her little finger nail, than NASA has in all its Terraflops.
Is it any wonder, that Mother Gaia’s model always matches the real world climate; while the muscle bound computer geeks, are still playing with their model.
So they produce garbage out, at ever increasing rates.
Hey for the record; I DO BELIEVE that such number crunching power can be usefully utilized in looking at local patterns of WEATHER; so I am not at all unhappy that NASA has spent my tax dollars on this behemoth.
But please use it to work on the right problems; do you mind ?

George E. Smith
December 20, 2010 3:42 pm

“Cloud Scattering.”
A part of any modelling program should be the optical scattering due to clouds. When we fly over clouds in an aeroplane; we can’t help noticing that those big billowing thunderheads or cumulus clouds look cotton wool white; and we talk about cloud reflectance numbers of 80% or more from such clouds. Thinner laminar cloud layers (izzat stratus) look far more grey on top; somewhat like the standard Kodak 18% reflectance grey card that film photographers all used to own.
Well actually water in bulk has quite low reflectance; about 2% for normal incidence over most of the solar spectrum energy range; with maybe an integrated total of about 3% reflectance over a broader angle of incidence range. So how can clouds reflect 80% plus ?
Well the answer is that they don’t. Mostly it is just scattering over large angles, so most of the light simply gets turned around and sent back out from whence it came; and everywhere else too.
So to get some numbers, I set up a simple rain drop model. I picked a raindrop that is 2.000 mm in diameter made out of ordinary fresh water. Well I could have picked any size but a 1 mm radius seemed a nice number.
So my light source is 432,000 mm radius; and it is located 93 million mm from my rain drop. Well I just used a mm per mile for the sun to establish a rough angular diameter. Well that comes in at 0.5323 deg angular diameter; for the sun average.
So I clipped my sunbeam with an aperture stop in front of the rain drop, with a radius of 0.8 mm or 80% of the rain drop size.
It turns out that at the edge of that aperture, the ray incidence angle on the water drop is 53.1 degrees, and that is quite close to the Brewster angle for water, so the edge reflected sunlight would be almost perfectly plane polarised, and the real reflectance of the droplet would be quite close to the 2% normal incidence value; but would then increase rapidly beyond that.
So my 1 mm radius raindrop, becomes a simple biconvex lens, with a front radius of +1.000 mm, a back radius of -1.000 mm and a central thickness of 2.000 mm, making a perfect sphere lens.
Well such a lens focusses the sunlight into a beam whose extreme marginal rays strike the optical axis at about 32.5 degrees, making for a 65 degree full cone angle of light at the focus region. That near focal point is almost exactly 0.5 mm or 1/2 the drop radius from the second surface of the drop. Now the image is beset by a whole lot of spherical aberration, so it is anything but a point image; the point being that an input beam with zero divergence is converted into a 65 degree full angle beam coming out of the raindrop. If you take away the aperture stop, and illuminate the full droplet, then the cone angle goes way up to 82.6 degrees cone HALF angle.
The collimated beam, now is spread over almost a full hemisphere. If I actually take light from the full solar disk, rather than just its axial point; then the light scatters into a full hemisphere; after passing through just one raindrop.
But some cautions. Because of the Fresnel Reflection formulae, the reflectance climbs very rapidly beyond the Brewster angle, so less light is transmitted. Note however that the reflected light itself also contributes to the total scattering, including about 2% max coming straight back; so the single drop of water scatters a nice 0.5 degree solar beam into a full spherical output distribution.
But I prefer to stay with the 80% aperture and limit myself to the basic 65 degree full cone angle. It only takes a few rain drops in succession; I’d guess 3-5 and you have a full spherical beam of almost isotropic angular distribution.
Of course the size is somewhat irrelevant. 1.0 mm radius is huge in visible light optics. Your optical mouse (specially laser mice) has lenses in it that can have 1 mm radius of curvature surfaces and less than 0.5 mm apertures.
So the apparent reflectance of clouds, is actually a fairly efficient scattering that quickly turns a solar collimated beam into an isotropic light distribution; with relatively little actual loss, except at the spectral regions in the 0.7 to 4.0 micron range, where H2O has strong absorption bands; specially at 0.94 and 1.1 microns.

AJ
December 20, 2010 3:58 pm

Lucia’s Lumpy model looks like it’s a restatement of Newton’s “Law” of Cooling. In Lumpy, the unrealized temperature change to a given forcing is realized by the formula exp(-t/T). In Newton’s model I’ve seen this expressed as exp(-rt). So it looks like: r = 1/T. Using this model, one can show that the “heat in the pipeline” converges to a maximum value. Given a high enough r value, this convergence happens quickly, resulting in the same rate of heating going into and coming out of the pipeline. This would imply that estimating sensitivity using linear regressions of ln(co2) and temperature is valid.
However, from what I’ve seen, it looks like using a constant “r” is not generally accepted by the thermodynamics experts. I haven’t seen a clear explanation of how the change is realized, but it looks like it’s some sort of “exponential integral” curve. Effectively, this has the realization rate decreasing over time due to ocean conduction, heat building up in the pipeline, and with linear regressions no longer being valid. However, since the amount realized is fairly rapid initially, it just looks like there is a linear relationship between the forcing and the modeled temp.
How one would use observations to pick the more suitable model or the parameters for the “exponential integral” model is beyond me. At one time I thought that the zero lag between Milankovich Cycles and temperature might challenge the “exponential integral” model, but that was just a SWAG.

onion
December 20, 2010 4:13 pm

“The first surprise was how close the model results are to a bozo simple linear response to the forcings plus the passage of time (R^2 = 0.91, average error less than a tenth of a degree). Foolish me, I had the idea that somehow the models were producing some kind of more sophisticated, complex, lagged, non-linear response to the forcings than that.”
They are, what you have done is approximate it.
“This almost completely linear response of the GISSE model makes it trivially easy to create IPCC style “scenarios” of the next hundred years of the climate. We just use our magic GISSE formula, that future temperature change is equal to 0.13 times the forcing change plus a quarter of a degree per century, and we can forecast the temperature change corresponding to any combination of projected future forcings …”
That makes no sense to me. Can’t you just look at the actual ModelE run for an IPCC scenario rather than trying to guess it with linear regression?
“Second, this analysis strongly suggests that in the absence of any change in forcing, the GISSE model still warms.”
No it doesn’t. What you’ve shown is that approximating modelE with a line of best fit doesn’t work and you have to add an extra constant term. That extra constant term is likely needed because the model has lagged response, not because there is an inherent warming trend in GISTEMP.
That is also probably why you find a climate sensitivity so low, because you are excluding the extra constant term as if it has nothing to do with the forcings (only in your regression model does it have nothing to do with the forcing – in the actual modelE it probably does)
“Third, the climate sensitivity shown by the analysis is only 0.13°C per W/m2 (0.5°C per doubling of CO2). This is far below the official NASA estimate of the response of the GISSE model to the forcings. They put the climate sensitivity from the GISSE model at about 0.7°C per W/m2 (2.7°C per doubling of CO2). I do not know why their official number is so different.”
Obviously you’ve made a mistake, because the answer is known. If they run the model with a forcing of 4 W/m2 and get a 3C temperature rise out of it, then ModelE has a sensitivity of about 0.7C per W/m2. If you find a different result for what sensitivity ModelE should show using linear regression, then you have found the wrong answer, which probably implies there is a flaw with the linear regression method (and I bet in this case it has to do with the exclusion of that constant 0.25C/century term)

tckev
December 20, 2010 4:21 pm

It’s nice to know that by the judicious application of a computer model, naturally chaotic changes can be smoothed away, just like the boom/bust cycles were taken out of the computerized economic model.
Unfortunately, the computers’ communication with the climate is about as good as their communication with the economy.
