Zero Point Three times the Forcing

Guest Post by Willis Eschenbach

Now that my blood pressure has returned to normal after responding to Dr. Trenberth, I have returned to my earlier, somewhat unsatisfying attempt to make a very simple emulation of the GISS Model E (hereinafter GISSE) climate model. I described that attempt here; please see that post for the sources of the datasets used in this exercise.

After some reflection and investigation, I realized that the GISSE model treats all of the forcings equally … except volcanoes. For whatever reason, the GISSE climate model only gives the volcanic forcings about 40% of the weight of the rest of the forcings.

So I took the total forcings and reduced the volcanic forcing by 60%. Then it was easy, because nothing further was required. It turns out that in the GISSE model temperature hindcast, the temperature change in degrees C is simply 30% of the adjusted forcing change in watts per square metre (W/m2). Figure 1 shows that result:


Figure 1. GISSE climate model hindcast temperatures, compared with temperatures hindcast using the formula ∆T = 0.3 ∆Q, where T is temperature (°C) and Q is the total forcing (W/m2) used by the GISSE model, with the volcanic forcing reduced by 60%.
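For anyone who wants to check this at home, the whole emulation fits in a few lines of code. Here is a minimal sketch in Python, assuming you have loaded the annual GISS forcing series yourself (the variable names and the stand-in numbers are mine, not GISS's; the data sources are in the earlier post):

```python
import numpy as np

def giss_emulation(total_forcing, volcanic_forcing, volcanic_weight=0.4, lam=0.3):
    """Emulate the GISSE hindcast: keep only 40% of the volcanic forcing,
    then multiply the adjusted total forcing (W/m2) by 0.3 (deg C per W/m2)."""
    adjusted = total_forcing - (1.0 - volcanic_weight) * volcanic_forcing
    return lam * adjusted

# Stand-in data; replace with the real annual GISS series:
total = np.array([0.1, 0.3, -1.2, 0.5])      # total forcing, W/m2
volcanic = np.array([0.0, 0.0, -1.5, 0.0])   # volcanic component, W/m2
print(giss_emulation(total, volcanic))       # hindcast temperature anomaly, deg C
```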

What are the implications of this curious finding?

First, a necessary detour into black boxes. For the purposes of this exercise, I have treated the GISSE model as a black box, for which I know only the inputs (forcings) and the outputs (hindcast temperatures). It's like a detective game: trying to emulate what's happening inside the GISSE black box without being able to see inside.

The resulting emulation can’t tell us what actually is happening inside the black box. For example, the black box may take the input, divide it by four, and then multiply the result by eight and output that number.

Looking at this from the outside of the black box, what we see is that if we input the number 2, the black box outputs the number 4. We input 3 and get 6, we input 5 and we get 10, and so on. So we conclude that the black box multiplies the input by 2.

Of course, the black box is not actually multiplying the input by 2. It is dividing by 4 and multiplying by 8. But from outside the black box that doesn’t matter. It is effectively multiplying the input by 2. We cannot use the emulation to say what is actually happening inside the black box. But we can say that the black box is functionally equivalent to a black box that multiplies by two. The functional equivalence means that we can replace one black box with the other because they give the same result. It also allows us to discover and state what the first black box is effectively doing. Not what it is actually doing, but what it is effectively doing. I will return to this idea of functional equivalence shortly.
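To put that toy example in code, here are the two boxes side by side; different internals, identical input/output behaviour:

```python
def black_box(x):
    # What the box actually does: divide by four, then multiply by eight.
    return (x / 4) * 8

def emulation(x):
    # What the box is effectively doing.
    return 2 * x

# Functionally equivalent: same output for every input we try.
assert all(black_box(x) == emulation(x) for x in (2, 3, 5))
```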

METHODS

Let me describe what I have done to get to the conclusions in Figure 1. First, I did a multiple linear regression using all the forcings, to see if the GISSE temperature hindcast could be expressed as a linear combination of the forcing inputs. It can, with an r² of 0.95. That's a good fit.
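For the curious, the regression itself is nothing exotic. A minimal sketch (the random numbers below are just stand-ins for the real forcing and hindcast series, which come from the sources in the earlier post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: X holds the ten individual forcings by year (W/m2),
# y holds the GISSE hindcast temperature anomaly (deg C).
n_years = 124
X = rng.normal(size=(n_years, 10))
y = X @ rng.normal(size=10) + 0.05 * rng.normal(size=n_years)

# Ordinary least squares: is the hindcast a linear combination of the forcings?
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ coef
r2 = 1.0 - np.sum((y - fit) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"r² = {r2:.2f}")
```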

However, that result is almost certainly subject to “overfitting”, because there are ten individual forcings that make up the total. With that many free parameters you can match almost anything, so the good fit doesn't mean much on its own.

I looked further, and I saw that the match between total forcing and temperature was excellent except for one forcing: the volcanoes. Experimentation showed that the GISSE climate model underweights the volcanic forcing by about 60% from its original value, while the rest of the forcings are given full weight.
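One way to find that weight is a simple scan: try each possible volcanic weight, fit a single slope for that weight, and keep whichever weight gives the smallest error. In outline (the function and variable names are mine):

```python
import numpy as np

def best_volcanic_weight(total_f, volc_f, hindcast):
    """Scan volcanic weights from 0 to 1. For each weight w, keep fraction w
    of the volcanic forcing, fit one slope lam through the origin, and
    return the (rms_error, weight, lam) with the smallest RMS error."""
    best = (np.inf, None, None)
    for w in np.linspace(0.0, 1.0, 101):
        adj = total_f - (1.0 - w) * volc_f
        lam = np.dot(adj, hindcast) / np.dot(adj, adj)
        rms = np.sqrt(np.mean((hindcast - lam * adj) ** 2))
        if rms < best[0]:
            best = (rms, w, lam)
    return best
```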

Then I used the total GISS forcing with the appropriately reduced volcanic contribution, and we have the result shown in Figure 1. Temperature change is 30% of the change in the adjusted forcing. Simple as that. It’s a really, really short methods section because what the GISSE model is effectively doing is really, really simple.

DISCUSSION

Now, what are (and aren't) the implications of this interesting finding? What does it mean that, regarding temperature, the GISSE model black box is functionally equivalent, to within an accuracy of five hundredths of a degree (0.05°C RMS error), to a black box that simply multiplies the adjusted forcing by 0.3?

My first implication would have to be that the almost unbelievable complexity of the Model E, with its thousands of gridcells, its dozens of simulated atmospheric and oceanic levels, and its ice and land and lakes and everything else, masks a correspondingly almost unbelievable simplicity. The modellers really weren't kidding when they said everything else averages out and all that's left is radiation and temperature. I don't think the climate works that way … but their model certainly does.

The second implication is an odd one, and quite important. Consider the fact that their temperature change hindcast (in degrees) is simply 0.3 times the forcing change (in watts per square metre). But that is also a statement of the climate sensitivity: 0.3°C per W/m2. Converting this to degrees of warming for a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of about 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modellers, which is about 0.8°C per W/m2, or about 3°C per doubling.
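Just to make that arithmetic explicit (3.7 W/m2 per doubling of CO2 is the standard figure):

```python
lam = 0.3         # deg C per W/m2, from the emulation
f2x = 3.7         # W/m2 per doubling of CO2
print(lam * f2x)  # about 1.1 deg C per doubling
```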

The third implication is that there appears to be surprisingly little lag in their system. I can improve the fit of the above model slightly by adding a lag term based on the change in forcing with time, d(Q)/dt. But that only improves the r² to 0.95, mainly by clipping the peaks of the volcanic excursions (the temperature drops in, e.g., 1885 and 1964). A more complex lag expression could probably improve that further, but with the initial expression having an r² of 0.92, there is only 0.08 of room for improvement, and some of that is surely random noise.
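In outline, that lag correction looks something like this (a minimal sketch; the exact form and the constant k are illustrative, not the precise expression I fitted). In a volcanic year the forcing drops sharply, d(Q)/dt goes strongly negative, and the correction clips the peak of the temperature excursion:

```python
import numpy as np

def emulate_with_lag(adj_forcing, lam=0.3, k=0.1):
    """Plain emulation plus a lag term proportional to d(Q)/dt.
    With k = 0 this reduces to lam * adj_forcing."""
    dq = np.gradient(adj_forcing)   # year-to-year change in forcing (W/m2 per year)
    return lam * adj_forcing - k * dq
```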

The fourth implication is that the model slavishly follows the radiative forcings. The model results are a 5-run average, so it is not clear how far an individual model run might stray from the fold. But since the five runs’ temperatures average out so close to 0.3 times the forcings, no individual one of them can be very far from the forcings.

Anyhow, that’s what I get out of the exercise. Further inferences, questions, objections, influences and expansions welcomed, politeness roolz, and please, no speculation about motives. Motives don’t matter.

w.


k winterkorn
January 17, 2011 11:31 am

Related to this and the earlier refutation of Trenberth:
There are several important hypotheses re AGW with corresponding Null hypotheses:
1. Hypothesis #1: That we have measured a rise in global temperature over several centuries with sufficient accuracy to move on to hypotheses re causation. Null Hypothesis: We have not accurately measured global temperatures adequately to move on to any other hypotheses.
—– The Mann Hockey Stick was the “fact” on which the “science” of the IPCC statement that global warming is “unequivocal” was based. The Hockey Stick has been broken by follow-up analysis (e.g., confirmation that there was a Medieval Warm Period).
—– The Surface Stations project has shown the unreliability of the temperature measurements.
—– Demonstration of the importance of Urban Heat Island effects shows that much of “global” warming is actually multi-local, not a diffuse global phenomenon.
2. Hypothesis #2: That global atmospheric CO2 is rising, unnaturally, mostly due to human activity. Null Hypothesis: most or all of measured changes in CO2 are natural.
—– Though a likely hypothesis, based on the coincidence and size of the measured increase of CO2 in the air in proportion to human-caused CO2 emissions (plus the isotope issue), there is still controversy, since CO2 levels are known to vary greatly without human input.
3. Hypothesis #3: That rising CO2 in the atmosphere predictably causes a measurable rise in global temps. Null Hypothesis: CO2 effects on global temps are too small to be definitively measured or inferred.
—–Given uncertainty of global mean temp as a measurement and the chaotic nature of the weather system of the Earth in general, a CO2 signal would need to be large to be definitively separated from the noise.
—–In the last couple of centuries, global temp changes have not correlated well with CO2 changes. Absence of correlation is strong evidence against a hypothesis.
4. Hypothesis #4: Not only is the Earth warming, due to man-caused CO2 changes, but the process is dominated by positive feedbacks and will become catastrophic. Null Hypothesis: The Earth’s climate system is not dominated by positive feedback (hence, CO2-driven warming will be mild, or, because of negative feedback effects, too small to definitively detect).
—–The Earth’s temps have been stable within a small range (on a Kelvin scale, which is most apropos) for eons, despite wild swings in CO2. This strongly suggests the system is dominated by negative feedbacks.
—–There is little certainty regarding the role clouds play in global temps, except that the role could be dominating.
All of the four hypotheses above remain in play. The science of climate change is in its infancy, far from settled.

January 17, 2011 11:48 am

IMO Trenberth is a model climate scientist.
When the data disagrees with the model, he blames the data.

January 17, 2011 12:09 pm

Very nice empirical analysis of GISS E output, Willis. Your result is very similar to the one described in my Skeptic article, which also showed that GCMs in general just linearly propagate changes in GHG forcing. My own empirical factor was very close to yours: 0.36 x (fractional forcing change). It was derived from Manabe's modeling during the 1960s, and his work is clearly still current in modern climate models.
Like your result, that 0.36 factor produced warming curves that matched the outputs of multiple modern climate models. Anthony kindly posted the Skeptic article last June, here.
One general outcome of this work: if a simple linear equation produces prognostications of surface air temperature that match those of expensive climate models, why does anyone need expensive climate models?
And if those models contain the complete physical description of Earth climate, as is claimed by the “we know all the forcings” crowd, then the linear equation clearly and accurately reproduces the complete temperature outputs from the complete physical theory.
So there’s obviously no more for climate scientists to do. They have produced their final theory, which we can all emulate by simple means. They can now, with supreme satisfaction, retire the field and go do something else that usefully employs their great acumen in physics.

Jim Petrie
January 17, 2011 12:47 pm

What would happen if you assumed a 0.1% decrease in CO2 forcing due to clouds? Obviously, if you change the sign of the forcing, you abolish man-made global warming entirely. The warming supporters might regard this as being a little unfair!
But add one simple step. What would you then have to do to volcanic forcing to get a 90% correlation with your hindcast?

Shevva
January 17, 2011 12:48 pm

Hi Willis, great post. I always wonder how they measure the Sun in these models. How do they measure every single atom of energy that makes it to the Earth?
I watched a BBC 2 programme that stated there is no such thing as temperature, it is just a transfer of energy. So you would have to understand how every energy process worked in our solar system (assuming energy does not come from outside the solar system) before you could single out CO2? Clever, these climate scientists.

January 17, 2011 12:50 pm

It seems to me that these “forcings” are little more than “fudge factors” used to make poorly designed models appear to work; and global averaging over long periods masks reality. In multiple linear regression analysis, to expect meaningful statistical significance, I use a rule of thumb that the number of data points must be at least five times the number of possible factors with all their possible interactions; and that is assuming the dependent variable is affected linearly.

Paul Martin
January 17, 2011 12:57 pm

Off topic, but funny.
Global Wobbling Denialism amongst astrologers.

January 17, 2011 1:05 pm

Dear Willis,
How many teraflops in your supercomputer? Or did you do your modeling on a paper bag with a pencil? Inquiring taxpayers wish to know. Because if you get the same results with the paper-bag method, why in the blue blazes are we spending megabucks on shiny black boxes for the GISSers?

DocMartyn
January 17, 2011 1:26 pm

“But a model does what its designers and constructors define it will do”
No, a true model is designed to give you insights into the system by generating data that was not previously known. This ‘model’ is more like a fit than a true model; any fool can fit a polynomial to a plot, but you don’t get information from it.

Jim D
January 17, 2011 1:27 pm

Why should the temperature change be proportional to the instantaneous forcing at the end of a period, rather than the average forcing over the period? If the average forcing is 1 W/m2 and the temperature change is 0.7 degrees, the sensitivity is 0.7°C per W/m2, giving a climate sensitivity to CO2 doubling of 2.6°C.

jorgekafkazar
January 17, 2011 1:35 pm

To Willis: Great post. My only question is whether autocorrelation in the GISSE output requires a corresponding correction in the calculation of r²? A reduction in the latter might give you some running room for investigation of other parameters, lags, etc.–an opportunity.
To Joel Shore: Nicely constructed comments, very helpful.
To C1UE: If, as you state, “the models simply encapsulate real world snapshots in a lookup table,” that almost guarantees that the models are garbage.
To Rex: Yes, you’re right: “mean global annual temperature” is meaningless in terms of the actual physics. We need to remember that at all times.
To Roy Clark: True. My two sensors tell me, after much painful observation, that air temperatures differ widely from barefoot asphalt temperatures. Another thing that may not have been properly allowed for.

January 17, 2011 1:44 pm

Pamela Gray says:
January 17, 2011 at 6:52 am
I’ve attempted to do what you suggest. http://www.kidswincom.net/CO2OLR.pdf

January 17, 2011 1:56 pm

Joel Shore says:
January 17, 2011 at 6:06 am
“However, the climate models also put out a lot more information regarding where the warming is greater or lesser, how weather patterns change, and so forth.”
That’s hilarious, Joel. You know very well they might try to do that, but they are singularly unsuccessful. Try Koutsoyiannis, I think it is.

Ferdinand Engelbeen
January 17, 2011 1:59 pm

NicL_UK says:
January 17, 2011 at 11:12 am
Indeed that is a backup of the article! Thanks for the link, immediately downloaded it here…

Richard S Courtney
January 17, 2011 2:12 pm

DocMartyn:
At January 17, 2011 at 1:26 pm you respond to my true statement that said;
“But a model does what its designers and constructors define it will do”
by asserting
“No, a true model is designed to give you insights into the system by generating data that was not previously known. This ‘model’ are more like fits than true models, any fool can fit a polynomial to a plot, but you don’t get information from it.”
Oh!? Really? How does a computer model do anything other than what it is programmed to do? By including a random number generator in its code?
And what “insights” have the climate models provided?
The climate models (GCMs and radiative transfer models) are curve fits adjusted by applying a CO2 climate sensitivity forcing that differs between models by a factor of 2.4, and then adjusted to hindcast previous global temperature data by applying an assumed (n.b. assumed, not estimated) aerosol cooling forcing that also differs between models by a factor of 2.4.
(See: Courtney RS, ‘An Assessment of Validation Experiments Conducted on Computer Models of Global Climate Using the General Circulation Model of the UK’s Hadley Centre’, E&E, vol. 10, no. 5 (1999); and Kiehl JT, ‘Twentieth Century Climate Model Response and Climate Sensitivity’, GRL, vol. 34 (2007).)
Hence, it is no wonder that Willis Eschenbach achieves similar performance to one of the models by simply applying a curve fit to the data. His simple model provides the same “information” as the climate models and for the same reason.
Richard

jorgekafkazar
January 17, 2011 2:16 pm

I once joined a project late in the game. The twelve-input computer program they were using was very complex internally–so complex that no one on the team really grasped it. After puzzling over some oddities (such as occasional outputs with minus signs), I discovered that just taking the average of six variable differentials gave the exact same results as the computer. On further investigation, I found that someone on the project had replaced two of the inputs with (in effect) lookup tables. That explained the negative outputs! Good data in, garbage out. They’d fooled themselves by the complexity of the program into believing it was a valid model of the thermodynamic structure of the system. I think that’s the case with all the climate models ever constructed.
Vis-a-vis global temperature, the net advantage of a backcast-tweaked, megabuck climate model over a historical curve on graph paper is that you can extrapolate the model without owning a 25¢ straightedge, and get an answer that is equally wrong. Climate models are the least cost-effective things ever made.

January 17, 2011 2:39 pm

Willis,
Nice work, somewhat akin to Lucia’s Lumpy. You should also note that scientists are pursuing the statistical emulation of GCMs. This is something we always do in high-end modelling, especially for design of experiments in a high-DOF parameter space.
Also, I’m not surprised to find issues in the volcano/aerosol area.
A couple points.
1. It would have been interesting to build your model with half of the data.
2. I’m pretty sure your sensitivity here is the transient response; wait 60 years and you’ll see the equilibrium response… er, you need a GCM to do that. Nevertheless, I do think you can use this work to set a lower bound for sensitivity. So a Lukewarmer is going to say that the equilibrium sensitivity is between 1°C and 3°C (maybe 2.5°C; we are still determining membership rules). Given the inertia in the system, it’s safe to say that the equilibrium response will be higher than the transient.

Mike Haseler
January 17, 2011 2:41 pm

Richard S Courtney says: January 17, 2011 at 9:20 am
“Mike Haseler, … On this basis isn’t the best estimate, based on the climategate team’s own model, a prediction that expected warming is 0.44°C for a doubling of CO2?”
I answer: on the basis of Willis Eschenbach’s analysis, the answer is yes.

Thanks, Richard, for the confirmation. Yes, I agree it needs writing up and publicising; other people need to see the implication. It’s the first time I’ve seen any kind of estimate of the effect of CO2 that is actually based on real-world events rather than post-modernist scientific fantasy.
This is quite a momentous post for those of us who want to know the truth (for good or ill)

c1ue
January 17, 2011 2:44 pm

Paddy: In response to me saying: “I would just note, however, that modeling isn’t a case of all right or all wrong.”
You noted: Isn’t a little bit correct or a little bit wrong the same as being a little bit pregnant?
This is not a correct analogy. For one thing, a model may be 100% correct over 90% of its range but be 100% wrong in 10%.
If your operating conditions in a given circuit lie within this 90%, then the model is perfectly fine – for example a digital circuit.
If, on the other hand, the 10% in question is exercised and, more importantly, affects the overall operation of the device (i.e. in an analog world where startup behavior sets initial conditions for later behavior), then the model’s output would be wrong.
The point I was making wasn’t that Mr. Eschenbach’s article is incorrect – it is that the complexity of climate models doesn’t necessarily mean the complexity was intended to model a simple behavior. Much of the complexity could be specifically to handle corner cases (i.e. the 10% in the analogy).
Again this says nothing about validity of the climate models; in the semiconductor physics world there are constant test chips going through to validate both overall models and specific model parameters/behaviors.
Obviously the climate models have no such verification going on.
jorgekafkazar said: If, as you state, “the models simply encapsulate real world snapshots in a lookup table,” that almost guarantees that the models are garbage.
This is a wrong statement. The real world is exactly that – and a model which contains all possible real world behavior would therefore be reality. Of course this is not possible, but again your blanket statement is invalidated by this example.
More importantly there are behaviors which cannot be modeled using equations because of their inherent structure.
An example for this is ‘flash’ memory. Unlike other forms of memory, ‘flash’ memory actually uses quantum tunneling – i.e. current leaping through an otherwise opaque barrier via quantum effects.
There are no parameterizable equations which a simulator can handle (at any reasonable level of usability) which capture this behavior, thus it is far easier and more useful to create a lookup table to recreate this behavior in a model.
With respect to climate models – there are many aspects which are chaotic including but not limited to: cloud behavior, molecular level friction, hurricane formation (not so much over a period of time but in specific times/places), initiation of rainfall, etc etc.

January 17, 2011 2:49 pm

I will point out to people that what Willis has done here is no different from the work some have done correlating temperature to sunspots or to movements of planets or whatever.
There is one critical difference, however: the regressors have the right units. There are understood mechanisms that connect the independent and dependent variables.
Put it this way: if Willis “hid” his 10 variables from you, or told you those variables were sunspot numbers, the position of the magnetic field, the barycentric whoha, and the integrated drift in the magnetic pole, I wonder how many people would say
“great science, Willis”.