Zero Point Three times the Forcing

Guest Post by Willis Eschenbach

Now that my blood pressure has returned to normal after responding to Dr. Trenberth, I returned to thinking about my earlier, somewhat unsatisfying attempt to make a very simple emulation of the GISS Model E (hereinafter GISSE) climate model. I described that attempt here; please see that post for the sources of the datasets used in this exercise.

After some reflection and investigation, I realized that the GISSE model treats all of the forcings equally … except volcanoes. For whatever reason, the GISSE climate model only gives the volcanic forcings about 40% of the weight of the rest of the forcings.

So I took the total forcing and reduced the volcanic component by 60%. Then it was easy, because nothing further was required: in the GISSE model temperature hindcast, the temperature change in degrees C is simply 30% of the adjusted forcing change in watts per square metre (W/m2). Figure 1 shows that result:

 

Figure 1. GISSE climate model hindcast temperatures, compared with temperatures hindcast using the formula ∆T = 0.3 ∆Q, where ∆T is the temperature change (°C) and ∆Q is the change in the same total forcing used by the GISSE model (W/m2), with the volcanic forcing reduced by 60%.

What are the implications of this curious finding?

First, a necessary detour into black boxes. For the purpose of this exercise, I have treated the GISSE model as a black box, for which I know only the inputs (forcings) and the outputs (hindcast temperatures). It’s like a detective game: trying to emulate what’s happening inside the GISSE black box without being able to see inside.

The resulting emulation can’t tell us what actually is happening inside the black box. For example, the black box may take the input, divide it by four, and then multiply the result by eight and output that number.

Looking at this from the outside of the black box, what we see is that if we input the number 2, the black box outputs the number 4. We input 3 and get 6, we input 5 and we get 10, and so on. So we conclude that the black box multiplies the input by 2.

Of course, the black box is not actually multiplying the input by 2. It is dividing by 4 and multiplying by 8. But from outside the black box that doesn’t matter: it is effectively multiplying the input by 2. We cannot use the emulation to say what is actually happening inside the black box, but we can say that the black box is functionally equivalent to a black box that multiplies by two. That functional equivalence means we can replace one black box with the other, because they give the same result. It also allows us to discover and state what the first black box is effectively doing: not what it is actually doing, but what it is effectively doing. I will return to this idea of functional equivalence shortly.
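For the code-minded, here is that toy example in Python. It is only an illustration of functional equivalence, nothing to do with the climate model yet:

```python
def black_box_actual(x):
    """What the box is actually doing: divide by four, then multiply by eight."""
    return (x / 4) * 8

def black_box_effective(x):
    """What the box is effectively doing: multiply by two."""
    return x * 2

# Functionally equivalent: identical output for every input we try.
for x in [2, 3, 5]:
    assert black_box_actual(x) == black_box_effective(x)
    print(x, "->", black_box_effective(x))  # 2 -> 4, 3 -> 6, 5 -> 10
```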

METHODS

Let me describe what I have done to get to the conclusions in Figure 1. First, I did a multiple linear regression using all the forcings, to see if the GISSE temperature hindcast could be expressed as a linear combination of the forcing inputs. It can, with an r^2 of 0.95. That’s a good fit.
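For those who want to follow along at home, here is a minimal sketch of that regression step in Python. The array names are mine, not GISS’s, and it assumes the ten annual forcing series and the GISSE hindcast have already been loaded:

```python
import numpy as np

# Assumed inputs (my names, not GISS's):
#   forcings  -- shape (n_years, 10), one column per individual forcing, W/m2
#   giss_temp -- shape (n_years,), GISSE hindcast temperature anomaly, degrees C

def fit_linear_combination(forcings, giss_temp):
    """Ordinary least squares fit: giss_temp ~ linear combination of the forcings."""
    X = np.column_stack([forcings, np.ones(len(giss_temp))])  # add an intercept column
    coeffs, *_ = np.linalg.lstsq(X, giss_temp, rcond=None)
    fitted = X @ coeffs
    ss_res = np.sum((giss_temp - fitted) ** 2)
    ss_tot = np.sum((giss_temp - giss_temp.mean()) ** 2)
    return coeffs, 1 - ss_res / ss_tot  # coefficients and r^2
```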

However, that result is almost certainly subject to “overfitting”, because there are ten individual forcings making up the total. With that many free parameters you can match almost anything, so the good fit by itself doesn’t mean much.

I looked further, and I saw that the total forcing versus temperature match was excellent except for one forcing — the volcanoes. Experimentation showed that the GISSE climate model is effectively reducing the volcanic forcing by about 60% from its original value (weighting it at about 40%), while the rest of the forcings are given full value.

Then I used the total GISS forcing with the appropriately reduced volcanic contribution, and we have the result shown in Figure 1. Temperature change is 30% of the change in the adjusted forcing. Simple as that. It’s a really, really short methods section because what the GISSE model is effectively doing is really, really simple.
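In code, the whole methods section is a one-liner plus an error check. Again this is a sketch with my own (assumed) variable names: cut the volcanic (stratospheric aerosol) forcing to 40% of its value, leave everything else alone, and multiply by 0.3.

```python
import numpy as np

def emulate_gisse(total_forcing, volcanic_forcing, scale=0.3, volcano_weight=0.4):
    """Emulated temperature change (degrees C): scale times the adjusted forcing (W/m2).

    total_forcing    -- total GISS forcing, including the volcanic term at full value
    volcanic_forcing -- the volcanic (stratospheric aerosol) forcing on its own
    """
    adjusted = total_forcing - (1.0 - volcano_weight) * volcanic_forcing
    return scale * adjusted

def rms_error(emulated, giss_temp):
    """Root-mean-square difference between the emulation and the GISSE hindcast."""
    return np.sqrt(np.mean((emulated - giss_temp) ** 2))  # about 0.05 degrees C here
```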

DISCUSSION

Now, what implications can (and can’t) be drawn from this interesting finding? What does it mean that, for temperature and to within five hundredths of a degree (0.05°C RMS error), the GISSE model black box is functionally equivalent to a black box that simply multiplies the adjusted forcing by 0.3?

My first implication would have to be that the almost unbelievable complexity of the Model E, with thousands of gridcells and dozens of atmospheric and oceanic levels simulated, and ice and land and lakes and everything else, all of that complexity masks a correspondingly almost unbelievable simplicity. The modellers really weren’t kidding when they said everything else averages out and all that’s left is radiation and temperature. I don’t think the climate works that way … but their model certainly does.

The second implication is an odd one, and quite important. Consider the fact that their temperature change hindcast (in degrees) is simply 0.3 times the forcing change (in watts per meter squared). But that is also a statement of the climate sensitivity, 0.3 degrees per W/m2. Converting this to degrees of warming for a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modelers, which is about 0.8°C per W/m2 or about 3°C per doubling.

The third implication is that there appears to be surprisingly little lag in their system. I can improve the fit of the above model slightly by adding a lag term based on the change in forcing with time, d(Q)/dt. But that only improves the r^2 to 0.95, mainly by clipping the peaks of the volcanic excursions (e.g., the temperature drops in 1885 and 1964). A more complex lag expression could probably improve that, but with the initial expression having an r^2 of 0.92, that only leaves 0.08 of room for improvement, and some of that is surely random noise.
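The lag term is the same kind of sketch: add a finite-difference d(Q)/dt column to the fit. The function below is illustrative only (annual data, so dt is one year), with coefficient names that are my assumption, not the model’s:

```python
import numpy as np

def fit_with_lag_term(adjusted_forcing, giss_temp):
    """Fit giss_temp ~ a * Q_adj + b * dQ_adj/dt + c, with dt = 1 year."""
    dq_dt = np.gradient(adjusted_forcing)  # centred finite difference of the forcing
    X = np.column_stack([adjusted_forcing, dq_dt, np.ones(len(giss_temp))])
    (a, b, c), *_ = np.linalg.lstsq(X, giss_temp, rcond=None)
    return a, b, c
```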

The fourth implication is that the model slavishly follows the radiative forcings. The model results are a 5-run average, so it is not clear how far an individual model run might stray from the fold. But since the five runs’ temperatures average out so close to 0.3 times the forcings, no individual one of them can be very far from the forcings.

Anyhow, that’s what I get out of the exercise. Further inferences, questions, objections, influences and expansions welcomed, politeness roolz, and please, no speculation about motives. Motives don’t matter.

w.

 

Baa Humbug
January 17, 2011 2:52 am

“Calling Dr Lacis, is Dr Lacis in da house?”

January 17, 2011 2:54 am

You say that there are ten individual forcings. Perhaps you should have added ‘So Far’
Peter Taylor’s book Chill lists some more recently discovered solar forcings which actually explain the slight warming of the 20th cent. (with zero CO2 input!)

Baa Humbug
January 17, 2011 2:59 am

Willis would you clarify paragraphs 2 and 3 for me please.
Volcanic forcing is weighted at 40% of other forcings, then you reduced that 40% by 60%? You mean you weighted volcanic forcing as 36%?
Also, is there an explanation or a reason why 1Wm2 of volcanic forcing would be treated as 0.4Wm2? (do I have that right?)

Moritz Petersen
January 17, 2011 3:04 am

Very interesting article
What is the source of “3.7 W/m2 per doubling of CO2” I have read this multiple times, but I would like to look into how this has been calculated.
Thanks
Moe

Carl Chapman
January 17, 2011 3:24 am

1.2 degrees C for a doubling of CO2 is the “no feedback” calculation from physics. 1.1 degrees C is close to 1.2 degrees, so the modellers might as well pack up and go home. A straight physics calculation is all that’s needed. It seems very co-incidental that all the feedbacks cancel each other out and add up to almost 0. Either the feedbacks are very small, or they just happen to cancel out. Since CO2 is going up, but temperatures haven’t gone up for 12 years, there must be a negative forcing cancelling CO2’s forcing. Since we’re at the peak of an El Nino in 2010, and we were also at the peak of an El Nino in 1998, the El Nino/La Nina can’t be the forcing. What is this mysterious negative forcing that they use to explain the lack of warming since 1998? Either the models don’t show the lack of warming and the models are wrong, or the models include some negative forcing. Could Dr. Trenberth please identify the negative forcing since 1998, or admit that the models have shown rapid warming while the climate has disagreed for 12 years.
Silly me. I forgot the third possibility: there has been rapid warming since 1998 but there’s a travesty causing the thousands of temperature measurements by satellite to miss it.

LazyTeenager
January 17, 2011 3:28 am

Willis says
——-
mospheric and oceanic levels simulated, and ice and land and lakes and everything else, all of that complexity masks a correspondingly almost unbelievable simplicity.
——-
I find this quite interesting. Even more interesting will be how others spin this with the intent of discrediting the modellers..
One school of thought I have noticed around here is that the models are so incredibly complex that they allow the output to be fiddled by hiding cheats amongst the complexity.
And of course there will be those who claim that we do not need the expense of doing the model calculations, since we just need a simple formula, and they knew this all along because their belly button told them.

peter_ga
January 17, 2011 3:31 am

“is that the temperature change in degrees C will be 30% of the adjusted forcing change in watts per square metre (W/m2)”
Does one not usually compare apples and apples using percentages, and not apples and oranges? I stopped reading after this figure. It was too mentally draining.

Geoff Sherrington
January 17, 2011 3:35 am

How come the thread a few down, The PAST is Not What it Used to be, by Ira Glickstein, gives two versions of negative slopes to the temperature graph 1940-1970, while your delta T graph here gives a positive slope?
It has been interesting over the years to see the initially negative slope gradually getting tortured back to a positive slope.

tmtisfree
January 17, 2011 4:04 am

Since CO2 is going up, but temperatures haven’t gone up for 12 years, there must be a negative forcing cancelling CO2’s forcing.

There is a fourth possibility: there is no CO2 forcing.

amicus curiae
January 17, 2011 4:20 am

not great at math, but seems to be a stacked deck?

sped
January 17, 2011 4:24 am

Could you post a spreadsheet with the data you used and their values?
Have you considered using nonlinear terms? Volterra (x^2) terms are pretty common. Saturation values too. Linear regression can still give you the affine Volterra vals, but finding saturation limits is a pita.

c1ue
January 17, 2011 4:31 am

A nice article, thank you for writing it.
I would just note, however, that modeling isn’t a case of all right or all wrong.
While you have shown that a high degree of correlation can be achieved vs. GISSE using a very simple set of parameters – the seeming small differences are exactly what causes the GISSE model to be complex.
I have experience in device modeling in semiconductors – in much the same fashion a transistor can be modeled by a very simple equation to within 90% accuracy.
However, achieving the remaining 5% or even 1% is what causes the transistor device model (especially at 2 digit nanometer scales) to balloon into multiple megabyte sizes.
Drawing from this same analogy, the real problem with climate models is that unlike transistor models, it is impossible to simulate all or even most operating conditions. Much of the aforementioned complexity has to do with nonlinear effects especially at the low current/initial condition stages, with the next category being long term decay type effects.
As you might imagine, both of these are chaotic; it would be extremely difficult if not impossible to model without real world data – and in fact in a number of cases the models simply encapsulate real world snapshots in a lookup table.
The point being not GIGO, but no data in yields no data out.

RockyRoad
January 17, 2011 4:33 am

LazyTeenager says:
January 17, 2011 at 3:28 am

Willis says
——-
mospheric and oceanic levels simulated, and ice and land and lakes and everything else, all of that complexity masks a correspondingly almost unbelievable simplicity.
——-

I find this quite interesting. Even more interesting will be how others spin this with the intent of discrediting the modellers..
One school of thought I have noticed around here is that the models are so incredibly complex that they allow the output to be fiddled by hiding cheats amongst the complexity.
And of course there will be those who claim that we do not need the expense of doing the model calculations, since we just need a simple formula, and they knew this all along because their belly button told them.

And your point is?
(Really, I kick myself every time I read something written by this Teenager poster as it always ends up being a waste of my time.)

Mike Haseler
January 17, 2011 4:44 am

Willis, a great piece of detective work. Only question, you said volcano forcing was lowered, but didn’t say the r^2 value improved by reducing the value. Nor did you mention any improvement in r^2 with introducing time lags.
But at the end of the day, it really doesn’t matter, because fundamentally you have shown that for all their complexity, the climate models really boil down to one assumption regarding one constant linking radiative forcing and warming.
All the rest is just frills and PR bells to pretend there is actually real science in this guess work.

björn
January 17, 2011 4:55 am

If “everything else averages out and all that’s left is radiation and temperature”, well, shouldn’t one apply Occam’s razor to the theory and cut away the useless bits?

Lance Wallace
January 17, 2011 4:55 am

Willis I have tried reproducing your results but am not getting a perfect match. I used your 10 forcings, reduced the one called StratAer which I assume is stratospheric aerosols due to volcanoes by 60%, and summed the 10 forcings, then multiplied the total by 0.3. The resulting graph looks like yours but ranges from -0.39 in 1884 to +0.58 in 2003, whereas your graph (blue line) goes from about -0.52 to + 0.47. There appears to be a nearly constant offset of about 0.1 between the two graphs. What am I doing wrong?

Richard S Courtney
January 17, 2011 5:04 am

Willis:
Superb! Thank you. An excellent analysis.
Please write it up for publication in a journal so there can be no justifiable reason for the next IPCC Report to ignore it.
The only – yes, ONLY – support for AGW is the models. But a model does what its designers and constructors define it will do: a model is not capable of doing anything else. So, a clear demonstration of what the models are made to do refutes the only existing support for AGW.
Your analysis shows what the GISS Model E is doing in effect; i.e.
“The second implication is an odd one, and quite important. Consider the fact that their temperature change hindcast (in degrees) is simply 0.3 times the forcing change (in watts per meter squared). But that is also a statement of the climate sensitivity, 0.3 degrees per W/m2. Converting this to degrees of warming for a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modelers, which is about 0.8°C per W/m2 or about 3°C per doubling.”
This finding of so large a difference between the effective climate sensitivity of the model and the “canonical value” is so extremely important that it requires publication in a form that prevents the IPCC from justifiably ignoring it. Indeed, it is so important that it demands research provides similar assessment of the effective behaviour of other climate models.
The IPCC will try to spin your finding and to downplay it (as past IPCC Reports have with all other inconvenient truths) but, in my opinion, the IPCC needs to be prevented from having an excuse for ignoring it.
Richard

Steve Keohane
January 17, 2011 5:05 am

Another clear purveyance, Willis. 1.1°C per doubling is close to the 1.5°C Arrhenius (sp?) finally settled on, half of what the IPCC yearns for.
Geoff Sherrington says: January 17, 2011 at 3:35 am
How come the thread a few down, The PAST is Not What it Used to be, by Ira Glickstein, gives two versions of negative slopes to the temperature graph 1940-1970, while your delta T graph here gives a positive slope?
It has been interesting over the years to see the initially negative slope gradually getting tortured back to a positive slope.

In case others have not seen this: http://i54.tinypic.com/fylq2w.jpg
LazyTeenager says: January 17, 2011 at 3:28 am “and they knew this all along because their belly button told them.”
Kettle meet pot.

Lance Wallace
January 17, 2011 5:20 am

Willis, now I have looked at your earlier Excel sheet and have done your recommended reduction of StratAer by 0.6 and I continue to get the same value of -0.39 for the 1884 volcano year. This doesn’t match the value of about -0.52 that you show in the graph. And all other values are also off by roughly 0.1 from the graph.

Joel Shore
January 17, 2011 5:53 am

Willis,
(1) What you are computing here is closer to being the transient climate response of the model, not the climate sensitivity. I don’t know what the transient climate response for GISS Model E is, but it is lower than the equilibrium sensitivity.
(2) I don’t think the reason why you have to multiply the volcanic forcing by a number to reduce it is any real mystery. A system with a lagged response (due to the thermal inertia of the oceans), like the climate system, will tend to show a reduced amplitude in response to higher-frequency perturbations than to lower-frequency ones. In fact, you could probably model this with some simple AC circuit analogy.
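A minimal sketch of that lagged-response idea, assuming a simple one-box model dT/dt = (0.3*Q - T)/tau with illustrative parameter values only: a brief volcanic-style spike in the forcing is strongly attenuated, while a slow ramp to the same value is not.

```python
import numpy as np

def one_box_response(forcing, sensitivity=0.3, tau_years=5.0, dt=1.0):
    """Lagged (low-pass) response: dT/dt = (sensitivity * Q - T) / tau, Euler-stepped."""
    temps = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        temps[i] = temps[i - 1] + dt * (sensitivity * forcing[i] - temps[i - 1]) / tau_years
    return temps

years = np.arange(100)
spike = np.where((years >= 50) & (years < 52), -3.0, 0.0)  # two-year volcanic-style pulse
ramp = np.linspace(0.0, -3.0, 100)                         # slow ramp to the same -3 W/m2
print(one_box_response(spike).min())  # about -0.32: the short pulse is strongly attenuated
print(one_box_response(ramp).min())   # about -0.86: the slow ramp nearly reaches 0.3 * -3
```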

Joel Shore
January 17, 2011 6:00 am

Willis,
Actually, if you look at Table 8.2 of the IPCC AR4 WG1 report, you’ll see that the GISS Model-EH and -ER both have an equilibrium climate sensitivity of 2.7 C and they have transient climate responses of 1.6 and 1.5 C, respectively. These transient climate responses are still a little bit higher than what you are finding (1.1 C when you used 3.7 W/m^2 for doubling, although it would be more like 1.2 C if you used 4.0 W/m^2) for some reason, but not too far off.

Joel Shore
January 17, 2011 6:06 am

Mike Haseler says:

But at the end of the day, it really doesn’t matter, because fundamentally you have shown that for all their complexity, the climate models really boil down to one assumption regarding one constant linking radiative forcing and warming.

For the global temperature…and ignoring fluctuations (which averaging over several runs effectively does), I don’t think this is really any surprise. In fact, there has been a considerable amount of work using simplified models to emulate the results of the climate models for such things (see Section 8.8 of the IPCC AR4 WG1 report). And, it is also why the mono-numeric concepts of equilibrium climate sensitivity and transient climate response are useful at all.
However, the climate models also put out a lot more information regarding where the warming is greater or lesser, how weather patterns change, and so forth.
