Zero Point Three times the Forcing

Guest Post by Willis Eschenbach

Now that my blood pressure has returned to normal after responding to Dr. Trenberth, I have returned to thinking about my earlier, somewhat unsatisfying attempt to make a very simple emulation of the GISS Model E (hereinafter GISSE) climate model. I described that attempt here; please see that post for the sources of the datasets used in this exercise.

After some reflection and investigation, I realized that the GISSE model treats all of the forcings equally … except volcanoes. For whatever reason, the GISSE climate model only gives the volcanic forcings about 40% of the weight of the rest of the forcings.

So I took the total forcings, and reduced the volcanic forcing by 60%. Then it was easy, because nothing further was required. It turns out that the GISSE model temperature hindcast is that the temperature change in degrees C will be 30% of the adjusted forcing change in watts per square metre (W/m2). Figure 1 shows that result:

 

Figure 1. GISSE climate model hindcast temperatures, compared with temperatures hindcast using the formula ∆T = 0.3 ∆Q, where T is temperature in °C and Q is the total forcing (W/m2) used by the GISSE model, with the volcanic forcing reduced by 60%.
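The emulation behind Figure 1 is simple enough to state in a few lines of code. This is a sketch under my own naming (the function name and the toy numbers are mine, not the actual GISS forcing series):

```python
import numpy as np

def emulate_gisse(total_forcing, volcanic_forcing, sensitivity=0.3, volcano_weight=0.4):
    """Emulate the GISSE hindcast as described in the post: keep only 40%
    of the volcanic forcing (i.e. reduce it by 60%), then multiply the
    adjusted forcing anomaly by 0.3 degC per W/m2.
    Inputs are 1-D arrays of annual forcings in W/m2."""
    adjusted = total_forcing - (1.0 - volcano_weight) * volcanic_forcing
    # Temperature anomaly relative to the first year of the record
    return sensitivity * (adjusted - adjusted[0])

# Toy illustration with made-up numbers (not the actual GISS data)
total = np.array([0.0, 0.5, 1.0, -1.0, 1.5])
volcanic = np.array([0.0, 0.0, 0.0, -2.0, 0.0])
print(emulate_gisse(total, volcanic))  # the -2 W/m2 volcano year counts as only -0.8
```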

What are the implications of this curious finding?

First, a necessary detour into black boxes. For the purpose of this exercise, I have treated the GISS-E model as a black box, for which I know only the inputs (forcings) and outputs (hindcast temperatures). It’s like a detective game, trying to emulate what’s happening inside the GISSE black box without being able to see inside.

The resulting emulation can’t tell us what actually is happening inside the black box. For example, the black box may take the input, divide it by four, and then multiply the result by eight and output that number.

Looking at this from the outside of the black box, what we see is that if we input the number 2, the black box outputs the number 4. We input 3 and get 6, we input 5 and we get 10, and so on. So we conclude that the black box multiplies the input by 2.

Of course, the black box is not actually multiplying the input by 2. It is dividing by 4 and multiplying by 8. But from outside the black box that doesn’t matter. It is effectively multiplying the input by 2. We cannot use the emulation to say what is actually happening inside the black box. But we can say that the black box is functionally equivalent to a black box that multiplies by two. The functional equivalence means that we can replace one black box with the other because they give the same result. It also allows us to discover and state what the first black box is effectively doing. Not what it is actually doing, but what it is effectively doing. I will return to this idea of functional equivalence shortly.

METHODS

Let me describe what I have done to get to the conclusions in Figure 1. First, I did a multiple linear regression using all the forcings, to see if the GISSE temperature hindcast could be expressed as a linear combination of the forcing inputs. It can, with an r^2 of 0.95. That’s a good fit.
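As a sketch of that regression step, with synthetic stand-in data (the actual ten GISS forcing series and the Model E hindcast are not reproduced here), the fit and its r^2 can be computed like this:

```python
import numpy as np

# Regress a "hindcast" temperature on ten individual forcing series and
# report r^2. The data are synthetic stand-ins built to have a linear
# relationship plus noise, mimicking the structure of the exercise.
rng = np.random.default_rng(0)
n_years, n_forcings = 120, 10
X = rng.normal(size=(n_years, n_forcings))          # forcing series (W/m2)
true_coef = rng.normal(size=n_forcings)
y = X @ true_coef + 0.1 * rng.normal(size=n_years)  # "hindcast" temperature

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
r2 = 1 - resid.var() / y.var()
print(f"r^2 = {r2:.3f}")
```

With ten free coefficients, a high r^2 comes cheaply, which is exactly the overfitting concern raised in the next paragraph.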

However, that result is almost certainly subject to “overfitting”, because there are ten individual forcings that make up the total. With so many forcings, you end up with lots of parameters, so you can match most anything. This means that the good fit doesn’t mean a lot.

I looked further, and I saw that the total forcing versus temperature match was excellent except for one forcing — the volcanoes. Experimentation showed that the GISSE climate model is underweighting the volcanic forcings by about 60% from the original value, while the rest of the forcings are given full value.

Then I used the total GISS forcing with the appropriately reduced volcanic contribution, and we have the result shown in Figure 1. Temperature change is 30% of the change in the adjusted forcing. Simple as that. It’s a really, really short methods section because what the GISSE model is effectively doing is really, really simple.

DISCUSSION

Now, what are (and aren’t) the implications of this interesting finding? What does it mean that, to within five hundredths of a degree (0.05°C RMS error), the GISSE model black box is functionally equivalent to a black box that simply multiplies the adjusted forcing by 0.3?

My first implication would have to be that the almost unbelievable complexity of the Model E, with thousands of gridcells and dozens of atmospheric and oceanic levels simulated, and ice and land and lakes and everything else, all of that complexity masks a correspondingly almost unbelievable simplicity. The modellers really weren’t kidding when they said everything else averages out and all that’s left is radiation and temperature. I don’t think the climate works that way … but their model certainly does.

The second implication is an odd one, and quite important. Consider the fact that their temperature change hindcast (in degrees) is simply 0.3 times the forcing change (in watts per meter squared). But that is also a statement of the climate sensitivity, 0.3 degrees per W/m2. Converting this to degrees of warming for a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modelers, which is about 0.8°C per W/m2 or about 3°C per doubling.
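The unit conversion in that paragraph, written out explicitly (the 3.7 W/m2 per doubling is the conventional value discussed later in the comments):

```python
# Climate sensitivity in degC per (W/m2), times forcing per CO2 doubling
# in W/m2, gives degC per doubling.
sensitivity = 0.3           # degC per W/m2, from the emulation
forcing_per_doubling = 3.7  # W/m2 per CO2 doubling (conventional value)
warming_per_doubling = sensitivity * forcing_per_doubling
print(round(warming_per_doubling, 2))  # about 1.1 degC per doubling

# The canonical GISS value quoted in the post, for comparison
canonical = 0.8 * forcing_per_doubling  # about 3 degC per doubling
```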

The third implication is that there appears to be surprisingly little lag in their system. I can improve the fit of the above model slightly by adding a lag term based on the change in forcing with time d(Q)/dt. But that only improves the r^2 to 0.95, mainly by clipping the peaks of the volcanic excursions (temperature drops in e.g. 1885, 1964). A more complex lag expression could probably improve that, but with the initial expression having an r^2 of 0.92, that only leaves 0.08 of room for improvement, and some of that is surely random noise.
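The lag-term fit described above can be sketched as a two-variable regression on Q and dQ/dt. This uses synthetic data of my own construction, not the volcano-adjusted GISS forcing, so the coefficients are illustrative only:

```python
import numpy as np

# Regress temperature on the forcing Q and its time derivative dQ/dt,
# as the paragraph describes. Synthetic data: a drifting toy forcing
# series, and a temperature built as 0.3*Q - 0.2*dQ/dt plus noise.
rng = np.random.default_rng(1)
years = 120
Q = np.cumsum(0.02 + 0.05 * rng.normal(size=years))  # toy forcing (W/m2)
dQdt = np.gradient(Q)                                # annual rate of change
T = 0.3 * Q - 0.2 * dQdt + 0.01 * rng.normal(size=years)

X = np.column_stack([Q, dQdt])
coef, *_ = np.linalg.lstsq(X, T, rcond=None)
print(coef)  # the Q coefficient recovers ~0.3; the dQ/dt term captures the lag
```

A negative coefficient on dQ/dt damps sharp excursions, which is why such a term mainly clips the volcanic spikes.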

The fourth implication is that the model slavishly follows the radiative forcings. The model results are a 5-run average, so it is not clear how far an individual model run might stray from the fold. But since the five runs’ temperatures average out so close to 0.3 times the forcings, no individual one of them can be very far from the forcings.

Anyhow, that’s what I get out of the exercise. Further inferences, questions, objections, influences and expansions welcomed, politeness roolz, and please, no speculation about motives. Motives don’t matter.

w.

 

Jryan
January 18, 2011 6:01 am

So why is there still a black box at all? Were the AGW folks claiming they are completely open now?

cba
January 18, 2011 6:33 am

“Moritz Petersen says:
January 17, 2011 at 3:04 am
Very interesting article
What is the source of “3.7 W/m2 per doubling of CO2″ I have read this multiple times, but I would like to look into how this has been calculated.
Thanks
Moe”
Moe,
Emission and absorption lines in the atmosphere are rather well known. Projects like HITRAN, started in the 1960s by the military, have been going on for decades, and almost every molecule type has been measured and/or calculated to have hundreds or thousands of spectral lines. If you take that and create a model of the atmosphere for pressure, temperature, and molecular content, the spectrum can then be created by combining these tens of thousands of lines. You can then calculate the difference in power transmission and absorption between a reference point, like conditions in 1976, and another point, say with twice the CO2 present in 1976. Looking down from the tropopause, one finds that the difference in power reaching there is about 3.6 or 3.7 W/m^2 for our two points. The value is also for clear skies only, as clouds will block even more radiated power than that.
When warmers claim the science is well understood, this is what they are referring to although they are essentially lying about it because there is still much that is poorly understood in this. Also, they conveniently forget that cloud cover matters dramatically and it is unpredictable and accounts for over half of the sky conditions.
If you want to play with a simplified yet still sophisticated system online, check out the Modtran calculator by Archer. It isn’t line by line calculations but it does a pretty fair job of working at least up to about 70km in altitude.
It’s a fairly decent number to know but its effects are not that straight forward.

beng
January 18, 2011 8:39 am

******
Jim D says:
January 17, 2011 at 8:10 pm
A thought experiment. Imagine the forcing suddenly went to zero in the last year. Willis’s model’s temperature perturbation would immediately go to zero, but obviously the earth’s (or the GISS model’s) temperature would not respond that quickly, maybe taking decades”
******
Huh??? Hot, subtropical deserts go from 35C to near freezing every night.

Wolfgang Flamme
January 18, 2011 9:22 am

With respect to volcanic aerosol impact, here’s an old one:
Nir Shaviv: The Fine Art of Fitting Elephants

Jim D
January 18, 2011 9:31 am

The questions on my thought experiment illustrate the point. In a day, the forcing changes hundreds of W/m2, but the sensitivity is maybe only 0.1 C per W/m2. For higher frequencies, the sensitivity goes down due to thermal inertia. Thermal inertia effects only go away gradually over decades, which is the whole reason why an equilibrium sensitivity has to be distinguished from the transient one. It has to do with the depth of the layer that the warming gets to, which also determines how lasting the effect will be.

Laurence M. Sheehan, PE
January 18, 2011 11:41 am

The real problem is that these climate so-called ” scientists” are practicing yellow journalism. It should be obvious that the term “contribution” should be used instead of the absurd term “forcing”.
The fact of the matter is that the contribution of CO2 to the atmospheric temperature is nil, far too small an amount to even be measured, if there is any contribution at all.

Bill Illis
January 18, 2011 1:07 pm

I think GISS Model E just covers the lag issue by assuming that CO2 will always increase.
You don’t need to go back in time and calculate the lagged impact from every daily change in CO2 back to 1700.
You just build in a Temp response per ln(CO2) that simulates the lag response. You need to get to +3.0C by the year 2100, and CO2 rises to 715 ppm by 2100. It just takes a simple module in the model to make that work. The actual monthly temperatures in Model E as a result of GHG forcing seem to follow this principle extremely closely, all the way back to the beginning of the simulation. So, if the response is not actually programmed in this way, then the model spontaneously spits it out.
So the 0.3C per W/m2 already incorporates the lag (as long as CO2/GHGs are increasing).
Given what I have seen about what happens to temps after GHGs stop increasing, there is very little lag built into the models. Hansen’s 1988 model fully adjusted in 7 years. In IPCC AR4, there is only 0.1C of temperature increase after CO2 stops increasing in 2000 (although it takes 100 years to get there). 0.1C of lag after 100 years is nothing to make special note of.

Jim D
January 18, 2011 4:11 pm

Willis, by putting in an exponential lag response with a time-scale to be determined (as in your replies to Joel), you will be able to remove your volcano fudge factor. I would say you could try tuning this time-scale parameter in such a way as to allow the full effect of volcanoes. I say this because, and it may be obvious, the volcano forcing is high-frequency spikes, so any kind of time-averaging of the forcing will diminish their effect automatically without need for the fudge factor.
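One minimal way to implement the exponential lag Jim D suggests (a sketch with illustrative numbers, not a fit to the actual GISS data):

```python
import numpy as np

# Instead of T = S * Q, let the temperature relax toward S * Q with a
# time constant tau. A short spike in Q (a volcano) is then automatically
# damped, with no separate volcano weighting needed.
def lagged_response(Q, sensitivity=0.3, tau=3.0, dt=1.0):
    T = np.zeros_like(Q, dtype=float)
    for i in range(1, len(Q)):
        # Discrete relaxation: dT/dt = (S*Q - T) / tau
        T[i] = T[i - 1] + dt * (sensitivity * Q[i] - T[i - 1]) / tau
    return T

# A one-year volcanic spike of -2 W/m2
Q = np.zeros(20)
Q[5] = -2.0
T = lagged_response(Q)
print(T.min())  # peak cooling is -0.2, well short of the instantaneous 0.3 * -2 = -0.6
```

The time-averaging clips the spike exactly as described: the longer tau is, the smaller the peak volcanic excursion.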

Baa Humbug
January 18, 2011 4:49 pm

I’m having trouble accepting the “lag time” theory.
We experience changes in forcing in the real climate regularly, (the seasons) these don’t take years to manifest themselves.
Is CO2 forcing somehow supra-special, such that its effects take years to manifest themselves?

Joel Shore
January 18, 2011 5:03 pm

Bill Illis says:

Given what I have seen about what happens to temps after GHGs stop increasing, there is very little lag built into the models. Hansen’s 1988 model fully adjusted in 7 years. In IPCC AR4, there is only 0.1C of temperature increase after CO2 stops increasing in 2000 (although it takes 100 years to get there). 0.1C of lag after 100 years is nothing to make special note of.

Well, I can’t speak to Hansen’s 1988 model, which treated the oceans in pretty primitive ways relative to modern incarnations (and the oceans are really what matter for this issue). But, the IPCC does not show what you claim it does at least anywhere that I can find. In fact, in Section 10.7 of the WG1 report, they say:

The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference
period. This is roughly the magnitude of warming simulated in the 20th century. Applying the same uncertainty assessment as for the SRES scenarios in Fig. 10.29 (–40 to +60%), the likely uncertainty range is 0.3°C to 0.9°C.

I’m not sure how you reached your erroneous conclusion, but perhaps it was from misinterpreting or misremembering this statement in the same section:

The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

Needless to say, 0.1°C per decade is not the same thing as 0.1°C increase by 2100.

Joel Shore
January 18, 2011 5:23 pm

Willis Eschenbach says:

That all sounds great, Joel, but it is all handwaving until you can actually show us in the numbers where this is happening. For what you claim to be true, you need to show, not claim but show that there is a really long exponential lag between the imposition of the forcing and the results of that forcing, and you need let us know your estimate of what that exponent might be.

Willis, I agree that this is sort of handwaving. I was not attempting to do the work for you but just to point you in the right direction. If I didn’t have over 100 intro physics exams to grade, I might be able to do more of the research to answer your question. Since, alas, I do have these other commitments, I am just trying to point out what I think the issue is and where you can find more discussion of it. One direction was the section of the IPCC AR4 report on transient climate response. Another is the section that I pointed out to Bill Illis on the long term climate commitment. (See, in particular, Figure 10.34, although alas the scale there is not ideal because they are trying to show a lot of things on one graph. I know there are some papers in the literature that look at the “constant composition commitment” scenario in more detail.)
The advantage of models is that one is not constrained by the (estimated) real world forcings, which is what your study of the GISS Model E has addressed so far. One can easily test the models by putting in all sorts of different forcing scenarios. I honestly don’t know if even a simple exponential relaxation model is sufficient to get reasonable emulation of the models or if one has to assume non-exponential relaxation, but certainly exponential relaxation would be better than the instantaneous assumption that you are using now.
As Jim D pointed out, one way to go about estimating things with the current data is to see what kind of exponential relaxation is necessary to get a better fit to the GISS Model E response to volcanic forcings without having to put in your volcano fudge factor. This will give you an estimate of the relaxation timescale, although probably an underestimate because I am pretty sure that the form the relaxation will actually take is non-exponential, with an initial fairly rapidly approach but then a longer-than-exponential tail.

Since both you and Joel obviously think there is a greater than 120 year lag between the application of a forcing and the results of that forcing, I applaud your imaginations

I am not saying the lag is greater than 120 years. The fact is that the net forcings were fairly small over much of that 120 year span and it is only over the past 30 or 40 years that the net forcing has really ramped up.

George E. Smith
January 18, 2011 6:47 pm

“”””” Willis Eschenbach says:
January 17, 2011 at 4:18 am
peter_ga says:
January 17, 2011 at 3:31 am
“is that the temperature change in degrees C will be 30% of the adjusted forcing change in watts per square metre (W/m2)”
Does one not usually compare apples and apples using percentages, and not apples and oranges? I stopped reading after this figure. It was too mentally draining.
peter_ga, don’t give up so quickly. Orthodox climate theory posits something called the “climate sensitivity”. This says that there is a linear relationship between changes in top-of-atmosphere forcing Q (in watts per square metre, or W/m2) and changes in surface temperature T (in °C).
These changes are related by the climate sensitivity S, such that
∆T = ∆Q * S
or “Change in temperature is equal to change in forcing times the climate sensitivity”.
Climate sensitivity has the units of degrees C per W/m2, so all of the units work out, and we are dealing with apples and apples. Or oranges and oranges.
Please note that I do not subscribe to this idea of “climate sensitivity”, I am reporting the mainstream view. “””””
Now you have me totally confused. I was under the impression that “Climate Sensitivity” is defined as the increase in global mean Temperature (presumably the lower-troposphere, two-metre-high thing) for a doubling of CO2, thereby enshrining forever the presumption that Temperature is proportional to the log of CO2 abundance. That seems to be how the IPCC defines it: 3.0 deg C per doubling, +/-50%.
It seems like everyone who writes on this subject has their own definition of “Climate Sensitivity”. How did W/m^2 get into the picture if it is just a CO2 doubling that does it?
I don’t believe either the logarithmic bit or the value of the deg C per doubling (which I don’t believe in anyway). Going from 280 ppm to 560 ppm CO2 gives the same three degree warming that going from one ppm to two ppm gives; preposterous; but correct according to the definition. And the definition doesn’t say anything about H2O; just CO2 barefoot.

beng
January 18, 2011 7:10 pm

Joel & SMosher:
I don’t know where these 120 yr “lag” times mentioned are coming from. Ocean (like ENSO) & wind currents may cycle on various timescales — yrs to decades to perhaps even 1000s of yrs.
But that’s completely different from the reaction times/lags to a forcing. That’s determined by mass & the resultant “storage” of heat. A larger mass at a given temp will come into equilibrium over a longer time period after a given forcing.
The ocean is really the only “storage” medium for heat — land & air lack the mass or thermal conductivity. Look at a thermal map of the ocean — it’s literally a cold-water tub /an oil-slick thickness of warm water at the top. Most of the ocean mass is well below the earth’s avg temperature! That’s not a very good “heat storage” mechanism at all. It’s actually storing “cold” in relation to the earth’s average temperature. And it’s stratified/isolated from the warm-water above, except where upwelling occurs.
So the only significant heat-storage is the first few hundred meters of ocean — on the scale of paint-thickness on a toy globe. Global pulse-forcings like Pinatubo have demonstrably shown transient responses of only 0.6 yrs, and equilibrium in a mere 2.5 yrs. That’s all. Much bigger volcanoes would have longer response lags, but not much — maybe a decade for an instantaneous super-volcano.
What’s it all mean? It means one can toss out all the “heat in the pipeline” arguments. And toss out the 120 yr “effects” down the road. And that what one sees right now from CO2 is what one gets. The yearly increase in CO2 is only a few ppm, so considering the earth’s quick response, particularly to such a small incremental forcing, means there is no significant lag to human-emitted CO2.
Changing ocean currents & other cycles are a different, separate issue to forcing/response time issues. Now, if someone wants to venture that CO2 causes ocean-currents changes & such, that’s stretching beyond belief at this point in our understanding.

Brian H
January 18, 2011 7:14 pm

Willis;
Your reduction of the entire GISSE model to a single multiplication has an interesting implication:
If a model consists of linear equations, it can always be reduced to a single arithmetic operation in the end. The only value of the model/equation set is to discover what that operation is.
You have done so with GISSE, so its purpose is now achieved, and it can be retired.
🙂
😉

Joel Shore
January 18, 2011 8:19 pm

George E. Smith says:

It seems like everyone who writes on this subject has their own definition of “Climate Sensitivity” How did W/m^2 get into the picture if it is just a CO2 doubling that does it.

People use the term in a few different ways. The more fundamental definition of climate sensitivity, which holds for any sort of forcing, is in terms of degrees Celsius per (W/m^2). When you apply this to the forcing due to a doubling of CO2 (which basically everyone, from Richard Lindzen and Roy Spencer to the climate scientists who support the consensus view on AGW, agrees is ~4 W/m^2), you get the number for a CO2 doubling. In particular, 3 deg C for a doubling corresponds to roughly 0.75 C per (W/m^2).

I don’t believe either the logarithmic bit or the value of the deg C per doubling (which I don’t believe in anyway). Going from 280 ppm to 560 ppm CO2 gives the same three degree warming that going from one ppm to two ppm gives; preposterous; but correct according to the definition. And the definition doesn’t say anything about H2O; just CO2 barefoot.

George, I know people have explained this to you countless times here: Before you can choose to believe or not believe something, it is best to at least understand what it is you are choosing to believe or disbelieve. The logarithmic bit refers to the fact that the radiative forcing due to increased CO2 increases approximately logarithmically in the concentration regime we are in. It is not a law of nature…It is just an empirical fit that works pretty well in said regime. At lower concentrations, it becomes more like linear in CO2, I believe…and at higher concentrations than the current regime, it transitions to something that is more like a square root dependence (at least for a while). This has to do with which absorption bands are contributing the most to the radiative forcing effect and what regime one is in for those particular bands (saturated in the center but not in the wings, …).
Also, in going from concentration to the effect on global average temperature, one also has to consider how the climate sensitivity [i.e., the number in C per (W/m^2)] varies with the climate state. So, that is an additional factor that comes into play. As I understand it, the current thinking is that it is not strongly dependent on the climate state, at least in the general regime that we are currently in.
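For readers wanting the “logarithmic bit” as a formula: the standard empirical fit for CO2 radiative forcing is F = 5.35 ln(C/C0) W/m2 (Myhre et al. 1998). This is what makes one doubling give the same forcing regardless of the starting concentration, within the regime where the fit holds:

```python
import numpy as np

# The standard empirical logarithmic fit for CO2 radiative forcing.
# Valid only in roughly the present concentration regime, as Joel notes;
# it is an empirical approximation, not a law of nature.
def co2_forcing(C, C0=280.0):
    """Radiative forcing in W/m2 for CO2 concentration C ppm relative to C0 ppm."""
    return 5.35 * np.log(C / C0)

print(co2_forcing(560.0))          # one doubling from 280 ppm: ~3.7 W/m2
print(co2_forcing(1120.0, 560.0))  # another doubling gives the same ~3.7 W/m2
```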