Zero Point Three times the Forcing

Guest Post by Willis Eschenbach

Now that my blood pressure has returned to normal after responding to Dr. Trenberth, I returned to thinking about my earlier, somewhat unsatisfying attempt to make a very simple emulation of the GISS Model E (hereinafter GISSE) climate model. I described that attempt here; please see that post for the sources of the datasets used in this exercise.

After some reflection and investigation, I realized that the GISSE model treats all of the forcings equally … except volcanoes. For whatever reason, the GISSE climate model only gives the volcanic forcings about 40% of the weight of the rest of the forcings.

So I took the total forcings, and reduced the volcanic forcing by 60%. Then it was easy, because nothing further was required. It turns out that the GISSE model temperature hindcast is that the temperature change in degrees C will be 30% of the adjusted forcing change in watts per square metre (W/m2). Figure 1 shows that result:

 

Figure 1. GISSE climate model hindcast temperatures, compared with temperatures hindcast using the formula ∆T = 0.3 ∆Q, where T is temperature (°C) and Q is the total of the same forcings used by the GISSE model (W/m2), with the volcanic forcing reduced by 60%.
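For anyone who wants to reproduce this, here is a minimal sketch of the emulation in Python. The file name and column layout are placeholders (the data sources are in the earlier post); "StratAer" is the stratospheric-aerosol (volcanic) column in the GISS forcing table, and "GISSE_T" stands in for the model's ensemble-mean hindcast:

import numpy as np
import pandas as pd

# Placeholder file and column names -- adjust to match the actual GISS forcing table.
# One row per year, one column per forcing, plus the GISSE ensemble-mean hindcast.
df = pd.read_csv("giss_forcings.csv", index_col="Year")

forcing_cols = [c for c in df.columns if c != "GISSE_T"]
adjusted = df[forcing_cols].copy()
adjusted["StratAer"] *= 0.4            # volcanic forcing kept at 40% of its value

total_Q = adjusted.sum(axis=1)         # adjusted total forcing, W/m2
emulated_T = 0.3 * total_Q             # delta-T = 0.3 * delta-Q, degrees C

# Compare with the GISSE hindcast (both series may need re-baselining to a
# common reference period before comparing levels).
resid = emulated_T - df["GISSE_T"]
print("RMS error:", float(np.sqrt((resid ** 2).mean())))
print("r^2:", float(np.corrcoef(emulated_T, df["GISSE_T"])[0, 1] ** 2))

That is the entire emulation: scale one column, sum, multiply by 0.3.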

What are the implications of this curious finding?

First, a necessary detour into black boxes. For the purpose of this exercise, I have treated the GISSE model as a black box, for which I know only the inputs (forcings) and outputs (hindcast temperatures). It's like a detective game: trying to emulate what's happening inside the GISSE black box without being able to see inside.

The resulting emulation can’t tell us what actually is happening inside the black box. For example, the black box may take the input, divide it by four, and then multiply the result by eight and output that number.

Looking at this from the outside of the black box, what we see is that if we input the number 2, the black box outputs the number 4. We input 3 and get 6, we input 5 and we get 10, and so on. So we conclude that the black box multiplies the input by 2.

Of course, the black box is not actually multiplying the input by 2. It is dividing by 4 and multiplying by 8. But from outside the black box that doesn’t matter. It is effectively multiplying the input by 2. We cannot use the emulation to say what is actually happening inside the black box. But we can say that the black box is functionally equivalent to a black box that multiplies by two. The functional equivalence means that we can replace one black box with the other because they give the same result. It also allows us to discover and state what the first black box is effectively doing. Not what it is actually doing, but what it is effectively doing. I will return to this idea of functional equivalence shortly.
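A toy version of that equivalence, purely for illustration (nothing to do with Model E's internals):

def actual_box(x):        # what the box really does internally
    return (x / 4) * 8

def equivalent_box(x):    # what it is effectively doing
    return x * 2

for x in (2, 3, 5):
    assert actual_box(x) == equivalent_box(x)   # indistinguishable from outside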

METHODS

Let me describe what I have done to get to the conclusions in Figure 1. First, I did a multiple linear regression using all the forcings, to see if the GISSE temperature hindcast could be expressed as a linear combination of the forcing inputs. It can, with an r^2 of 0.95. That’s a good fit.

However, that result is almost certainly subject to “overfitting”, because there are ten individual forcings that make up the total. With so many forcings, you end up with lots of parameters, so you can match most anything. This means that the good fit doesn’t mean a lot.
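Here is a sketch of that regression step, reusing the placeholder table from the sketch above; the overfitting worry is simply that with ten free coefficients plus an intercept, a high r^2 comes cheap:

import numpy as np

X = df[forcing_cols].to_numpy()        # the ten individual forcings
y = df["GISSE_T"].to_numpy()           # the GISSE temperature hindcast

A = np.column_stack([X, np.ones(len(y))])        # add an intercept term
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)    # ordinary least squares
fit = A @ coefs
r2 = 1 - np.sum((y - fit) ** 2) / np.sum((y - y.mean()) ** 2)
print("10-forcing fit r^2:", round(r2, 2))       # ~0.95 per the text, but it proves little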

I looked further and saw that the total-forcing-versus-temperature match was excellent except for one forcing, the volcanoes. Experimentation showed that the GISSE climate model weights the volcanic forcings at only about 40% of their original value (a 60% reduction), while the rest of the forcings are given full weight.

Then I used the total GISS forcing with the appropriately reduced volcanic contribution, and we have the result shown in Figure 1. Temperature change is 30% of the change in the adjusted forcing. Simple as that. It’s a really, really short methods section because what the GISSE model is effectively doing is really, really simple.

DISCUSSION

Now, what are (and aren't) the implications of this interesting finding? What does it mean that, as regards temperature, to within an accuracy of five hundredths of a degree (0.05°C RMS error) the GISSE model black box is functionally equivalent to a black box that simply multiplies the adjusted forcing by 0.3?

My first implication would have to be that the almost unbelievable complexity of the Model E, with thousands of gridcells and dozens of atmospheric and oceanic levels simulated, and ice and land and lakes and everything else, all of that complexity masks a correspondingly almost unbelievable simplicity. The modellers really weren’t kidding when they said everything else averages out and all that’s left is radiation and temperature. I don’t think the climate works that way … but their model certainly does.

The second implication is an odd one, and quite important. Consider the fact that their temperature change hindcast (in degrees) is simply 0.3 times the forcing change (in watts per meter squared). But that is also a statement of the climate sensitivity, 0.3 degrees per W/m2. Converting this to degrees of warming for a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modelers, which is about 0.8°C per W/m2 or about 3°C per doubling.
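The conversion is a single multiplication (3.7 W/m2 per doubling is the conventional figure; the model's own value may differ slightly):

sensitivity = 0.3          # degrees C per W/m2, from the emulation
per_doubling = 3.7         # W/m2 per doubling of CO2 (conventional value)
print(sensitivity * per_doubling)    # ~1.1 degrees C per doubling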

The third implication is that there appears to be surprisingly little lag in their system. I can improve the fit of the above model slightly by adding a lag term based on the change in forcing with time d(Q)/dt. But that only improves the r^2 to 0.95, mainly by clipping the peaks of the volcanic excursions (temperature drops in e.g. 1885, 1964). A more complex lag expression could probably improve that, but with the initial expression having an r^2 of 0.92, that only leaves 0.08 of room for improvement, and some of that is surely random noise.
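One way to run that lag test, again reusing the placeholder sketches above, is to add the year-to-year change in the adjusted forcing as a second regressor:

import numpy as np

# Reuses total_Q and y from the sketches above.
Q = total_Q.to_numpy()                 # adjusted total forcing
dQdt = np.gradient(Q)                  # crude d(Q)/dt, per year

A = np.column_stack([Q, dQdt, np.ones(len(Q))])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
fit = A @ coefs
r2 = 1 - np.sum((y - fit) ** 2) / np.sum((y - y.mean()) ** 2)
print("with lag term, r^2:", round(r2, 2))   # mainly clips the volcanic spikes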

The fourth implication is that the model slavishly follows the radiative forcings. The model results are a 5-run average, so it is not clear how far an individual model run might stray from the fold. But since the five runs’ temperatures average out so close to 0.3 times the forcings, no individual one of them can be very far from the forcings.

Anyhow, that’s what I get out of the exercise. Further inferences, questions, objections, influences and expansions welcomed, politeness roolz, and please, no speculation about motives. Motives don’t matter.

w.

 

165 Comments
Baa Humbug
January 17, 2011 2:52 am

“Calling Dr Lacis, is Dr Lacis in da house?”

John Marshall
January 17, 2011 2:54 am

You say that there are ten individual forcings. Perhaps you should have added ‘So Far’
Peter Taylor’s book Chill lists some more recently discovered solar forcings which actually explain the slight warming of the 20th cent. (with zero CO2 input!)

Baa Humbug
January 17, 2011 2:59 am

Willis would you clarify paragraphs 2 and 3 for me please.
Volcanic forcing is weighted at 40% of other forcings, then you reduced that 40% by 60%? You mean you weighted volcanic forcing as 36%?
Also, is there an explanation or a reason why 1Wm2 of volcanic forcing would be treated as 0.4Wm2? (do I have that right?)

Moritz Petersen
January 17, 2011 3:04 am

Very interesting article
What is the source of “3.7 W/m2 per doubling of CO2”? I have read this multiple times, but I would like to look into how this has been calculated.
Thanks
Moe

Carl Chapman
January 17, 2011 3:24 am

1.2 degrees C for a doubling of CO2 is the “no feedback” calculation from physics. 1.1 degrees C is close to 1.2 degrees, so the modellers might as well pack up and go home. A straight physics calculation is all that’s needed. It seems very coincidental that all the feedbacks cancel each other out and add up to almost 0. Either the feedbacks are very small, or they just happen to cancel out. Since CO2 is going up, but temperatures haven’t gone up for 12 years, there must be a negative forcing cancelling CO2’s forcing. Since we were at the peak of an El Nino in 2010, and were also at the peak of an El Nino in 1998, the El Nino/La Nina can’t be the forcing. What is this mysterious negative forcing that they use to explain the lack of warming since 1998? Either the models don’t show the lack of warming and the models are wrong, or the models include some negative forcing. Could Dr. Trenberth please identify the negative forcing since 1998, or admit that the models have shown rapid warming while the climate has disagreed for 12 years.
Silly me. I forgot the third possibility: there has been rapid warming since 1998 but there’s a travesty causing the thousands of temperature measurements by satellite to miss it.

LazyTeenager
January 17, 2011 3:28 am

Willis says
——-
…atmospheric and oceanic levels simulated, and ice and land and lakes and everything else, all of that complexity masks a correspondingly almost unbelievable simplicity.
——-
I find this quite interesting. Even more interesting will be how others spin this with the intent of discrediting the modellers.
One school of thought I have noticed around here is that the models are so incredibly complex that they allow the output to be fiddled by hiding cheats amongst the complexity.
And of course there will be those who claim that we do not need the expense of doing the model calculations, since we just need a simple formula, and they knew this all along because their belly button told them.

peter_ga
January 17, 2011 3:31 am

“is that the temperature change in degrees C will be 30% of the adjusted forcing change in watts per square metre (W/m2)”
Does one not usually compare apples and apples using percentages, and not apples and oranges? I stopped reading after this figure. It was too mentally draining.

Geoff Sherrington
January 17, 2011 3:35 am

How come the thread a few down, The PAST is Not What it Used to be, by Ira Glickstein, gives two versions of negative slopes to the temperature graph 1940-1970, while your delta T graph here gives a positive slope?
It has been interesting over the years to see the initially negative slope gradually getting tortured back to a positive slope.

tmtisfree
January 17, 2011 4:04 am

Since CO2 is going up, but temperatures haven’t gone up for 12 years, there must be a negative forcing cancelling CO2’s forcing.

There is a fourth possibility: there is no CO2 forcing.

amicus curiae
January 17, 2011 4:20 am

not great at math, but seems to be a stacked deck?

sped
January 17, 2011 4:24 am

Could you post a spreadsheet with the data you used and their values?
Have you considered using nonlinear terms? Volterra (x^2) terms are pretty common. Saturation values too. Linear regression can still give you the affine Volterra vals, but finding saturation limits is a pita.

c1ue
January 17, 2011 4:31 am

A nice article, thank you for writing it.
I would just note, however, that modeling isn’t a case of all right or all wrong.
While you have shown that a high degree of correlation can be achieved vs. GISSE using a very simple set of parameters – the seemingly small differences are exactly what cause the GISSE model to be complex.
I have experience in device modeling in semiconductors – in much the same fashion, a transistor can be modeled by a very simple equation to within 90% accuracy.
However, closing the remaining gap to 5% or even 1% error is what causes the transistor device model (especially at 2-digit nanometer scales) to balloon to multiple megabytes in size.
Drawing from this same analogy, the real problem with climate models is that unlike transistor models, it is impossible to simulate all or even most operating conditions. Much of the aforementioned complexity has to do with nonlinear effects especially at the low current/initial condition stages, with the next category being long term decay type effects.
As you might imagine, both of these are chaotic; it would be extremely difficult if not impossible to model without real world data – and in fact in a number of cases the models simply encapsulate real world snapshots in a lookup table.
The point being not GIGO, but no data in yields no data out.

RockyRoad
January 17, 2011 4:33 am

LazyTeenager says:
January 17, 2011 at 3:28 am

Willis says
——-
…atmospheric and oceanic levels simulated, and ice and land and lakes and everything else, all of that complexity masks a correspondingly almost unbelievable simplicity.
——-

I find this quite interesting. Even more interesting will be how others spin this with the intent of discrediting the modellers.
One school of thought I have noticed around here is that the models are so incredibly complex that they allow the output to be fiddled by hiding cheats amongst the complexity.
And of course there will be those who claim that we do not need the expense of doing the model calculations, since we just need a simple formula, and they knew this all along because their belly button told them.

And your point is?
(Really, I kick myself every time I read something written by this Teenager poster as it always ends up being a waste of my time.)

Mike Haseler
January 17, 2011 4:44 am

Willis, a great piece of detective work. Only question, you said volcano forcing was lowered, but didn’t say the r^2 value improved by reducing the value. Nor did you mention any improvement in r^2 with introducing time lags.
But at the end of the day, it really doesn’t matter, because fundamentally you have shown that for all their complexity, the climate models really boil down to one assumption regarding one constant linking radiative forcing and warming.
All the rest is just frills and PR bells to pretend there is actually real science in this guess work.

björn
January 17, 2011 4:55 am

If “everything else averages out and all that’s left is radiation and temperature”, well, shouldn’t one apply Occam’s razor to the theory and cut away the useless bits?

Lance Wallace
January 17, 2011 4:55 am

Willis I have tried reproducing your results but am not getting a perfect match. I used your 10 forcings, reduced the one called StratAer which I assume is stratospheric aerosols due to volcanoes by 60%, and summed the 10 forcings, then multiplied the total by 0.3. The resulting graph looks like yours but ranges from -0.39 in 1884 to +0.58 in 2003, whereas your graph (blue line) goes from about -0.52 to + 0.47. There appears to be a nearly constant offset of about 0.1 between the two graphs. What am I doing wrong?

Richard S Courtney
January 17, 2011 5:04 am

Willis:
Superb! Thank you. An excellent analysis.
Please write it up for publication in a journal so there can be no justifiable reason for the next IPCC Report to ignore it.
The only – yes, ONLY – support for AGW is the models. But a model does what its designers and constructors define it will do: a model is not capable of doing anything else. So, a clear demonstration of what the models are made to do refutes the only existing support for AGW.
Your analysis shows what the GISS Model E is doing in effect; i.e.
“The second implication is an odd one, and quite important. Consider the fact that their temperature change hindcast (in degrees) is simply 0.3 times the forcing change (in watts per meter squared). But that is also a statement of the climate sensitivity, 0.3 degrees per W/m2. Converting this to degrees of warming for a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modelers, which is about 0.8°C per W/m2 or about 3°C per doubling.”
This finding of so large a difference between the effective climate sensitivity of the model and the “canonical value” is so extremely important that it requires publication in a form that prevents the IPCC from justifiably ignoring it. Indeed, it is so important that it demands research provides similar assessment of the effective behaviour of other climate models.
The IPCC will try to spin your finding and to downplay it (as past IPCC Reports have with all other inconvenient truths) but, in my opinion, the IPCC needs to be prevented from having an excuse for ignoring it.
Richard

Steve Keohane
January 17, 2011 5:05 am

Another clear purveyance Willis. 1.1°C per doubling is close to the 1.5°C Arrhenius(sp?) finally settled on, half of what the IPCC yearns for.
Geoff Sherrington says: January 17, 2011 at 3:35 am
How come the thread a few down, The PAST is Not What it Used to be, by Ira Glickstein, gives two versions of negative slopes to the temperature graph 1940-1970, while your delta T graph here gives a positive slope?
It has been interesting over the years to see the initially negative slope gradually getting tortured back to a positive slope.

In case others have not seen this: http://i54.tinypic.com/fylq2w.jpg
LazyTeenager says: January 17, 2011 at 3:28 am “and they knew this all along because their belly button told them.”
Kettle meet pot.

Lance Wallace
January 17, 2011 5:20 am

Willis, now I have looked at your earlier Excel sheet and have done your recommended reduction of StratAer by 0.6 and I continue to get the same value of -0.39 for the 1884 volcano year. This doesn’t match the value of about -0.52 that you show in the graph. And all other values are also off by roughly 0.1 from the graph.

Joel Shore
January 17, 2011 5:53 am

Willis,
(1) What you are computing here is closer to being the transient climate response of the model, not the climate sensitivity. I don’t know what the transient climate response for GISS Model E is, but it is lower than the equilibrium sensitivity.
(2) I don’t think the reason why you have to multiply the volcanic forcing by a number to reduce it is any real mystery. A system that has a lagged response (due to the thermal inertia of the oceans) like the climate system has will tend to have a reduced amplitude in response to higher frequency perturbations than it has in response to lower frequency perturbations. In fact, you could probably model this with some simple AC circuit analogy.
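A one-box lagged-response model makes the point concrete. The sensitivity and time constant below are purely illustrative choices, not anything taken from Model E:

import numpy as np

def one_box(Q, sens=0.8, tau=10.0, dt=1.0):
    # Lagged response dT/dt = (sens*Q - T)/tau, stepped with forward Euler.
    T = np.zeros_like(Q, dtype=float)
    for i in range(1, len(Q)):
        T[i] = T[i - 1] + dt * (sens * Q[i - 1] - T[i - 1]) / tau
    return T

years = np.arange(100)
slow_ramp = 0.02 * years                                    # GHG-like slow forcing, W/m2
pulse = np.where((years > 50) & (years < 53), -3.0, 0.0)    # volcano-like short pulse

# The slow ramp realises most of its equilibrium response, while the short
# pulse has its peak strongly attenuated -- qualitatively, why a fast volcanic
# forcing can end up looking "underweighted" in a lagged system.
print(one_box(slow_ramp)[-1] / (0.8 * slow_ramp[-1]))
print(one_box(pulse).min() / (0.8 * pulse.min()))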

Joel Shore
January 17, 2011 6:00 am

Willis,
Actually, if you look at Table 8.2 of the IPCC AR4 WG1 report, you’ll see that the GISS Model-EH and -ER both have an equilibrium climate sensitivity of 2.7 C and they have transient climate responses of 1.6 and 1.5 C, respectively. These transient climate responses are still a little bit higher than what you are finding (1.1 C when you used 3.7 W/m^2 for doubling, although it would be more like 1.2 C if you used 4.0 W/m^2) for some reason, but not too far off.

Joel Shore
January 17, 2011 6:06 am

Mike Haseler says:

But at the end of the day, it really doesn’t matter, because fundamentally you have shown that for all their complexity, the climate models really boil down to one assumption regarding one constant linking radiative forcing and warming.

For the global temperature…and ignoring fluctuations (which averaging over several runs effectively does), I don’t think this is really any surprise. In fact, there has been a considerable amount of work using simplified models to emulate the results of the climate models for such things (see Section 8.8 of the IPCC AR4 WG1 report). And, it is also why the mono-numeric concepts of equilibrium climate sensitivity and transient climate response are useful at all.
However, the climate models also put out a lot more information regarding where the warming is greater or lesser, how weather patterns change, and so forth.

Dr Chaos
January 17, 2011 6:08 am

In a highly non-linear and almost certainly chaotic system like the earth’s atmosphere, this orthodox climate theory can only be seen as baloney. It’s utterly preposterous (and I know next to nothing about climatology!).
How do these guys get away with it? Extraordinary.

pettyfog
January 17, 2011 6:12 am

It seems to me this negation of forcings has all been hinted at before. However, I’m a little stuck on the volcanic forcing.
It seems to me from what you are saying that, at this point in time if there’s another Pinatubo, we’re screwed.

Baa Humbug
January 17, 2011 6:35 am

I’m trying to gather the implications of this. Let me try a summary.
(Going back to fig. 5 in the earlier post “Model Charged With Excessive…”, HERE)
Seems to me all non-GHG forcings except aerosols and volcanoes are pretty much flat from 1880 thru to 2000. These can effectively be left out of the models.
Well mixed GHGs increase temperatures, however these temperatures don’t match observed data, so they are modulated by aerosols. Primarily by reflective tropospheric aerosols plus aerosol indirect effects, and periodically by volcanic aerosols.
Considering our knowledge of aerosol effects is so limited (I’d even go so far as to say guessed at), these models are EITHER totally worthless for future predictions, OR the much vaunted IPCC sensitivity is really only 1.1DegC per doubling INCLUDING ALL FEEDBACKS.
Surely the modellers are aware of this? If so, they’ve been remarkably silent about it.
Am I close to the mark? Maybe Dr Lacis can set us straight.

Mike Haseler
January 17, 2011 6:40 am

Richard S Courtney says:
“But that is also a statement of the climate sensitivity, 0.3 degrees per W/m2. Converting this to degrees of warming for a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modelers, which is about 0.8°C per W/m2 or about 3°C per doubling.”
So the model has a climate sensitivity of 0.3 degrees per W/m2 for all the inputs that have not been tested, apart from the only one that has been tested, which at 40% is 0.12°C per W/m2.
Wouldn’t the sensible thing be to assume that, since the only value which has been validated in any way is the volcano forcing from Pinatubo’s eruption, the best estimate for the others is the same value?
On this basis isn’t the best estimate based on the climategate team’s own model, a prediction that expected warming is 0.44C for a doubling of CO2? Or am I missing something?

January 17, 2011 6:44 am

Climate model predictions just slavishly follow the CO2 curve, here and there patching it with volcanos or ad-hoc aerosols.
http://i55.tinypic.com/14mf04i.jpg

January 17, 2011 6:46 am

I went and had a look at your referenced post after reading this. This is very interesting. What I find remarkable is that when looking at the forcings used (see your other post) the only ones that look like they have been measured/modelled are:
Strat-Aer
Solar
W-M_GHGS
StratH2o
All the others look to me like fudge factors – the series shape is identical, they all have essentially a linear form with slope changes at the same points eg circa 1950 and a second slope change circa 1990. These other forcings are all flat after 1990:
SnowAlb
O3
BC
ReflAer
LandUse
and even earlier in the case of LandUse and O3 which appear flat from 1980.
It is quite surprising to me that even with modern satellite data and all the money spent on Global Warming research at an institute called NASA, we cannot map/measure changes due to SnowAlb, BC, and Landuse?
The second rather remarkable observation is that the largest forcings in order of magnitude (ignoring sign) appear to be:
Positive: W-M_GHG
Negative: StratAer (ie volcanos, but these have to be used at 40%)
Negative: ReflAer
Negative: AIE
Everything else appears to have very little influence, including that big yellow thing in the sky. Without W-M_GHG in the above mix we would be heading for an ice age – except how many of the factors in the list of forcings are considered to be influenced by man? I assume BC is, as is LandUse, and that StratAer (volcanos) is not. What about ReflAer and AIE and the others? The reason for this question is: what would happen to the model if you removed all the (implied) human influences and looked at just the natural response? How many natural forcings are actually included in the model? Enough to actually model the climate system?

Pamela Gray
January 17, 2011 6:52 am

While I understand the purpose of your experiment, to discover how the black box might work, the entire exercise (theirs and yours) may be reporting a false positive, beginning with the raw data source, averaged and smoothed as it were, to show a single global temperature. That we can’t model the single raw data number without an added anthropogenic variable expression such as increasing CO2 is not evidence that this is the major forcing. Adding data together, smoothing it, and dealing with an always changing sample temperature set may be contaminating the whole thing. If there ever was an experimental design (taking a smoothed anomaly and trying to model it) potentially overwhelmed with basic design errors, this would be that exercise. The samples are not controlled, the temperature anomaly data is combined without reason, and their model is a swag. Your endeavor could be confirming this.
The raw data needs to be regionalized. Example: as in Arctic belt, northern hemispheric belt, tropical belt, southern hemispheric belt, and Antarctic belt. The seasons should stay intact and the air temperature should be actual, not anomalized. This will approximate the areas affected by natural weather pattern variation drivers. This data then needs to be compared with data measuring easterlies, westerlies, SST anomalies within the belts, warm pool migration patterns, and pressure systems, each separately and also in various combinations. The null hypothesis is that there will not be a significant match between observed temperature and observed weather pattern variation drivers. My mind experiment tells me there will be a match on a region by region basis. If we do see it, and then discover that we don’t see it with the combined data, that indicates a statistical contamination brought on by the artificial process used to combine the sets into a single averaged number.
The next step is to model these weather pattern variation parameters to see if one can output something similar to the temperature, region by region. This needs to be done way before anthropogenic CO2 is considered as an additional parameter.
The beauty of this design is that we don’t need a long run of temperature data. The satellite period will do nicely. The fun part will be to reasonably adjust the weather pattern change parameters to see what you can get, temperature wise. We might be able to develop reasonable explanations for mini ice ages.
This is to say, I just don’t see the value in studying bad boxes of any color.

Editor
January 17, 2011 7:06 am

Lazy Teenager said
“One school of thought I have noticed around here is that the models are so incredibly complex that they allow the output to be fiddled by hiding cheats amongst the complexity.”
I am more interested in the input. Would you say that wildly inaccurate – or missing – data relating to sea surface temperatures (for example) should then be considered accurate once they have been through the modelling mangle, or are they still the nonsense they started off as?
tonyb

harrywr2
January 17, 2011 7:09 am

It seems to me Hansen admits that the atmospheric response up to the year 2000 was ‘less than we thought’ but falls back on a big chunk of the heat being stored in the oceans.

dp
January 17, 2011 7:10 am

For this you need computers costing billions? No wonder Piers Corbyn is so effective.

January 17, 2011 8:00 am

I have worked with geophysical instruments and geophysicists in mineral exploration for over 40 years. The black box is a standard joke. We have all tested dozens of them and, just like the GISSE model, they go through dozens of gyrations that do nothing more complex than move the slide and cross hair on my slide rule.

Brian Buerke
January 17, 2011 8:04 am

It’s no surprise that their model gives a climate sensitivity of 0.3. That just means it’s consistent with thermodynamics and the condition that Earth acts as a blackbody.
The sensitivity of any blackbody is deltaT = (T/4Q)deltaQ.
The measured parameters of Earth are T= 287 K and Q = 239 W/m^2, the latter being the absorbed solar radiation. Plugging these in gives T/4Q = 0.3.
The only real mystery is why climate scientists keep insisting that the water vapor feedback will raise the sensitivity by a factor of 3. That expectation does not seem consistent with the physical requirement that Earth act as a blackbody.
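As a check on that arithmetic (this is just the Stefan-Boltzmann differentiation described above, with the quoted numbers; whether pairing the surface temperature with the absorbed solar flux is physically apt is a separate question):

T = 287.0      # surface temperature, K
Q = 239.0      # absorbed solar radiation, W/m2

# For a blackbody Q = sigma * T**4, so dQ/dT = 4*Q/T and dT/dQ = T/(4*Q).
print(T / (4 * Q))    # ~0.30 K per W/m2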

Feet2theFire
January 17, 2011 8:19 am

Last year I read about the very early days of climate modeling. (Sorry, I can’t recall the source.) There was one variable that at one point would make the results increase off the chart and they didn’t know what to do about it, because it seemed to be the right values and code. They got in some Japanese whiz and he just fudged the code until it stopped doing that. His code had nothing to do with reality – only to make the curve behave itself. From what I recall reading, they took exception to what he did, but were also relieved that the wild acceleration was tamed.
They ended up accepting it and keeping it in the code – at least for some time after that. It may still be there.
This is the exact opposite of science. Yes, you compare the results with reality. But you don’t just throw in whatever code makes the curve do what you want it to do, not if it is not the mathematical statement of the reality as best you can come up with.

January 17, 2011 8:36 am

Dear Willis,
Nice discovery! Some others have found that too and did write an article about it, but it wasn’t published because of some resistance from the peer reviewers. See comment #60 at RC from a few years ago:
http://www.realclimate.org/index.php/archives/2005/12/naturally-trendy/comment-page-2/#comments. Unfortunately his analysis was removed from the net.
Moreover, the reduction of 60% for (the sensitivity to) the forcing from volcanoes is quite remarkable and sounds like an ad hoc adjustment to fit the temperature curve.
But if they reduce the sensitivity for volcanoes, they need to reduce the sensitivity for human aerosols too: the effect on reflection of incoming sunlight is the same for volcanic SO2 as for human SO2. The difference is mainly in the residence time: human aerosols last on average 4 days before being rained out, while volcanic aerosols may last several years before dropping out of the stratosphere.
But the models need the human aerosols with a huge influence, or they can’t explain the 1945-1975 cool(er) period. The influence of aerosols and the sensitivity for GHGs is balanced: a huge influence of aerosols means a huge influence of GHGs and vv. See RC of some years ago:
http://www.realclimate.org/index.php/archives/2005/07/climate-sensitivity-and-aerosol-forcings/
with my comment at #14.
BTW, splendid response to Dr. Trenberth!

Rex
January 17, 2011 8:38 am

Following on from Pamela Gray’s comment: I know little about climatology, but have been a Market Research practitioner for forty years, and have some knowledge of means, statistical congruences, and what have you, and have been appalled for decades at the bilgewater emanating from certain scientific communities (mainly medical ones), attempting to prove ‘links’ etc, and assuming cause-and-effect relationships when there are none – most statistically significant linkages assumed to be meaningful are actually happenstance.
Which leads to my question: does this single-figure “mean global annual temperature” have any connection at all with “the real world” … or is it just a statistical artifact which disguises significant regional and temporal variations?
(Apart from all the fiddling that goes on.)

Warren in Minnesota
January 17, 2011 8:58 am

What bothers me most is that someone – as in you, Willis – has to try to reverse-engineer and determine what the model is doing. It would be much easier and better if the modelers would simply make their work public.

JFD
January 17, 2011 9:09 am

Willis, you are good — really good, thanks for what you do, mon ami. It seems to me that the GISS folks have misused superposition in their model. My understanding of superposition is that one must fully understand each of the variables inside of the black box. The 10 variables used by GISS are not known too well, as pointed out by ThinkingScientist above at 6.46am. Even partial knowledge of the 10 may still be okay, but what about the variables that GISS have not included? By not including all of the variables that impact climate in the black box, GISS have forced all of the missing forcings to be de facto included in the greenhouse gases variable, i.e. carbon dioxide, in the look-back curve fit. This may result in a hindcast curve fit, since superposition allows variables, i.e. forcings, to be added together, but it in no way allows a forecast to be made with the model. Since carbon dioxide in the atmosphere is increasing, the model will always show warming.
What variables are missing from the black box: ocean currents exchanging heat between the equatorial areas and the polar areas, impact of sun variations on ocean currents, variations of ocean current locations, produced ground water from slow to recharge aquifers, water aerosols from evaporative cooling towers, impact of new water and aerosols added to atmosphere on clouds, decreasing humidity in the Troposphere since 1948, perhaps the mathematical sign of clouds impacts and probably several others.
By using superposition without having all of the variables that impact the climate in the black box, the GISS model overstates the impact of carbon dioxide by the summation of all of the missing forcings.

Richard S Courtney
January 17, 2011 9:20 am

Mike Haseler, Lance Wallace, Baa Humbug, and Ferdinand Engelbeen:
At January 17, 2011 at 6:40 am Mike Haseler asks me:
“So the model has a climate sensitivity of 0.3 degrees per W/m2 for all the inputs that have not been tested, apart from the only one that has been tested, which at 40% is 0.12°C per W/m2.
Wouldn’t the sensible thing be to assume that, since the only value which has been validated in any way is the volcano forcing from Pinatubo’s eruption, the best estimate for the others is the same value?
On this basis isn’t the best estimate based on the climategate team’s own model, a prediction that expected warming is 0.44C for a doubling of CO2?”
I answer, on the basis of Willis Eschenbach’s analysis, the answer is, yes.
And that is why I asked Willis (above at January 17, 2011 at 5:04 am ):
“Please write it up for publication in a journal so there can be no justifiable reason for the next IPCC Report to ignore it.”
And it is why I eagerly await Willis’ answer to the posts from Lance Wallace at January 17, 2011 at 4:55 am and at January 17, 2011 at 5:20 am.
This matter is far, far too important for it to have any doubt attached to its formal presentation.
Baa Humbug says (at January 17, 2011 at 6:35 am):
“Well mixed GHGs increase temperatures, however these temperatures don’t match observed data, so they are modulated by aerosols. Primarily by reflective tropospheric aerosols plus aerosol indirect effects, and periodically by volcanic aerosols.
Considering our knowledge of aerosol effects is so limited (I’d even go so far as to say guessed at), these models are EITHER totally worthless for future predictions, OR the much vaunted IPCC sensitivity is really only 1.1DegC per doubling INCLUDING ALL FEEDBACKS.”
Indeed, that is so as I have repeatedly explained in several places including WUWT, and Ferdinand Engelbeen makes the same argument again at January 17, 2011 at 8:36 am where he writes:
“But the models need the human aerosols with a huge influence, or they can’t explain the 1945-1975 cool(er) period. The influence of aerosols and the sensitivity for GHGs is balanced: a huge influence of aerosols means a huge influence of GHGs and vv. See RC of some years ago:
http://www.realclimate.org/index.php/archives/2005/07/climate-sensitivity-and-aerosol-forcings/
with my comment at #14.”
This, again, is why Willis’ analysis is so very important that it needs solidifying such that points similar to those of Lance Wallace are addressed and then it needs to be published in a form that prevents the IPCC from merely ignoring it without challenge.
Richard

Ron Cram
January 17, 2011 9:27 am

Willis,
Another excellent post. Model E is a black box because ALL of the documentation normally required and normally provided is not available. This failure to provide documentation is another example of a failure to archive and/or release data and methods required by the scientific method.

Doug Proctor
January 17, 2011 9:29 am

“a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modelers, which is about 0.8°C per W/m2 or about 3°C per doubling.”
In the model there must be a feedback to the water vapour that increases as the total CO2 increases, a value specific to the higher amount. Although this contradicts the concept that the forcing decreases with CO2 content as the atmosphere becomes saturated (the IR is all used up), it is what would be needed to get an increased sensitivity by 780 ppm CO2.

Paddy
January 17, 2011 9:54 am

C1UE: You said: “I would just note, however, that modeling isn’t a case of all right or all wrong.”
Isn’t a little bit correct or a little bit wrong the same as being a little bit pregnant?

Roy Clark
January 17, 2011 10:19 am

Thank you Willis for an excellent post.
The whole concept of radiative forcing is empirical pseudoscience, or climate astrology. The fundamental assumption is that long term averages of ‘surface temperature’ and ‘radiative flux’ are somehow in equilibrium and can be analyzed using perturbation theory to ‘predict’ a ‘forcing’ from an increase in atmospheric CO2 concentration and other ‘greenhouse gases’ and ‘aerosols’.
There is no long term record of the real surface temperature, meaning the temperature of the ground under our bare feet. Instead, the meteorological surface air temperature (MSAT) record has been substituted for the real surface temperature. The MSAT is the air temperature in an enclosure at eye level, 1.5 to 2 m above the ground. This follows the ocean temperatures and local heat island effects, and has been ‘adjusted’ [upwards] a few too many times.
Radiative forcing appears to be an elegant mathematical theory, but it is incapable of predicting its way out of a wet paper bag.
Independent (and reliable) radiative transfer calculations based on HITRAN show that a 100 ppm increase in atmospheric CO2 concentration from 280 to 380 ppm produces an increase of about 1.7 W.m-2 in the downward atmospheric LWIR flux. The hockey stick – yes, the hockey stick – shows an increase of 1 C in the MSAT average during the time the CO2 concentration increased. Therefore, a 1 W.m-2 increase in downward LWIR flux from any greenhouse gas must produce a 0.67 C [1/1.7] increase in ‘average surface temperature’ by Royal Decree from the Climate Gods of Radiative Forcing. This is the basis of the radiative forcing constants used in the IPCC models – climate astrology. The forcing constants for the IR gases are fixed by the spectroscopic properties of those gases and the CO2 ‘calibration constant’, so the only fix left is to manipulate the aerosol forcing, which is a reduction in solar illumination, not an increase in LWIR flux.
Radiative forcing was introduced into climate analysis in the mid 1960’s, before satellites and supercomputers and should have been rejected as invalid soon after. Instead, it has become enshrined as one of the major pillars of the global warming altar.
The underlying issue is that the change in surface flux from CO2 has to be added to the total surface flux BEFORE the surface temperature is calculated. The solar flux is zero at night and up to 1000 W.m-2 during the day. The net LWIR flux varies from 0 to 100 W.m-2 at night and up to ~ 250 W.m-2 during the day. About 80% of the surface cooling flux during the day is moist convection, so there is no surface temperature radiative equilibrium on any time scale. The heat capacity and thermal capacity of the surface have to be included as well, and the latent heat …
Willis has done a good job in showing that the radiative forcing is used empirically to ‘fix’ the surface temperature. There is no physics involved, just a little empirical pseudoscience and some meaningless mathematical manipulations of the flux equations.
Reality is that there is no climate sensitivity to CO2. A doubling of the CO2 concentration will have no effect on the Earth’s climate. How many angels can fit on the head of a pin when the CO2 concentration is doubled?
Time for Trenberth, Hansen, Solomon etc. to explain their climate fraud to a Federal Grand Jury. As taxpayers we should get our money back.
For more on surface temperature see:
http://hidethedecline.eu/pages/posts/what-surface-temperature-is-your-model-really-predicting-190.php
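For reference, the roughly 1.7 W.m-2 figure quoted above for a 280 to 380 ppm rise is consistent with the widely used simplified CO2 forcing expression deltaF = 5.35 ln(C/C0) of Myhre et al. (1998), which is also the source of the 3.7 W/m2 per doubling figure used earlier in the thread:

import math

C0, C = 280.0, 380.0
print(5.35 * math.log(C / C0))     # ~1.6 W/m2 for 280 -> 380 ppm
print(5.35 * math.log(2.0))        # ~3.7 W/m2 per doubling of CO2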

Bob Koss
January 17, 2011 10:38 am

Mount Hudson in Patagonia erupted two months after Pinatubo. It didn’t get noticed much due to its remote location, but it was about the same size as Mount St. Helens in the early 80’s and had the same VEI 5 rating. Pinatubo was a VEI 6.
That leads to wondering how the models handle multiple eruptions close in time, but separated widely by distance.

NicL_UK
January 17, 2011 11:12 am

Ferdinand Engelbeen commented that:
“Some others have found that too, did write an article about that, but it wasn’t published, because of some resistence of the peer reviewers. … Unfortunately his analyses was removed from the net.”
A version of the paper referred to, “A statistical evaluation of GCMs: Modeling the Temporal Relation between Radiative Forcing and Global Surface Temperature” by Kaufmann and Stern, is available at:
http://replay.waybackmachine.org/20070203081607/http://www.bu.edu/cees/people/faculty/kaufmann/documents/Model-temporal-relation.pdf

January 17, 2011 11:20 am

Bob Koss says:
January 17, 2011 at 10:38 am
That leads to wondering how the models handle multiple eruptions close in time, but separated widely by distance.
It depends on how much aerosol reaches the stratosphere. The Mt. St. Helens blast was mostly sideways and not much reached the stratosphere. Mount Hudson injected more directly into the stratosphere and added about 10% as much SO2 as Pinatubo (the VEI index is a logarithmic scale). This was probably added to the total volcanic aerosol load of the stratosphere in the models.
The distance to the equator matters somewhat, but the bulk of Pinatubo’s SO2/aerosol load was spread all over the stratosphere within weeks, so the distance between the volcanoes doesn’t matter much.

k winterkorn
January 17, 2011 11:31 am

Related to this and the earlier refutation of Trenberth:
There are several important hypotheses re AGW with corresponding Null hypotheses:
1. Hypothesis #1: That we have measured a rise in global temperature over several centuries with sufficient accuracy to move on to hypotheses re causation. Null Hypothesis: We have not accurately measured global temperatures adequately to move on to any other hypotheses.
—–The Mann Hockey Stick was the “fact” on which the “science” of the IPCC statement that global warming is “unequivocal” was based. The Hockey Stick has been broken by follow-up analysis (e.g., confirmation that there was a Medieval Warm Period).
——The Surface Stations project has shown the unreliability of the temperature measurements.
—– Demonstration of the importance of Urban Heat Island effects shows that much of “global” warming is actually multi-local, not a diffuse global phenomenon.
2. Hypothesis #2: That global atmospheric CO2 is rising, unnaturally, mostly due to human activity. Null Hypothesis: most or all of measured changes in CO2 are natural.
—–Though a likely hypothesis, based on the coincidence and size of the measured increase in CO2 in the air in proportion to human-caused CO2 emissions (plus the isotope issue), there is still controversy, since CO2 levels are known to vary greatly without human input.
3. Hypothesis #3: That rising CO2 in the atmosphere predictably causes a measurable rise in global temps. Null Hypothesis: CO2 effects on global temps are too small to be definitively measured or inferred.
—–Given uncertainty of global mean temp as a measurement and the chaotic nature of the weather system of the Earth in general, a CO2 signal would need to be large to be definitively separated from the noise.
—–In the last couple of centuries, global temp changes have not correlated well with CO2 changes. Absence of correlation is strong evidence against a hypothesis.
4. Hypothesis #4: Not only is the Earth warming, due to man-caused CO2 changes, but the process is dominated by positive feedbacks and will become catastrophic. Null Hypothesis: The Earth’s climate system is not dominated by positive feedback (hence, CO2-driven warming will be mild, or, because of negative feedback effects, too small to definitively detect).
—–The Earth’s temps have been stable within a small range (on a Kelvin scale, which is most apropos) for eons, despite wild swings in CO2. This strongly suggests the system is dominated by negative feedbacks.
—–There is little certainty regarding the role clouds play in global temps, except that the role could be dominating.
All of the four hypotheses above remain in play. The science of climate change is in its infancy, far from settled.

January 17, 2011 11:48 am

IMO Trenberth is a model climate scientist.
When the data disagrees with the model, he blames the data.

January 17, 2011 12:09 pm

Very nice empirical analysis of GISS E output, Willis. Your result is very similar to the one described in my Skeptic article, which also showed that GCMs in general just linearly propagate changes in GHG forcing. My own empirical factor was very close to yours — 0.36 x (fractional forcing change). It was derived from Manabe’s modeling during the 1960’s, and his work is clearly still current in modern climate models.
Like your result, that 0.36 factor produced warming curves that matched the outputs of multiple modern climate models. Anthony kindly posted the Skeptic article last June, here.
One general outcome from this work is that if a simple linear equation produces prognostications of surface air temperature that match those of expensive climate models, then why does anyone need expensive climate models?
And if those models contain the complete physical description of Earth climate, as is claimed by the “we know all the forcings” crowd, then the linear equation clearly and accurately reproduces the complete temperature outputs from the complete physical theory.
So there’s obviously no more for climate scientists to do. They have produced their final theory, which we can all emulate by simple means. They can now, with supreme satisfaction, retire the field and go do something else that usefully employs their great acumen in physics.

Jim Petrie
January 17, 2011 12:42 pm

What would happen if you assumed a 0.1% reduction in CO2 forcing (due to clouds)?
Obviously changing the sign of the forcing would abolish man made global warming entirely. Michael Mann might feel this to be a little unfair!
But add one simple step. What change in the volcanic forcing would you then have to make to get a hindcast showing a correlation of 90% or more?

Jim Petrie
January 17, 2011 12:47 pm

What would happen if you assumed a 0.1% decrease in CO2 forcing due to clouds? Obviously if you change the sign of the forcing you abolish man made global warming entirely. The warming supporters might regard this as being a little unfair!
But add one simple step. What would you then have to do to volcanic forcing to get a 90% correlation with your hindcast?

Shevva
January 17, 2011 12:48 pm

Hi Willis, great post. I always wonder how they measure the Sun in these models? How do they measure every single atom of energy that makes it to the earth?
I watched a BBC 2 program that stated there is no such thing as temperature, it is just a transfer of energy. So you would have to understand how every energy process worked in our solar system (assuming energy does not come from outside the solar system) before you could single out CO2? Clever, these climate scientists.

January 17, 2011 12:50 pm

It seems to me that these “forcings” are little more than “fudge factors” used to make poorly designed models appear to work; and global averaging over long periods masks reality. In multiple linear regression analysis, to expect meaningful statistical significance, I use a rule of thumb that the number of data points must be at least five times the number of possible factors with all their possible interactions; and that is assuming the dependent variable is affected linearly.

Paul Martin
January 17, 2011 12:57 pm

Off topic, but funny.
Global Wobbling Denialism amongst astrologers.

January 17, 2011 1:05 pm

Dear Willis,
How many teraflops in your super computer? Or did you do your modeling on a paper bag with pencil? Inquiring taxpayers wish to know. Because if you get the same results with the paper bag method, why in the blue blazes are we spending megabucks on shiny black boxes for the GISSers?

DocMartyn
January 17, 2011 1:26 pm

“But a model does what its designers and constructors define it will do”
No, a true model is designed to give you insights into the system by generating data that was not previously known. This ‘model’ is more like a fit than a true model; any fool can fit a polynomial to a plot, but you don’t get information from it.

Jim D
January 17, 2011 1:27 pm

Why should the temperature change be proportional to the instantaneous forcing at the end of a period, rather than the average forcing over the period? If the average forcing is 1 W/m2, and the temperature change is 0.7 degrees, the sensitivity is 0.7 C /(Wm-2) giving a climate sensitivity to CO2 doubling of 2.6 C.

jorgekafkazar
January 17, 2011 1:35 pm

To Willis: Great post. My only question is whether autocorrelation in the GISSE output requires a corresponding correction in the calculation of r²? A reduction in the latter might give you some running room for investigation of other parameters, lags, etc.–an opportunity.
To Joel Shore: Nicely constructed comments, very helpful.
To C1UE: If, as you state, “the models simply encapsulate real world snapshots in a lookup table,” that almost guarantees that the models are garbage.
To Rex: Yes, you’re right: “mean global annual temperature” is meaningless in terms of the actual physics. We need to remember that at all times.
To Roy Clark: True. My two sensors tell me, after much painful observation, that air temperatures differ widely from barefoot asphalt temperatures. Another thing that may not have been properly allowed for.

January 17, 2011 1:44 pm

Pamela Gray says:
January 17, 2011 at 6:52 am
I’ve attempted to do what you suggest. http://www.kidswincom.net/CO2OLR.pdf

Mike Borgelt
January 17, 2011 1:56 pm

Joel Shore says:
January 17, 2011 at 6:06 am
“However, the climate models also put out a lot more information regarding where the warming is greater or lesser, how weather patterns change, and so forth.”
That’s hilarious, Joel. You know very well they might try to do that, but they are singularly unsuccessful. Try Koutsoyiannis, I think it is.

January 17, 2011 1:59 pm

NicL_UK says:
January 17, 2011 at 11:12 am
Indeed that is a backup of the article! Thanks for the link, immediately downloaded it here…

Richard S Courtney
January 17, 2011 2:12 pm

DocMartyn:
At January 17, 2011 at 1:26 pm you respond to my true statement that said;
“But a model does what its designers and constructors define it will do”
by asserting
“No, a true model is designed to give you insights into the system by generating data that was not previously known. This ‘model’ is more like a fit than a true model; any fool can fit a polynomial to a plot, but you don’t get information from it.”
Oh!? Really? How does a computer model do other than it is programmed to do? By including a random number generator in its code?
And what “insights” have the climate models provided?
The climate models (GCMs and radiative transfer models) are curve fits adjusted by applying a CO2 climate sensitivity forcing that differs between models by a factor of 2.4, and then adjusted to hindcast previous global temperature data by applying an assumed (n.b. assumed, not estimated) aerosol cooling forcing that also differs between models by a factor of 2.4.
(see Courtney RS, ‘An Assessment of Validation Experiments Conducted on Computer Models of Global Climate Using the General Circulation Model of the UK’s Hadley Centre’, E&E vol. 10 no. 5 (1999), and Kiehl JT, ‘Twentieth Century Climate Model Response and Climate Sensitivity’, GRL vol. 34 (2007))
Hence, it is no wonder that Willis Eschenbach achieves similar performance to one of the models by simply applying a curve fit to the data. His simple model provides the same “information” as the climate models and for the same reason.
Richard

jorgekafkazar
January 17, 2011 2:16 pm

I once joined a project late in the game. The twelve-input computer program they were using was very complex internally–so complex that no one on the team really grasped it. After puzzling over some oddities (such as occasional outputs with minus signs), I discovered that just taking the average of six variable differentials gave the exact same results as the computer. On further investigation, I found that someone on the project had replaced two of the inputs with (in effect) lookup tables. That explained the negative outputs! Good data in, garbage out. They’d fooled themselves by the complexity of the program into believing it was a valid model of the thermodynamic structure of the system. I think that’s the case with all the climate models ever constructed.
Vis-a-vis global temperature, the net advantage of a backcast-tweaked, megabuck climate model over a historical curve on graph paper is that you can extrapolate the model without owning a 25¢ straightedge, and get an answer that is equally wrong. Climate models are the least cost-effective things ever made.

January 17, 2011 2:39 pm

Willis,
Nice work, somewhat akin to Lucia’s Lumpy. Also, you should note that scientists are also pursuing the statistical emulation of GCMs. This is something we always do in high end modelling, especially for design of experiments in a high DOF parameter space.
Also, I’m not surprised to find issues in the volcano/aerosol area.
A couple points.
1. It would have been interesting to build your model with half of the data.
2. I’m pretty sure your sensitivity here is the transient response; wait 60 years and you’ll see the equilibrium response… err, you need a GCM to do that. Nevertheless I do think you can use this work to set a lower bound for sensitivity. So, a Lukewarmer is going to say that the equilibrium sensitivity is between 1C and 3C (maybe 2.5C, we are still determining membership rules). Given the inertia in the system it’s safe to say that the equilibrium response will be higher than the transient.

Mike Haseler
January 17, 2011 2:41 pm

Richard S Courtney says: January 17, 2011 at 9:20 am
Mike Haseler, … On this basis isn’t the best estimate based on the climategate team’s own model, a prediction that expected warming is 0.44C for a doubling of CO2?”
I answer, on the basis of Willis Eschenbach’s analysis, the answer is, yes

Thanks Richard for the confirmation. Yes, I agree it does need writing up and publicising; other people need to see the implication – it’s the first time I’ve seen any kind of estimation of the effect of CO2 that actually is based on real world events rather than post-modernist scientific fantasy.
This is quite a momentous post for those of us who want to know the truth (for good or ill)

c1ue
January 17, 2011 2:44 pm

Paddy: In response to me saying: “I would just note, however, that modeling isn’t a case of all right or all wrong.”
You noted: Isn’t a little bit correct or a little bit wrong the same as being a little bit pregnant?
This is not a correct analogy. For one thing, a model may be 100% correct over 90% of its range but be 100% wrong in 10%.
If your operating conditions in a given circuit lie within this 90%, then the model is perfectly fine – for example a digital circuit.
If, on the other hand, the 10% in question is exercised and, more importantly, affects the overall operation of the device (i.e. in an analog world where startup behavior sets initial conditions for later behavior), then the model’s output would be wrong.
The point I was making wasn’t that Mr. Eschenbach’s article is incorrect – it is that the complexity of climate models doesn’t necessarily mean the complexity was intended to model a simple behavior. Much of the complexity could be specifically to handle corner cases (i.e. the 10% in the analogy).
Again this says nothing about validity of the climate models; in the semiconductor physics world there are constant test chips going through to validate both overall models and specific model parameters/behaviors.
Obviously the climate models have no such verification going on.
jorgekafkazar said: If, as you state, “the models simply encapsulate real world snapshots in a lookup table,” that almost guarantees that the models are garbage.
This is a wrong statement. The real world is exactly that – and a model which contains all possible real world behavior would therefore be reality. Of course this is not possible, but again your blanket statement is invalidated by this example.
More importantly there are behaviors which cannot be modeled using equations because of their inherent structure.
An example for this is ‘flash’ memory. Unlike other forms of memory, ‘flash’ memory actually uses quantum tunneling – i.e. current leaping through an otherwise opaque barrier via quantum effects.
There are no parameterizable equations which a simulator can handle (at any reasonable level of usability) which capture this behavior, thus it is far easier and more useful to create a lookup table to recreate this behavior in a model.
With respect to climate models – there are many aspects which are chaotic including but not limited to: cloud behavior, molecular level friction, hurricane formation (not so much over a period of time but in specific times/places), initiation of rainfall, etc etc.

January 17, 2011 2:49 pm

I will point out to people that what Willis has done here is no different from the work some have done correlating temperature to sun spots or to movements of planets or whatever.
There is one critical difference, however: the regressors have the right units. There are understood mechanisms that connect the independent and dependent variables.
Put it this way. If Willis “hid” his 10 variables from you, or told you those variables were ‘sunspot’ numbers, the position of the magnetic field, the barycentric whoha, and the integrated drift in the magnetic pole, I wonder how many people would say
” great science willis”

Louis Hissink
January 17, 2011 2:56 pm

Another conclusion which could be made is that if all the forcings cancel each other out, leaving the simple outcome Willis noticed, then it might mean that we don’t really understand what drives climate in the first place. In that case all the various forcing parameters are effectively random (does not explain weather) and thus cancel each other out, and the final outcome is simply verification of the model’s inbuilt assumption of “climate sensitivity”.

Richard S Courtney
January 17, 2011 3:01 pm

Mike Haseler:
At January 17, 2011 at 2:41 pm you say of Willis’ analysis;
“it’s the first time I’ve seen any kind of estimation of the effect of CO2 that actually is based on real world events rather than post-modernist scientific fantasy.”
Oh, really? Then I think you need to see Idso snr.’s work from long ago (published in Climate Research in 1998): 8 completely independent natural experiments to determine climate sensitivity that give a best estimate of 0.10 C per W/m^2, which corresponds to a temperature increase of 0.37 Celsius for a doubling of CO2. You can read it at
http://www.warwickhughes.com/papers/Idso_CR_1998.pdf
A summary of the findings is at
http://www.friendsofscience.org/assets/documents/FOS%20Essay/Idso_CO2_induced_Global_Warming.htm
and that URL also has a link to the paper.
Several have attacked Idso’s work but nobody has faulted it.
Richard

Joel Shore
January 17, 2011 3:04 pm

Willis Eschenbach says:

1. What you are calling the “transient response” remains constant for the 120 years of the dataset … so if your theory is correct, when exactly does the actual “climate sensitivity” kick in, if we are only seeing the “transient response”? How come there’s no sign of the climate sensitivity in the 120 year record?

I don’t really understand what you are asking. The point is that the climate gradually adjusts to the forcings over time. The transient response is defined as the change in temperature at the time when CO2 has doubled from pre-industrial levels, increasing at a rate of ~1% per year (I believe), which is about double what the actual rate of increase has been. I suggest reading the relevant parts of the IPCC report Chapter 8 that talk about the ECS and the TCR.

2. As I pointed out above, when the straight response covers 92% of the territory, and random fluctuations take up at least some percent, there’s very little room left for a lag term. However, I do like the thought of the lagging acting as a low frequency filter … but there’s still not much room for any kind of lag term. Once the main term is removed we’re playing with a few percent.

I think you are falling into the trap of thinking that because you can fit the data in one way, that is the only way in which the data can be fit. Also, note that when I say “lag”, I am not talking about simply offsetting by a certain amount of time. It would be something more like a term where at any time the climate is trying to exponentially approach radiative equilibrium for the current levels of forcing at that time (rather than instantaneously responding to the current levels of forcing, as your picture assumes).
Brian Buerke says:

The only real mystery is why climate scientists keep insisting that the water vapor feedback will raise the sensitivity by a factor of 3. That expectation does not seem consistent with the physical requirement that Earth act as a blackbody.

No…because the water vapor feedback changes the radiative balance. I.e., the total radiative effect of doubling of CO2 is, in essence, more than just the ~4 W/m^2. (It is a little more complex than this…What happens is that if you instantaneously doubled the CO2 levels, then the radiative imbalance would be about 4 W/m^2…As the temperature rose, this imbalance would decrease, but not as fast as the Stefan-Boltzmann equation would imply, because the rise in temperature would cause an increase in water vapor in the atmosphere, which would reduce the effectiveness of this warming in restoring the radiative balance. So, the radiative imbalance is never more than 4 W/m^2, but if you could somehow factor out the effect on the radiative imbalance due to the temperature increase, while not factoring out the increase in water vapor, it would be larger.)
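For readers who want to see the arithmetic behind Joel's point, here is a minimal sketch assuming the standard zero-dimensional energy-balance picture, N = F − α·ΔT. The α values below are illustrative assumptions, not numbers from GISSE or from this thread; the point is only that a positive water-vapour feedback shows up as a smaller α, so the same forcing yields a larger equilibrium warming while the instantaneous imbalance never exceeds the forcing itself.

```python
# Minimal sketch (illustrative numbers, not GISSE output) of the
# zero-dimensional energy balance Joel describes: imbalance N = F - alpha*dT.
F = 4.0                # forcing from doubled CO2, W/m^2 (approximate, as quoted in the thread)
alpha_planck = 3.2     # Planck-only restoring response, W/m^2 per K (commonly cited)
alpha_with_wv = 1.3    # net response with a positive water-vapour feedback (illustrative)

for label, alpha in [("no feedback", alpha_planck), ("with water vapour", alpha_with_wv)]:
    dT_eq = F / alpha            # equilibrium warming, where the imbalance returns to zero
    N_at_1C = F - alpha * 1.0    # imbalance remaining after 1 C of warming
    print(f"{label}: equilibrium dT = {dT_eq:.1f} C, imbalance left after 1 C = {N_at_1C:.1f} W/m^2")
```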

Joel Shore
January 17, 2011 3:08 pm

Willis,
To put it another way, your fit of the GISS Model E would predict that if they hold the forcings constant starting today then the temperature would remain constant. However, we know that these sorts of experiments have been done on the models (although I am not sure about GISS Model E specifically) and the models predict that the temperature continues to climb, albeit at a decreasing rate and eventually leveling off.
So, your simple fit is clearly too simple to emulate this behavior that we know the models do actually exhibit.

Roger Lancaster
January 17, 2011 3:13 pm

10 forcings – “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk” (Attributed to von Neumann by Enrico Fermi).

Machiavelli
January 17, 2011 4:43 pm

[please see the contact page under the “about menu”]

Paul_K
January 17, 2011 5:29 pm

Willis,
This is truly a mind-blowing result – so much so that I think you need to do some careful verification work before crafting anything for publication. There is something seriously wrong here – and I don’t necessarily mean in your calculations.
At the very least, your statistical model should be showing heteroscedasticity in the residuals – with an increasing variance towards the later time-frame. Have you tested for this? Or at least eye-balled a graph of the residuals against time?
The GCMs should be generating/calculating a multi-decadal temperature perturbation from each year’s change in forcing. The model form you have adopted only recognises a weighted first-year response. Given the overall increase in positive forcing over the timeframe, you should therefore be seeing an increasing separation between the GCM results and your statistical model as time goes on. If you DO see this, then the excellent quality of your match may be suggesting no more than that the “characteristic response” of the temperature in the GCM is a big step in the first year followed by a very shallow gradient extending out to the long-time equilibrium condition. You would then have to be suitably cautious about any claims about equilibrium climate sensitivity. If you DON’T see this, then the implications are staggering.
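For anyone who wants to run the residual check Paul_K is asking about, a minimal sketch follows. The residual series here is a synthetic stand-in (the actual GISSE-minus-emulation residuals are not reproduced in the thread); the two checks are the simple ones Paul_K suggests: eyeball the variance over time, and look for a trend in the squared residuals.

```python
import numpy as np

# Sketch of a basic heteroscedasticity check. 'residuals' is a placeholder for
# (GISSE hindcast temperature) minus (0.3 x adjusted forcing); synthetic values
# are used only so the snippet runs.
rng = np.random.default_rng(0)
years = np.arange(1880, 2000)
residuals = rng.normal(0.0, 0.05 + 0.0005 * (years - years[0]))  # stand-in data

# 1) Compare variance in the early and late halves of the record.
half = len(years) // 2
print("early-half variance:", residuals[:half].var())
print("late-half variance: ", residuals[half:].var())

# 2) Regress squared residuals on time; a clearly positive slope is the
#    signature Paul_K predicts the critics will point to.
slope, intercept = np.polyfit(years, residuals**2, 1)
print("trend in squared residuals per year:", slope)
```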

Bill Illis
January 17, 2011 5:37 pm

In this science, there are all kinds of tunable parameters that allow one to come up with any result one wants:
– you’ve got your forcing impact in Watts/m2, which is calculated rather than measured;
– you’ve got your efficacy of forcing factors, like the need to change the volcanic forcing effectiveness as Willis demonstrated, because the real climate does not respond the way the theory says it should;
– you’ve got your estimated negative adjustments like aerosols, which are technically three straight lines in GISS Model E; they completely offset all the GHG warming in Model E up to 1970; then they offset 50% of the increase that should have happened since 1970;
– then you have the infamous temperature response in C per forcing W/m2, which Willis is also pointing out here. This number has been quoted at everything from 0.1C to 1.5C per W/m2 – only a range of 15 times;
– Then one has this mysterious transient response. The energy hides in the ocean and melting ice sheets for a time and then impact starts to increase over time. One can play with this timeline any way one wants. It started out at 20 years in the first IPCC, moved to 30 by the third and the AR4 is really talking about 1000 years. Today’s CO2 will not have its full impact until 3010 AD.
Not too hard to come up with any number from 0.5C per doubling to 8.0C per doubling by just varying these assumptions.
In extremely complex systems like the climate, we have to measure what really happens. Forget the theory, there are a dozen hugely varying assumptions you have to use. They just changed the volcano assumptions because they did not work in the measured real climate. That is what they should be doing.

Jim D
January 17, 2011 8:10 pm

A thought experiment. Imagine the forcing suddenly went to zero in the last year. Willis’s model’s temperature perturbation would immediately go to zero, but obviously the earth’s (or the GISS model’s) temperature would not respond that quickly, maybe taking decades. What does this tell us? It says the temperature response is not just proportional to the current forcing, but to a weighting of the previous forcing that may be a decay function as you go back several decades.

Carl Chapman
January 17, 2011 8:35 pm

Regarding:
Carl Chapman:
Since CO2 is going up, but temperatures haven’t gone up for 12 years, there must be a negative forcing cancelling CO2’s forcing.
Timisfree:
There is a fourth possibility: there is no CO2 forcing.
I was being sarcastic to show that the models are junk.

tregembo
January 17, 2011 9:24 pm

Is it me or does the hindcast model not reflect reality? I see no oceanic cycle whatsoever, some small solar forcing peaking in 1960, but made noticeable only by volcanic aerosols. No ENSO? I don’t see a grasp of (or really an attempt at) the climatic system in this model; just a Keeling curve, really. Wonder what happens if you add this to the oceanic model; it would go way over reality. Guess you can’t add the oceanic cycles, or you would have to adjust the CO2 weighting… can’t have that!

Stephen
January 17, 2011 9:43 pm

The Lazy Teenager makes a good point. It’s essentially one that Willis made originally, but it is worthwhile to point out that it will likely be ignored or missed by anyone reading this or a similar article, which is what the Lazy Teenager did (though perhaps unclearly).
A well-considered and well-motivated model can give results which are matchable by a toy-model built for simplicity with no theoretical motivation or reasoning. After seeing those results, such a simple model can be built, but going in there is no way of knowing this would happen. The fact that such reconstruction is possible does not discredit the original model. It certainly raises the question of whether someone actually did the work that was claimed, but all it actually points out is an interesting cancellation or coincidence.
I use something similar in my work. I deal with a 124-parameter theory (minimal supersymmetry) in particle physics. There is an incomplete part of the theory (how the symmetry is broken) which leads people to work with different versions. Fortunately for computation, what I consider to be the best-motivated version (that gravity does it and that the breaking does not come from some new unknown realm of physics) predicts that the number reduces to primarily 4 free parameters and some others which are immeasurably close to zero. It doesn’t mean that I just ignore the other 120 or that the version was created just to be easy to handle. It just says they come out in a simple way which someone might guess without having studied the theory, and if it said something different I would be doing something different.

kadaka (KD Knoebel)
January 17, 2011 9:44 pm

Mike D. said on January 17, 2011 at 1:05 pm:

Dear Willis,
How many teraflops in your super computer? Or did you do your modeling on a paper bag with pencil? Inquiring taxpayers wish to know. Because if you get the same results with the paper bag method, why in the blue blazes are we spending megabucks on shiny black boxes for the GISSers?

The sheer size of the programming shows its value; larger is obviously worth more. The amount of computing resources consumed shows its worth in operation; if it takes more, it obviously does more.
This is the wisdom dispensed by Micro$oft. Everyone believed it up to Windoze Vista. Most still do. ☺

AusieDan
January 17, 2011 10:01 pm

Willis – I admire your tenacity.
However, I join the other skeptics on this one.
I know nothing of modelling the climate.
I have some slight theoretical and practical exposure to econometric modelling.
They don’t do such a bad job over the next few quarters, after that, it’s “hello nurse time”.
But they have no predictive power when the unexpected occurs and being a chaotic system that happens very frequently.
The very idea of forecasting or projecting the future into the long term distance seems to me to be just boys playing with toys and pretending to be scientists.
We not only don’t know what we don’t know but we also don’t know what we can never do.
Do you disagree?
If so, what DO you know that I do not know or do not understand?
Please help.

Geoff Sherrington
January 17, 2011 10:03 pm

Dennis Nikols, P. Geol. says:
January 17, 2011 at 8:00 am: “I have worked with geophysical instruments and geophysicists in mineral exploration for over 40 years. The black box is a standard joke.”
Likewise. We used to say that the difference between a geophysicist and a […] was that the latter had a […] that worked.
[trimmed. Robt]

Mike D.
January 17, 2011 11:50 pm

To Kadaka: Software bloat leads to morbid fatheadedness, the epidemic social disease of our day and age.
To Dan: Not my intention to speak for Willis, but my impression is that he was tweaking GISS’ nose, not attempting to build a better predictive model. As we all know, the GCMs can’t predict the future with any skill at all. Heck, they can’t post-dict the real past, just the homogenized-spliced-imaginary past. It’s all mental m************ — counting angels on pinheads.

tty
January 18, 2011 12:42 am

“Jim D says:
January 17, 2011 at 8:10 pm
A thought experiment. Imagine the forcing suddenly went to zero in the last year. Willis’s model’s temperature perturbation would immediately go to zero, but obviously the earth’s (or the GISS model’s) temperature would not respond that quickly, maybe taking decades”
Weeks or months, not decades, or we wouldn’t have any winters on this planet.

Geoff Sherrington
January 18, 2011 12:44 am

Willis Eschenbach says: January 17, 2011 at 4:19 am “Because this is not a graph of the historical temperature. It is a graph of the GISSE climate model hindcast of the historical temperature. ”
Precisely. But if that is supposed to be a good hindcast, GISSE is not a good fit to land temperatures (with their attendant uncertainties).
Is the generation of the positive trend from 1940 to 1970 because the assumptions and numbers take it that way; or is it because the components are constrained to upwards trends, either singly or in aggregate? Seems to this non-specialist that there has to be an advertisement of temperature increase in each public picture, to ram home the message.
Should I still be alive, I’m going to be fascinated to learn how to calibrate proxies for temperature over the present period in which global temperature presents as T = const. It’s almost like Dr Trenberth’s call for reversal of the null hypothesis. Because T = const, does any proxy with equi-spaced response features have to be considered as a valid responder carrying a message?

Wolfgang Flamme
January 18, 2011 1:20 am

Willis,
Considering deciphering black boxes, you might try using the brute-force formula finder Eureqa to improve insights.

AusieDan
January 18, 2011 2:18 am

Mike_D
Thanks – fair enough.
It’s just that people take this long term forecasting so seriously that I sometimes get confused.
I KNOW they can’t do it.
But a little voice in me keeps saying “what if you are wrong?”
“What do they know that I don’t know or can’t understand?”
So I’m even skeptical of myself!

AusieDan
January 18, 2011 2:20 am

Thanks Mike_D

Paul Jackson
January 18, 2011 4:01 am

Maybe the answer to Life, the Universe and Everything really is 42. I guess the answer doesn’t matter if you’re not sure what the question really was.

orkneygal
January 18, 2011 4:11 am

Willis Eschenbach-
Thank you for your commentary and analysis. Very well done, except for the background on the chart.
I find that the chart background detracts more from the commentary than it adds to it.
Again, thank you for your most informative post.

Mike Haseler
January 18, 2011 4:17 am

kadaka (KD Knoebel) says:
The sheer size of the programming shows its value, larger is obviously worth more.
Do you remember the fad at one time for running part of the model as a screensaver on PCs? I was even tempted myself!
Now it appears, from Willis’ research, that this added complexity was really adding nothing meaningful to the model in terms of science, as output still equals 0.3 × input when all the complexity is averaged.
But boy was that great publicity
There’s a psychological trick used to get people to “buy in” to things. You let them feel that it was partly their work, and they are far far far more likely to accept the result even if their work was pretty meaningless.
You could say that about US elections, where no-one’s vote is individually important but for some strange reason they all have some absurd nostalgia about the “president” even if they didn’t vote for him.
Likewise, I was gullible enough to buy a raingauge yesterday, just because it was made by a company I used to work for … and as I read their useless instructions I quickly remembered why I left them!
So, I don’t think that screensaver program had anything to do with real science. It was just a way of getting a lot of people to buy in to the idea of global warming. Basically those who ran the screensaver were being brainwashed in the best marketing tradition!

kzb
January 18, 2011 5:15 am

I agree with Warren of Minnesota. The sheer fact that you are having to treat the model as a black box is surely unacceptable. When trillion dollar decisions, and laws that affect people’s fundamental freedoms, are coming about as a result of these models, it is absolutely unacceptable that the codes are treated as commercial secrets.
The algorithms used should be published in full, so that they are open to peer review and scrutiny.

Jryan
January 18, 2011 6:01 am

So why is there still a black box at all? Were the AGW folks claiming they are completely open now?

cba
January 18, 2011 6:33 am

“Moritz Petersen says:
January 17, 2011 at 3:04 am
Very interesting article
What is the source of “3.7 W/m2 per doubling of CO2”? I have read this multiple times, but I would like to look into how this has been calculated.
Thanks
Moe”
Moe,
Emission and absorption lines in the atmosphere are rather well known. Projects like HITRAN, started in the 1960s by the military, have been going on for decades, and almost every molecule type has been measured and/or calculated to have hundreds or thousands of spectral lines. If you take that and create a model of the atmosphere for pressure, temperature, and molecular content, the spectrum can then be created by combining these tens of thousands of lines. You can then calculate the difference in power transmission and absorption between a reference point, like conditions in 1976, and another point, say with twice the CO2 present in 1976. Looking down from the tropopause, one finds that the difference in power reaching there is about 3.6 or 3.7 W/m^2 for our two points. The value is also for clear skies only, as clouds will block even more radiated power than that.
When warmers claim the science is well understood, this is what they are referring to although they are essentially lying about it because there is still much that is poorly understood in this. Also, they conveniently forget that cloud cover matters dramatically and it is unpredictable and accounts for over half of the sky conditions.
If you want to play with a simplified yet still sophisticated system online, check out the Modtran calculator by Archer. It isn’t line by line calculations but it does a pretty fair job of working at least up to about 70km in altitude.
It’s a fairly decent number to know but its effects are not that straight forward.
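A hedged supplement to cba's answer, for Moe's original question: the line-by-line result cba describes is usually summarized by the simplified curve fit of Myhre et al. (1998), ΔF = 5.35 ln(C/C0) W/m2. The sketch below just evaluates that fit; it is a shorthand for, not a replacement of, the HITRAN-based calculation cba outlines, and the 390 ppm figure is only an approximate 2011-era concentration.

```python
import math

# Widely used simplified fit to line-by-line results (Myhre et al., 1998):
# dF = 5.35 * ln(C / C0) in W/m^2.  A curve fit, not the full calculation.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(560.0))   # doubling from 280 ppm -> about 3.7 W/m^2
print(co2_forcing(390.0))   # roughly the 2011-era concentration -> about 1.8 W/m^2
```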

beng
January 18, 2011 8:39 am

******
Jim D says:
January 17, 2011 at 8:10 pm
A thought experiment. Imagine the forcing suddenly went to zero in the last year. Willis’s model’s temperature perturbation would immediately go to zero, but obviously the earth’s (or the GISS model’s) temperature would not respond that quickly, maybe taking decades”
******
Huh??? Hot, subtropical deserts go from 35C to near freezing every night.

Wolfgang Flamme
January 18, 2011 9:22 am

With respect to volcanic aerosol impact, here’s an old one:
Nir Shaviv: The Fine Art of Fitting Elephants

Jim D
January 18, 2011 9:31 am

The questions on my thought experiment illustrate the point. In a day, the forcing changes hundreds of W/m2, but the sensitivity is maybe only 0.1 C per W/m2. For higher frequencies, the sensitivity goes down due to thermal inertia. Thermal inertia effects only go away gradually over decades, which is the whole reason why an equilibrium sensitivity has to be distinguished from the transient one. It has to do with the depth of the layer that the warming gets to, which also determines how lasting the effect will be.
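A small sketch of Jim D's frequency argument, assuming the climate behaves like a single-time-constant (first-order) system, the same idealization as the Schwartz-type model discussed further down the thread. The 3-year time constant is purely illustrative; the point is only that fast forcings (diurnal cycles, volcanic spikes) realize a small fraction of their equilibrium response while slow forcings realize nearly all of it.

```python
import math

# For a sinusoidal forcing of period P applied to a first-order system with
# time constant tau, the response amplitude is reduced by the low-pass gain
# 1 / sqrt(1 + (2*pi*tau/P)^2).  tau below is an illustrative assumption.
def gain(period_years, tau_years):
    omega_tau = 2.0 * math.pi * tau_years / period_years
    return 1.0 / math.sqrt(1.0 + omega_tau**2)

tau = 3.0  # years, illustrative
for period in (1.0 / 365.0, 1.0, 2.0, 30.0, 100.0):  # day, year, volcano-ish, multidecadal
    print(f"period {period:8.4f} yr -> fraction of equilibrium response {gain(period, tau):.3f}")
```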

Laurence M. Sheehan, PE
January 18, 2011 11:41 am

The real problem is that these climate so-called “scientists” are practicing yellow journalism. It should be obvious that the term “contribution” should be used instead of the absurd term “forcing”.
The fact of the matter is that the contribution of CO2 to the atmospheric temperature is nil, far too small an amount to even be measured, if there is any contribution at all.

Bill Illis
January 18, 2011 1:07 pm

I think GISS Model E just covers the lag issue by assuming that CO2 will always increase.
You don’t need to go back in time and calculate the lagged impact from every daily change in CO2 back to 1700.
You just build in a temp response per ln(CO2) that simulates the lag response. You need to get to +3.0C by the year 2100, and CO2 rises to 715 ppm by 2100. It just takes a simple module in the model to make that work. The actual monthly temperatures in Model E as a result of GHG forcing seem to follow this principle extremely closely all the way back to the beginning of the simulation. So, if the response is not actually programmed in this way, then the model spontaneously spits that out.
So the 0.3C per W/m2 already incorporates the lag (as long as CO2/GHGs are increasing).
Given what I have seen about what happens to temps after GHGs stop increasing, there is very little lag built into the models. Hansen’s 1988 model fully adjusted in 7 years. In IPCC AR4, there is only 0.1C of temperature increase after CO2 stops increasing in 2000 (although it takes 100 years to get there). 0.1C of lag after 100 years is nothing to make special note of.
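A hedged sketch of the shortcut Bill describes: a response proportional to ln(CO2), tuned so that 715 ppm in 2100 delivers +3.0C. The year-2000 baseline of roughly 370 ppm below is my assumption, not a number from Bill's comment; change it and the implied per-doubling figure shifts accordingly.

```python
import math

# Temperature response proportional to ln(CO2), tuned to a 2100 target.
# The 370 ppm baseline is an assumed year-2000 concentration (not from the thread).
C0, C2100, dT_target = 370.0, 715.0, 3.0

k = dT_target / math.log(C2100 / C0)   # deg C per unit of ln(CO2)
per_doubling = k * math.log(2.0)       # equivalent sensitivity per CO2 doubling

print(f"k = {k:.2f} C per ln(CO2); about {per_doubling:.1f} C per doubling")
print(f"warming at 560 ppm: {k * math.log(560.0 / C0):.2f} C")
```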

Jim D
January 18, 2011 4:11 pm

Willis, by putting in an exponential lag response with a time-scale to be determined (as in your replies to Joel), you will be able to remove your volcano fudge factor. I would say you could try tuning this time-scale parameter in such a way as to allow the full effect of volcanoes. I say this because, and it may be obvious, the volcano forcing is high-frequency spikes, so any kind of time-averaging of the forcing will diminish their effect automatically without need for the fudge factor.

Baa Humbug
January 18, 2011 4:49 pm

I’m having trouble accepting the “lag time” theory.
We experience changes in forcing in the real climate regularly (the seasons); these don’t take years to manifest themselves.
Is CO2 forcing somehow supra-special such that its effects take years to manifest themselves?

Joel Shore
January 18, 2011 5:03 pm

Bill Illis says:

Given what is shown about what I have seen about what happens to temps after GHGs stop increasing, there is very little lag built into the models. Hansen’s 1988 model fully adjusted in 7 years. In IPCC AR4, there is only 0.1C of temperature increase after CO2 stops increasing in 2000 (although it takes 100 years to get there). 0.1C of lag after 100 years is nothing to make special note of.

Well, I can’t speak to Hansen’s 1988 model, which treated the oceans in pretty primitive ways relative to modern incarnations (and the oceans are really what matter for this issue). But, the IPCC does not show what you claim it does at least anywhere that I can find. In fact, in Section 10.7 of the WG1 report, they say:

The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference period. This is roughly the magnitude of warming simulated in the 20th century. Applying the same uncertainty assessment as for the SRES scenarios in Fig. 10.29 (–40 to +60%), the likely uncertainty range is 0.3°C to 0.9°C.

I’m not sure how you reached your erroneous conclusion, but perhaps it was from misinterpreting or misremembering this statement in the same section:

The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

Needless to say, 0.1°C per decade is not the same thing as 0.1°C increase by 2100.

Joel Shore
January 18, 2011 5:23 pm

Willis Eschenbach says:

That all sounds great, Joel, but it is all handwaving until you can actually show us in the numbers where this is happening. For what you claim to be true, you need to show, not claim but show that there is a really long exponential lag between the imposition of the forcing and the results of that forcing, and you need let us know your estimate of what that exponent might be.

Willis, I agree that this is sort of handwaving. I was not attempting to do the work for you but just to point you in the right direction. If I didn’t have over 100 intro physics exams to grade, I might be able to do more of the research to answer your question. Since, alas, I do have these other commitments, I am just trying to point out what I think the issue is and where you can find more discussion of it. One direction was the section of the IPCC AR4 report on transient climate response. Another is the section that I pointed out to Bill Illis on the long term climate commitment. (See, in particular, Figure 10.34, although alas the scale there is not ideal because they are trying to show a lot of things on one graph. I know there are some papers in the literature that look at the “constant composition commitment” scenario in more detail.)
The advantage of models is that one is not constrained by the (estimated) real world forcings, which is what your study of the GISS Model E has addressed so far. One can easily test the models by putting in all sorts of different forcing scenarios. I honestly don’t know if even a simple exponential relaxation model is sufficient to get reasonable emulation of the models or if one has to assume non-exponential relaxation, but certainly exponential relaxation would be better than the instantaneous assumption that you are using now.
As Jim D pointed out, one way to go about estimating things with the current data is to see what kind of exponential relaxation is necessary to get a better fit to the GISS Model E response to volcanic forcings without having to put in your volcano fudge factor. This will give you an estimate of the relaxation timescale, although probably an underestimate because I am pretty sure that the form the relaxation will actually take is non-exponential, with an initial fairly rapidly approach but then a longer-than-exponential tail.

Since both you and Joel obviously think there is a greater than 120 year lag between the application of a forcing and the results of that forcing, I applaud your imaginations

I am not saying the lag is greater than 120 years. The fact is that the net forcings were fairly small over much of that 120 year span and it is only over the past 30 or 40 years that the net forcing has really ramped up.

George E. Smith
January 18, 2011 6:47 pm

“”””” Willis Eschenbach says:
January 17, 2011 at 4:18 am
peter_ga says:
January 17, 2011 at 3:31 am
“is that the temperature change in degrees C will be 30% of the adjusted forcing change in watts per square metre (W/m2)”
Does one not usually compare apples and apples using percentages, and not apples and oranges? I stopped reading after this figure. It was too mentally draining.
peter_ga, don’t give up so quickly. Orthodox climate theory posits something called the “climate sensitivity”. This says that there is a linear relationship between changes in top-of-atmosphere forcing Q (in watts per square metre, or W/m2) and changes in surface temperature T (in °C).
These changes are related by the climate sensitivity S, such that
∆T = ∆Q * S
or “Change in temperature is equal to change in forcing times the climate sensitivity”.
Climate sensitivity has the units of degrees C per W/m2, so all of the units work out, and we are dealing with apples and apples. Or oranges and oranges.
Please note that I do not subscribe to this idea of “climate sensitivity”, I am reporting the mainstream view. “””””
Now you have me totally confused. I was under the impression that “Climate Sensitivity” is defined as the increase in global mean temperature (presumably the lower troposphere two-metre-high thing) for a doubling of CO2, thereby enshrining forever the presumption that temperature is proportional to the log of CO2 abundance. That seems to be how the IPCC defines it: 3.0 deg C per doubling, +/-50%.
It seems like everyone who writes on this subject has their own definition of “Climate Sensitivity”. How did W/m^2 get into the picture if it is just a CO2 doubling that does it?
I don’t believe either the logarithmic bit or the value of the deg C per doubling (which I don’t believe in anyway). Going from 280 ppm to 560 ppm CO2 gives the same three degree warming that going from one ppm to two ppm gives; preposterous, but correct according to the definition. And the definition doesn’t say anything about H2O; just CO2, barefoot.

beng
January 18, 2011 7:10 pm

Joel & SMosher:
I don’t know where these 120 yr “lag” times mentioned are coming from. Ocean (like ENSO) & wind currents may cycle on various timescales — yrs to decades to perhaps even 1000s of yrs.
But that’s completely different from the reaction times/lags to a forcing. That’s determined by mass & the resultant “storage” of heat. A larger mass at a given temp will come into equilibrium over a longer time period after a given forcing.
The ocean is really the only “storage” medium for heat — land & air lack the mass or thermal conductivity. Look at a thermal map of the ocean — it’s literally a cold-water tub with an oil-slick thickness of warm water at the top. Most of the ocean mass is well below the earth’s avg temperature! That’s not a very good “heat storage” mechanism at all. It’s actually storing “cold” in relation to the earth’s average temperature. And it’s stratified/isolated from the warm water above, except where upwelling occurs.
So the only significant heat-storage is the first few hundred meters of ocean — on the scale of paint-thickness on a toy globe. Global pulse-forcings like Pinatubo have demonstrably shown transient responses of only 0.6 yrs! Equilibrium in a mere 2.5 yrs. That’s all. Much bigger volcanoes would have longer response lags, but not much — maybe a decade for an instantaneous super-volcano.
What’s it all mean? It means one can toss out all the “heat in the pipeline” arguments. And toss out the 120 yr “effects” down the road. And that what one sees right now from CO2 is what one gets. The yearly increase in CO2 is only a few ppm, so considering the earth’s quick response, particularly to such a small incremental forcing, means there is no significant lag to human-emitted CO2.
Changing ocean currents & other cycles are a different, separate issue to forcing/response time issues. Now, if someone wants to venture that CO2 causes ocean-currents changes & such, that’s stretching beyond belief at this point in our understanding.

Brian H
January 18, 2011 7:14 pm

Willis;
Your reduction of the entire GISSE model to a single multiplication has an interesting implication:
If a model consists of linear equations, it can always be reduced to a single arithmetic operation in the end. The only value of the model/equation set is to discover what that operation is.
You have done so with GISSE, so its purpose is now achieved, and it can be retired.
🙂
😉

Joel Shore
January 18, 2011 8:19 pm

George E. Smith says:

It seems like everyone who writes on this subject has their own definition of “Climate Sensitivity” How did W/m^2 get into the picture if it is just a CO2 doubling that does it.

People use the term in a few different ways. The more fundamental definition of climate sensitivity that holds for any sort of forcing is to define it in terms of degrees Celsius per (W/m^2). When you apply this to the forcing due to a doubling of CO2 (which basically everyone, from Richard Lindzen and Roy Spencer to the climate scientists who support the consensus view on AGW, agrees is ~4 W/m^2), you get the number for a CO2 doubling. In particular, 3 deg C for a doubling corresponds to roughly 0.75 C per (W/m^2).

I don’t believe either the logarithmic bit or the value of the deg C per doubling (which I don’t believe in anyway). Going from 280 ppm to 560 ppm CO2 gives the same three degree warming that going from one ppm to two ppm gives; preposterous; but correct according to the definition. And the definition doesn’t say anything about H2O; just CO2 barefoot.

George, I know people have explained this to you countless times here: Before you can choose to believe or not believe something, it is best to at least understand what it is you are choosing to believe or disbelieve. The logarithmic bit refers to the fact that the radiative forcing due to increased CO2 increases approximately logarithmically in the concentration regime we are in. It is not a law of nature…It is just an empirical fit that works pretty well in said regime. At lower concentrations, it becomes more like linear in CO2, I believe…and at higher concentrations than the current regime, it transitions to something that is more like a square root dependence (at least for a while). This has to do with which absorption bands are contributing the most to the radiative forcing effect and what regime one is in for those particular bands (saturated in the center but not in the wings, …).
Also, in going from concentration to the effect on global average temperature, one also has to consider how the climate sensitivity [i.e., the number in C per (W/m^2)] varies with the climate state. So, that is an additional factor that comes into play. As I understand it, the current thinking is that it is not strongly dependent on the climate state, at least in the general regime that we are currently in.
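The unit bookkeeping Joel describes is simple enough to write down. A minimal sketch, using the ~4 W/m2-per-doubling figure quoted in the thread, so the outputs are approximate by construction:

```python
# Convert sensitivities in C per (W/m^2) to C per CO2 doubling, using the
# ~4 W/m^2 per doubling quoted in the thread (so these are round numbers).
F_2x = 4.0  # W/m^2 for doubled CO2 (approximate)

def per_doubling(sensitivity_per_wm2):
    """C per (W/m^2) -> C per CO2 doubling."""
    return sensitivity_per_wm2 * F_2x

print(per_doubling(0.75))  # ~3.0 C per doubling, the central value Joel cites
print(per_doubling(0.30))  # ~1.2 C per doubling, using Willis's fitted 0.3
print(per_doubling(0.10))  # ~0.4 C per doubling; the 0.37 C quoted earlier for Idso uses 3.7 W/m^2
```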

Baa Humbug
January 18, 2011 10:54 pm

“CAN WE TRUST CLIMATE MODELS?” is an article at Yale e360 that readers may find interesting.
I quite liked this quote…

“Because models are put together by different scientists using different codes, each one has its strengths and weaknesses,” says Dixon. “Sometimes one [modeling] group ends up with too much or too little sea ice but does very well with El Niño and precipitation in the continental U.S., for example,” while another nails the ice but falls down on sea-level rise. When you average many models together, however, the errors tend to cancel.

You see, 2 wrongs do make a right. And why not? Models are mathematics, and in mathematics, if you multiply 2 negatives you get a positive.
Let’s hope there is an even number of models, else their output will be wrong.

Mike Haseler
January 19, 2011 12:57 am

Jim D says:
“I say this because, and it may be obvious, the volcano forcing is high-frequency spikes, so any kind of time-averaging of the forcing will diminish their effect automatically without need for the fudge factor.”
doh …. (slaps head) …. why didn’t I think of that!
Still, I was halfway there, because I was thinking about what would happen if you added a simple exponential averaging term rather than just a time shift.

TomVonk
January 19, 2011 3:34 am

Willis
It is not necessary to lose time talking about lags.
What your interesting experiment shows is that there is NO lag in the model.
Actually, as your fit is much better in the recent decades, where the forcings are supposed to have varied much more than in the beginning, it is further proof that there is (almost) no lag in the model.
Everything else about lags seemed rather irrelevant to me.
Argument 1: There is a lag in the real Earth. Might very well be, but you don’t look at the real Earth, you look at a model. The argument is irrelevant to what you do.
Argument 2: There must be a lag in the model. Might very well be, but the output shows a completely lagless behaviour. So whether there is or is not a lag programmed in the model, it behaves as if there were none, for whatever reason. The argument is also irrelevant to what you do.
Argument 3: There is a lag in the model but it is longer than 120 years, so that you don’t see it. Might very well be, but we have only 120 points. It appears strange anyway that the lag would be longer than the period about which we have at least some half reasonable data. Again an argument irrelevant to what you do.
Last, I will stress how important it is that people do the kind of experiments you do.
I must say that I find it flabbergasting (if you have not made some mistake in the experiment) that the output of a multimillion program over a huge time period of 120 years reduces with excellent accuracy to a lagless multiplication.
And of course it follows that such a model has nothing to do with the real Earth, even if your purpose was not to examine this part of the problem.

Bill Illis
January 19, 2011 4:48 am

Joel Shore says:
January 18, 2011 at 5:03 pm
You missed the part where they used a different reference period from the period when CO2 concentration is held constant (2000). The reference period is 1980 to 1999.
“The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference”
And about 0.1C per decade in the first few decades could mean anything from the IPCC – how about 0.05C per decade for the first decade and a half and then not much after that?
A simple chart from the IPCC shows very little lag warming after a constant concentration in 2000 – certainly not 0.1C per decade. Between 0.1C and 0.2C per century.
http://www.ipcc.ch/graphics/ar4-wg1/jpg/fig-10-4.jpg

Paul_K
January 19, 2011 12:56 pm

Willis Eschenbach says:
January 18, 2011 at 2:55 pm
Thanks for the response, Willis.
I have bad news and good news.
First the bad news…
I replicated your results last night and analysed the error function. Unsurprisingly it fails every test for homoscedasticity. Even worse the change in variance is exactly of the shape to be expected (i.e. predicted in hindsight) by your critics, and therefore your analysis is easily rebutted. But wait for the really good news later.
You have replaced a time-dependent response function (delta T), which needs to be integrated forwards, with a single first-year impulse response. One might therefore expect (the Team will argue) that the variance of the residuals should be high at the start of the emulation (because of unaccounted-for effects in your model from the forcings in the “spin-up” of the GISS model), reasonable in the middle (because you are free-fitting your “sensitivity” coefficient of 0.3), and should increase towards the end of the plot as the second-order errors accumulate from your curtailed approximation of the response function. Unfortunately for your analysis, these characteristics are exactly what the error statistics actually reveal. So it would be easy to dismiss your 0.3 value as a meaningless artefact of curve fitting which reflects at best only the short-term transient response to the forcing. If you publish in the present form, I am pretty certain this is the response you will get, among other less polite replies.
(And the good news…) HOWEVER, inspired by your approach, I got to thinking about overcoming this problem by actually building a “forward integration” model. I succeeded in doing so in the early hours of this morning. So, instead of replacing the temperature response function with a single year temperature response, I built a model which actually uses a temperature response function with a physics pedigree, and went through the process of numerical integration by superposition. The result is shown here:
http://img408.imageshack.us/img408/7152/imagessuperpositionmode.jpg
Yes, there really are two lines on the first plot, but you have to squint to tell the difference. Adjusted correlation coefficient is 98.5%. Residual sum of squares is 0.056. So how do you like them apples? Not only are the statistics a lot better, but this model cannot be refuted using the arguments you have heard so far concerning transient vs equilibrium effects.
The temperature response function that this uses is the solution to the single heat-capacity energy balance model referenced in Schwartz 2007.
ΔT = λ F (1 – exp(-t/ τ))
Where:
– ΔT = the change in temperature over time due to the single impulse forcing F
– F is a single impulse forcing applied at time t=0
– λ is the equilibrium climate sensitivity expressed in deg C per W/m2
– τ is the time equilibration constant (years)
– t is the time (years)
As t goes to infinity this expression unequivocally goes to λ F – the EQUILIBRIUM temperature change associated with the forcing. τ controls the shape of the curve, specifically how fast or slow the equilibrium is achieved.
For any given forcing, in the n+1th year after the application of the forcing, we can trivially write the change in temperature in year n+1 due to this forcing, given by
ΔT(n+1) – ΔT(n) = λ F (exp(-n / τ) – exp(-(n +1)/ τ))
The total temperature change in any given year is then the sum of all such expressions from previous forcings. Mathematically this is identical to numerical integration by superposition.
After setting up the programme, I used a brute-force optimiser to select the parameters, minimising the RSS.
The answer was:-
λ = 0.335
τ = 2.613
Volcano factor (applied to the forcing) = 0.724
So the GISS E results can be matched to an incredible accuracy with just a 3 parameter model, and as you suspected, the de facto equilibrium climate sensitivity in the GISS E model is much lower than we normally hear about and the model results can be matched with a much shorter equilibration time than we normally hear about.
I have a lot more commentary, including some thoughts on the volcano issue, but I think I should pause here after this mammoth post to see what your thoughts are!
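For readers who want to reproduce the mechanics (rather than the exact numbers) of Paul_K's calculation, here is a minimal sketch of integration by superposition using the Schwartz-style step response and the three parameters he reports. The forcing series below is a synthetic placeholder, not the actual GISS forcing file, so the output is illustrative only.

```python
import numpy as np

# Sketch of Paul_K's "integration by superposition", using the step response
# dT(t) = lam * F * (1 - exp(-t/tau)).  Parameter values are the ones he
# reports; the forcing arrays are synthetic placeholders.
lam, tau, volcano_factor = 0.335, 2.613, 0.724

def emulate(total_forcing, volcanic_forcing, lam, tau, volcano_factor):
    """Sum the lagged responses to each year's forcing CHANGE (superposition)."""
    f = total_forcing + (volcano_factor - 1.0) * volcanic_forcing  # apply the volcano factor
    df = np.diff(f, prepend=f[0])            # yearly forcing increments (end-of-year steps)
    temp = np.zeros(len(f))
    for k in range(len(f)):                  # each year's step change...
        t = np.arange(len(f) - k)            # ...relaxes toward lam * df[k]
        temp[k:] += lam * df[k] * (1.0 - np.exp(-t / tau))
    return temp

# Placeholder forcings so the sketch runs: a smooth ramp plus two volcano spikes.
years = np.arange(1880, 2004)
volcanic = np.where(np.isin(years, [1883, 1991]), -3.0, 0.0)
total = 0.02 * (years - 1880) + volcanic

print(emulate(total, volcanic, lam, tau, volcano_factor)[-5:])
```

Note how the lagged response by itself attenuates a one-year volcanic spike: with τ of about 2.6, the first-year response to a step is only 1 − exp(−1/τ), roughly 0.32 of the equilibrium value, which is Jim D's point about why a lagged model needs less of a volcano fudge factor.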

Joe Born
January 19, 2011 3:00 pm

Question to Paul_K from a lay lurker before he attempts to replicate your work:
Is your equation, i.e., ΔT = λ F (1 – exp(-t/ τ)), intended to be the response to an impulse, i.e., to a one-time slug of F watt-years/m^2 of energy at t = 0, or is it the response to a step, i.e., to a forcing component that is zero before t = 0 and thereafter is F watts/m^2 forever? To this layman, it is ΔT(t) = λ F exp(-t/ τ) that would look more like an impulse response (which–after dropping the F–would be convolved with the historical forcings to get the hindcast temperature anomaly ΔT(t)).
I think I’m questioning only the “impulse” nomenclature and not the result.

Paul_K
January 19, 2011 3:37 pm

Re: Bill Illis says:
January 19, 2011 at 4:48 am
“The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference”
I see what you mean. That is an extraordinary way to present these data.
For what it’s worth in the superposition model which I discussed above, there is of course some continued warming because the effects of forcings towards the end of the 20th century haven’t fully worked through. I tested the model by switching off all forcing changes after 2000, and this results in a further temperature gain of 0.11 deg C by 2020. After that there is negligible temperature change. This seems to be about 60% of the gain reported by the IPCC (eyeballing your graph).
One problem with interpreting the IPCC’s “constant composition” data is that we only know that the GHG forcing (change) has been halted and don’t know whether or not other internal or external forcings are frozen for this sensitivity.

Paul_K
January 19, 2011 3:47 pm

Joe Born says:
January 19, 2011 at 3:00 pm
“Question to Paul_K from a lay lurker before he attempts to replicate your work:
Is your equation, i.e., ΔT = λ F (1 – exp(-t/ τ)), intended to be the response to an impulse, i.e., to a one-time slug of F watt-years/m^2 of energy at t = 0, or is it the response to a step, i.e., to a forcing component that is zero before t = 0 and thereafter is F watts/m^2 forever?”
It is the response to a step change of F w/m2 held forever. So in the calculation each year, it is necessary to take the CHANGE in forcing from the previous year, and use that as the basis for calculating the forward temperature response to be added in.
I will try to post a reference for the Schwartz paper. I know that it is available in pdf.
In terms of replication, I’ll be quite happy to make the code available (it’s just an Excel spreadsheet) and let people play with it, but need some help from Willis (or Anthony?) to do that.

Paul_K
January 19, 2011 4:01 pm

For those wanting to understand the derivation of the temperature response function I used above, the Schwartz 2007 paper is to be found here:
http://www.ecd.bnl.gov/steve/pubs/HeatCapacity.pdf

Steven Mosher
January 19, 2011 11:23 pm

Damn. Paul K. Speechless. Slick idea using Schwartz.
Here is a thought.
Go get the forcings for the various SRES. These are the future scenarios.
Emulate the projections.
http://www.ipcc-data.org/ddc_co2.html
The real test, I suppose, would be seeing how your model reacts if you take CO2 forcing to zero, as Gavin did in a recent GCM experiment.
Stumped for now. I know that doesn’t help.

wayne
January 20, 2011 4:09 am

Paul_K, now that’s impressive!
Now I’m going to ask you a big one: are you willing to make it about four times harder and extend your program? You fit that close using the three parameters, but can you go back to Willis’s last article, his first stab, with the 10 forcings, and simultaneously fit all 10 forcings with minimal RSS to GISS temps? That is what I did, but just using a linear fit and Excel’s solver to do the grunt work. You just might push the r2 to .99xxx, and it would give us a much better view of which of the 10 forcings are really in play and how much. Mine even detected some forcings being inversely correlated and others having minimal effects. A little table of the fitted forcing weights I came up with is here; your method should really tighten them up.
Good work! Impulses… that makes perfect physical sense as a way to smear the lags.

Joe Born
January 20, 2011 6:59 am

Paul_K:
Thanks for the response–and for your excellent work.
I would indeed welcome the spreadsheet, since my attempt at replication suggests either that I improperly disambiguated your explanation or that I was using the wrong input values. (Also, it would be helpful to us laymen if you could suggest tools we might use to obtain the lambda, tau, and volcano-fudge-factor numbers for ourselves.)
Anyway, to demonstrate how imperfectly some of us less-gifted readers may have comprehended your comment, I’ll observe that your [1 – exp(-t/τ)] λ step response (if it is indeed a step response) corresponds to an impulse response of (λ/τ) exp(-t/τ). Since (as I understand it) you chose an approach considerably more complicated than simply convolving that impulse response with the composite input F (where F = sum of the nine constituent inputs + the volcano fudge factor times the remaining, volcanic constituent, right?), i.e., than simply using ΔT(n+1) = λF(n+1)/τ + ΔT(n) exp(-1/τ), I assume there’s a subtlety I failed to grasp.
So any hints you find time to give will be welcome.
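One way to see the subtlety Joe is asking about: with the end-of-year step convention Paul_K describes, the whole superposition collapses to a one-line recursion, but the per-year gain is λ(1 − exp(−1/τ)) rather than Joe's λ/τ (the two agree only when τ is large). The sketch below is my own restatement, not Paul_K's spreadsheet; it just checks numerically that the recursion and the explicit superposition give identical results for an arbitrary forcing series.

```python
import numpy as np

lam, tau = 0.335, 2.613  # Paul_K's reported values, used only as an example

def recursive(forcing, lam, tau):
    """One-line recursion equivalent to the superposition, end-of-year convention."""
    a = np.exp(-1.0 / tau)
    anom = forcing - forcing[0]          # forcing measured from its starting value
    temp = np.zeros(len(forcing))
    for n in range(1, len(forcing)):
        # relax toward lam * (previous year's forcing anomaly)
        temp[n] = a * temp[n - 1] + lam * (1.0 - a) * anom[n - 1]
    return temp

def superposed(forcing, lam, tau):
    """Explicit sum of lagged responses to each year's forcing change."""
    df = np.diff(forcing, prepend=forcing[0])
    temp = np.zeros(len(forcing))
    for k in range(len(forcing)):
        t = np.arange(len(forcing) - k)
        temp[k:] += lam * df[k] * (1.0 - np.exp(-t / tau))
    return temp

f = np.cumsum(np.random.default_rng(1).normal(0.02, 0.1, 120))  # stand-in forcing series
print(np.allclose(recursive(f, lam, tau), superposed(f, lam, tau)))  # True
```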

Joel Shore
January 20, 2011 10:21 am

Bill Illis says:

Simple chart from the IPCC shows very little lag warming after a constant concentration in 2000 – certainly not 0.1C per decade. Between 0.1C and 0.2C per century.
http://www.ipcc.ch/graphics/ar4-wg1/jpg/fig-10-4.jpg

Bill,
I just blew up that graph and did the measurement, and it looks like the central value for the rise from 2000 to 2100 is close to 0.4 C. As you noted, this is a bit off the 0.6 C that they discussed, presumably because their baseline for that number is actually the 20-year period ending in 2000. However, I cannot see how you possibly got 0.1 to 0.2 C of warming over the century from that graph.

Joel Shore
January 20, 2011 2:24 pm

Paul_K: I agree that Schwartz’s paper is a good starting point. However, there were several comments on that paper and a reply by Schwartz in which he made some changes (and got a higher estimate of the climate sensitivity).
In particular, I think that one of the criticisms of Schwartz was precisely that it diagnosed too low a climate sensitivity when it was applied to temperature data generated by model runs themselves. (In such a case, the actual climate sensitivity of the model is known, so you are testing the method’s ability to correctly reproduce a known answer.) So, I think one would have to understand these issues and how one might correct them before one could diagnose the climate sensitivity of a model (or the real world) using this method.

Paul_K
January 21, 2011 3:25 pm

Willis,
Thanks for the comments. I thought you might like it.
I don’t know how to post a spreadsheet onto a URL accessible from WUWT. I am blog-ignorant. Can you point me to a guide?
Paul
REPLY: Sign up for Google Docs, then it will give you a URL to share that spreadsheet from – Anthony

Paul_K
January 21, 2011 3:29 pm

Steven Mosher,
The problem, Mosh, is that I don’t know what forcings GISS has used in any of its projections. I can estimate the direct forcing from CO2, of course, but I can’t find any statement of other forcings.
I did try, as I reported above, switching off all forcings in 2000, and allowing temperature to equilibrate, but without knowing what GISS has done with aerosols and solar for example, it is difficult to match projected results.

Paul_K
January 21, 2011 3:46 pm

Joe Born,
“Anyway, to demonstrate how imperfectly some of us less-gifted readers may have comprehended your comment, I’ll observe that your [1 – exp(-t/ τ)] λ step response (if it is indeed a step response) corresponds to an impulse response of (λ/τ) exp(-t/ τ).”
No, Joe. I think the nomenclature is getting you a bit. In climate science the term impulse forcing is used to describe a step change in forcing which lasts forever. (I know that this is confusingly different from the typical use of the term in dynamics.) The impulse character is observed in the perturbation of net flux. An impulse forcing F applied to a system in radiative balance results in a perturbation in radiative flux which is F at t=0, and which declines to 0 as t becomes large. For example, say that the sun suddenly starts spitting out an additional 1 W/m2 in net received flux at TOA and stays at that level. The impulse forcing is F = 1 W/m2. The initial perturbation to the flux balance is equal to F at t=0. However, pretty soon the Earth is going to start to heat up, until eventually the outgoing radiation balances the incoming. At that point, the perturbation in radiative flux is reduced to zero. I don’t like this terminology, but I didn’t invent it. The term impulse is typically used to distinguish the forcing from a continuously changing or cumulative forcing. However, any cumulative forcing can be represented as the sum of a series of impulse forcings, and that’s all I am doing here. The temperature response function to an impulse forcing is just the temperature response function and doesn’t need to be differentiated. It does, however, need to be partitioned in time, i.e. expressed as the sum of a series of annual temperature increments. My complicated-looking expression for ΔT(n+1) – ΔT(n) is just calculating these annual increments from the temperature response function.

January 21, 2011 4:40 pm

Paul_K:
You can also sign up for a Dropbox account if you don’t have a Google account already. Or, if you don’t want to sign up for anything and the spreadsheet is small enough, you can e-mail it to me at troy_ca (at) live (dot) com and I’ll just host it in my public Dropbox for you and send the link.
Regardless, as I said at Lucia’s, this seems pretty amazing, and it should be fun to tinker with.

Paul_K
January 21, 2011 4:47 pm

OK, Willis et al,
You should be able to pick up the spreadsheet from this link:
https://spreadsheets.google.com/ccc?key=0AtwVGjhtohA2dEN4ZW9JOG83ejZydFFlUWMwTkpHZ0E&hl=en&authkey=COu7lc8G
The critical worksheet is called “Willis”, and contains a replication of Willis’s fit up to column “R”. To the right of column R are the calculations of the temperature differences to be added year by year. The blue box shows the parameters which control the results.
Paul

Paul_K
January 21, 2011 5:19 pm

Wayne:
You wrote
“…can you go back to Willis’s last article, his first stab, with the 10 forcings and simultaneously fit all 10 forcings to fit with minimal RSS to GISS temps?”
This is a bit challenging. If I separate out all of the forcings for separate characterisation, I would end up looking at a climate sensitivity and equilibration time constant for each of them giving me 20 coefficients to fit! I am fairly sure that we couldn’t extract meaningful results this way.
However, there is potentially another way to skin this cat and address a separate question. Do some of the forcings perhaps have a much longer equilibration time and higher sensitivity – which is lost in the grouping of all such forcings?
The simpler way to test this is to abstract each of the forcings in turn and assign it its own parameters, leaving the remainder all grouped. We can then test if the assignment of the additional two parameters and the loss of two degrees of freedom really does improve the match. This seems a worthwhile test. I will do it and post if Willis doesn’t beat me to it.
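[A hedged sketch of the kind of test Paul_K describes, assuming a simple explicit one-box response and scipy’s optimiser; the function names, starting values and the AIC-style comparison are illustrative assumptions, not anything from the actual spreadsheet.]

```python
import numpy as np
from scipy.optimize import minimize

def one_box_response(forcing, lam, tau):
    """Yearly explicit one-box response: T relaxes toward lam * forcing with time constant tau."""
    T = np.zeros(len(forcing))
    step = 1.0 - np.exp(-1.0 / tau)
    for n in range(1, len(forcing)):
        T[n] = T[n - 1] + (lam * forcing[n - 1] - T[n - 1]) * step
    return T

def rss_grouped(p, total_forcing, target):
    lam, tau = p
    return np.sum((one_box_response(total_forcing, lam, tau) - target) ** 2)

def rss_one_split_out(p, single, rest, target):
    lam1, tau1, lam2, tau2 = p
    model = one_box_response(single, lam1, tau1) + one_box_response(rest, lam2, tau2)
    return np.sum((model - target) ** 2)

# With `forcings` (years x 10 array) and `temps` (the GISS-E hindcast) loaded elsewhere:
# grouped = minimize(rss_grouped, [0.3, 2.6], args=(forcings.sum(axis=1), temps))
# split_k = minimize(rss_one_split_out, [0.3, 2.6, 0.3, 2.6],
#                    args=(forcings[:, k], np.delete(forcings, k, axis=1).sum(axis=1), temps))
# Compare grouped.fun and split_k.fun, penalising the two extra parameters (e.g. via AIC).
```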

Paul_K
January 21, 2011 5:37 pm

Joel,
You wrote:
“In particular, I think that one of the criticisms of Schwartz was precisely that it diagnosed too low a climate sensitivity when it was applied to temperature data generated by model runs themselves. (In such a case, the actual climate sensitivity of the model is known, so you are testing the method’s ability to correctly reproduce a known answer.)”
Joel, I understand that Schwartz revised his estimate of climate sensitivity in 2009 in response to criticisms of his previous underestimation of autocorrelation in the temperature time series. (His updated value came in at the lower boundary of estimates from the CMIP suite.) I would be very interested in any references to the work you are talking about, since this is central to the issue here. One serious question is this. You generate a number of runs in a climate model, and produce a number of temperature series for a given set of forcings. Now how do you calculate what the equilibrium climate sensitivity really is in the model? If someone in the literature has addressed this question, I would really like to know how it is done. Thanks.

January 22, 2011 5:14 am

Paul_K:
Thanks for your patience in responding to a tyro. It turns out that I actually had replicated your work–except for (1) the additive constant (“SHIFT2”) that, having downloaded your spreadsheet, I now see you include, (2) the one-year time shift you introduce in columns EQ and ER, and (3) what I think is an error in your calculation.
Specifically, if you compare, say, cell W12 in your WILLIS worksheet with its cell X12, you’ll see that the latter’s exponential decay is shifted by two years from the former’s rather than by one year. That seems wrong.

January 22, 2011 6:26 am

Paul_K:
Please ignore my “that seems wrong” post; I failed to notice that you also have a “+1” in the X column’s exp() arguments.

Paul_K
January 22, 2011 6:50 am

Joe,
Thanks for checking!
1) The shift in temperature is a fitted value which just “translates” the predicted temperature CHANGES to the same reference frame. Willis did the same thing, but thanks for mentioning it.
2) There is not really a one year shift. The forcing CHANGE over the previous year is measured from the cumulative forcings at the end of the year. The assumption is that this initiates a flux perturbation and consequent temperature effect starting at the end of that year/beginning of the following year. Or, if you prefer, that the change in forcing occurs as a step at the end of each year considered. This is typical for an EXPLICIT formulation.
3) I re-examined the cells you mentioned and I don’t think there is an error. Each column represents a new temperature response initiated by the forcing in that year. (As a check, if you sum each of the columns, they should be equal to lambda*F, where F is the forcing relevant for that year.)
So let’s consider W12. This should be the temperature change at the end of the third year after the first forcing was implemented, namely DT(3) – DT(2) for the first year forcing, and that is what is calculated. X12 should be the temperature change at the end of the second year from the second year forcing, or DT(2) – DT(1) from the second year forcing, and that is what it looks like to me. Moving along one column, Y12 should be the first year temperature response from the third year forcing, and it looks to me like DT(1) – DT(0) for the third year forcing. I can’t see any mistake there. Note that I did not generalise the EXCEL formula for the first column (“W”), but did so for all the other columns. Perhaps that is what is confusing you?
Thanks again
Paul
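[For anyone following along without the spreadsheet, here is a rough Python sketch of the column-per-forcing-year structure Paul_K describes above; lam, tau and the forcing changes dF are illustrative numbers, not the fitted values.]

```python
import numpy as np

def increment_matrix(dF, lam, tau):
    """Column j holds the annual temperature increments generated by the forcing
    change dF[j] in year j; running row sums accumulate into the model temperature."""
    n = len(dF)
    M = np.zeros((n, n))
    for j in range(n):
        t = np.arange(1, n - j + 1)                        # whole years since forcing year j
        step = lam * dF[j] * (1.0 - np.exp(-t / tau))      # end-of-year step responses
        M[j:, j] = np.diff(np.concatenate(([0.0], step)))  # annual increments
    return M

lam, tau = 0.3, 2.6                      # illustrative only
dF = np.array([0.1, -0.3, 0.05, 0.2])    # yearly forcing changes, W/m2

M = increment_matrix(dF, lam, tau)
# The check Paul_K mentions: each column sums toward lam * dF[j]
# (only approximately here, because the columns are truncated after a few years).
print(M.sum(axis=0), lam * dF)
# Cumulative model temperature = running sum of each year's total increment.
print(np.cumsum(M.sum(axis=1)))
```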

Paul_K
January 22, 2011 9:28 am

Joe,
Let me try 2) above again, because I managed to confuse myself.
There is no one year shift. The message in the spreadsheet just signals that all of the rows have been moved down by one in that area of the spreadsheet – this only needs to be taken into account for plotting against actual years. The forcing for year n does initiate the temperature response in the same year, contrary to what I said above. For example, if you check the data inputs, the forcing for year 1881 is calculated in cell T10, and the temperature from GISS E for 1881 is -.09749. In the spreadsheet, the first year temperature response from that forcing is calculated in EQ11, ER11 and ES11 (one row down). The resultant temperature is then compared to the GISS E temp in cell ET11, which is -.09749 – the temp for 1881. In other words the rows in that area have all been displaced by one year. This is because at one stage I really was coarsely testing the numerical effects of start-year, mid-year and end-year calcs. There is no shift applied in the results shown. Sorry for the confusion.

January 22, 2011 3:54 pm

Paul_K:
It is I who should apologize, since all you said would have been completely apparent–and I would have saved you the trouble of an explanation–if I had just thought the spreadsheet through a bit more.
Perhaps I can partially redeem myself by making a suggestion that may (1) simplify your calculations and (2) afford an alternative view. As I suspect you already know, what climate folks apparently call an “impulse” actually is the integral of what some other disciplines use that term for; back when we did analog signal processing, we used “impulse” to mean the Dirac delta function, i.e., the derivative of the unit step.
Now, I brought this up before I saw your spreadsheet because your initial (and, as it turned out, accurate) description of what you were attempting to do made me question that description. Specifically, it sounded to me as though you were doing much more computation than was necessary to accomplish what your description seemed to say you were trying to do. When I received your spreadsheet, though, it showed that you said what you meant–and that your columns W through EO did perform what seemed to be needlessly involved computation.
What you essentially do is a numerical approach to convolving the forcing differences with the system’s response to what climate folks call an impulse (and I would have called a unit step). Analytically (but not numerically) such a convolution is equivalent to convolving the forcings themselves with the system’s response to the Dirac delta function.
But that latter, equivalent convolution can simply be performed numerically in accordance with:
T(n) = [lambda * Forcings(n-1)/tau + T(n-1)] exp(-1/tau).
This would eliminate the need for the above-mentioned columns W through EO.
As I say, these two convolutions are analytically equivalent. However, their results differ if you do them numerically. When I used the Dirac-delta-function approach, I got a temperature curve that was much the same as yours but exhibited stronger reactions to sudden forcing changes. It is not clear to me that this result is inferior to what you get in your spreadsheet. But computing it is more straightforward.
Thanks again for helping me sort through your model. (Now I can finally start to look at the real point, which is what is implied by your model’s ability to emulate the model that Deep Thought implements.)
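[A short Python sketch of the recursion Joe describes, with illustrative lam and tau; the closing comment notes the discretisation effect he alludes to.]

```python
import numpy as np

def delta_convolution(forcing, lam, tau):
    """Joe Born's recursion T(n) = [lam*F(n-1)/tau + T(n-1)] * exp(-1/tau),
    i.e. convolving the yearly forcings with the Dirac-delta response
    (lam/tau) * exp(-t/tau)."""
    T = np.zeros(len(forcing))
    decay = np.exp(-1.0 / tau)
    for n in range(1, len(forcing)):
        T[n] = (lam * forcing[n - 1] / tau + T[n - 1]) * decay
    return T

lam, tau = 0.3, 2.6          # illustrative values only
F = np.ones(60)              # a sustained 1 W/m2 forcing
T = delta_convolution(F, lam, tau)
# The analytic equilibrium is lam * F = 0.3; the annual discretisation settles
# somewhat below that, which is the kind of numerical difference Joe mentions.
print(T[-1], lam * F[-1])
```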

Jim D
January 22, 2011 4:51 pm

Interesting results, Paul_K.
I am curious whether, if you change the tau to 10 years, you can get a fit that is equally good without having to reduce the volcanoes.
In other words I think you have some compensating free parameters in the lag period and sensitivity. Make the former longer and the best-fit sensitivity would be higher, and the fit may not degrade.

Joel Shore
January 22, 2011 5:59 pm

Paul_K:
Here are two of the 3 comments on Schwartz:
http://www.jamstec.go.jp/frsgc/research/d5/jdannan/comment_on_schwartz.pdf
http://www.fel.duke.edu/~scafetta/pdf/2007JD009586.pdf
(The 3rd one, by Knutti et al., I can’t find a copy of.)
And, here is Schwartz’s reply to the comments: http://www.ecd.bnl.gov/pubs/BNL-80226-2008-JA.pdf

Paul_K
January 22, 2011 6:29 pm

Jim D says:
January 22, 2011 at 4:51 pm
I am curious whether, if you change the tau to 10 years, you can get a fit that is equally good without having to reduce the volcanoes.
Jim,
The answer to this is “no”. I tried free fitting the parameters without factoring the volcanoes and the match is not bad, but it is not as good as with the factor. I also tried forcing a fit with a large value of tau (greater than 5 years) with and without a factor on the volcanoes, but the optimiser always wants to bring the value back to the constraint condition.
What I have not yet tried is allowing the volcano forcing to have its own properties of sensitivity and equilibration time. I suspect that this might eliminate the need for a factor.

Paul_K
January 22, 2011 6:45 pm

Joel Shore,
Many thanks for the references. It seems that the main complaint in both critiques was Schwartz’s error in the assessment of autocorrelation, since the method he applied involved abstracting a trend from non-stationary data. The first paper (also) comments that the results gave a different answer from runs of a GCM where the climate sensitivity was “known” to be 2.7 deg C for a doubling of CO2, but frustratingly does not explain how the model’s equilibrium climate sensitivity was calculated, unless I missed something important.
I have just finished an analysis of the OHC response to the GISS forcings, and it matches the available GISS data remarkably well with a short equilibration time constant. I will post on it tomorrow, since it is a bit involved. I am more and more puzzled.

wayne
January 22, 2011 7:01 pm

Paul_K says:
January 21, 2011 at 5:19 pm

However, there is potentially another way to skin this cat and address a separate question. Do some of the forcings perhaps have a much longer equilibration time and higher sensitivity – which is lost in the grouping of all such forcings?
—-
Exactly Paul, that was my very point. The quick fitting I referenced in my last comment kept coming up with snow/ice albedo massively underweighted and others showing the best fit as if they were best ignored, some with the best fit if they were actually reversed (i.e., an inverse effect, but small, which basically negates its effect on T).
I know that is a different question and don’t want to distract from your current track but if you get time in the future, I think you will find some very curious indications there, especially if you attempt to duplicate GISS Temp and not the model’s output while using those forcings supplied by GISS.
I’ll leave you alone for now, you do see what I was pointing at.

Jim D
January 22, 2011 7:40 pm

Paul_K, another way to go is to have a sensitivity that is an inverse function of frequency, which is the forced harmonic oscillator analogy. This would be more involved mathematically, as I think you would have to decompose the forcing into its frequencies. There is reason to believe that the response is stronger at lower frequencies rather than being uniform for all driving frequencies.

Paul_K
January 23, 2011 7:53 am

I was searching for GISS E model OHC results, without any success, when Gavin read my mind and produced them here, at least for recent times (1970 to 2003).
http://www.realclimate.org/index.php/archives/2011/01/2010-updates-to-model-data-comparisons/
The graph of OHC data shows the “ensemble mean” results from the GISS E model to 2003. These data should correspond to the temperature profile from GISS E which we matched earlier with a simple superposition model.
Using the same model as previously (Schwartz), we can convert each forcing increment into a perturbation on the net difference between incoming and outgoing flux. The sum of all these individual net differences at any point in time gives us the total net difference, and integrating this forward in time gives us the cumulative energy gained or lost by the Earth’s system. Since the heat capacity of the ocean is several orders of magnitude greater than that of the atmosphere or land surface, we expect most of this additional energy to show up in the oceans in the form of heat energy.
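[As a rough illustration of that bookkeeping (not Paul_K’s spreadsheet), here is a one-box Python sketch in which the net flux perturbation is the forcing minus the restoring term T/lam, and its running integral is the heat taken up; lam, tau, the forcing series and the conversion constants are all illustrative assumptions.]

```python
import numpy as np

SECONDS_PER_YEAR = 3.156e7
EARTH_AREA_M2 = 5.1e14        # total surface area of the Earth

def heat_uptake(forcing, lam, tau):
    """One-box sketch: the net flux perturbation N = F - T/lam declines toward zero
    as temperature equilibrates; its time integral is the energy gained by the system."""
    T = np.zeros(len(forcing))
    step = 1.0 - np.exp(-1.0 / tau)
    for n in range(1, len(forcing)):
        T[n] = T[n - 1] + (lam * forcing[n - 1] - T[n - 1]) * step
    net_flux = forcing - T / lam                                      # W/m2
    energy = np.cumsum(net_flux) * SECONDS_PER_YEAR * EARTH_AREA_M2   # joules, roughly
    return T, net_flux, energy

lam, tau = 0.3, 2.6                     # illustrative values only
forcing = np.linspace(0.0, 1.5, 50)     # a made-up, slowly ramping forcing in W/m2
T, N, E = heat_uptake(forcing, lam, tau)
print(E[-1] / 1e22, "x 10^22 J accumulated (illustrative)")
```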
Several commenters over at Lucia’s suggested that the reason for the big difference in apparent climate sensitivity between the simple model and the GCM might be explained by energy lost to OHC or some other secondary long-term system. See conversation after Comment#66788
http://rankexploits.com/musings/2011/odds-are/
So here is a graphical presentation of how well the simple superposition model matches GISS E.
http://img218.imageshack.us/img218/5416/ohcmatch.jpg
The solid yellow line is the change in OHC computed from the simple model using the original estimation of time equilibration constant, tau = 2.61. It reflects the shape of the GISS E result very well, but is not quite matching the energy gain. The second dotted-purple line is the result obtained by resetting the value of tau to 3.5. The match in energy gain becomes excellent over the data period.
I then returned to the temperature match and re-optimised the match for tau fixed at 3.5. The result is a small loss of fidelity. R^2 falls to 98.2%. The re-matched parameters are as follows:
Equilibrium climate sensitivity: 0.345
Tau: 3.5
Volcano factor: 0.775
RSS: 0.0687
RMS error: 0.024
In other words, this is still an incredibly high quality match of both temperature and OHC from the GISS model – and no missing heat anywhere.

wayne
January 23, 2011 2:51 pm

Hi Paul… thought I should save you the trouble… I went ahead and wrote, in C#, a Monte Carlo fitting routine using your method (Excel wouldn’t handle the 33 simultaneous parameters), this time against GISS Temp instead of the model’s output. It took many hours to run but was simple. The results are basically the same as I got before. Snow albedo up four times, land use and solar up, and GHGs at one fourth, with things like O3 and aerosols even going negative (i.e., reversed).
After some thinking, it seems this might show absolutely nothing except that the forcing data have certain shapes that are preferred, and those ‘shapes’ are what fit the data the closest. For instance, NASA shows (I think it was via MODIS) that albedo has no real effect at all, at least by radiation readings, but this fitting says it should be increased four-fold. Makes no real sense. Thought you would want to know so as not to waste your time too, though it’s still curious.
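[A minimal Python stand-in for the kind of random search Wayne describes (his actual code is in C# and not shown here); the one-box response, the weight ranges and the fixed lam/tau are all assumptions for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)

def one_box(forcing, lam, tau):
    """Yearly explicit one-box response used to turn a weighted forcing sum into temperature."""
    T = np.zeros(len(forcing))
    step = 1.0 - np.exp(-1.0 / tau)
    for n in range(1, len(forcing)):
        T[n] = T[n - 1] + (lam * forcing[n - 1] - T[n - 1]) * step
    return T

def monte_carlo_weights(forcings, target, n_trials=100_000, lam=0.3, tau=2.6):
    """Random search for per-forcing weights minimising RSS against `target`.
    Negative weights are allowed, which is how sign-flipped 'fits' can appear."""
    best_w, best_rss = None, np.inf
    for _ in range(n_trials):
        w = rng.uniform(-2.0, 4.0, size=forcings.shape[1])
        rss = np.sum((one_box(forcings @ w, lam, tau) - target) ** 2)
        if rss < best_rss:
            best_w, best_rss = w, rss
    return best_w, best_rss

# With `forcings` (years x 10) and a `target` temperature series loaded elsewhere:
# w, rss = monte_carlo_weights(forcings, target)
```

With ten or more free weights, a search like this will latch onto whichever forcing shapes happen to resemble the target series, which is exactly the caveat Wayne raises.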

Paul_K
January 24, 2011 2:16 pm

Wayne,
Interesting. You are tackling a different problem entirely (and a notoriously difficult one). If you are interested in the albedo issue particularly, you might want to examine Figure 9.3 in Chapter 9 of WG1 of the AR4. It shows a massive mismatch between the albedo in the models and that observed.