Guest Post by Willis Eschenbach
In my earlier post about climate models, “Zero Point Three Times The Forcing“, a commenter provided the breakthrough that allowed the analysis of the GISSE climate model as a black box. In a “black box” type of analysis, we know nothing but what goes into the box and what comes out. We don’t know what the black box is doing internally with the input that it has been given. Figure 1 shows the situation of a black box on a shelf in some laboratory.
Figure 1. The CCSM3 climate model seen as a black box, with only the inputs and outputs known.
A “black box” analysis may allow us to discover the “functional equivalent” of whatever might be going on inside the black box. In other words, we may be able to find a simple function that provides the same output as the black box. I thought it might be interesting if I explain how I went about doing this with the CCSM3 model.
First, I went and got the input variables. They are all in the form of NetCDF (".nc") files, a standard format that contains both data and metadata. I converted them to annual or monthly averages using the computer language R, and saved them as text files. I opened these in Excel and collected them into one file. I have posted the data here as an Excel spreadsheet.
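For readers who want to replicate that step, the R involved is only a few lines. The sketch below is illustrative rather than my actual script: it assumes the "ncdf4" package, and the file name and variable name are placeholders for whatever the particular CCSM3 forcing file contains.

```r
# Illustrative sketch only (not my actual script). Assumes the "ncdf4" package;
# "co2_forcing.nc" and "CO2" are placeholder names for whichever CCSM3 forcing
# file and variable you are extracting.
library(ncdf4)

nc  <- nc_open("co2_forcing.nc")   # NetCDF file: data plus metadata
co2 <- ncvar_get(nc, "CO2")        # monthly values of the forcing variable
nc_close(nc)

# collapse monthly values to annual means, assuming the series starts in January
yr     <- rep(seq_len(ceiling(length(co2) / 12)), each = 12)[seq_along(co2)]
annual <- tapply(co2, yr, mean)

# save as plain text for collection in a spreadsheet (year index, not calendar year)
write.table(data.frame(year_index = as.integer(names(annual)), co2 = annual),
            "co2_annual.txt", row.names = FALSE, quote = FALSE)
```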
Next, I needed the output. The simplest place to get it was the graphic located here. I digitized that data using a digitizing program (I use “GraphClick”, on a Mac computer).
My first step in this kind of exercise is to "normalize" or "standardize" the various datasets, that is, to adjust each one so that its average is zero and its standard deviation is one. I use the Excel function "STANDARDIZE" for this purpose. This puts all of the data on a common scale. Figure 2 shows those results.
Figure 2. Standardized forcings used by the CCSM 3.0 climate model to hindcast the 20th century temperatures. Dark black line shows the temperature hindcast by the CCSM3 model.
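(For anyone working in R rather than Excel, the same standardization is a one-liner. The sketch below assumes the series have already been collected into a data frame, here called "forcings", which is just an illustrative name.)

```r
# Sketch: standardize each series to mean 0 and standard deviation 1, the R
# analogue of Excel's STANDARDIZE. "forcings" is an assumed data frame with one
# column per forcing plus the model temperature hindcast.
standardized <- scale(forcings)          # (x - mean(x)) / sd(x), column by column

# quick visual check, analogous to Figure 2
matplot(standardized, type = "l", lty = 1,
        xlab = "Year index", ylab = "Standardized value")
```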
Looking at that, I could see several things. First, the CO2 data has the same general shape as the sulfur, ozone, and methane (CH4) data. Next, the effects of the solar and volcano data were clearly visible in the temperature output signal. This led me to believe that the GHG data, along with the solar and the volcano data, would be enough to replicate the model’s temperature output.
And indeed, this proved to be the case. Using the Excel "Solver" function, I fitted the formula that (as mentioned above) had been developed through the analysis of the GISSE model. It is:
T(n+1) = T(n) + λ ∆F(n+1) * (1 - exp(-1/τ)) + ∆T(n) * exp(-1/τ)
OK, now let's render this equation in English. It looks complex, but it's not.
T(n) is pronounced “T sub n”. It is the temperature “T” at time “n”. So T sub n plus one, written as T(n+1), is the temperature during the following time period. In this case we’re using years, so it would be the next year’s temperature.
F is the forcing, in watts per square metre. This is the total of all of the forcings under consideration. The same time convention is followed, so F(n) means the forcing “F” in time period “n”.
Delta, or “∆”, means “the change in”. So ∆T(n) is the change in temperature since the previous period, or T(n) minus the previous temperature T(n-1). ∆F(n), correspondingly, is the change in forcing since the previous time period.
Lambda, or "λ", is the climate sensitivity. Tau, or "τ", is the lag time constant, which sets how much the system's response lags behind a change in forcing. Finally, "exp(x)" means the number e (about 2.71828) raised to the power of x.
So in English, this means that the temperature next year, or T(n+1), is equal to the temperature this year, T(n), plus the immediate temperature increase due to the change in forcing, λ ∆F(n+1) * (1 - exp(-1/τ)), plus the lag term, ∆T(n) * exp(-1/τ), carried over from the previous forcing. This lag term is necessary because the effects of changes in forcing are not instantaneous.
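To make the recipe completely concrete, here is the same equation written out as a short R function. This is an illustrative sketch, not the spreadsheet I actually used, and the starting temperature and parameter names are placeholders.

```r
# Illustrative sketch of the one-line emulator described above. "forcing" is the
# total forcing in W/m2, one value per year; lambda is the sensitivity in degrees C
# per W/m2; tau is the lag time constant in years; T0 is the starting temperature.
emulate <- function(forcing, lambda, tau, T0 = 0) {
  n    <- length(forcing)
  temp <- numeric(n)
  temp[1] <- T0
  a <- 1 - exp(-1 / tau)                    # fraction of the response felt immediately
  for (i in 2:n) {
    dF <- forcing[i] - forcing[i - 1]       # change in forcing, Delta-F(n+1)
    dT <- if (i > 2) temp[i - 1] - temp[i - 2] else 0   # previous change, Delta-T(n)
    temp[i] <- temp[i - 1] + lambda * dF * a + dT * exp(-1 / tau)
  }
  temp
}
```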
Figure 3 shows the final result of that calculation. I used only a subset of the forcings, which were the greenhouse gases (GHGs), the solar, and the volcanic inputs. The size of the others is quite small in terms of forcing potential, so I neglected them in the calculation.
Figure 3. CCSM3 model functional equivalent equation, compared to actual CCSM3 output. The two are almost identical.
As with the GISSE model, we find that the CCSM3 model also slavishly follows the lagged input. The match once again is excellent, with a correlation of 0.995. The values for lambda and tau are also similar to those found during the GISSE investigation.
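For those who prefer R to Excel's Solver, the fitting step can be sketched as below, using the emulate() function shown above. "total_forcing" and "ccsm3_temp" are stand-ins for the collected forcing total and the digitized model output, and the starting guesses and bounds are arbitrary.

```r
# Sketch of the Solver step in R: choose lambda and tau to minimize the squared
# difference between the emulator and the digitized CCSM3 output. "total_forcing"
# and "ccsm3_temp" are assumed equal-length annual series.
sse <- function(p) {
  fit <- emulate(total_forcing, lambda = p[1], tau = p[2])
  sum((fit - ccsm3_temp)^2)
}

best <- optim(c(0.3, 3), sse, method = "L-BFGS-B", lower = c(0.01, 0.1))
fit  <- emulate(total_forcing, best$par[1], best$par[2])
cor(fit, ccsm3_temp)   # should come out near the 0.995 quoted above
```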
So what does all of this mean?
Well, the first thing it means is that, just as with the GISSE model, the output temperature of the CCSM3 model is functionally equivalent to a simple, one-line lagged linear transformation of the input forcings.
It also implies that, given that the GISSE and CCSM3 models function in the same way, it is very likely that we will find the same linear dependence of output on input in other climate models.
(Let me add in passing that the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975 … as did the GISSE model.)
Now, I suppose that if you think the temperature of the planet is simply a linear transformation of the input forcings plus some “natural variations”, those model results might seem reasonable, or at least theoretically sound.
Me, I find the idea of a linear connection between inputs and output in a complex, multiply interconnected, chaotic system like the climate to be a risible fantasy. It is not true of any other complex system that I know of. Why would climate be so simply and mechanistically predictable when other comparable systems are not?
This all highlights what I see as the basic misunderstanding of current climate science. The current climate paradigm, as exemplified by the models, is that the global temperature is a linear function of the forcings. I find this extremely unlikely, from both a theoretical and practical standpoint. This claim is the result of the bad mathematics that I have detailed in “The Cold Equations“. There, erroneous substitutions allow them to cancel everything out of the equation except forcing and temperature … which leads to the false claim that if forcing goes up, temperature must perforce follow in a linear, slavish manner.
As we can see from the failure of both the GISSE and the CCSM3 models to replicate the post-1945 cooling, this claim of linearity between forcings and temperatures fails the real-world test as well as the test of common sense.
w.
TECHNICAL NOTES ON THE CONVERSION TO WATTS PER SQUARE METRE
Many of the forcings used by the CCSM3 model are given in units other than watts/square metre. Various conversions were used.
The CO2, CH4, N2O, CFC-11, and CFC-12 values were converted to W/m2 using the simplified formulas of Myhre et al. as given in their Table 3.
Solar forcing (total solar irradiance) was converted to an equivalent global average forcing by dividing by 4.
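As an illustration of what those two conversions amount to, here is a hedged R sketch. The CO2 expression is the widely quoted simplified formula of Myhre et al. (1998); the 278 ppm reference concentration is only an assumption for the example, and the CH4, N2O, and CFC conversions are analogous but not shown.

```r
# Illustrative versions of two of the conversions.
co2_forcing <- function(ppm, ppm0 = 278) 5.35 * log(ppm / ppm0)   # W/m2, Myhre et al.

# solar: total solar irradiance is spread over the sphere, hence the division by 4
solar_forcing <- function(tsi, tsi0 = mean(tsi)) (tsi - tsi0) / 4  # W/m2

co2_forcing(560, 280)   # a doubling of CO2 gives about 3.7 W/m2
```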
The volcanic effect, which CCSM3 gives in total tonnes of mass ejected, has no standard conversion to W/m2. As a result we don’t know what volcanic forcing the CCSM3 model used. Accordingly, I first matched their data to the same W/m2 values as used by the GISSE model. I then adjusted the values iteratively to give the best fit, which resulted in the “Volcanic Adjustment” shown above in Figure 3.
[UPDATE] Steve McIntyre pointed out that I had not given the website for the forcing data. It is available here (registration required; the file is a couple of gigabytes).
Bob Moss says:
May 14, 2011 at 5:51 am
If you take this formula and project into the future with a doubling of CO2 does it match the IPCC projections?
###################################
I imagine it might – but that’s with the assumption that none of the other “forcings” change much. A change in any other “input”, even with a doubling of CO2, might cause little change in output.
That was demonstrated because, as he said, "…the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975…"
Remember, he only used the GHG, solar, and volcano data.
In order for any model to forecast the future, you’d have to know for certain when the next large volcanic eruption will occur.
Or if the sun comes up tomorrow…
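(A rough way to put a number on the doubling question, using the one-box emulator sketched in the post with purely illustrative values of lambda and tau, and the standard simplified CO2 forcing expression:)

```r
# Back-of-envelope only: ramp CO2 from 280 to 560 ppm over 70 years, hold every
# other forcing fixed, and run the one-box emulator. The lambda and tau values
# are placeholders, not the fitted CCSM3 values.
ramp_ppm <- seq(280, 560, length.out = 70)
forcing  <- 5.35 * log(ramp_ppm / 280)          # CO2-only forcing, W/m2

temp <- emulate(forcing, lambda = 0.3, tau = 3) # emulate() as sketched in the post
tail(temp, 1)                                   # warming at the end of the ramp
0.3 * 5.35 * log(2)                             # equilibrium warming for 2xCO2 with this lambda
```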
Jim D says:
May 14, 2011 at 11:28 am
netdr, the more typical value, and what would be expected from 3 degrees C per doubling, is near 0.2 C per decade, and indeed the 2000s were nearly 0.2 degrees warmer than the 90s. I think the 2010s will exceed the 2000s by a similar amount, especially since that decade ended up with a soft record to beat by not warming much.
******************
I notice you avoided answering my observation that the models predict 0.4 °C warming from 2000 to 2010. Do you agree or disagree?
The actual warming is much less than that. Do you agree or disagree?
The satellite data shows very slight warming from 2000 to 2010 [inclusive]
#Least squares trend line; slope = 0.00660975 per year
http://www.woodfortrees.org/data/uah/from:2000/to:2010/trend/plot/uah/from:2000/to:2010
That is 0.66 °C in 100 years, which is puny at best.
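(For anyone who wants to check the arithmetic, a minimal R sketch, with "uah" standing in for the downloaded woodfortrees series:)

```r
# Sketch of the slope arithmetic, assuming "uah" is a data frame of the
# woodfortrees monthly anomalies with columns "year" (fractional) and "anomaly".
trend_per_year <- coef(lm(anomaly ~ year, data = uah))["year"]  # about 0.0066 C/yr
100 * trend_per_year                                            # about 0.66 C per century
```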
The warming effect of CO2 is obviously not a problem worthy of crow-barring our economy for.
The games being played with surface stations make their data unreliable, so I didn't use it.
Willis, very audacious! I doff my mortarboard to you.
You set out on an experiment to test the hypothesis, "Can I get close to the CCSM3 output by simply combining the input forcings using the simplest linear time-decay function?" Most would have said, "You're crazy." Most others, who appreciate that the CCSM3 is linear in its parts, would have said, "Maybe you can get close, but there will still be a lot of residual."
You just did it and knocked one out of the park! As a favorite professor of OR/MS, Gene Woolsey, says: “Do the dumb things first!”
Here is my take home:
Willis would be the first to say that predicting climate response to human and natural forcings is beyond his knowledge and power.
But Willis found a way to predict the output of a CCSM3 model! That IS apparently within his power!
So, if Willis can predict CCSM3, we are left with only two possible solution states:
A. Willis CAN predict tomorrow’s climate, or
B. CCSM3 CANNOT predict climate.
The Emperor’s threads are looking quite tattered.
Quick question – wouldn’t it be possible to replicate the observed 20C temperatures with any set of curves, using the same methodology?
Willis
Your analysis is correct.
Sadly enough, your conclusions are wrong 🙂 🙂
It is precisely because the CCSM3 so elegantly reduces the complex, interacting variables fed to it into a simple output that we should completely believe in it.
The logic goes like this:
[1] Your simple equation shows that the climate is essentially a simple first order responder to CO2
[2] The manifestly complex computer model which represents the entire climate system, the output of which resembles your simple equation, is the very proof that the climate is essentially a simple first-order responder to CO2
Where do you think the climate activists get their certainty from? Why do you think we are branded ‘deniers’?
It is because we can't get a thing as simple as this.
Here is some clear evidence of IPCC modeling bias!
Forecasting experts’ simple model leaves expensive climate models cold
Modeling problems: The naïve model approach versus the IPCC model forecasting procedures
The naïve model approach is confusing to non-forecasters who are aware that temperatures have always varied. Moreover, much has been made of the observation that the temperature series that the IPCC uses shows a broadly upward trend since 1850 and that this coincides with increasing industrialization and associated increases in manmade carbon dioxide gas emissions.
To test the naive model, we started with the actual global average temperature for the year 1850 and simulated making annual forecasts from one to 100 years after that date – i.e. for every year from 1851 to 1950. We then started with the actual 1851 temperature and made simulated forecasts for each of the next 100 years after that date – i.e. for every year from 1852 to 1951. This process was repeated over and over starting with the actual temperature in each subsequent year, up to 2007, and simulating forecasts for the years that followed (i.e. 100 years of forecasts for each series until after 1908 when the number of years in the temperature record started to diminish as we approached the present). This produced 10,750 annual temperature forecasts for all time horizons, one to 100 years, which we then compared with forecasts for the same periods from the IPCC forecasting procedures. It was the first time that the IPCC’s forecasting procedures had been subject to a large-scale test of the accuracy of their forecasts.
Over all the forecasts, the IPCC error was 7.7 times larger than the error from the naïve model.
http://www.copenhagenclimatechallenge.org/index.php?option=com_content&view=article&id=52&Itemid=55
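(For the curious, the naïve-model test described above is easy to sketch in R. "hadcrut_annual" below is a stand-in for whatever annual temperature series is used, and this is my reading of the procedure, not the authors' code.)

```r
# Sketch of the naive (persistence) forecast test as described above: for each
# starting year, the forecast at every horizon is simply that year's temperature,
# and the error is the difference from what actually happened.
naive_errors <- function(temps, max_horizon = 100) {
  n    <- length(temps)
  errs <- c()
  for (start in 1:(n - 1)) {
    horizon <- 1:min(max_horizon, n - start)
    errs    <- c(errs, abs(temps[start + horizon] - temps[start]))
  }
  errs
}

mean(naive_errors(hadcrut_annual))   # mean absolute error of the naive forecasts
```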
A Black Box of N-Rays
This little experiment Willis did on CCSM research reminds me of the story of the debunking of N-Rays by Robert Wood.
Wood suspected that N-rays were a delusion. To demonstrate this, he removed the prism from the N-ray detection device, unbeknownst to Blondlot or his assistant. Without the prism, the machine couldn't work. Yet, when Blondlot's assistant conducted the next experiment, he found N-rays. Wood then tried to surreptitiously replace the prism, but the assistant saw him and thought he was removing the prism. The next time the assistant ran the experiment, he swore he could not see any N-rays. But he should have, since the equipment was in full working order.
http://skepdic.com/blondlot.html
With this quote at the top of the page:
One of the first things I did with every graduate student who worked with me is to convince them how difficult it was to keep oneself from unconscious bias.* –Michael Witherell, head of Fermi National Accelerator Laboratory
another telling of the story: http://www.mikeepstein.com/path/nrays.html
@Roy Clark
“Using the ‘calibration factor’ from CO2, an increase in LWIR flux of 1.7 W.m-2 has produced an increase in temperature of 1 C. Linear scaling then indicates that a 1 Wm-2 increase in LWIR flux from any greenhouse gas or other ‘forcing’ agent’ must produce an increase in ‘equilbrium surface temperature’ of 2/3 C.”
I'm not sure what you mean by this. If the calibration assumes that a 1 °C increase in the temperature of the surface was caused by a 1.7 W/sq.m increase in IR incident upon and absorbed by the surface, then linear scaling would assert that a 1 W/sq.m increase would produce a 0.588 °C increase in surface temperature, not a 0.667 °C increase (1/1.7 = 0.588…).
“Since an increase of 1.7 W.m-2 of flux comes from a change in ‘surface temperature’ of 0.31 C at an average surface temperature of 15 C (288 K), the other 0.69 C must come from ‘water vapor feedback’.”
Then when you make the second quoted statement above I can't tell if you are saying that a 0.31 °C increase in surface temperature caused a 1.7 W/sq.m increase in surface IR emission, or whether you meant that a 0.31 °C increase in surface temperature was the effect of a 1.7 W/sq.m increase in surface IR absorption. That is: you speak of the flux "coming from" the 0.31 °C temperature change, and then you refer to a 0.69 °C temperature change "coming from" 'water vapor feedback', which is a flux.
Finally, I don't understand how you are relating the 0.69 °C difference between a 0.31 °C increase in surface temperature and a 1 °C increase in surface temperature to the figure of 1.7 W/sq.m in terms of "amplification".
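(For what it's worth, here is the back-of-envelope arithmetic that seems to lie behind the 0.31 °C figure, treating the surface as a blackbody near 288 K. This is my own reading of where the numbers come from, not necessarily Roy Clark's calculation.)

```r
# Rough check of the quoted numbers via the Stefan-Boltzmann relation dF = 4*sigma*T^3*dT.
sigma    <- 5.670e-8              # Stefan-Boltzmann constant, W m-2 K-4
Tsurf    <- 288                   # K, roughly 15 C
dF_per_K <- 4 * sigma * Tsurf^3   # about 5.4 W/m2 of emission per degree of warming
1.7 / dF_per_K                    # about 0.31 C for a 1.7 W/m2 change in emission
1 / 1.7                           # about 0.59 C per W/m2, the linear-scaling point above
```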
Stephen Singer says:
May 14, 2011 at 11:38 am
Margaret and Willis it appears to me to be a combination of a couple of small volcanoes just prior to and after 1970 along with the beginning of the large drop in ozone. You’ll note that after 1970 there were similar drops around a couple of large volcanoes as ozone continues to decline.
I have no plausible explanation for how the two small volcanoes and Ozone might be linked to the temp drop.
Is it not more likely that the large drop in UV radiation from the Sun led to the drop in ozone, as well as a drop in the altitude of the Top of the Atmosphere? It is the high-frequency energy that heats the oceans, not the low IR frequencies. So the ozone drop is just an indicator, not a cause.
You have to remember that the temperature calculation is done step-wise; it's not a smooth function because the forcings are empirically adjusted. I'd be curious to see a graph of the actual value of the total forcing function over time, and whether the various forcings are simply time-varying or coupled. Except of course the very evil CO2 😉 which we know is the root of all climate disaster and whose forcing must perforce rocket upward with time (/sarc). So, does a change in CO2 or water vapor also affect, say, the value of the forcing for methane? Or is everything coupled to CO2? Except for ozone, apparently.
Willis, this is a surprising result. The reason that it is surprising to me is that I know for a fact (first-hand knowledge) that the modelers are making substantial use of very large supercomputers. For example, NASA's Columbia system is a 10,240-processor NUMA machine with 10 terabytes of main memory; this is a big machine by any standard. I also know that a simulation run on this machine takes many hours, in other words, there's a massive number of cycles expended solving a very large number of computational fluid dynamics equations. Even without knowing the structure of the model, one would assume that the relationship of the inputs to the output would be a very complex function. What's surprising is that the output could be so closely emulated by a simple linear transformation consisting of a few variables. This is a stunning result. But I'm scratching my head to figure out how this could be. Any thoughts on this?
What would be expected from unbiased scientists is models which are sometimes high and sometimes low but all the models which have been published predict too much warming. Is that because of bias ? I think so.
Dr Hansen’s 1988 model was fairly accurate until 2007 but as of 2011 it is completely off the mark.
http://cstpr.colorado.edu/prometheus/archives/hansenscenarios.png
The temperature as of 2011 is lower than scenario "C", which assumed stringent CO2 curbs, even using the biased GISS numbers.
http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.txt
The CCSM 3.0 simulations were done in 2000 and already they are way too high.
The warming since 2000 has been at a rate of 0.06 °C per decade, which is puny. The predicted 0.4 °C of warming over the decade is more than six times that. These predictions are clearly "not skillful"!
EllisM says:
May 14, 2011 at 1:27 pm
Quick answer … no. The final curve is a very specific shape, which cannot in general be matched by “any set of curves” to a correlation of 0.995 as I have done above.
w.
Great work Willis. Could you please tell us then how the model manages to do forecasting when the data runs out? Clearly nearly all the core data are rising. Ozone looks like it turned around – where is it going now and will it cause cooling or warming??
“The final curve is a very specific shape, which cannot in general be matched by “any set of curves” to a correlation of 0.995 as I have done above.”
But wait – wouldn’t proper weighting, or normalization as you call it, and judicious choices for τ and λ result in the observed temp time series, given some random set of “input” curves?
Likewise, instead of using an admixture of the kinds of forcings (W/m2, volcanic aerosol mass, ppm, ppb, etc.), wouldn't it be better to use the computed radiative values for each forcing? CCSM3 is over 7 years old now; certainly someone somewhere has those numbers.
Shub Niggurath says:
May 14, 2011 at 1:30 pm
As clever a description of circular reasoning as I have seen. Thank You, Sir.
netdr says:
May 14, 2011 at 2:53 pm
0.06 °C/decade, and what is going on now, seems to imply that the warming has ended and is now rolling over into cooling. What we need is the rate of cooling in °C/decade.
Not to belittle Willis's excellent work (as always), but this is of course rather similar to what Dr. Jeff Glassman did last year over at Rocket Scientists Journal ('RSJ'), where he elegantly teased out the (amplified and lagged) solar signal in the official IPCC AR4 HadCRUT3 global temperature record of the last ~160 years.
http://rocketscientistsjournal.com/2010/03/sgw.html
Little known is that Jeff is the retired head of Hughes Aerospace Science Division and a world expert in signal deconvolution algorithms to whom, in a nice irony, every cell phone user owes an unconscious debt.
RSJ – the quietest sceptical blog in the world (and, again ironically, one of the very best).
Nothing anyone has said since (including most especially that self-appointed solar dogmatist Leif Svalgaard) has invalidated Jeff’s conclusion.
Almost every month now we are seeing mainstream literature papers which refine our understanding of the manifold ways in which solar forcing is amplified e.g.
http://www.mps.mpg.de/projects/sun-climate/papers/uvmm-2col.pdf
http://www.agu.org/pubs/crossref/2011/2010JA016220.shtml
Tamino decided to call it all “fake”, yeah that’s the ticket.
http://tamino.wordpress.com/2011/05/14/fake-forcing/
EllisM
I got a better idea.
Why don’t they show us how they got there….or have they?
Hi Willis,
The formula you give is incorrect (typo?) and should read
T(n+1) = T(n) + (lambda) dF(n+1)/tau + dT(n) exp(-1/tau)
where dF(n+1) = F(n+1) - F(n)
Nice post though. Thanks.
“Me, I find the idea of a linear connection between inputs and output in a complex, multiply interconnected, chaotic system like the climate to be a risible fantasy. It is not true of any other complex system that I know of. Why would climate be so simply and mechanistically predictable when other comparable systems are not?”
#1. The output you are fitting is a global average. That means a large amount of the regional variability is suppressed. If you looked at the work that real scientists do when they try to "model" the "model", they have to build much more complex emulations than you have done. In those versions they build models to capture the regional variations. This is known as predicting the HIGH DIMENSIONAL output of a GCM. In short, it is not at all surprising that the global average (low dimensional output) of a complex system is this simple. You're surprised; I'm not, and neither are the people who have tried to predict the high dimensional output.
#2. The output you are fitting is itself the average of individual runs. Again, you are fitting averages.
There is nothing surprising in any of this. We've known it for a while.
http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/05/2-linearity-of-the-forced-response/
CE Buck does some very nice work (she presented a great PPT at AGU)
http://www.noc.soton.ac.uk/pucm/
Here, Willis, is one way it's done, with 103 runs of a GCM.
http://www.newton.ac.uk/programmes/CLP/seminars/120711001.html
steven mosher says:
“#1. The output you are fitting is a global average. That means a large amount of the regional variability is suppressed. If you looked at the work that real scientists do…”
‘Real’ scientists?
rbateman says:
May 14, 2011 at 4:38 pm
netdr says:
May 14, 2011 at 2:53 pm
0.06 °C/decade, and what is going on now, seems to imply that the warming has ended and is now rolling over into cooling. What we need is the rate of cooling in °C/decade.
**********
I didn't claim cooling yet, but we are in the negative phase of the PDO, so wait a few years. In 5 years or much less there will be measurable cooling. [2011 is very cool so far.]
Models which include ocean currents show 20 to 30 years of slight cooling from now, which makes the overall rate of warming about 1/2 °C per century.
This study seems to me to have it pretty close to correct.
http://people.iarc.uaf.edu/~sakasofu/pdf/two_natural_components_recent_climate_change.pdf
Paraphrasing, it says the temperature is a 60-year sine wave caused by ocean currents, plus 1/2 °C per century from coming out of the Little Ice Age.
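(A toy illustration of that description, not the paper's actual fit, with an arbitrary amplitude and phase for the 60-year term:)

```r
# Toy illustration only: a 0.5 C/century recovery plus a 60-year oscillation.
year  <- 1850:2010
recov <- 0.005 * (year - 1850)                    # 0.5 C per century
osc   <- 0.1 * sin(2 * pi * (year - 1940) / 60)   # 60-year cycle, arbitrary amplitude/phase
plot(year, recov + osc, type = "l",
     xlab = "Year", ylab = "Temperature anomaly (C)")
```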
As opposed to the fake scientists like Willis, he means.