Guest Post by Willis Eschenbach
In my earlier post about climate models, “Zero Point Three Times The Forcing“, a commenter provided the breakthrough that allowed the analysis of the GISSE climate model as a black box. In a “black box” type of analysis, we know nothing but what goes into the box and what comes out. We don’t know what the black box is doing internally with the input that it has been given. Figure 1 shows the situation of a black box on a shelf in some laboratory.
Figure 1. The CCSM3 climate model seen as a black box, with only the inputs and outputs known.
A “black box” analysis may allow us to discover the “functional equivalent” of whatever might be going on inside the black box. In other words, we may be able to find a simple function that provides the same output as the black box. I thought it might be interesting if I explain how I went about doing this with the CCSM3 model.
First, I went and got the input variables. They are all in the form of “ncdf” files, a standard format that contains both data and metadata. I converted them to annual or monthly averages using the computer language “R”, and saved them as text files. I opened these in Excel, and collected them into one file. I have posted the data up here as an Excel spreadsheet.
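For anyone who wants to replicate that conversion step, here is a minimal sketch in R. The file name, variable name, and output name are placeholders of mine for illustration, not the actual CCSM3 names:

# Minimal sketch: read a NetCDF forcing file and write annual means to text.
# "co2_forcing.nc" and "co2" are placeholder names, not the CCSM3 originals.
library(ncdf4)

nc  <- nc_open("co2_forcing.nc")   # open the NetCDF (ncdf) file
co2 <- ncvar_get(nc, "co2")        # extract the monthly data variable
nc_close(nc)

# collapse monthly values to annual averages (assumes whole years of data)
stopifnot(length(co2) %% 12 == 0)
years  <- rep(seq_len(length(co2) / 12), each = 12)
annual <- tapply(co2, years, mean)

write.table(annual, "co2_annual.txt", row.names = FALSE, col.names = FALSE)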
Next, I needed the output. The simplest place to get it was the graphic located here. I digitized that data using a digitizing program (I use “GraphClick”, on a Mac computer).
My first procedure in this kind of exercise is to “normalize” or “standardize” the various datasets. This means adjusting each one so that its average is zero and its standard deviation is one. I use the Excel function “STANDARDIZE” for this purpose. This puts all of the data on a common scale so it can be compared by eye. Figure 2 shows those results.
Figure 2. Standardized forcings used by the CCSM 3.0 climate model to hindcast the 20th century temperatures. Dark black line shows the temperature hindcast by the CCSM3 model.
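As an aside, for anyone working outside of Excel, the same standardization is one line in R. A minimal sketch, where “forcings” is a data frame of the forcing series (a name of mine, for illustration):

# Standardize each column to mean zero and standard deviation one,
# the R equivalent of Excel's STANDARDIZE function.
standardized <- as.data.frame(scale(forcings))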
Looking at that, I could see several things. First, the CO2 data has the same general shape as the sulfur, ozone, and methane (CH4) data. Next, the effects of the solar and volcano data were clearly visible in the temperature output signal. This led me to believe that the GHG data, along with the solar and the volcano data, would be enough to replicate the model’s temperature output.
And indeed, this proved to be the case. Using the Excel “Solver” function, I fit the formula which (as mentioned above) had been developed through the analysis of the GISS model. This is:
T(n+1) = T(n) + λ ∆F(n+1) × (1 − exp(−1/τ)) + ∆T(n) × exp(−1/τ)
OK, now let’s render this equation in English. It looks complex, but it’s not.
T(n) is pronounced “T sub n”. It is the temperature “T” at time “n”. So T sub n plus one, written as T(n+1), is the temperature during the following time period. In this case we’re using years, so it would be the next year’s temperature.
F is the forcing, in watts per square metre. This is the total of all of the forcings under consideration. The same time convention is followed, so F(n) means the forcing “F” in time period “n”.
Delta, or “∆”, means “the change in”. So ∆T(n) is the change in temperature since the previous period, or T(n) minus the previous temperature T(n-1). ∆F(n), correspondingly, is the change in forcing since the previous time period.
Lambda, or “λ”, is the climate sensitivity. Tau, or “τ”, is the lag time constant; it sets how much the system’s temperature response lags behind a change in forcing. Finally, “exp(x)” means the number e (about 2.71828) raised to the power of x.
So in English, this means that the temperature next year, T(n+1), is equal to the temperature this year, T(n), plus the immediate temperature increase due to the change in forcing, λ ∆F(n+1) × (1 − exp(−1/τ)), plus the lag term ∆T(n) × exp(−1/τ) carried over from the previous forcing. The lag term is necessary because the effects of changes in forcing are not instantaneous.
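To make the recursion concrete, here is a minimal sketch of that one-line model in R. The function name and arguments are mine for illustration; F is any series of total forcings in W/m2:

# One-line lagged model:
#   T(n+1) = T(n) + lambda * dF(n+1) * (1 - exp(-1/tau)) + dT(n) * exp(-1/tau)
emulate <- function(F, lambda, tau, T0 = 0) {
  T <- numeric(length(F))
  T[1] <- T0
  a <- 1 - exp(-1 / tau)  # fraction of a forcing change realized immediately
  b <- exp(-1 / tau)      # fraction of last year's change still in the pipeline
  for (i in 2:length(F)) {
    dF <- F[i] - F[i - 1]                        # this year's change in forcing
    dT <- if (i > 2) T[i - 1] - T[i - 2] else 0  # last year's temperature change
    T[i] <- T[i - 1] + lambda * dF * a + dT * b
  }
  T
}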
Figure 3 shows the final result of that calculation. I used only a subset of the forcings, which were the greenhouse gases (GHGs), the solar, and the volcanic inputs. The size of the others is quite small in terms of forcing potential, so I neglected them in the calculation.
Figure 3. CCSM3 model functional equivalent equation, compared to actual CCSM3 output. The two are almost identical.
As with the GISSE model, we find that the CCSM3 model also slavishly follows the lagged input. The match once again is excellent, with a correlation of 0.995. The values for lambda and tau are also similar to those found during the GISSE investigation.
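For those who would rather not use Excel, the same fit takes a couple of lines in R with the optim() function. A minimal sketch, assuming the emulate() function sketched above, a total-forcing series F, and the digitized CCSM3 output ccsm3 of the same length:

# fit lambda and tau by minimizing the sum of squared errors between
# the one-line emulator and the digitized CCSM3 temperature output
sse <- function(p) sum((emulate(F, lambda = p[1], tau = p[2]) - ccsm3)^2)

fit <- optim(c(0.3, 3), sse)   # starting guesses for lambda and tau
fit$par                        # the fitted values of lambda and tau
cor(emulate(F, fit$par[1], fit$par[2]), ccsm3)   # compare with the 0.995 above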
So what does all of this mean?
Well, the first thing it means is that, just as with the GISSE model, the output temperature of the CCSM3 model is functionally equivalent to a simple, one-line lagged linear transformation of the input forcings.
It also implies that, given that the GISSE and CCSM3 models function in the same way, it is very likely that we will find the same linear dependence of output on input in other climate models.
(Let me add in passing that the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975 … as did the GISSE model.)
Now, I suppose that if you think the temperature of the planet is simply a linear transformation of the input forcings plus some “natural variations”, those model results might seem reasonable, or at least theoretically sound.
Me, I find the idea of a linear connection between inputs and output in a complex, multiply interconnected, chaotic system like the climate to be a risible fantasy. It is not true of any other complex system that I know of. Why would climate be so simply and mechanistically predictable when other comparable systems are not?
This all highlights what I see as the basic misunderstanding of current climate science. The current climate paradigm, as exemplified by the models, is that the global temperature is a linear function of the forcings. I find this extremely unlikely, from both a theoretical and practical standpoint. This claim is the result of the bad mathematics that I have detailed in “The Cold Equations”. There, erroneous substitutions allow them to cancel everything out of the equation except forcing and temperature … which leads to the false claim that if forcing goes up, temperature must perforce follow in a linear, slavish manner.
As we can see from the failure of both the GISS and the CCSM3 models to replicate the post 1945 cooling, this claim of linearity between forcings and temperatures fails the real-world test as well as the test of common sense.
w.
TECHNICAL NOTES ON THE CONVERSION TO WATTS PER SQUARE METRE
Many of the forcings used by the CCSM3 model are given in units other than watts/square metre. Various conversions were used.
The CO2, CH4, N2O, CFC-11, and CFC-12 values were converted to W/m2 using the simplified formulas of Myhre et al., as given in their Table 3.
Solar forcing was converted to an equivalent global average forcing by dividing by 4, the ratio of the Earth’s surface area to its cross-sectional disc.
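For example, Myhre’s simplified expression for CO2 forcing is ∆F = 5.35 × ln(C/C0) W/m2. A minimal R sketch of these two conversions (the 280 ppmv baseline is my assumption for illustration):

# Myhre et al. simplified expression for CO2 forcing in W/m2;
# C0 is the baseline concentration in ppmv (280 assumed for illustration)
co2_forcing <- function(C, C0 = 280) 5.35 * log(C / C0)

# spread a change in total solar irradiance over the whole sphere
solar_forcing <- function(dTSI) dTSI / 4

co2_forcing(560)   # a doubling from 280 ppmv gives ~3.7 W/m2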
The volcanic effect, which CCSM3 gives in total tonnes of mass ejected, has no standard conversion to W/m2. As a result we don’t know what volcanic forcing the CCSM3 model used. Accordingly, I first matched their data to the same W/m2 values as used by the GISSE model. I then adjusted the values iteratively to give the best fit, which resulted in the “Volcanic Adjustment” shown above in Figure 3.
[UPDATE] Steve McIntyre pointed out that I had not given the website for the forcing data. It is available here (registration required; the download is a couple of gigabytes).
Willis’ analysis shows that the model he is emulating (not the real climate) does not have any “tipping points”, because such points cannot exist in a linear system. The lack of multidecadal oscillations also suggests the GCM is missing something.
A very simple formula can simulate the climate model’s output.
I downloaded the A1B monthly forecast going out to the year 2100 from the 23 climate models used in the IPCC’s AR4 report.
Other than the fact that the ensemble mean projects a large increase in the annual seasonal cycle (the Earth’s average temperature through the year, which peaks in July), it is primarily a function of CO2. The projections do not spontaneously pop out of a simulation of the climate; the result is programmed in. Going out to 2100, the A1B climate simulation result is very closely related to the projected A1B CO2 levels.
http://imageshack.us/m/831/1395/ipccar4a1bmonthlypredic.png
For March 2011, the ensemble mean projection was about +0.58°C while HadCRUT3 was at +0.318°C.
Shub Niggurath says:
May 15, 2011 at 3:29 am
Mosher
“The need to create and maintain complex computer models of the climate system, the whole justification for the exercise, is the unstated assumption or claim that the model is a good replication of a climate system because it is representationally irreducible.”
Brilliant Post. A new level of sophistication in discussions of models. Keep up the good work.
I really do not want to see Willis’ post hijacked. For that reason, I must insist that there is a Bottom Line here. Willis has shown that Warmista models are computationally equivalent to a simple linear transformation on inputs. That is Willis’ contribution to the debate. Those who wish to defend the Warmista must now present some information about their model(s) which shows how models that really are complicated and refined in their representation of the forces that determine climate can nonetheless reduce to a simple linear transformation. No one commenting on this site has attempted this. For anyone wanting a formal statement of this point, look at Shub’s post above. He spells out formally, in the simplest terms, what must be accomplished. Warmista, the ball is in your court.
Forgive me, but..
1 – comparing your output to theirs using a digitization of one of their graphs is absurd. That graphical representation incorporated rounding effects, so did the print/image, and so did your digitizer. And no, averaging accumulates error; it does not magically cancel out.
2 – I looked carefully at the innards of one of the major models about a year ago. There was 1960s Fortran IV in there – along with data structures from that same era, and linearity assumptions imposed by the limitations of the 1401/360 machines they were writing for.
Very obviously what had happened was that people moved from job to job carrying card decks (boxes and later tapes) encoding their relative competitive advantage in climate modeling – i.e. old models get perpetuated.
Then, of course, the results don’t match – so in goes a new bit adjusting something to make the numbers come out right. And then another grad student works on it, but doesn’t grok the original structure so.. etc etc etc
The net result? Several hundred thousand lines (including libs), most of which do nothing more than perpetuate forgotten fantasies – strip away the detritus, especially the counteracting code bits put in at different times, and what you get is a pretty simplistic formulation.
Hi Willis… if forcings are realized as per the Newtonian view (exp( -1 / τ )), then the amount of “heat in the pipeline” has converged on a limit.
This is like an annuity where you deposit $100 the first year, $110 the next, then $120 and so on. If at the same time you have a negative interest rate (2% management fee), then after some time your balance will reach an upper limit where your deposits are matched by the fees.
If that’s the case, then we can estimate sensitivity by simply regressing ln(CO2) against temperature.
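A minimal sketch in R, assuming annual series temp and co2 (names made up for illustration):

# regress temperature on ln(CO2); the slope times ln(2) then estimates
# the warming per doubling of CO2, under the convergence assumption above
fit <- lm(temp ~ log(co2))
unname(coef(fit)[2]) * log(2)   # implied degrees C per CO2 doubling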
netdr, I was looking at the IPCC report AR4 from WG1. It has a graph. Bill Illis above posted something similar. I think the models underestimated aerosol effects for the early 2000’s, possibly because the 90’s had a rapid temperature rise due to decreasing aerosol effects, and when they increase aerosols later the rate of temperature increase drops. Aerosols are important to predict correctly for near-future decades, but can only mask AGW for a limited time without unacceptable pollution levels occurring.
People haven’t realized that Willis removed the CCSM3 multi-decadal signals by taking an ensemble average of multiple CCSM3 runs. He therefore found that climate forcing does indeed force climate in these models, and that this is independent of internal variations.
Willis,
did you really just call your two previous posts “papers”? I’m pretty sure for them to be papers they’d have to be published, and if you’re so confident in your results I’m sure you can publish them somewhere. But if you’re afraid that there is some AGW bias at some of the journals, then I’m sure E&E could accept you…
Jim D says:
May 15, 2011 at 11:13 am
People haven’t realized that Willis removed the CCSM3 multi-decadal signals by taking an ensemble average of multiple CCSM3 runs.
So why don’t they just publish the “correct” runs?
Paul Murphy says:
May 15, 2011 at 10:35 am
Anyone who starts with “Forgive me, but …” gets no forgiveness from me.
No, making claims about errors without investigating the ACTUAL SIZE of the errors is absurd. Yes, there are errors in the process. But I’m not stoopid, you know. So I’ve done a number of tests of my digitization process and the errors are meaninglessly small. This is because the image of the data is created digitally, and if the digitization is done carefully the accuracy is typically on the order of the width of the line used to create the graph. I know this because, unlike you, I’ve actually done and tested the process.
And if you look at any graph, including the one I digitized (expand your browser picture to get a better view), an error the size of the width of a thin line is as meaningless as your claim that using digitized data is “absurd”.
w.
Robert says:
May 15, 2011 at 11:16 am
Oh, piss off with your ad hominem attacks. There is no formal distinction for “papers”, at least in my world, but you can call them “essays” or “studies” or “investigations” or whatever floats your boat.
And have you published something peer-reviewed on climate in Nature? Because I have. I’ve also published peer reviewed articles in E&E, and the peer review there was as probing and serious as that at Nature.
In other words, other than incorrect assertions and ugly insinuations that have nothing to do with the science … you have nothing to say.
w.
Jim D says:
May 15, 2011 at 11:09 am
netdr, I was looking at the IPCC report AR4 from WG1. It has a graph. Bill Illis above posted something similar. I think the models underestimated aerosol effects for the early 2000’s, possibly because the 90’s had a rapid temperature rise due to decreasing aerosol effects, and when they increase aerosols later the rate of temperature increase drops. Aerosols are important to predict correctly for near-future decades, but can only mask AGW for a limited time without unacceptable pollution levels occurring.
***************
Thanks, I checked the post and it seems to predict 0.2°C of warming from 2005 to 2011, which didn’t happen.
I think aerosols are being used as a “fudge factor” to explain why warming isn’t happening as expected. At some time in the future the aerosols are removed [from the simulation] and the temperature [simulation] climbs rapidly.
The time is far enough in the future that the forecasters will be safely retired.
A better explanation, it seems to me, is that there was a negative PDO from 1940 to 1978, and this caused an excess of La Ninas over El Ninos. Check the chart below.
http://rankexploits.com/musings/2008/nasa-says-pdo-switched-to-cold-phase/
From 1978 to 1998 the PDO switched to positive and the temperature went up.
From 1998 to the present there have been an equal number of El Ninos and La Ninas.
The overall temperature change during that time is essentially zero.
http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml
The correspondence of temperature and the El Nino/La Nina balance is astounding.
1999 and 2000 were La Nina years, as was 2008.
The El Nino/La Nina balance acts like the first derivative of temperature: when there are more La Ninas, the temperature goes down.
In between it was all El Ninos, and the temperature went up. The general shape is an inverted “U”.
It looks to me like CO2 did almost nothing.
Jimd,
Actually, the cooling effect of aerosols has been overestimated. See http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.161.571&rep=rep1&type=pdf
Willis,
I do hope you publish this analysis in a high impact journal. I just read your earlier blog post Zero Point Three times the Forcing and the comments by Paul_K and I hope Paul_K becomes a co-author of the paper.
I would also love to see a blog post on the implications of your analysis. Several commenters have expressed their views of the most important implications.
Keep up the great work!
PAPER: pa·per (pā′pər); n.
…3a. A formal written composition intended to be published, presented, or read aloud; a scholarly essay or treatise.
Formal? Check
Intended to be published? It was published here, so Check
Scholarly? Check
Essay or treatise? Check
Ok, looks like it’s a paper.
Actually the cooling effect of variations in the Sun’s activity and the geomagnetic field strength has clearly been underestimated:
“An interesting question is what role the Sun is going to play in the near future. The 9300-year long composite of solar activity (Steinhilber et al., 2008) shows that during the past six decades the Sun has been in a state of high solar activity compared to the entire period of 9300 years. The statistics of the occurrence of periods of high activity suggests that the current period of high activity will come to an end in the next decades (Abreu et al., 2008). Furthermore, the distribution of grand solar minima in the past 9300 years shows that it is likely that a Maunder Minimum-like period would occur around 2100 AD (Abreu et al., 2010). Such a period of low solar activity would probably lead to a temporary reduction in Earth’s temperature rise due to the anthropogenic greenhouse effect. However, the 9300-year long record shows that in the past a grand maximum has always been followed by a period of high activity; with the very likely assumption that the Sun’s future behavior will be similar to that of the past, it is clear that the Sun will not permanently compensate for human-made global warming.”
http://www.pages-igbp.org/download/docs/Steinhilber%20and%20Beer_2011-1%285-6%29.pdf
Despite the obligatory nod to the ruling AGW ‘consensus’ at the end of the above quote, note that this nod also ignores the fact that, based on the experience of the last million-plus years, the current (Holocene) interglacial could just as easily begin to terminate any time real soon now… or is poor Milankovitch just another nasty old sceptic too?
Tamino seems to have pulled his “fake forcing” post cited by Anthony in comments above. At least when I try, I get the Open Mind blog but the page is completely blank.
The post over at Tamino’s is still there.
REPLY: yes, the only ones he deletes are the ones that he loses control of – Anthony
Anthony, it’s good to know that you’re better than Tamino in not deleting anything. That puts you one up on him!
REPLY: No please don’t put words in my mouth. I never said that I was “better”, only that he’s deleted some posts that he can’t control. – Anthony
I posted over at Tamino’s too just for the fun of it.
I am razzing them because their explanation for the cooling from 1940 to 1978 is so lame.
They claim that aerosols just conveniently mimic the 60-year PDO cycle?
I don’t believe it !
Okay, the post at Tamino’s is still there but I have to scroll down to see it. Does anyone else have that problem? Or is it just Google Chrome users?
Anyway, I skimmed through Tamino’s writeup and he quotes from Willis’s first post but not much from his second and more correct post. Tamino is saying Paul_K suggested fake data (using only 72.4% of volcanic forcing), but he doesn’t quote Paul_K saying that. I used the find function on my browser and it did not come up.
I don’t get what Tamino thinks he is proving when he writes “I can do that too.” So he is proving Willis’s work is reproducible. And he thinks that is a bad thing?
Can’t wait to see Willis’s withering response!
Topical topic:
http://www.sciencedaily.com/releases/2011/05/110512104220.htm
Considering that Joseph Postma’s “Thermodynamic Atmosphere Effect” paper appears to more than adequately explain temperatures on Earth, perhaps Willis could build a little thermodynamic-atmosphere model, using the inputs he has that affect the relevant parameters in Postma’s equations, to see how that runs?
Theo:
“Explicate the model for the general public.”
If you want to know what a GCM does… READ THE CODE. The first time I looked at ModelE was 2007. It’s not that hard or that long. Get started.
I’ve linked to a presentation and a PDF of the process of emulating the higher-order output of a GCM. It’s not that hard; read it and watch the video.
Jim D
“People haven’t realized that Willis removed the CCSM3 multi-decadal signals by taking an ensemble average of multiple CCSM3 runs. He therefore found that climate forcing does indeed force climate in these models, and that this is independent of internal variations.”
Err, I believe I’ve pointed that out.