Guest Post by Willis Eschenbach
In my earlier post about climate models, “Zero Point Three Times The Forcing”, a commenter provided the breakthrough that allowed the analysis of the GISSE climate model as a black box. In a “black box” type of analysis, we know nothing but what goes into the box and what comes out. We don’t know what the black box is doing internally with the input that it has been given. Figure 1 shows the situation of a black box on a shelf in some laboratory.
Figure 1. The CCSM3 climate model seen as a black box, with only the inputs and outputs known.
A “black box” analysis may allow us to discover the “functional equivalent” of whatever might be going on inside the black box. In other words, we may be able to find a simple function that provides the same output as the black box. I thought it might be interesting if I explain how I went about doing this with the CCSM3 model.
First, I went and got the input variables. They are all in the form of NetCDF files, a standard format that contains both data and metadata. I converted them to annual or monthly averages using the computer language “R”, and saved them as text files. I opened these in Excel and collected them into one file. I have posted the data up here as an Excel spreadsheet.
Next, I needed the output. The simplest place to get it was the graphic located here. I digitized that data using a digitizing program (I use “GraphClick”, on a Mac computer).
My first procedure in this kind of exercise is to “normalize” or “standardize” the various datasets, adjusting each one so that its average is zero and its standard deviation is one. I use the Excel function “STANDARDIZE” for this purpose. This lets me view all of the data on a common scale. Figure 2 shows those results.
Figure 2. Standardized forcings used by the CCSM 3.0 climate model to hindcast the 20th century temperatures. Dark black line shows the temperature hindcast by the CCSM3 model.
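For anyone who wants to replicate the standardization step outside of Excel, here is a minimal Python sketch of the same calculation (the numbers shown are illustrative, not the actual CCSM3 forcings):

```python
def standardize(series):
    """Rescale a series to mean 0 and standard deviation 1 -- the same
    job as Excel's STANDARDIZE function (population standard deviation
    used here)."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    return [(x - mean) / sd for x in series]

# Illustrative numbers only -- not the CCSM3 forcing data.
z = standardize([0.1, 0.2, 0.4, 0.8, 1.6])
```

Once every forcing is on this common scale, their shapes can be compared directly, which is what Figure 2 does.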
Looking at that, I could see several things. First, the CO2 data has the same general shape as the sulfur, ozone, and methane (CH4) data. Next, the effects of the solar and volcano data were clearly visible in the temperature output signal. This led me to believe that the GHG data, along with the solar and the volcano data, would be enough to replicate the model’s temperature output.
And indeed, this proved to be the case. Using the Excel “Solver” function, I used the formula which (as mentioned above) had been developed through the analysis of the GISS model. This is:
T(n+1) = T(n) + λ ∆F(n+1) × (1 − exp(−1/τ)) + ∆T(n) × exp(−1/τ)
OK, now let’s render this equation in English. It looks complex, but it’s not.
T(n) is pronounced “T sub n”. It is the temperature “T” at time “n”. So T sub n plus one, written as T(n+1), is the temperature during the following time period. In this case we’re using years, so it would be the next year’s temperature.
F is the forcing, in watts per square metre. This is the total of all of the forcings under consideration. The same time convention is followed, so F(n) means the forcing “F” in time period “n”.
Delta, or “∆”, means “the change in”. So ∆T(n) is the change in temperature since the previous period, or T(n) minus the previous temperature T(n-1). ∆F(n), correspondingly, is the change in forcing since the previous time period.
Lambda, or “λ”, is the climate sensitivity. Tau, or “τ”, is the lag time constant, which establishes how much the response of the system lags behind the forcing. Finally, “exp(x)” means e, the number 2.71828…, raised to the power of x.
So in English, this means that the temperature next year, T(n+1), is equal to the temperature this year, T(n), plus the immediate temperature change due to the change in forcing, λ ∆F(n+1) × (1 − exp(−1/τ)), plus the lag term, ∆T(n) × exp(−1/τ), carried over from the previous forcing. The lag term is necessary because the effects of changes in forcing are not instantaneous.
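The equation is easy to step forward in code. Here is a short Python sketch (my own illustration; the forcing series and parameter values are made up):

```python
import math

def lagged_response(forcings, lam, tau, t0=0.0):
    """Step the functional-equivalent equation forward one year at a time:
       T(n+1) = T(n) + lam * dF(n+1) * (1 - exp(-1/tau)) + dT(n) * exp(-1/tau)
    where dF(n+1) = F(n+1) - F(n) and dT(n) = T(n) - T(n-1)."""
    immediate = 1.0 - math.exp(-1.0 / tau)  # fraction of response felt at once
    carryover = math.exp(-1.0 / tau)        # fraction carried in the lag term
    temps = [t0]
    for n in range(1, len(forcings)):
        dF = forcings[n] - forcings[n - 1]
        dT = temps[-1] - (temps[-2] if len(temps) > 1 else t0)
        temps.append(temps[-1] + lam * dF * immediate + dT * carryover)
    return temps

# A single 1 W/m2 step in forcing: the temperature relaxes toward
# lam * dF = 0.3, i.e. the equilibrium response is lambda times the forcing.
step = lagged_response([0.0] + [1.0] * 200, lam=0.3, tau=3.0)
```

Note that after a step change in forcing, the temperature climbs gradually and levels off at λ times the forcing change, which is exactly the behavior the lag term is there to produce.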
Figure 3 shows the final result of that calculation. I used only a subset of the forcings, which were the greenhouse gases (GHGs), the solar, and the volcanic inputs. The size of the others is quite small in terms of forcing potential, so I neglected them in the calculation.
Figure 3. CCSM3 model functional equivalent equation, compared to actual CCSM3 output. The two are almost identical.
As with the GISSE model, we find that the CCSM3 model also slavishly follows the lagged input. The match once again is excellent, with a correlation of 0.995. The values for lambda and tau are also similar to those found during the GISSE investigation.
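The Solver fitting step can be sketched in Python as a crude grid search: build a target series with known λ and τ, then find the parameter pair that minimizes the sum of squared errors. (This illustrates the fitting procedure only; it is not the actual spreadsheet, and the forcing series is made up.)

```python
import math

def lagged(forcings, lam, tau):
    """The one-line lagged equation from the post."""
    a, b = 1.0 - math.exp(-1.0 / tau), math.exp(-1.0 / tau)
    temps = [0.0]
    for n in range(1, len(forcings)):
        dF = forcings[n] - forcings[n - 1]
        dT = temps[-1] - (temps[-2] if len(temps) > 1 else 0.0)
        temps.append(temps[-1] + lam * dF * a + dT * b)
    return temps

# Made-up forcing: a slow ramp plus a brief volcano-like pulse.
forcing = [0.01 * n + (0.5 if 40 < n < 45 else 0.0) for n in range(100)]
target = lagged(forcing, lam=0.30, tau=3.0)  # stand-in for the model output

# Grid search over (lambda, tau), minimizing the sum of squared errors --
# the same job Excel's Solver does, done the brute-force way.
best = min(((l / 100.0, t / 10.0)
            for l in range(10, 61) for t in range(10, 81)),
           key=lambda p: sum((m - y) ** 2
                             for m, y in zip(lagged(forcing, *p), target)))
```

Here the search recovers λ = 0.30 and τ = 3.0 exactly, because the target was built from the same equation; with real model output there is a residual error floor instead, and the quality of the fit is the interesting result.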
So what does all of this mean?
Well, the first thing it means is that, just as with the GISSE model, the output temperature of the CCSM3 model is functionally equivalent to a simple, one-line lagged linear transformation of the input forcings.
It also implies that, given that the GISSE and CCSM3 models function in the same way, it is very likely that we will find the same linear dependence of output on input in other climate models.
(Let me add in passing that the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975 … as did the GISSE model.)
Now, I suppose that if you think the temperature of the planet is simply a linear transformation of the input forcings plus some “natural variations”, those model results might seem reasonable, or at least theoretically sound.
Me, I find the idea of a linear connection between inputs and output in a complex, multiply interconnected, chaotic system like the climate to be a risible fantasy. It is not true of any other complex system that I know of. Why would climate be so simply and mechanistically predictable when other comparable systems are not?
This all highlights what I see as the basic misunderstanding of current climate science. The current climate paradigm, as exemplified by the models, is that the global temperature is a linear function of the forcings. I find this extremely unlikely, from both a theoretical and practical standpoint. This claim is the result of the bad mathematics that I have detailed in “The Cold Equations“. There, erroneous substitutions allow them to cancel everything out of the equation except forcing and temperature … which leads to the false claim that if forcing goes up, temperature must perforce follow in a linear, slavish manner.
As we can see from the failure of both the GISS and the CCSM3 models to replicate the post 1945 cooling, this claim of linearity between forcings and temperatures fails the real-world test as well as the test of common sense.
w.
TECHNICAL NOTES ON THE CONVERSION TO WATTS PER SQUARE METRE
Many of the forcings used by the CCSM3 model are given in units other than watts/square metre. Various conversions were used.
The CO2, CH4, N2O, CFC-11, and CFC-12 values were converted to W/m2 using the formulas of Myhre et al. as given in their Table 3.
Solar forcing was converted to equivalent average forcing by dividing by 4.
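These two conversions are simple enough to show in code. Below is a Python sketch using the widely quoted Myhre et al. simplified expression for CO2, ΔF = 5.35 ln(C/C0); the 278 ppm pre-industrial baseline is my illustrative assumption, not a value taken from the CCSM3 files:

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Myhre et al. simplified expression for CO2 forcing:
       dF = 5.35 * ln(C / C0), in W/m2.
    The 278 ppm default baseline is an assumption for illustration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def solar_to_average_forcing(tsi_change):
    """Divide a TSI change by 4: the Earth intercepts sunlight on a disk
    (pi r^2) but has a surface area of 4 pi r^2. (Albedo is ignored here.)"""
    return tsi_change / 4.0
```

A doubling of CO2 gives 5.35 × ln(2) ≈ 3.7 W/m2, the familiar figure.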
The volcanic effect, which CCSM3 gives in total tonnes of mass ejected, has no standard conversion to W/m2. As a result we don’t know what volcanic forcing the CCSM3 model used. Accordingly, I first matched their data to the same W/m2 values as used by the GISSE model. I then adjusted the values iteratively to give the best fit, which resulted in the “Volcanic Adjustment” shown above in Figure 3.
[UPDATE] Steve McIntyre pointed out that I had not given the website for the forcing data. It is available here (registration required; the download is a couple of gigabytes).
“David Wells says:
May 14, 2011 at 5:13 am
My understanding is that a computer is incapable of generating a random number. If that is true, then whatever information you input, the result would always be something other than a random/chaotic number; therefore trying to model a climate which is by nature chaotic is not possible.”
Strictly speaking, computer “random numbers” are pseudo-random numbers generated by various algorithms. But pseudo-random really is good enough for most purposes, and you have the option of “seeding” them with some more random number generated from, for example, the time needed to respond to an operator prompt. Or you can just pick a number from a table of actual random numbers generated, as I recall, by digitizing white noise. For more information than you really want to know about pseudo-random numbers, Donald Knuth devoted a lot of space in one of the volumes of “The Art of Computer Programming” to their generation and use.
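To illustrate the point about determinism and seeding, a small Python sketch:

```python
import random
import os

# A pseudo-random generator is deterministic: the same seed always
# reproduces the same sequence (Python uses the Mersenne Twister).
gen1 = random.Random(42)
gen2 = random.Random(42)
seq1 = [gen1.random() for _ in range(5)]
seq2 = [gen2.random() for _ in range(5)]
# seq1 and seq2 are identical, because the seed fixed the internal state.

# For a less predictable starting point, seed from operating-system
# entropy rather than a constant:
gen3 = random.Random(int.from_bytes(os.urandom(8), "big"))
```

The sequence passes the usual statistical tests for randomness, but it is entirely reproducible from the seed, which is exactly what modelers want for repeatable runs.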
Hmm, I can see one thing wrong right off the bat – the assumption that this year’s temperature is predicated on last year’s. No wonder this always builds up over time.
What if there is no correlation to last year’s temperature? What if this year’s temperature is independent, and actually due to the current TSI alone? What if the lag time from TSI (due to the enormous heat sink which are our oceans) is what causes the slow response across years?
What if the natural balance point of the Earth is the (internal heat + TSI) – (total energy radiation rate to space), where “radiation rate” is both the atmospheric rate and the much slower heat transfer from the oceans to atmosphere?
I see nothing in any model which can conclude each year’s temperature is driven by the previous year’s temperature. I can see how radiation rates can be driven by processes which span years, where heat trapped takes years to dissipate.
But this rate could fluctuate based on the TSI changes. Small changes result in small radiation rate changes. But a huge drop in TSI could accelerate the loss, while a huge increase in heat (either TSI, TSI capture due to volcanoes, or a spike in heat escaping from the core) could take years to bleed off (such as the 1998 spike).
Is prior year temp just a lousy representation for excess heat left over from a prior time period? How does this function work when the prior year is much cooler? Does it still force upwards due to more GHG (something now proven to be wrong)?
The most important input was omitted, money fueled by special interests.
onion2 says:
May 14, 2011 at 4:59 am
“Other examples abound.”
What you give as “examples” are exactly not examples of complex systems with feedbacks (mind you, the existence of life itself proves that negative feedbacks dominate; some of them created by life itself, for instance the obvious CO2-regulating features of vegetation). So expecting the planet’s climate to behave like a simple one-line linear transformation speaks volumes about the childish mental state of the Institute of Professional Corrupt Collusionists (did I get the long form of IPCC right? Hope so.)
Michael J says:
May 14, 2011 at 6:28 am
“Willis: In fairness you haven’t really proven anything. You were able to fit the model data to a linear equation, but that doesn’t mean that the original model used that equation. They may have used a much more complex model and your result may be simply coincidental. Then again, they may not. We don’t have enough information to tell. ”
Willis has proven that the original model is practically functionally equivalent with a simple transformation of the input forcings, no more and no less – so for all practical considerations it can be substituted with the simple transformation.
There *might* be a possibility that the complex models develop a deviating behaviour sometime in the future *but* as they are validated by assessing their hindcasting (which *is* functionally equivalent to Willis’ transformation) such a deviating behaviour would come as a surprise even for the authors of the complex model!
In other words, the future projections of the IPCC *must* conform to Willis’ simple model as well!
Well this should send some of the True Believers into fits.
I think I can hear the first Tamino bleatings . . . . Incoming !
onion2 writes “The system might be chaotic but the global surface temperature element of it might be highly deterministic on energy balance and so can be expressed as simple equation. In fact that makes complete sense does it not?”
So if it could be shown that the climate behaved non-linearly, then that would be a strong argument for falsifying the models wouldn’t it?
David Wells says:
May 14, 2011 at 5:13 am
This is why cryptography uses a seed for pseudorandom number generation where the seed must contain sufficient entropy to provide an assurance of randomness.
See NIST Special Publication 800-90A, Rev 1 at: http://csrc.nist.gov/publications/drafts/800-90/Draft_SP800-90A-Rev1_May-2011.pdf
Wonderful post, Willis, despite posters such as Onion, who obviously try desperately hard to understand stuff, throwing metaphorical rocks at you for demonstrating that one of the icons of Warmism is merely a piece of applied snake-oil salesmanship.
Willis,
Your PC has the computing power of Crays from decades ago.
The importance of computation is less than the importance of thinking!
-Jay
onion2 says:
May 14, 2011 at 4:59 am
“Some commenters seem to be confusing the derivation of such a simple equation to fit the behavior as a suitable replacement for the models themselves. As if scientists could have just foregone the millions spent on modeling and instead just used a simple equation. But this overlooks the fact that the equation is derived by fitting variables to the model output. Without the model output in the first place you can’t generate the equation.”
You are arguing in a circle. You are assuming that there is something called “the Warmista climate model” that the Warmista have explicated for the public, that the public appreciates the internal complexity and beauty of this model though they might not fully understand it, and that the purpose of Willis’ equation is to summarize the main result from the model. Willis’ starting point (assumption) is the factual truth that the Warmista present us with a Black Box. The Warmista have explicated nothing for the public, except how to bow to Warmista. Willis’ equation, then, is not a summary of model results; rather, it is the sum total of all the public knows about the model. And the blame for that lies squarely at the door of the Warmista who will allow the public to understand their model when it is pried from their cold dead hands. It is the same behavior that is found in Mann who hides his data and his statistical methods, in Briffa who hides the physical changes in his proxies and never finds the scientific curiosity to pursue an explanation of those physical changes, and this behavior is found in all of the Warmista. None of these people have the instincts of scientists.
AJStrata says:
May 14, 2011 at 7:04 am
“Hmm, can see one thing wrong right off the back – the assumption that this year’s temperature is predicated on last year’s. No wonder this always builds up over time.
What if there is no correlation to last year’s temperature? What if this year’s temperature is independent, and actually due to the current TSI alone?”
There you go introducing science again. Haven’t you learned that this is “Warmista Science?” /sarc
“Brett says:
May 14, 2011 at 6:11 am
Does anyone actually know what the equation used in CCSM3 actually is? I haven’t the time or wherewithal to dig through the code.”
I don’t know for sure, but I’d more or less assumed that the models use stepwise integration. Basically, you have a simple prediction equation that includes all the forcings. You take a little step, recompute the forcings, then take another little step, recalculate, etc. It’s much more complicated than that, but that’s the way object tracking/prediction works for many missiles, aircraft, and satellites. I’ve always assumed that tropical storm and other weather tracking works like that. I also assumed that climate scientists draw on experience with weather prediction and would use similar techniques.
And no, just because the basic equations may be simple, that doesn’t mean the results are simple at all. It’s entirely possible to track an object that is maneuvering enthusiastically using stepwise integration.
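A toy Python sketch of stepwise integration, using a zero-dimensional energy-balance equation (this is emphatically not what CCSM3 does internally; it just shows the “small step, recompute, step again” scheme, with made-up parameter values):

```python
def integrate(forcing_fn, lam=0.8, heat_cap=8.0, dt=0.1, years=50.0):
    """Euler-step the toy energy balance  C * dT/dt = F(t) - T / lam.
    lam is a sensitivity, heat_cap a lumped heat capacity; both made up."""
    t, temp, out = 0.0, 0.0, []
    while t < years:
        dTdt = (forcing_fn(t) - temp / lam) / heat_cap
        temp += dTdt * dt        # one small step forward
        t += dt                  # advance the clock, then recompute
        out.append(temp)
    return out

# A forcing that switches on at year 10: the temperature relaxes toward
# the equilibrium lam * F = 0.8 * 3.7 ~ 2.96, with a multi-year lag.
temps = integrate(lambda t: 3.7 if t > 10.0 else 0.0)
```

Even this trivial stepper shows the lagged, smoothed response that the fitted equation in the post captures with its τ term.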
Margaret & Jack Simmons
Looking at the colour key I’d say CFC11 – Montreal Agreement?
Climate logic is not linear, but circular.
onion2 “Without the model output in the first place you can’t generate the equation.”
You’re kidding right ?
The equation is physics101 of a linear response with a timelag (hence lambda/tau).
Then it’s just fitting to get the right parameters.
The more free parameters you have, the better you can fit but the less significant it is (with an infinite number of parameters, you can fit anything).
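The point about free parameters is easy to demonstrate. In the sketch below (my own illustration, using NumPy), data generated by a two-parameter straight line plus noise is fitted with polynomials of rising degree; the residual shrinks every time, even though the extra parameters are only chasing the noise:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(0.0, 0.1, size=x.size)  # truth: a 2-parameter line

residuals = []
for degree in (1, 3, 6):
    coefs = np.polyfit(x, y, degree)    # least-squares polynomial fit
    fit = np.polyval(coefs, x)
    residuals.append(float(np.sum((y - fit) ** 2)))
# residuals shrink as the degree rises, but the higher-order terms carry
# no information -- they are fitting the noise.
```

Which is why a two-parameter fit with a correlation of 0.995, like the one in the post, is the interesting case: it is hard to dismiss as overfitting.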
I believe there is an apparent contradiction between the following two comments from the article, and that it can be reconciled:
“Figure 3 shows the final result of that calculation. I used only a subset of the forcings, which were the greenhouse gases (GHGs), the solar, and the volcanic inputs. The size of the others is quite small in terms of forcing potential, so I neglected them in the calculation.”
“(Let me add in passing that the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975 … as did the GISSE model.)”
I believe that the PDO, which is roughly a 60-year sine wave, needs to be added to the calculations as another term which is NOT negligible. Then all other forcings would be relative to the sine wave instead of to a straight line. It would certainly account for the slight drop in temperatures over the last decade.
AJStrata says,
“I see nothing in any model which can conclude each year’s temperature is driven by the previous year’s temperature. I can see how radiation rates can be driven by processes which span years, where heat trapped takes years to dissipate.”
have a look at the second expert article regarding simple modelling:
http://www.copenhagenclimatechallenge.org/index.php?option=com_content&view=article&id=52&Itemid=55
“Brett says:
May 14, 2011 at 6:11 am
“Does anyone actually know what the equation used in CCSM3 actually is? I haven’t the time or wherewithal to dig through the code.”
The equations are only a minor part of the story. What Warmista must explicate for the public, what they have a duty to explicate, are their judgements. They make model runs, see something in the results that they do not like, and they change the model. The big question is: what are their processes of judgement, of recording their judgements, and of reasoning about their judgements. What we non-Warmista suspect is that they get a result that they don’t like, they rejigger the model, and they try again. In other words, they don’t even have a record of their judgements (rejiggers) and there is no rational process in place for evaluating judgements (rejiggers). Warmista have a duty to explicate their judgements and permit criticisms of their judgements and their methods of evaluating their judgements. If they fail to satisfy that duty then they are not practicing as scientists. So far, they are not practicing as scientists.
Now instead of leaving those giant computers running idle whilst we compute the climate of 2050 using OpenOffice, how about using them to model the AGWer brain? And yes, I fully expect a linear equation as the output.
How does the model handle ozone? If in some manner, the ozone depletion drives at least a significant part of the post 1970 warming, then the black-box correlation with the model is not meaningful. If the ozone does nothing, why is it included as a forcing?
Anthony Watts says:
May 14, 2011 at 8:08 am
“Climate logic is not linear, but circular.”
Warmista logic is circular. When your only goal is to deflect criticism then necessarily your logic will be circular, unless you take the next step and become outrightly deceptive.
Ok . . . I’ll take a bite . . . . I like the hind sight approach . . . but what does (did) it say for 2000 – 2010 and then say just the next 5, 10, 15, and 20 yrs. . . . Just out of curiosity . . .
Jack Simmons says:
May 14, 2011 at 3:59 am
I’m with Margaret.
What is the item taking the dive around 1970?
From the earlier figure 2 there is a large volcanic eruption at that time causing the drop in temperature in the original CCSM3 output. All Willis has done is replicate that output using far simpler modeling.
onion2 says:
May 14, 2011 at 4:59 am
The problem is that the climate has NOT responded linearly to forcing increases. So you may think that all chaotic systems of inter-reacting chaotic sub-systems are linear in their behavior in response to inputs – but you are wrong, as the real-world climate has shown; otherwise there would have been no drop in the 1940s.
The problem I have is with a single ‘lambda’ λ; the real world does not appear to have a single sensitivity. It would appear that there is a tendency to move toward an attractor that is the ‘normal’ temperature for the Holocene. A ‘forcing’ that moves the system toward that attractor receives positive feedback, but if the same forcing continues taking the system past the attractor, the feedback to that forcing becomes negative to return the system back toward the attractor. The climatologists’ assumption that feedbacks are always positive is a major weakness in the AGW hypothesis.
For some reason the climate systems have another stronger attractor of the ice-age conditions and given the right inputs at the right time the system can move from interglacial to that strong attractor. Perhaps the Bond events are excursions that nearly move from the interglacial attractor to the ice-age strong attractor but some input is missing, is in the wrong phase or is the incorrect value.
For CCSM3 the ensemble mean was used, which removed all the internal variability of the individual members, so no wonder the fit is good, because on average internal variability should cancel out, and this becomes an expression that the global climate response holds for each year. It is equivalent to decadally smoothing the measured surface temperature and displaying that as a function of measured forcing. It should match quite well, given the right forcing, because the real climate response is a simple function of forcing too. Use an individual run of CCSM3 and see what kind of fit that gives. That would be more like using real data.
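The noise-cancelling effect of an ensemble mean is easy to see in a sketch (made-up trend and noise levels, plain white noise standing in for internal variability):

```python
import random

random.seed(1)
years, n_runs, noise_sd = 100, 40, 0.2
signal = [0.01 * y for y in range(years)]  # the shared "forced" trend

# Each run = forced trend + its own internal variability (white noise here).
runs = [[signal[y] + random.gauss(0.0, noise_sd) for y in range(years)]
        for _ in range(n_runs)]
ens_mean = [sum(run[y] for run in runs) / n_runs for y in range(years)]

def rms_dev(series):
    """Root-mean-square deviation from the underlying forced signal."""
    return (sum((s - f) ** 2 for s, f in zip(series, signal)) / years) ** 0.5

# Averaging n_runs independent realizations cuts the noise by roughly
# sqrt(n_runs), so the ensemble mean hugs the forced signal while any
# single run wanders around it.
```

So a tight fit to the ensemble mean is indeed a weaker test than a fit to an individual run, where the internal variability has not been averaged away.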