**Guest Post by Willis Eschenbach**

In my earlier post about climate models, “Zero Point Three Times The Forcing”, a commenter provided the breakthrough that allowed the analysis of the GISSE climate model as a black box. In a “black box” type of analysis, we know nothing but what goes into the box and what comes out. We don’t know what the black box is doing internally with the input that it has been given. Figure 1 shows the situation of a black box on a shelf in some laboratory.

*Figure 1. The CCSM3 climate model seen as a black box, with only the inputs and outputs known.*

A “black box” analysis may allow us to discover the “functional equivalent” of whatever might be going on inside the black box. In other words, we may be able to find a simple function that provides the same output as the black box. I thought it might be interesting if I explain how I went about doing this with the CCSM3 model.

First, I went and got the input variables. They are all NetCDF (“ncdf”) files, a standard format that contains both data and metadata. I converted them to annual or monthly averages using the computer language “R”, and saved them as text files. I opened these in Excel, and collected them into one file. I have posted the data up here as an Excel spreadsheet.

Next, I needed the output. The simplest place to get it was the graphic located here. I digitized that data using a digitizing program (I use *“GraphClick”*, on a Mac computer).

My first procedure in this kind of exercise is to “normalize” or “standardize” the various datasets. This means adjusting each one so that the average is zero and the standard deviation is one. I use the Excel function “STANDARDIZE” for this purpose. This allows me to see all of the data in a common size format. Figure 2 shows those results.
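The standardization step can also be sketched in a few lines of code; this is a minimal Python equivalent of Excel’s STANDARDIZE (the sample numbers are illustrative, not the actual forcing data):

```python
from statistics import mean, stdev

def standardize(series):
    """Rescale a series to mean 0 and (sample) standard deviation 1."""
    m, s = mean(series), stdev(series)
    return [(x - m) / s for x in series]

forcing = [1.0, 2.0, 3.0, 4.0, 5.0]  # illustrative numbers only
z = standardize(forcing)
print([round(v, 3) for v in z])  # → [-1.265, -0.632, 0.0, 0.632, 1.265]
```

Note that, like Excel’s STDEV, `statistics.stdev` uses the sample standard deviation.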

*Figure 2. Standardized forcings used by the CCSM 3.0 climate model to hindcast the 20th century temperatures. Dark black line shows the temperature hindcast by the CCSM3 model.*

Looking at that, I could see several things. First, the CO2 data has the same general shape as the sulfur, ozone, and methane (CH4) data. Next, the effects of the solar and volcano data were clearly visible in the temperature output signal. This led me to believe that the GHG data, along with the solar and the volcano data, would be enough to replicate the model’s temperature output.

And indeed, this proved to be the case. Using the Excel “Solver” function, I fit the formula which (as mentioned above) had been developed through the analysis of the GISS model. This is:

**T(n+1) = T(n) + λ ΔF(n+1) / τ + ΔT(n) exp( -1 / τ )**

OK, now let’s render this equation in English. It looks complex, but it’s not.

T(n) is pronounced “T sub n”. It is the temperature “T” at time “n”. So T sub n plus one, written as T(n+1), is the temperature during the following time period. In this case we’re using years, so it would be the next year’s temperature.

F is the forcing, in watts per square metre. This is the total of all of the forcings under consideration. The same time convention is followed, so F(n) means the forcing “F” in time period “n”.

Delta, or “∆”, means “the change in”. So ∆T(n) is the change in temperature since the previous period, or T(n) minus the previous temperature T(n-1). ∆F(n), correspondingly, is the change in forcing since the previous time period.

Lambda, or “λ”, is the climate sensitivity. Tau, or “τ”, is the lag time constant, which establishes the amount of the lag in the response of the system to forcing. And finally, “exp(x)” means the number e (about 2.71828) raised to the power of x.

So in English, this means that the temperature next year, or **T(n+1)**, is equal to the temperature this year **T(n)**, plus the immediate temperature increase due to the change in forcing **λ ΔF(n+1) / τ**, plus the lag term **ΔT(n) exp( -1 / τ )** from the previous forcing. This lag term is necessary because the effects of the changes in forcing are not instantaneous.
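For readers who prefer code to notation, here is a minimal Python sketch of that recursion; the lambda and tau values below are illustrative, not the fitted ones:

```python
from math import exp

def lagged_response(forcings, lam, tau, t0=0.0):
    """Iterate T(n+1) = T(n) + lam * dF(n+1) / tau + dT(n) * exp(-1 / tau)."""
    temps = [t0]
    for n in range(1, len(forcings)):
        dF = forcings[n] - forcings[n - 1]  # change in forcing this step
        dT = temps[-1] - temps[-2] if len(temps) > 1 else 0.0  # last temp change
        temps.append(temps[-1] + lam * dF / tau + dT * exp(-1.0 / tau))
    return temps

# A step in forcing: 0 W/m2 followed by a sustained 1 W/m2 (lam, tau illustrative).
temps = lagged_response([0.0] + [1.0] * 10, lam=0.8, tau=3.0)
print(temps[1], temps[-1])
```

The immediate response to the step is λ/τ, and the lag term then carries the warming upward toward roughly λ times the forcing change.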

Figure 3 shows the final result of that calculation. I used only a subset of the forcings, which were the greenhouse gases (GHGs), the solar, and the volcanic inputs. The size of the others is quite small in terms of forcing potential, so I neglected them in the calculation.

*Figure 3. CCSM3 model functional equivalent equation, compared to actual CCSM3 output. The two are almost identical.*

As with the GISSE model, we find that the CCSM3 model also slavishly follows the lagged input. The match once again is excellent, with a correlation of 0.995. The values for lambda and tau are also similar to those found during the GISSE investigation.

So what does all of this mean?

Well, the first thing it means is that, just as with the GISSE model, the output temperature of the CCSM3 model is functionally equivalent to a simple, one-line lagged linear transformation of the input forcings.

It also implies that, given that the GISSE and CCSM3 models function in the same way, it is very likely that we will find the same linear dependence of output on input in other climate models.

(Let me add in passing that the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975 … as did the GISSE model.)

Now, I suppose that if you think the temperature of the planet is simply a linear transformation of the input forcings plus some “natural variations”, those model results might seem reasonable, or at least theoretically sound.

Me, I find the idea of a linear connection between inputs and output in a complex, multiply interconnected, chaotic system like the climate to be a risible fantasy. It is not true of any other complex system that I know of. Why would climate be so simply and mechanistically predictable when other comparable systems are not?

This all highlights what I see as the basic misunderstanding of current climate science. The current climate paradigm, as exemplified by the models, is that the global temperature is a linear function of the forcings. I find this extremely unlikely, from both a theoretical and practical standpoint. This claim is the result of the bad mathematics that I have detailed in “The Cold Equations”. There, erroneous substitutions allow them to cancel everything out of the equation except forcing and temperature … which leads to the false claim that if forcing goes up, temperature must perforce follow in a linear, slavish manner.

As we can see from the failure of both the GISS and the CCSM3 models to replicate the post 1945 cooling, this claim of linearity between forcings and temperatures fails the real-world test as well as the test of common sense.

w.

TECHNICAL NOTES ON THE CONVERSION TO WATTS PER SQUARE METRE

Many of the forcings used by the CCSM3 model are given in units other than watts/square metre. Various conversions were used.

The CO2, CH4, N2O, CFC-11, and CFC-12 values were converted to W/m2 using the various formulas of Myhre et al., as given in their Table 3.
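As a sketch, the Myhre simplified expression for CO2 (ΔF = 5.35 ln(C/C0) W/m2) looks like this in Python; the concentrations are illustrative:

```python
from math import log

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Myhre et al. simplified CO2 forcing: dF = 5.35 * ln(C / C0), in W/m2."""
    return 5.35 * log(c_ppm / c0_ppm)

# A doubling of CO2 gives 5.35 * ln(2), about 3.7 W/m2:
print(round(co2_forcing(556.0), 2))  # → 3.71
```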

Solar forcing was converted to equivalent average forcing by dividing by 4, the ratio of the Earth’s surface area to the area of the disc that intercepts the sunlight.

The volcanic effect, which CCSM3 gives in total tonnes of mass ejected, has no standard conversion to W/m2. As a result we don’t know what volcanic forcing the CCSM3 model used. Accordingly, I first matched their data to the same W/m2 values as used by the GISSE model. I then adjusted the values iteratively to give the best fit, which resulted in the “Volcanic Adjustment” shown above in Figure 3.
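The iterative fitting step can be sketched as a brute-force stand-in for Excel’s Solver: search a grid of lambda and tau values and keep the pair with the smallest squared error against the target series. Everything below is synthetic and illustrative, not the actual CCSM3 fit:

```python
from math import exp

def lagged_response(forcings, lam, tau, t0=0.0):
    """The one-line lagged model: T(n+1) = T(n) + lam*dF/tau + dT*exp(-1/tau)."""
    temps = [t0]
    for n in range(1, len(forcings)):
        dF = forcings[n] - forcings[n - 1]
        dT = temps[-1] - temps[-2] if len(temps) > 1 else 0.0
        temps.append(temps[-1] + lam * dF / tau + dT * exp(-1.0 / tau))
    return temps

def fit(forcings, target, lams, taus):
    """Grid-search lam and tau to minimize squared error against the target."""
    best = None
    for lam in lams:
        for tau in taus:
            model = lagged_response(forcings, lam, tau)
            err = sum((m - t) ** 2 for m, t in zip(model, target))
            if best is None or err < best[0]:
                best = (err, lam, tau)
    return best[1], best[2]

# Synthetic check: build a target with known parameters, then recover them.
forcings = [0.1 * n for n in range(50)]
target = lagged_response(forcings, 0.3, 2.0)
lam_hat, tau_hat = fit(forcings, target, [0.1, 0.2, 0.3, 0.4], [1.0, 2.0, 3.0])
print(lam_hat, tau_hat)  # → 0.3 2.0
```

Solver uses a smarter search than this exhaustive grid, but the objective (minimizing the misfit between the one-line model and the target output) is the same.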

**[UPDATE]** Steve McIntyre pointed out that I had not given the website for the forcing data. It is available here (registration required; the file is a couple of gigabytes).

Willis – congratulations for your new super-cooled computer. I didn’t know you had access to such a powerful machine, of the kind needed by the Met Office in the UK at the small cost of £10M.

ps you’re not implying you’ve done it all on a sub-$1,000 PC, are you? What a scandal that would be!!!

pps in hindsight, considering the absurdist idea that climate can be computed by adding forcings up, it makes perfect senses that the output is a linear transformation of the forcings…

I am puzzled as to which variable on your standardised forcing graph has that amazing downward trend from 1970 onwards. Is it ozone? and whatever it is, can you comment on whether it is a reasonable path for it to take.

Oh the “black box” fix for everything complex. Yes, the simple, thoughtless, solution. Rather than actually finding/working out how it works or working within the boundaries of a “known” system, some want to “pump stuff in”, regardless of, say capacity, and then, “expect desired results”.

I’m with Margaret.

What is the item taking the dive around 1970?

Wow! Just wow!

All those lines of code evolving over decades, all those complex interactions between grid cells, all these finely tuned parameters, this huge international academic cooperation, to get – what?

An output equivalent to a simple Excel sheet cell formula with three parameters!

This is really mindboggling!

Congrats to Willis and all those involved and cooperating in putting the pieces together over the last months.

Marcus Kesseler

“risible fantasy”, Willis? You’re being too kind. I like risible, though.

Willis,

There are termites in that shelf.

I don’t understand what this article proves and I completely disagree with this:

“Me, I find the idea of a linear connection between inputs and output in a complex, multiply interconnected, chaotic system like the climate to be a risible fantasy.”

Quite the contrary, I would be surprised if the behavior =didn’t= boil down to a simple equation. This is quite common. The system might be chaotic but the global surface temperature element of it might be highly deterministic on energy balance and so can be expressed as a simple equation. In fact that makes complete sense does it not?

After-all the concept of climate sensitivity and forcing implicitly mean global temperature is linearly related to forcings.

Other examples abound. Line by line radiative models to calculate the forcing from a doubling of CO2 for example are very complicated but after you have the results you can find a simple equation to fit it, which happens to be the famous ln(CO2/CO2base) * 5.35. Similar simple equations have been made for other greenhouse gas increases. In unrelated areas too – eg we can express the complex motions of planets and moons with the equation G(m1 + m2) / r².

Some commenters seem to be confusing the derivation of such a simple equation to fit the behavior as a suitable replacement for the models themselves. As if scientists could have just foregone the millions spent on modeling and instead just used a simple equation. But this overlooks the fact that the equation is derived by fitting variables to the model output. Without the model output in the first place you can’t generate the equation.

My understanding is that a computer is incapable of generating a random number. If that is true, then whatever information you input, the result would always be something other than a random/chaotic number; therefore trying to model a climate which is by nature chaotic is not possible. A computer can record data and display it by way of a graph, but as far as I understand it cannot generate chaos of its own volition. But if you have a degree in computer science you would know this, so why do we bother trying to model our climate when we know you can never design a suitable model by using a computer? You can use all the clever equations that you like, but the result will always be a result based upon what you input, and our climate does not work in that way; it is random, so any model is utterly and completely pointless. Correct me if I am wrong?

Just because some horribly expensive computer can perform an amazing number of FLOPS or IPS doesn’t mean the application they’re targeting necessarily NEEDS that level of processing power–It appears in all likelihood they’re applying “monetary forcing” to solve a problem that, indeed, a standard PC with a perfect solution (substance in/substance out) can process without difficulty. Let’s just remove their bombastic cover and label it accurately: Deception. Spin. Gluttony. All bad.

Good work, Willis! This is the hardest face palm I’ve applied in quite a while.

Great post Willis!

Until we get REAL verification, validation, and documentation standards applied to the climate models, they will forever be “black boxes” stuffed with ill-posed numerical representations of coupled systems of partial differential equations, parameterized models of subgrid scale physics, and approximate (or poorly understood) boundary and initial conditions. Some groups (e.g. NCAR) are at least doing a respectable job on the documentation front…

My question would be: with the findings now done and the formula now configured as a black-box equivalent for the factors used, what about adding the CO2 factors and those following its shape? How would a third and fourth graph look on the page – a third graph that shows the added curve, and a fourth graph that shows the result had CO2 remained constant during the years? How much difference would the known versus equalised CO2 changes make on the graph?

— Mats —

“F is the forcing, in watts per square metre. This is the total of all of the forcings under consideration”

It’s the “all forcings under consideration” that gets me. Is this not a subjective consideration? Or is it agreed upon by everyone that these forcings are all the correct ones that should be considered and that they are using the correct numbers for those forcings? If so then apparently we know everything about climate forcings and the science is settled.

Great work, Willis. Please submit this to a few dozen journals.

If you take this formula and project into the future with a doubling of CO2 does it match the IPCC projections?

I don’t think it is at all unusual that during a time of relative climate stability one would see a linear response to the major forcings. However, since the models fail to properly hindcast the temperatures, even given the relative stability, it demonstrates they are pretty much worthless in their current state.

If it can’t get the easy one right what use is it?

Does anyone actually know what the equation used in CCSM3 actually is? I haven’t the time or wherewithal to dig through the code. If these models are truly using linear equations to test causality then the results should be suspect. I’m a little rusty in stats as it’s been a few decades, but wouldn’t these guys be using a more complex non-linear model or at least non-linear estimation techniques? The old adage that correlation does not mean causality certainly rings true if the modelers believe that a linear equation fit to the data is proof of a causal model.

Willis: In fairness you haven’t really proven anything. You were able to fit the model data to a linear equation, but that doesn’t mean that the original model used that equation. They may have used a much more complex model and your result may be simply coincidental. Then again, they may not. We don’t have enough information to tell.

One of my great objections to AGW enthusiasts (and many involved in modern “science”) is confusing “plausible” with “proven”. You have shown that it is plausible that the model in question used a linear equation, but I don’t believe that you have proven it.

Regardless, thanks for a thought-provoking article.

“which variable on your standardised forcing graph has that amazing downward trend from 1970 onwards. Is it ozone?”

The argument made by the IPCC and others that CO2 must be responsible for the temperature rise in the last half of the 20th century because they cannot find any other cause is visibly false.

The main forcing that changed at that time is ozone. If you were looking for a reason that temperatures went up, this must be the first suspect. Less ozone = higher temperatures. Maybe because it is blocking less UV?

Nifty analysis. Does not your/their low value for lambda suggest some potent negative feedback mechanisms?

A computer cannot generate randomness from nothing. It can generate “pseudorandom” numbers that look like random numbers, but are not truly random.

However a computer can tap into a source of true randomness and use that to make true random numbers. In simple cases it might sample voltage fluctuations or mouse movements or sounds picked up by a microphone. In more robust setups it might sample white noise from a radio or even (in one case) digital images of a lava lamp.

So in summary, computers *can* generate random numbers, but it is hard.

Have you run the model forward to see what it calculates for future temperatures past 2000 to see how well it is doing? Also to see how well it tracks the GISS and CCSM3 projections.

This is potentially a huge finding, because if your Excel spreadsheet tracks the model to .995 going forward, it suggests that the models can be replaced by a much less expensive equivalent.

It also suggests that for all their reported sophistication, the models are nothing of the sort. Their behavior can be reduced to an extremely simple equation that demonstrates the underlying assumptions inherent in the models.

Can you replicate the TOA flux, or precipitation, or sea ice area or volume, with this method? What about regional effects? Likewise, what of ENSO/PDO/AMO?

That would be really interesting to see. Who knew Excel had such power?

onion2 says:

May 14, 2011 at 4:59 am

I don’t understand what this article proves. . .

Apparently.

The models, with all their complexity, can be replaced with a single line formula to get the same result. The climate, on the other hand, cannot.

Both model and formula miss the chilly 50’s and 60’s. So after $80 billion spent, we give birth to a mouse, a model that is no better at predictions than a simple equation, and less skillful than the Farmer’s Almanac.

hmmm, seem to have been promised a box of chocolates, or even a discussion about chocolates. Had again by Anthony’s witty blog titles.

Actually, this is very interesting – we’re doing “functions” in math, and here’s an example of a real world application. That’s as far as I can follow it. I need another cup of coffee.

“David Wells says:

May 14, 2011 at 5:13 am

My understanding is that a computer is incapable of generating a random number. If that is true, then whatever information you input, the result would always be something other than a random/chaotic number; therefore trying to model a climate which is by nature chaotic is not possible.”

Strictly speaking, computer “random numbers” are pseudo-random numbers generated by various algorithms. But pseudo-random really is good enough for most purposes, and you have the option of “seeding” them with some more random number generated from, for example, the time needed to respond to an operator prompt. Or you can just pick a number from a table of actual random numbers generated, as I recall, by digitizing white noise. For more information than you really want to know about pseudo-random numbers, Donald Knuth devoted a lot of space in one of the volumes of “The Art of Computer Programming” to their generation and use.
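To illustrate the seeding point in the comment above: a pseudorandom generator is fully deterministic given its seed, so the same seed always reproduces the same “random” sequence. A quick Python check (the seed value is arbitrary):

```python
import random

random.seed(12345)                       # fix the seed
a = [random.random() for _ in range(3)]
random.seed(12345)                       # reset to the same seed
b = [random.random() for _ in range(3)]
print(a == b)  # → True: the sequence is fully determined by the seed
```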

Hmm, can see one thing wrong right off the bat – the assumption that this year’s temperature is predicated on last year’s. No wonder this always builds up over time.

What if there is no correlation to last year’s temperature? What if this year’s temperature is independent, and actually due to the current TSI alone? What if the lag time from TSI (due to the enormous heat sink which are our oceans) is what causes the slow response across years?

What if the natural balance point of the Earth is the (internal heat + TSI) – (total energy radiation rate to space), where “radiation rate” is both the atmospheric rate and the much slower heat transfer from the oceans to atmosphere?

I see nothing in any model which can conclude each year’s temperature is driven by the previous year’s temperature. I can see how radiation rates can be driven by processes which span years, where heat trapped takes years to dissipate.

But this rate could fluctuate based on the TSI changes. Small changes result in small radiation rate changes. But a huge drop in TSI could accelerate the loss, while a huge increase in heat (either TSI, TSI capture due to volcanoes, or a spike in heat escaping from the core) could take years to bleed off (such as the 1998 spike).

Is prior year temp just a lousy representation for excess heat left over from a prior time period? How does this function work when the prior year is much cooler? Does it still force upwards due to more GHG (something now proven to be wrong)?

The most important input was omitted, money fueled by special interests.

onion2 says:

May 14, 2011 at 4:59 am

“Other examples abound.”

What you give as “examples” are exactly not examples of complex systems with feedbacks (mind you, the existence of life itself proves that negative feedbacks dominate; some of them created by life itself, for instance the obvious CO2-regulating features of vegetation). So expecting the planet’s climate to behave like a simple one line linear transformation speaks volumes about the childish mental state of the Institute of Professional Corrupt Collusionists (did I get the long form of IPCC right? Hope so.)

Michael J says:

May 14, 2011 at 6:28 am

“Willis: In fairness you haven’t really proven anything. You were able to fit the model data to a linear equation, but that doesn’t mean that the original model used that equation. They may have used a much more complex model and your result may be simply coincidental. Then again, they may not. We don’t have enough information to tell. ”

Willis has proven that the original model is practically functionally equivalent to a simple transformation of the input forcings, no more and no less – so for all practical considerations it can be substituted with the simple transformation.

There *might* be a possibility that the complex models develop a deviating behaviour sometime in the future *but* as they are validated by assessing their hindcasting (which *is* functionally equivalent to Willis’ transformation) such a deviating behaviour would come as a surprise even for the authors of the complex model!

In other words, the future projections of the IPCC *must* conform to Willis’ simple model as well!

Well this should send some of the True Believers into fits.

I think I can hear the first Tamino bleatings . . . . Incoming !

onion2 writes “The system might be chaotic but the global surface temperature element of it might be highly deterministic on energy balance and so can be expressed as a simple equation. In fact that makes complete sense does it not?”

So if it could be shown that the climate behaved non-linearly, then that would be a strong argument for falsifying the models wouldn’t it?

David Wells says:

May 14, 2011 at 5:13 am

This is why cryptography uses a seed for pseudorandom number generation where the seed must contain sufficient entropy to provide an assurance of randomness.

See NIST Special Publication 800-90A, Rev 1 at: http://csrc.nist.gov/publications/drafts/800-90/Draft_SP800-90A-Rev1_May-2011.pdf

Wonderful post, Willis, despite posters such as Onion, who obviously try desperately hard to understand stuff, throwing metaphorical rocks at you for demonstrating that one of the icons of Warmism is merely a piece of applied snake-oil salesmanship.

Willis,

Your PC has the computing power of Crays from decades ago.

The importance of computation is less than the importance of thinking!

-Jay

onion2 says:

May 14, 2011 at 4:59 am

“Some commenters seem to be confusing the derivation of such a simple equation to fit the behavior as a suitable replacement for the models themselves. As if scientists could have just foregone the millions spent on modeling and instead just used a simple equation. But this overlooks the fact that the equation is derived by fitting variables to the model output. Without the model output in the first place you can’t generate the equation.”

You are arguing in a circle. You are assuming that there is something called “the Warmista climate model” that the Warmista have explicated for the public, that the public appreciates the internal complexity and beauty of this model though they might not fully understand it, and that the purpose of Willis’ equation is to summarize the main result from the model. Willis’ starting point (assumption) is the factual truth that the Warmista present us with a Black Box. The Warmista have explicated nothing for the public, except how to bow to Warmista. Willis’ equation, then, is not a summary of model results; rather, it is the sum total of all the public knows about the model. And the blame for that lies squarely at the door of the Warmista who will allow the public to understand their model when it is pried from their cold dead hands. It is the same behavior that is found in Mann who hides his data and his statistical methods, in Briffa who hides the physical changes in his proxies and never finds the scientific curiosity to pursue an explanation of those physical changes, and this behavior is found in all of the Warmista. None of these people have the instincts of scientists.

AJStrata says:

May 14, 2011 at 7:04 am

“Hmm, can see one thing wrong right off the back – the assumption that this year’s temperature is predicated on last year’s. No wonder this always builds up over time.

What if there is no correlation to last year’s temperature? What if this year’s temperature is independent, and actually due to the current TSI alone?”

There you go introducing science again. Haven’t you learned that this is “Warmista Science?” /sarc

“Brett says:

May 14, 2011 at 6:11 am

Does anyone actually know what the equation used in CCSM3 actually is? I haven’t the time or wherewithal to dig through the code.”

I don’t know for sure, but I’d more or less assumed that the models use stepwise integration. Basically, you have a simple prediction equation that includes all the forcings. You take a little step, recompute the forcings, then take another little step, recalculate, etc. It’s much more complicated than that, but that’s the way object tracking/prediction works for many missiles, aircraft, and satellites. I’ve always assumed that tropical storm and other weather tracking works like that. I also assumed that climate scientists draw on experience with weather prediction and would use similar techniques.

And no, just because the basic equations may be simple, the results very often might not be simple at all. It’s entirely possible to track an object that is maneuvering enthusiastically using stepwise integration.
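The stepwise-integration idea described above can be shown with a toy example: advance a state in many small time steps, recomputing the rate each step. The cooling law here is just an illustrative stand-in, not anything from CCSM3:

```python
def euler_cool(t0, ambient, k, dt, steps):
    """Newtonian cooling dT/dt = -k * (T - ambient), integrated stepwise."""
    t = t0
    for _ in range(steps):
        t += -k * (t - ambient) * dt  # recompute the rate, take a little step
    return t

# A 100-degree object in a 20-degree room: the stepwise result approaches 20.
print(round(euler_cool(100.0, 20.0, 0.1, 0.1, 1000), 2))  # → 20.0
```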

Margaret & Jack Simmons

Looking at the colour key I’d say CFC11 – Montreal Protocol?

Climate logic is not linear, but circular.

onion2 “Without the model output in the first place you can’t generate the equation.”

You’re kidding right ?

The equation is physics 101 of a linear response with a time lag (hence lambda/tau).

Then it’s just fitting to get the right parameters.

The more free parameters you have, the better you can fit but the less significant it is (with an infinite number of parameters, you can fit anything).

I believe there is a contradiction in the following two comments from the article and that they can be reconciled:

“Figure 3 shows the final result of that calculation. I used only a subset of the forcings, which were the greenhouse gases (GHGs), the solar, and the volcanic inputs. The size of the others is quite small in terms of forcing potential, so I neglected them in the calculation.”

“(Let me add in passing that the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975 … as did the GISSE model.)”

I believe that the PDO, which is roughly a 60 year sine wave, needs to be added to the calculations as another term which is NOT negligible. Then all other forcings would be relative to the sine wave instead of to a straight line. It would certainly account for the slight drop in temperatures over the last decade.

AJStrata says,

“I see nothing in any model which can conclude each year’s temperature is driven by the previous year’s temperature. I can see how radiation rates can be driven by processes which span years, where heat trapped takes years to dissipate.”

have a look at the second expert article regarding simple modelling:

http://www.copenhagenclimatechallenge.org/index.php?option=com_content&view=article&id=52&Itemid=55

“Brett says:

May 14, 2011 at 6:11 am

“Does anyone actually know what the equation used in CCSM3 actually is? I haven’t the time or wherewithal to dig through the code.”

The equations are only a minor part of the story. What Warmista must explicate for the public, what they have a duty to explicate, are their judgements. They make model runs, see something in the results that they do not like, and they change the model. The big question is: what are their processes of judgement, of recording their judgements, and of reasoning about their judgements. What we non-Warmista suspect is that they get a result that they don’t like, they rejigger the model, and they try again. In other words, they don’t even have a record of their judgements (rejiggers) and there is no rational process in place for evaluating judgements (rejiggers). Warmista have a duty to explicate their judgements and permit criticisms of their judgements and their methods of evaluating their judgements. If they fail to satisfy that duty then they are not practicing as scientists. So far, they are not practicing as scientists.

Now instead of leaving those giant computers running idle whilst we compute the climate of 2050 using OpenOffice, how about using them to model the AGWer brain? And yes, I fully expect a linear equation as the output.

How does the model handle ozone? If in some manner, the ozone depletion drives at least a significant part of the post 1970 warming, then the black-box correlation with the model is not meaningful. If the ozone does nothing, why is it included as a forcing?

Anthony Watts says:

May 14, 2011 at 8:08 am

“Climate logic is not linear, but circular.”

Warmista logic is circular. When your only goal is to deflect criticism then necessarily your logic will be circular, unless you take the next step and become outrightly deceptive.

Ok . . . I’ll take a bite . . . . I like the hind sight approach . . . but what does (did) it say for 2000 – 2010 and then say just the next 5, 10, 15, and 20 yrs. . . . Just out of curiosity . . .

Jack Simmons says:

May 14, 2011 at 3:59 am

I’m with Margaret.

What is the item taking the dive around 1970?

From the earlier Figure 2 there is a large volcanic eruption at that time causing the drop in temperature in the original CCSM3 output. All Willis has done is replicate that output using far simpler modeling.

onion2 says:

May 14, 2011 at 4:59 am

The problem is that the climate has NOT responded linearly to forcing increases. So you may think that all chaotic systems of inter-reacting chaotic sub-systems are linear in their behavior in response to inputs – but you are wrong as the real world climate has shown. Or there would have been no drop in the 1940’s

The problem I have is with a single ‘lambda’ λ; the real world does not appear to have a single sensitivity. It would appear that there is a tendency to move toward an attractor that is the ‘normal’ temperature for the Holocene. A ‘forcing’ that moves the system toward that attractor receives positive feedback but if the same forcing continues taking the system past the attractor the feedback to that forcing becomes negative to return the system back toward the attractor. The climatologists’ assumption that feedbacks are always positive is a *major* weakness in the AGW hypothesis.

For some reason the climate systems have another stronger attractor of the ice-age conditions and given the right inputs at the right time the system can move from interglacial to that strong attractor. Perhaps the Bond events are excursions that *nearly* move from the interglacial attractor to the ice-age strong attractor but some input is missing, is in the wrong phase or is the incorrect value.

For CCSM3 the ensemble mean was used, which removed all the internal variability of the individual members, so no wonder the fit is good, because on average internal variability should cancel out, and this becomes an expression that the global climate response holds for each year. It is equivalent to decadally smoothing the measured surface temperature and displaying that as a function of measured forcing. It should match quite well, given the right forcing, because the real climate response is a simple function of forcing too. Use an individual run of CCSM3 and see what kind of fit that gives. That would be more like using real data.