Mechanical Models

Guest Post by Willis Eschenbach

[NOTE the update at the end of the post.] I've continued my peregrinations following the spoor of the global climate model data cited in my last post. This was data from 19 global climate models. There are two parts to the data: the inputs and the outputs. The inputs to the models are the annual forcings (the change in downwelling radiation at the top of the atmosphere) for the period 1860 to 2100. The outputs of the models are the temperature hindcasts/forecasts for the same period, 1860 to 2100. Figure 1 shows an overview of the two datasets (model forcings and modeled temperatures) for the nineteen models, for the historical period 1860-2000.

Figure 1. Forcing (red lines, W/m2) and modeled temperatures (blue lines, °C) from 19 global climate models for the period 1860-2000. Light vertical lines show the timing of the major volcanic eruptions. The value shown in the upper part of each panel is the decadal trend in the temperatures. For comparison, the trend in the HadCRUT observational dataset is 0.04°C/decade, while the models range from 0.01 to 0.1°C/decade, a tenfold variation. The value in the lower part of each panel is the decadal trend in forcing. Click any graphic to enlarge.

The most surprising thing to me about this is the wide disparity in the amount, trend, and overall shape of the different forcings. Even the effects of the volcanic eruptions (sharp downwards excursions in the forcings [red line]), which I expected to be similar between the models, have large variations between the models. Look at the rightmost eruption in each panel, Pinatubo in 1991. The GFDL-ESM2M model shows a very large volcanic effect from Pinatubo, over 3 W/m2. Compare that to the effect of Pinatubo in the ACCESS1-0 model, only about 1 W/m2.

And the shapes of the forcings are all over the map. GISS-E2-R increases almost monotonically except for the volcanoes. On the other hand, the MIROC-ESM and HadGEM2-ES forcings have a big hump in the middle. (Note also how the temperatures from those models have a big hump in the middle as well.) Some historical forcings have little annual variability, while others jump around wildly from year to year. Each model is using its own personal forcing, presumably chosen because it produces the best results …

Next, as you can see from even a superficial examination of the data, the output of the models is quite similar to the input. How similar? Well, as I’ve shown before, the input of the models (forcings) can be transformed into an accurate emulation of the output (temperature hindcasts/forecasts) through the use of a one-line iterative model.

Now, the current climate paradigm is that over time, the changes in global surface air temperature evolve as a linear function of the changes in global top-of-atmosphere forcing.  The canonical equation expressing this relationship is:

∆T = lambda * ∆F           [Equation 1]

In this equation, “∆T” is the change in temperature from the previous year. It can also be written as T[n] – T[n-1], where n is the time of the observation. Similarly, “∆F” is the change in forcing from the previous year, which can be written as F[n] – F[n-1]. Finally, lambda is the transient climate response (°C / W/m^2). Because I don’t have their modeled ocean heat storage data, lambda does not represent the equilibrium climate sensitivity. Instead, lambda in all of my calculations represents the transient climate response, or TCR.

The way that I am modeling the models is to use a simple lagging of the effects of Equation 1. The equation used is:

∆T = lambda * ∆F * (1 – e^(-1/tau)) + (T[n-1] – T[n-2]) * e^(-1/tau)           [Equation 2]

In Equation 2, T is temperature (°C), n is time (years), ∆T is T[n] – T[n-1], lambda is the sensitivity (°C / W/m^2), ∆F is the change in forcing F[n] – F[n-1] (W/m2), and tau is the time constant (years) for the lag in the system.

So … what does that all say? Well, it says two things.

First, it says that the world is slow to warm up and cool down. So when you have a sudden change in forcing, for example from a volcano, the temperature changes more slowly. The amount of lag in the system (in years) is given by the time constant tau.

Next, just as in Equation 1, Equation 2 scales the input by the transient climate response lambda.

So what Equation 2 does is to lag and scale the forcings. It lags them by tau, the time constant and it scales them by lambda, the transient climate response (TCR).
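
For concreteness, here's a minimal R sketch of that recipe. The function and variable names are mine, chosen for illustration; my actual R code, linked at the end of the post, differs in the details:

```r
# Minimal sketch of the Equation 2 emulator (illustrative names, not my
# posted code). "forcing" is the vector of annual forcings F[n] in W/m2.
emulate_temps <- function(forcing, lambda, tau) {
  n <- length(forcing)
  temps <- numeric(n)      # emulated temperature anomalies, starting from zero
  a <- exp(-1 / tau)       # year-to-year carryover factor from the lag
  dT_prev <- 0             # previous year's change, T[n-1] - T[n-2]
  for (i in 2:n) {
    dF <- forcing[i] - forcing[i - 1]
    dT <- lambda * dF * (1 - a) + dT_prev * a   # Equation 2
    temps[i] <- temps[i - 1] + dT
    dT_prev <- dT
  }
  temps
}
```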

In this dataset, the TCR ranges from 0.36 to 0.88 depending on the model. It is the expected change in the temperature (in degrees C) from a 1 W/m2 change in forcing. The transient climate response (TCR) is the rapid response of the climate to a change in forcing. It does not include the amount of energy which has gone into the ocean. As a result, the equilibrium climate sensitivity (ECS) will always be larger than the TCR. The observations in the Otto study indicate that over the last 50 years, ECS has remained stable at about 30% larger than the TCR (lambda). I have used that estimate in Figure 2 below. See my comment here for a discussion of the derivation of this relationship between ECS and TCR.

Using the two free parameters lambda and tau to lag and scale the input, I fit the above equation to each model in turn. I used the full length (1860-2100) of the same dataset shown in Figure 1, the RCP 4.5 scenario. Note that the same equation is applied to the different forcings in all instances, and only the two parameters are varied. The results are shown in Figure 2.
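
In outline, the fit can be done with R's general-purpose optimizer, reusing the emulator sketched above. The error measure and starting values here are illustrative assumptions, not necessarily what my posted code does:

```r
# Sketch: find the lambda and tau that minimize the squared error between a
# model's temperature output and the Equation 2 emulation of its forcings.
fit_emulation <- function(forcing, model_temps) {
  model_temps <- model_temps - model_temps[1]   # put both series on a common zero
  sse <- function(p) {
    emul <- emulate_temps(forcing, lambda = p[1], tau = p[2])
    sum((model_temps - emul)^2)
  }
  optim(c(0.5, 3), sse)$par   # illustrative starting values for lambda and tau
}
```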

Figure 2. Temperatures (hindcast & forecast) from 19 models for the period 1860 to 2100 (light blue), and emulations using the simple lagged model shown in Equation 2 (dark blue). The value for "tau" is the time constant for the lag in the system. The ECS is the equilibrium climate sensitivity (in degrees C) to a doubling of CO2 ("2xCO2"). Following the work of Otto, the ECS is estimated in all cases as being 30% larger than "lambda", which is the transient climate response (TCR). See the end note regarding units. Click to enlarge.

In all cases, the use of Equation 2 on the model forcings and temperatures results in a very accurate, faithful match to the model temperature output. Note that the worst r^2 of the group is 0.94, and the median r^2 is 0.99. In other words, no matter what each of the models is actually doing internally, functionally they are all just lagging and resizing the inputs.

Other than the accuracy and fidelity of the emulation of every single one of the model outputs, there are some issues I want to discuss. One is the meaning of this type of "black box" analysis. Another is the implications of the fact that all of these modeled temperatures are so accurately represented by this simplistic formula. And finally, I'll talk about the elusive "equilibrium climate sensitivity".

Black Box Analyses

A “black box” analysis is an attempt to determine what is going on inside a “black box”, such as a climate model. In Figure 3, I repeat a drawing I did for an earlier discussion of these issues. I see that it used an earlier version of the CCSM model than the one used in the new data above, which is CCSM4.

Figure 3. My depiction of the global climate model CCSM3 as a black box, where only the inputs and outputs are known.

In a “black box” analysis, all that we know are the inputs (forcings) and the outputs (global average surface air temperatures). We don’t know what’s inside the box. The game is to figure out what a set of possible rules might be that would reliably transform the given input (forcings) into the output (temperatures). Figure 2 demonstrates that functionally, the output temperatures of every one of the climate models shown above in Figure 2 can be accurately and faithfully emulated by simply lagging and scaling the input forcings.

Note that a black box analysis is much like the historical development of the calculations for the location of the planets. The same conditions applied to that situation, in that no one knew the rules governing the movements of the planets. The first successful solution to that black box problem utilized an intricate method called “epicycles”. It worked fine, in that it was able to predict the planetary locations, but it was hugely complex. It was replaced by a sun-centered method of calculation that gave the same results but was much simpler.

I bring that up to highlight the fact that in a "black box" puzzle as shown in Figure 3, you want to find not just a solution, but the simplest solution you can find. Equation 2 certainly qualifies as simple; it is a one-line equation.

Finally, be clear that I am not saying that the models are actually scaling and lagging the forcings. A black box analysis just finds the simplest equation that can transform the input into the output, but that equation says nothing about what actually might be going on inside the black box. Instead, the equation functions the same as whatever might be going on inside the box—given a set of inputs, the equation gives the same outputs as the black box. Thus we can say that they are functionally very similar.

Implications

The finding that functionally all the climate models do is to merely lag and rescale the inputs has some interesting implications. The first one that comes to mind is that regarding the models, as the forcings go, so goes the temperature. If the forcings have a hump in the middle, the hindcast temperatures will have a hump in the middle. That’s why I titled this post “Mechanical Models”. They are mechanistic slaves to the forcings.

Another implication of the mechanical nature of the models is that the models are working “properly”. By that, I mean that the programmers of the models firmly believe that Equation 1 rules the evolution of global temperatures … and the models reflect that exactly, as Figure 2 shows. The models are obeying Equation 1 slavishly, which means they have successfully implemented the ideas of the programmers.

Climate Sensitivity

Finally, to the question of the elusive “climate sensitivity”. Me, I hold that in a system such as the climate which contains emergent thermostatic mechanisms, the concept of “climate sensitivity” has no real meaning. In part this is because the climate sensitivity varies depending on the temperature. In part this is because the temperature regulation is done by emergent, local phenomena.

However, the models are built around the hypothesis that the change in temperature is a linear function of forcing. To remind folks, the canonical equation, the equation around which the models are built, is Equation 1 above, ∆T = lambda ∆F, where ∆T is the change in temperature (°C), lambda is the sensitivity (°C per W/m2), and ∆F is the change in forcing (W/m2).

In Equation 1, lambda is the climate sensitivity. If the ∆F calculations include the ocean heat gains and losses, then lambda is the equilibrium climate sensitivity or ECS. If (as in my calculations above) ∆F does not include the ocean heat gains and losses, then lambda is the short-term climate sensitivity, called the “transient climate response” or TCR.

Now, an oddity that I had noted in my prior investigations was that the transient climate response lambda was closely related to the trend ratio, which is the ratio of the trend of the temperature to the trend of the forcing associated with each model run. I speculated at that time (based on only the few models for which I had data back then) that lambda would be equal to the trend ratio. With access now to the nineteen models shown above, I can give a more nuanced view of the situation. As Figure 4 shows, it turns out to be slightly different from what I speculated.

Figure 4. Transient climate response "lambda" compared to the trend ratio (temperature trend / forcing trend) for the 19 models shown in the above figures. Red line shows where lambda equals the trend ratio. Blue line is the linear fit of the actual data. The equation of the blue line is lambda = trend ratio * 1.03 – 0.05 °C per W/m2.

Figure 4 shows that if we know the input and output of a given climate model, we can closely estimate the transient climate response lambda of the model. The internal workings of the various models don’t seem to matter—in all cases, lambda turns out to be about equal to the trend ratio.
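
The trend ratio itself is trivial to compute. A sketch in R, with the same illustrative variable names as above:

```r
# Sketch: estimate lambda as the trend ratio (temperature trend / forcing
# trend), with both trends taken as least squares slopes of the annual series.
year <- seq_along(forcing)
temp_trend    <- coef(lm(model_temps ~ year))[2]   # degrees C per year
forcing_trend <- coef(lm(forcing ~ year))[2]       # W/m2 per year
trend_ratio   <- temp_trend / forcing_trend        # ~ lambda, degrees C per W/m2
```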

The final curiosity occurs because all of the models need to emulate the historical temperature trend 1860-2000. Not that they do it at all well, as Figure 1 shows. But since they all have different forcings, and they are at least attempting to emulate the historical record, that means that at least to a first order, the difference in the reported climate sensitivities of the models is the result of their differing choices of forcings.

Conclusions? Well, the most obvious conclusion is that the models are simply incapable of a main task they have been asked to do. This is the determination of the climate sensitivity. All of these models do a passable job of emulating the historical temperatures, but since they use different forcings they have very different sensitivities, and there is no way to pick between them.

Another conclusion is that the sensitivity lambda of a given model is well estimated by the trend ratio of the temperatures and forcings. This means that if your model is trying to replicate the historical trend, the only variable is the trend of the forcings. This means that the sensitivity lambda is a function of your particular idiosyncratic choice of forcings.

Are there more conclusions? Sure … but I’ve worked on this dang post long enough. I’m just going to publish it as it is. Comments, suggestions, and expansions welcome.

Best regards to everyone,

w.

A NOTE ON THE UNITS

The "climate sensitivity" is commonly expressed in two different units. One is the change in temperature (in °C) corresponding to a 1 W/m2 change in forcing. The second is the change in temperature corresponding to a 3.7 W/m2 change in forcing. Since 3.7 W/m2 is the amount of additional forcing expected from a doubling of CO2, this is referred to as the climate sensitivity (in degrees C) to a doubling of CO2. This is often abbreviated as "°C / 2xCO2".
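
A worked example of the conversion, using an illustrative TCR of 0.5°C per W/m2 (not a value from any of the models above):

```r
# Converting a sensitivity between the two common units (illustrative value).
lambda_per_wm2 <- 0.5                    # degrees C per W/m2 of forcing
lambda_2xco2   <- lambda_per_wm2 * 3.7   # = 1.85 degrees C per doubling of CO2
```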

DATA AND CODE: As usual, my R code is a snarl, but for what it’s worth it’s here, and the data is in an Excel spreadsheet here.

[UPDATE]. From the comments:

Nick Stokes says:
December 2, 2013 at 2:47 am

In fact, the close association with the “canonical equation” is not surprising. F et al say:

“The FT06 method makes use of a global linearized energy budget approach where the top of atmosphere (TOA) change in energy imbalance (N) is split between a climate forcing component (F) and a component associated with climate feedbacks that is proportional to globally averaged surface temperature change (ΔT), such that:
N = F – α ΔT (1)
where α is the climate feedback parameter in units of W m-2 K-1 and is the reciprocal of the climate sensitivity parameter.”

IOW, they have used that equation to derive the adjusted forcings. It’s not surprising that if you use the thus calculated AFs to back derive the temperatures, you’ll get a good correspondence.

Dang, I hadn’t realized that they had done that. I was under the incorrect impression that they’d used the TOA imbalance as the forcing … always more to learn.

So we have a couple of choices here.

The first choice is that Forster et al have accurately calculated the forcings.

If that is the case, then the models are merely mechanistic, as I’ve said. And as Nick said, in that case it’s not surprising that the forcings and the temperatures are intimately linked. And if that is the case, all of my conclusions above still stand.

The second choice is that Forster et al have NOT accurately calculated the forcings.

In that case, we have no idea what is happening, because we don’t know what the forcings are that resulted in the modeled temperatures.
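
To spell out the circularity Nick describes, the arithmetic of the adjusted forcings is just a rearrangement of Forster's equation (1). A one-line R sketch (the names are mine):

```r
# Forster et al derive the adjusted forcing F from the model output itself:
# rearranging N = F - alpha * dT gives F = N + alpha * dT.
# If N stays small, F is nearly alpha * dT, so regressing the temperatures on
# these forcings must recover something close to lambda = 1/alpha by construction.
adjusted_forcing <- function(N, dT, alpha) N + alpha * dT
```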


John B. Lomax

You should really send this to the OMB. Your one-liner equation could save our government (and us taxpayers) a few $B by replacing all of the complex computer models.

Rhoda Klapp

Are the inputs really in units of watts/m2? It was my impression that the modellers used CO2 levels and modelled radiative physics in terms of local conditions. Taking some average figure for a supposed forcing, no matter how accurate the average is, can never be a satisfactory model input. This is true of any forcing, not just radiative.

thingadonta

“epicycles…It was replaced by a sun-centered method of calculation”.
So should climate models, history repeating itself…

“All of these models do a passable job of emulating the historical temperatures”
I do not agree at all. Maybe MIROC-ESM resembles reality, and probably only by plugging in "aerosols" when needed. Not one model mimics the 1900-1940 warming, which was stronger than the 1975-2000 one and was followed by cooling.

Leonard Lane

Nice analyses, Willis. It seems that you have shown that no matter how complex and "scientific" the GCM climate models appear, and are claimed to be, they are nothing more than Rube Goldberg digital concoctions to linearly relate climate forcing to mean annual global temperature (the hindcasts you show). As such, how can they have predictive ability for future mean annual global temperature?

Brian H

Edit: "Another conclusion is that the sensitivity lambda of a given model is well estimated by the ratio is determined by the trend ratio of the temperatures and forcings"
Not sure which of the verbs (estimated, determined) to read here. Copy-paste error?
[Thanks, fixed -w.]

Willis Eschenbach said:
“The inputs to the models are the annual forcings (the change in downwelling radiation at the top of the atmosphere) for the period 1860 to 2100.”
I have been trying hard to follow the reasoning behind the models in order to follow your post closely.
I’m not sure how to interpret the forcings. How are they calculated, how much observational data is included? If the top of atmosphere is represented by a single temperature, what ‘averaging’ rule is used by the models? Are they even using the same one?
Cheers,
Scott

“The inputs to the models are the annual forcings (the change in downwelling radiation at the top of the atmosphere) for the period 1860 to 2100.”
I agree with Rhoda here. The forcings are often expressed as radiative equivalents, but they aren’t the actual input to models. Those inputs are the direct physical quantities, such as GHG concentrations, or for some modern AOGCM’s, the actual emissions (from scenarios). The radiative forcings in W/m2 are back-calculated for comparison. Hansen describes that here:
“We compute Fi, Fa, Fs and Fs* for most forcing mechanisms to aid understanding and to allow other researchers easy comparison with our results.”
I believe the forcings quoted here are from the paper by Forster et al. They are explicitly computed by those authors; they call them adjusted forcings (AF). They were not model inputs. They say:
"Forster and Taylor [2006], hereinafter FT06, developed a methodology to diagnose globally averaged AF in Coupled Model Intercomparison Project phase 3 (CMIP3) models and we use the same approach here within CMIP5 models, taking advantage of their improved diagnostics and additional integrations to improve the methodology."
In fact, the close association with the “canonical equation” is not surprising. F et al say:
“The FT06 method makes use of a global linearized energy budget approach where the top of atmosphere (TOA) change in energy imbalance (N) is split between a climate forcing component (F) and a component associated with climate feedbacks that is proportional to globally averaged surface temperature change (ΔT), such that:
N = F – α ΔT (1)
where α is the climate feedback parameter in units of W m-2 K-1 and is the reciprocal of the climate sensitivity parameter.”

IOW, they have used that equation to derive the adjusted forcings. It’s not surprising that if you use the thus calculated AFs to back derive the temperatures, you’ll get a good correspondence.

gordon walker

Thank you Willis for an empirical verification of logical necessity!
Le Chatelier's Principle tells us that when a change is imposed upon a physical system it will react in a way that resists the change. Otherwise stable systems would be like pencils balanced on their points and ever ready to swing from one extreme to another.
But some people purport to believe in “our fragile planet” or “tipping points”!

I have come up with a model which demonstrates that my footprints accurately predict my previous location to a high confidence level. Therefore it is obvious that my footprint model will accurately predict where I will be any time in the future.

cd

Willis
All your points follow. However, if I understand your method, you're essentially fitting a function and playing about with lambda and tau until you get a reasonable fit with the models. While this gives you an "adaptive model", it does sound like a statistical model of the models and therefore is one of many possible solutions – although in truth you now have your own climate model that was designed to mimic the ones you're testing. This brings me on to your point on the models being a Black Box. They aren't; there are a number of online articles/lectures as well as journal papers that explain what types of algorithms they use, right down to up-scaling methods and even what type of programming paradigms are chosen. So I think this is unfair; you're almost suggesting that we should somehow be suspicious of a model because their unfathomable complexity hides a simple, and limited, algorithm – and such commentary implicitly suggests stealth by design. If they seem like Black Boxes then that's because you haven't made the effort to find out what makes them tick.

Thank you Willis,
What a lot of hard work, and good work too!
I agree that climate sensitivity (“ECS” in units of C/2xCO2) has no meaning, perhaps for different reasons from yours.
I demonstrated with confidence in January 2008 that temperature drives atmospheric CO2, not the reverse. This of course does not preclude other major drivers of CO2 such as fossil fuel combustion, deforestation, volcanoes, etc. I suggest that Jan Veizer and a few others were probably already there, or mostly so.
If ECS (which assumes CO2 drives temperature) actually exists in the Earth system, it is so small that it is overwhelmed by the reality that temperature drives CO2.
Proof:
In this enormous CO2 equation, the only signal that is apparent is that dCO2/dt varies ~contemporaneously with temperature, and CO2 lags temperature by about 9 months.
http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/
To suggest that ECS is larger than 1C is not credible. I suggest that if ECS exists, it is much smaller than 1C, so small as to be essentially insignificant.
So all this costly hysteria about catastrophic humanmade global warming has been for naught, and all our yesterdays have lighted fools:
… the expensive climate models can be replaced by a one-line equation (Bravo Willis!) …
… the expensive wind and solar power schemes never really worked, due to the stubborn refusal of the wind to blow and the sun to shine WHEN we needed the power.
As Richard Courtney ably pointed out, we do not understand the complexities of the carbon balance.
As I pointed out, climate science does not even know what drives what and has “put the cart before the horse”.
And yet several political entities have compromised their economies and their vital energy systems and even put their populations at serious risk due to this utter nonsense.
If one was to write the global warming “scary story” as fiction, it would be dismissed as too absurd to be published – a tale told by an idiot – sadly, we have been governed by too many of them.
In closing, I recommend the 15fps AIRS data animation of atmospheric CO2 at
http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4
There is no apparent impact of humanity in this magnificent display of nature’s power.

I may be jumping ahead here but your emulation seems extraordinary if the models are actually doing something complex. But then again, even if they are doing some intricate algorithmic dance in order to model dynamic processes, it appears they are averaging the cell temperatures in order to come up with a single global figure, thereby destroying all the good work. It is not surprising then that the output equals the input scaled by the sensitivity!! This implies that the canonical equation is not physical but statistical! It can't show how changes in that statistic might affect real dynamic states! "It's life Jim, but not as we know it." 😉

Nick Stokes

“To remind folks, the canonical equation, the equation around which the models are built, is Equation 1 above, ΔT = lambda ΔF, where ΔT is the change in temperature (°C), lambda is the sensitivity (°C per W/m2), and ΔF is the change in forcing (W/m2)”
It doesn’t have anything to do with the way the models are built. It’s not their canonical equation. What it is is Forster’s equation (1), which he used to infer the adjusted forcings that you are using from the model output. The math is circular. You are feeding his Eq (1) derived AFs into your analysis and coming up with Eq (1).

cd

Scott
As far as I am aware the models do "discretise" the atmosphere into cells. Generally they perturb the system with an internal/external forcing, and under various and changing assumptions, let the system respond through the cellular model. The mechanism by which these perturbations are disseminated is based on established physics such as the Navier-Stokes equations. However, their implementation of the physics is often poor and hence the poor performance of the models outside their training set.
This really explained the issue so eloquently that I had no problems following it – one for the layman: http://www.youtube.com/watch?v=hvhipLNeda4

What Willis has done here, rather than digging deep into the issue, has produced a statistical model of the models. I don’t know why he doesn’t just go speak to a modeller or at least engage with one in order to see whether he has stumbled upon something important.

Good article. It will take some time to digest. I don’t think I’d consider model output as “data”.

Nic Lewis

Willis
Many thanks for putting in the time to produce this and your previous post – both very informative.
I wonder if you may have slightly misunderstood the relationship between the temperature and forcing data?
As you wrote previously, the data is from the study Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models, by Forster, P. M., T. Andrews, P. Good, J. M. Gregory, L. S. Jackson, and M. Zelinka, 2013, JGR. As explained in paragraph [6] of that paper, the forcing timeseries were not obtained as model run data as such. They were actually derived for each model by multiplying its temperature timeseries by its diagnosed climate feedback parameter alpha and adding the Top-of-Atmosphere/ Top-of-Model change in energy imbalance (which is part of the model run data). So they are estimates, not actual model run data, and may not be completely accurate where a model exhibits non-linearities or time-dependence.
If you multiply the temperature change by the value of alpha (net) for each model per Forster et al (2013) Table 1 and deduct the corresponding forcing values, you will recover timeseries for the TOA/TOM energy imbalance, which is almost the same as ocean heat uptake. Those timeseries would make quite interesting graphs, I think.
Incidentally, a purist would not describe the 'true' model forcings as model inputs, although I agree it is reasonable to treat them as such in this sort of analysis. As you are probably aware, for the CMIP5 runs, the GCMs calculate greenhouse gas forcings, using their own radiative transfer codes, from specified atmospheric concentrations (See Meinshausen et al, 2011: The RCP GHG concentrations and their extension from 1765 to 2300). They also make their own estimates of forcing from aerosols, ozone, etc, I believe from abundance data. The models all do these things differently, resulting in surprisingly wide divergences between their forcings, as you have observed. The differences in the model aerosol forcings are particularly significant. As you point out, there is also a very large range in the noisiness of the timeseries between models – I'm not sure why.
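
[The recovery Nic Lewis describes follows directly from the paper's equation (1), N = F – α ΔT. A sketch in R, with illustrative names. -w.]

```r
# Sketch: recover the TOA/TOM energy imbalance timeseries from the adjusted
# forcings and the temperature changes, per N = F - alpha * dT.
toa_imbalance <- function(forcing, dT, alpha) forcing - alpha * dT
```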

Old Huemul

Willis, in this passage: “Another conclusion is that the sensitivity lambda of a given model is well estimated by the ratio is determined by the trend ratio of the temperatures and forcings” you should cancel the phrase “the ratio is determined by”. The new version would be: “Another conclusion is that the sensitivity lambda of a given model is well estimated by the trend ratio of the temperatures and forcings”.
It seems you wanted to adjust your phrasing but forgot to cancel the old version.

jhborn

Mr. Eschenbach:
Thank you for another informative post. Whether the forcings values you use are the models’ actual stimuli or represent the forcings they respectively infer from the stimuli they do use, I find it telling that, after all their machinations, their results differ from respective simple one-pole linear models by much less than they differ from each other.
One comment, which I've made before, so I'll apologize for being repetitive. If a system is characterized by dy/dt + y/tau = (lambda/tau) x, then its response to a ramp is lambda[t – tau(1 – exp(-t/tau))]: the rate ratio equals the constant-stimulus gain. So your statement regarding the ratio seems merely to be a tautology. What am I missing?

Bloke down the pub

Willis, knowing your attitude towards typos, you might like to change ‘ The first one the comes to mind is that regarding the models’.
[Thanks, fixed. -w.]

Genghis

Willis, here is my best attempt to mangle the epicycle analogy. The models are an attempt to explain CHANGES in the climate and make accurate predictions.
What you have shown is that the models are incapable of explaining CHANGES, much less predicting them. They ultimately only compute one simple relationship, exactly like the complex epicycles did.
The beauty of the Newtonian/Einstein models is that they are less constrained and have multiple variables, hence their predictive power rises exponentially.
Without multiple variables (feedbacks) that do not cancel each other out, Maths and Models are less than useless. Much like the famous E=MC^2 is useless at unity 1=1. You have demonstrated quite nicely that the GCM’s have only a single variable, C=C and hence are useless.

I think Nick's comments are correct. From what I've seen of models, none of them use TOA "net forcing" as input. They use a complex field of initial data, model TOA insolation with equations (describing e.g. orbital variation), and then use all sorts of radiative and bulk transport equations to solve one or sometimes two coupled Navier-Stokes equations at some granularity. So it is not correct to say that one can model the output of GCMs with a one-line iterative model given the input of TOA forcing, because that is not what GCMs use as input.
With that said, the disparity of the “TOA forcing” associated with an inversion of model predictions is indeed indicative of one of the many problems with GCMs. As is the generally poor agreement of the model-generated temperatures with each other and with the actual temperature in e.g. HADCRUT4.
The other possibly useful aspect of the linearized equation Willis ends up with after processing the models shown is that it appears to be at least moderately invertible, at least as a difference equation. That is, one could (perhaps) take the actual temperature data, e.g. HADCRUT4, and apply the “universal model” equation to it backwards to infer the effective TOA forcing it implies for ANY of the models. Indeed, one can in principle apply this process all the way back across the Holocene.
The main point of doing so would be to determine the model’s consistency and “physicality”. That is, if one inverts one of the models across the LIA, what exactly does it say about TOA net forcing during this period of intense, rapid, cooling (per model)? Is what it says consistent with our beliefs about the physics — given that e.g. CO_2 could not have played a significant role back then, or across much of the post-LIA warming, does the model have mechanisms that could (re)produce that variation without significant modulation from aerosols or CO_2? Such an analysis could result in signposts to missing physics, or to model inconsistencies (an incorrect balance between internals leading to natural forcings vs anthropogenic forcings, for example), or both.
rgb
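
[For what it's worth, the inversion rgb proposes is just Equation 2 rearranged to solve for the forcing changes. A sketch, with illustrative names. -w.]

```r
# Invert Equation 2 to infer forcing changes from a temperature series:
# dF[n] = (dT[n] - dT[n-1] * a) / (lambda * (1 - a)), where a = exp(-1/tau).
infer_forcing_changes <- function(temps, lambda, tau) {
  a <- exp(-1 / tau)
  dT <- diff(temps)                # year-on-year temperature changes
  dT_prev <- c(0, head(dT, -1))    # lagged changes, zero before the start
  (dT - dT_prev * a) / (lambda * (1 - a))
}
```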

Joe Born says: December 2, 2013 at 6:19 am
“Whether the forcings values you use are the models’ actual stimuli or represent the forcings they respectively infer from the stimuli they do use, I find it telling that, after all their machinations, their results differ from respective simple one-pole linear models by much less than they differ from each other.”

I think you’ve missed the point of my earlier comment, and of Nic Lewis. Forster et al took the temperature outputs of the models and calculated adjusted forcings ΔF (they call it F) using the formula
N = ΔF – α ΔT (1)
Here N, the TOA imbalance, has to be small by conservation of energy, and some models constrain it to be zero.
This post substitutes those ΔF into a regression and finds that, presto
ΔF – λ ΔT=0.
But of course, they have to. It has nothing to do with what the models actually do. It’s just repeating the arithmetic of Forster et al by which ΔF was derived.

Box of Rocks

What happened to all the hot weather of the 1930’s?
Where is it hiding?

RC Saumarez

I have some difficulty with this analysis.
You spent some time producing a model of cloud feedback, which you felt has implications for climate sensitivity.
Now you have abandoned this in favour of a simple model. This model is simply a first order system which is represented by a first order differential equation.
If we take an input X(t) and our model has a response as you suggest, h(t) and gives an output Y(t).
We can write: Y(t)=X(t)*h(t) , where the symbol * represents convolution.
Now assume that cloud formation is a first order process and has a gain of g in modulating radiative forcing. It is easier to analyse this using transforms in terms of the Laplace variable, s.
The above equation can be written as:
Y(s) = H(s)X(s) and y(t) = L^-1[Y(s)]
And H(s)=1/(T+s), where T is the time constant
Using the well-known feedback relationship for the system response to create a “black box”:
Y(s)/X(s)=H(s)/(1+H(s)G(s))
Hence:
Y(s)(1+H(s)G(s))=H(s)X(s)
Writing H(s)=1/(s+a) and G(s)=1/(s+b) and a little algebra we get:
Y(s)(1+ab+(a+b)s+s^2)= H(s)X(s)
Now, s·Y(s) corresponds to y'(t) and s^2·Y(s) to y''(t), where the apostrophe indicates differentiation.
Therefore we have a differential equation that is of the form:
y’’(t) + k1y’(t)+k2y(t)=x(t)*h(t)
This is a second order differential equation and therefore the model you propose here is not compatible with those you have proposed earlier.
What you have done is in fact a curve fitting exercise and unless the match between data and output is perfect, it is unlikely that the model is correct. One way to tell if the model is correct is to compare its auto-correlation structure with that of the data. In fact it is well known that a first order model does not give a good explanation of the temperature data and there has been a large amount of debate about what form of model gives a realistic persistence. (See McIntyre and the debate between Keenan and the UK Met Office)
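
[A quick version of the autocorrelation check RC Saumarez suggests, reusing the emulator sketched in the post (illustrative names; one would compare the two correlograms). -w.]

```r
# Sketch: compare the autocorrelation structure of a model's temperatures
# with that of its Equation 2 emulation, as RC Saumarez suggests.
emul <- emulate_temps(forcing, lambda, tau)
par(mfrow = c(1, 2))
acf(diff(model_temps), main = "Model temperatures")
acf(diff(emul),        main = "Equation 2 emulation")
```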

Jim G

“The most surprising thing to me about this is the wide disparity in the amount, trend, and overall shape of the different forcings.”
Excellent and very readable graphical analysis. Not surprising that the forcings differ so greatly given their rather minuscule absolute levels and the lack of any consistent scientific determinants of how they are constructed from model to model. I particularly appreciated Paul Sarkisian's model regarding his footprints as it says it all quite well.

Willis Eschenbach

Nick Stokes says:
December 2, 2013 at 3:39 am

“To remind folks, the canonical equation, the equation around which the models are built, is Equation 1 above, ΔT = lambda ΔF, where ΔT is the change in temperature (°C), lambda is the sensitivity (°C per W/m2), and ΔF is the change in forcing (W/m2)”

It doesn’t have anything to do with the way the models are built. It’s not their canonical equation. What it is is Forster’s equation (1), which he used to infer the adjusted forcings that you are using from the model output. The math is circular. You are feeding his Eq (1) derived AFs into your analysis and coming up with Eq (1).

The “canonical equation”, in your words, “doesn’t have anything to do with the way the models are built” … and yet it describes their outputs exactly.
So you’re telling us that is a coincidence? Really?
You might try my post entitled “The Cold Equations” for a further discussion of the canonical equation.
w.
PS—It’s not “Forster’s equation” either, it’s the reported forcing from the models as shown in the CMIP5.

Willis Eschenbach:
That Equation 1 is "canonical" implies that ∆T is not the change in the temperature but rather is the change in the equilibrium temperature. With ∆T taken to be the equilibrium temperature, Equation 1 is scientifically invalid, for while the temperature is an observable, the equilibrium temperature is not; a consequence of the lack of observability is that Equation 1 is insusceptible to being falsified by the evidence.
If, on the other hand, ∆T is taken to be the temperature rather than the equilibrium temperature then Equation 1 states a false proposition, for a number of different values for ∆T are associated with every value of ∆F. In either case, Equation 1 is scientifically invalid.

Willis Eschenbach

Nick Stokes says:
December 2, 2013 at 2:47 am

In fact, the close association with the “canonical equation” is not surprising. F et al say:

“The FT06 method makes use of a global linearized energy budget approach where the top of atmosphere (TOA) change in energy imbalance (N) is split between a climate forcing component (F) and a component associated with climate feedbacks that is proportional to globally averaged surface temperature change (ΔT), such that:
N = F – α ΔT (1)
where α is the climate feedback parameter in units of W m-2 K-1 and is the reciprocal of the climate sensitivity parameter.”

IOW, they have used that equation to derive the adjusted forcings. It’s not surprising that if you use the thus calculated AFs to back derive the temperatures, you’ll get a good correspondence.

Thanks, Nick. I see I spoke prematurely above. Dang, I hadn’t realized that they had done that. I was under the incorrect impression that they’d used the TOA imbalance as the forcing … always more to learn.
So we have a couple of choices here.
The first choice is that Forster et al have accurately calculated the forcings.
If that is the case, then the models are merely mechanistic, as I’ve said. And as you said, in that case it’s not surprising that the forcings and the temperatures are intimately linked. And if that is the case, all of my conclusions above still stand.
The second choice is that Forster et al have NOT accurately calculated the forcings.
In that case, we have no idea what is happening, because we don’t know what the forcings are that resulted in the modeled temperatures.
I’ll add an update to the head post …
w.

Willis Eschenbach

Terry Oldberg says:
December 2, 2013 at 10:50 am

Willis Eschenbach:
That Equation 1 is "canonical" implies that ∆T is not the change in the temperature but rather is the change in the equilibrium temperature. With ∆T taken to be the equilibrium temperature, Equation 1 is scientifically invalid, for while the temperature is an observable, the equilibrium temperature is not; a consequence of the lack of observability is that Equation 1 is insusceptible to being falsified by the evidence.

Thanks, Terry. Actually, "∆T" is simply one year's temperature minus the previous year's temperature, and thus has absolutely nothing to do with some purported "equilibrium" …
w.

Willis Eschenbach:
Thank you for taking the time to reply. If we interpret ∆T as you say we should then Equation 1 states a falsehood, for many different values for ∆T are associated with every value for ∆F but Equation 1 implies there is only 1 value. To state this objection in mathematical terms: the relation from the values of ∆F to the values of ∆T is not a functional relation but Equation 1 implies that it is.

Billy Liar

cd says:
December 2, 2013 at 3:44 am
Great video! Thanks.

Nic Lewis

Nick Stokes says, December 2, 2013 at 7:24 am :
"Here N, the TOA imbalance, has to be small by conservation of energy, and some models constrain it to be zero."
You’re getting a bit confused here, Nick. No model constrains the TOA radiative energy flux imbalance to zero. The TOA imbalance is the counterpart of heat uptake by the Earth’s climate system (ocean, etc.), and is not at all close to zero, although it averages close to zero whilst forcing is low. By 2005, it averages 0.7 W/m2 across the models. It bounces around a lot, and plunges upon volcanic eruptions, of course. You are maybe thinking of the difference between the TOA imbalance and the climate system heat uptake.

Chip Javert

I’m amazed how, time and again, Willis intellectually shreds the (presumed) hard work of hundreds (thousands?) of earnest grad students and ethically compromised professors who produce these vile models. You’d think public humiliation would have some impact.
At the 50,000 nano-meter level (sarc), the physics at any point in time is fairly simple: photons are being absorbed and re-emitted by atoms. Period. The questions are how much ends up as heat and how much of that is retained in the atmosphere/ocean. We can’t even agree if it’s a stochastic or chaotic system. This requires accounting for (among other things) changing energy inputs, changing chemistry (eg % CO2), changing atmospheric temperature and pressure, changing absorption spectrums, etc. Sounds like a whole lot of differential equations to me, not some half-witted attempt to statistically back-fit tree rings to snippets of temperature records of questionable accuracy.
…and yes, I’m realistic enough to understand this will go on until sometime after funding dries up.

Chip Javert

@ RC Saumarez says:
December 2, 2013 at 8:06 am
I have some difficulty with this analysis.
===================================
Not intending to speak for Willis, my understanding is this is a black box analysis – how models get from input to output. Willis’ analysis shows there’s probably not a whole lot of meaningful guts or analysis in the black box.
I understand the black box analysis to be separate and distinct from Willis’ cloud models. You appear to be mixing the two.

Nic Lewis says: December 2, 2013 at 12:38 pm
“No model constrains the TOA radiative energy flux imbalance to zero.”

Well, I may be a little out of date there. Stephens et al (2012) say
“Models are commonly tuned to the TOA, so direct comparison of TOA fluxes provides little insight into model performance.”
“tuned to TOA” would mean constrained to zero unless there was other information on the expected imbalance.
And I think 0.7 W/m2 is fairly small, especially as Willis’ regression will subtract the average.

AndyG55

HadCrud is NOT an instrumental record!! Certainly not pre-1979.
The continued use of this heap of adjusted garbage as being anywhere representative of the past temperature really is borderline stupidity.

Willis Eschenbach

Terry Oldberg says:
December 2, 2013 at 11:26 am

Willis Eschenbach:
Thank you for taking the time to reply.

My thanks to you. I try to answer any honest question.

If we interpret ∆T as you say we should then Equation 1 states a falsehood, for many different values for ∆T are associated with every value for ∆F but Equation 1 implies there is only 1 value. To state this objection in mathematical terms: the relation from the values of ∆F to the values of ∆T is not a functional relation but Equation 1 implies that it is.

I’m not defending Equation 1. I’m just pointing out that it is the current paradigm. It is what the programmers believe. Equation 1 is taken as being true in some longer-term average sense.
I don’t think it’s true in any sense. Me, I find the idea that the output of a horrendously complex driven resonant natural system is a simple linear function of the inputs to be risible, but I was born yesterday …
w.

Willis Eschenbach

RC Saumarez says:
December 2, 2013 at 8:06 am

I have some difficulty with this analysis.
You spent some time producing a model of cloud feedback, which you felt has implications for climate sensitivity.
Now you have abandoned this in terms of a simple model. This model is simply a first order system which is represented by a first order differential equation.

Thanks, RC. You are mixing up two things. One is my idea about how the climate works. I’ve shown and provided a host of observational support for the idea that temperatures are regulated by emergent climate phenomena on a host of temporal and spatial scales. That’s one model, a model of the climate.
The other model is the one I describe above. It is not a model of the climate like the first one. It is a model of the climate models. I show above that assuming that Forster’s estimates of the forcings are accurate, the models are doing nothing more than lagging and scaling the inputs to produce the outputs.
Those are two quite distinct and different models, which I have generally described in different posts, and I have not “abandoned” one for the other.
All the best,
w.

Jquip

@Eschenbach: “I’m not defending Equation 1. I’m just pointing out that it is the current paradigm. It is what the programmers believe.”
Uh, a point of clarity here. It is not what programmers believe — it is expressly what the scientists believe.

“The second choice is that Forster et al have NOT accurately calculated the forcings.
In that case, we have no idea what is happening, because we don’t know what the forcings are that resulted in the modeled temperatures.”

The forcings that are actually used in say CMIP5 are spelt out in some detail, in terms of GHG concentrations. That's what GCM's work with.
Forster et al don’t claim to have accurately calculated the inputs. They are trying, by studying (and effectively modelling) the output, to attribute reasons for variation. They say in their conclusion (my emphasis):
“Issues remain around the definitions of AF and the assumption of constant climate sensitivity within a transient forcing framework. The forcing/climate sensitivity concept developed essentially for slab-ocean models at equilibrium obviously does not provide a complete picture of climate evolution in today’s non-linear AOGCMs. Nevertheless, we argue that forcings are useful for understanding why models differ in their gross behaviour and forcings explain the spread of RCP projections rather well.”

Willis Eschenbach:
Thanks for taking the time to respond and for stipulating your agreement with me on an issue. The paradigm that I hear from the climatological establishment is that there is a linear functional relation from the change in the forcing to the change in the equilibrium temperature. It is this paradigm which results in the popular contention that the equilibrium climate sensitivity has a numerical value of around 3 Celsius per CO2 doubling. As it is non-falsifiable, this paradigm is non-scientific. Do we agree on this issue?

donald penman

“Finally, to the question of the elusive “climate sensitivity”. Me, I hold that in a system such as the climate which contains emergent thermostatic mechanisms, the concept of “climate sensitivity” has no real meaning. In part this is because the climate sensitivity varies depending on the temperature. In part this is because the temperature regulation is done by emergent, local phenomena.”
If this means "climate sensitivity" to CO2, then this raises the question of whether this sensitivity is constant or variable over time. If the "climate sensitivity" is variable, then what use are computer models using CO2 as a forcing? I would have more confidence in climate models if we were to see a large rise in global temperature in the next 30 years or so, but I feel that we are more likely to see a fall in global temperature in the next 30 years.

Svend Ferdinandsen

Nice to see you do these calculations.
I have for a long time wondered what all these complex simulations at a lot of points should do anyway when they average over many years and over the whole globe. Sometimes they also claim that differences are because they have not sufficiently exact start conditions, but what does it matter when they average it all out and any difference in the beginning is completely lost after a few weeks anyway.
It is however not the same as saying the climate models are useless. They can be used to investigate processes in the weather and climate or in the models, but not to make any useful forecast, on any timescale.

ferdberple

Climate Sensitivity
…. However, the models are built around the hypothesis that the change in temperature is a linear function of temperature.
===============
Willis, maybe that should read “is a linear function of forcings”.
[Thanks, Fred, fixed. -w.]

cd says:
December 2, 2013 at 3:14 am
This brings me on to your point on model being a Black Box. They aren’t,
=============
Willis didn’t say they were. He is analyzing them as a black box, to see if the internal logic (method) can be simplified. Computer programmers do this as a matter of routine.
What Willis has found is that the climate models are basically Rube Goldberg machines. They perform a very complicated set of operations to deliver an extremely simple result.
Willis would have more correctly labelled the posting:
“Rube Goldberg Models”

Terry Oldberg says:
December 2, 2013 at 11:26 am
for many different values for ∆T are associated with every value for ∆F but Equation 1 implies there is only 1 value.
=============
That is actually an extremely important and broad ranging concept in physics. Victorian era (deterministic) physics held that for any 1 ∆F there could be only 1 ∆T. Quantum mechanics (probabilistic physics) allows many different ∆T’s for any 1 ∆F.
However, in such a system the future cannot be realized as an average of all possible futures, which explains why the ensemble mean can be more accurate than any single model but at the same time will not converge to the actual future. Rather, it will drift in an unpredictable fashion, and no amount of computer simulation under our current understanding of physics can overcome this.

ferdberple says:
“What Willis has found is that the climate models are basically Rube Goldberg machines. They perform a very complicated set of operations to deliver an extremely simple result.”
Excellent!
And at least Goldberg's machines opened the door for the dog, so they did something productive…

∆T = lambda * ∆F [Equation 1]
=============
this relationship implies that there can be only 1 ∆T for any 1 ∆F. However, the spaghetti graph of the individual model runs shows quite clearly that not even the models believe this to be true.
it is only the scientists themselves that believe you can average the future and arrive at a meaningful result. it is a complete nonsense.
For any 1 given starting position, there are a near infinite number of futures. The forcings do not determine the temperature, they only determine the probability of the temperature.
To explain, let's say that the future is 1/3 chance hotter, 1/3 chance cooler, 1/3 chance unchanged. You will arrive at one of these futures, but you don't know which one.
What do the climate models do? They average the future, and say that there is a 100% chance you will arrive in a future that is unchanged. Thus any change we see must be caused by humans.
But that is not what physics tells us. We will not arrive at an average future, we will arrive at a specific future. And the models cannot tell us which specific future we will arrive at, because they see the future as an average, not a specific.

cd says:
December 2, 2013 at 3:44 am
“This really explained the issue so eloquently that I had no problems following it – one for the layman: http://www.youtube.com/watch?v=hvhipLNeda4
Thanks for that link, I'd not seen those time-lapse shots. The fog of 'recurrent cars' in the carpark shot struck a powerful chord in me! It was like a glimpse into the quantum world. Also amazed how many of my lay intuitions about the fundamental assumptions were supported in the video. I honestly can't get past the first two words in the debate, "Global Warming"; the first impossible thing is the assumption that there can even be such a thing as global temperature! As for warming, how does a system that is out of equilibrium warm or cool! 😉