Guest Post by Willis Eschenbach
I haven’t commented much on my most recent posts, because of the usual reasons: a day job, and the unending lure of doing more research, my true passion. To be precise, I’ve recently been frying my synapses trying to twist my head around the implications of the finding that the global temperature forecasts of the climate models are mechanically and accurately predictable by a one-line equation. It’s a salutary warning: kids, don’t try climate science at home.
Figure 1. What happens when I twist my head too hard around climate models.
Three years ago, inspired by Lucia Liljegren’s ultra-simple climate model that she called “Lumpy”, and with the indispensable assistance of the math-fu of commenters Paul_K and Joe Born, I made what to me was a very surprising discovery. The GISSE climate model could be accurately replicated by a one-line equation. In other words, the global temperature output of the GISSE model is described almost exactly by a lagged linear transformation of the input to the models (the “forcings” in climatespeak, from the sun, volcanoes, CO2 and the like). The correlation between the actual GISSE model results and my emulation of those results is 0.98 … doesn’t get much better than that. Well, actually, you can do better than that, I found you can get 99+% correlation by noting that they’ve somehow decreased the effects of forcing due to volcanoes. But either way, it was to me a very surprising result. I never guessed that the output of the incredibly complex climate models would follow their inputs that slavishly.
Since then, Isaac Held has replicated the result using a third model, the CM2.1 climate model. I have gotten the CM2.1 forcings and data, and replicated his results. The same analysis has also been done on the GFDL model, with the same outcome. And I did the same analysis on the Forster data, which is an average of 19 model forcings and temperature outputs. That makes four individual models plus the average of 19 climate models, and all of the results have been the same, so the surprising conclusion is inescapable: the climate model global average surface temperature results, individually or en masse, can be replicated with over 99% fidelity by a simple, one-line equation.
However, the result of my most recent “black box” type analysis of the climate models was even more surprising to me, and more far-reaching.
Here’s what happened. I built a spreadsheet, in order to make it simple to pull up various forcing and temperature datasets and calculate their properties. It uses “Solver” to iteratively select the values of tau (the time constant) and lambda (the sensitivity constant) to best fit the predicted outcome. After looking at a number of results, with widely varying sensitivities, I wondered what it was about the two datasets (model forcings, and model predicted temperatures) that determined the resulting sensitivity. I wondered if there were some simple relationship between the climate sensitivity, and the basic statistical properties of the two datasets (trends, standard deviations, ranges, and the like). I looked at the five forcing datasets that I have (GISSE, CCSM3, CM2.1, Forster, and Otto) along with the associated temperature results. To my total surprise, the correlation between the trend ratio (temperature dataset trend divided by forcing dataset trend) and the climate sensitivity (lambda) was 1.00. My jaw dropped. Perfect correlation? Say what? So I graphed the scatterplot.
Figure 2. Scatterplot showing the relationship of lambda and the ratio of the output trend over the input trend. Forster is the Forster 19-model average. Otto is the Forster input data as modified by Otto, including the addition of a 0.3 W/m2 trend over the length of the dataset. Because this analysis only uses radiative forcings and not ocean forcings, lambda is the transient climate response (TCR). If the data included ocean forcings, lambda would be the equilibrium climate sensitivity (ECS). Lambda is in degrees per W/m2 of forcing. To convert to degrees per doubling of CO2, multiply lambda by 3.7.
Dang, you don’t see that kind of correlation very often, R^2 = 1.00 to two decimal places … works for me.
Let me repeat the caveat that this is not talking about real world temperatures. This is another “black box” comparison of the model inputs (presumably sort-of-real-world “forcings” from the sun and volcanoes and aerosols and black carbon and the rest) and the model results. I’m trying to understand what the models do, not how they do it.
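Mechanically, the Solver step described above can be sketched in a few lines of Python. This is only a stand-in for the Excel setup: the forcing and temperature series below are synthetic, and the grid search is a crude substitute for Solver's optimizer, but the logic of "pick tau and lambda to best fit the output" is the same.

```python
import math

def emulate(forcing, tau, lam):
    """One-line emulator: T1 = T0 + lam*dF1*(1-a) + dT0*a, with a = exp(-1/tau)."""
    a = math.exp(-1.0 / tau)
    T = [0.0] * len(forcing)
    for k in range(1, len(forcing)):
        dF = forcing[k] - forcing[k - 1]
        dT_prev = T[k - 1] - (T[k - 2] if k >= 2 else 0.0)
        T[k] = T[k - 1] + lam * dF * (1 - a) + dT_prev * a
    return T

def fit(forcing, temps):
    """Crude stand-in for Excel's Solver: grid-search (tau, lam) for least squares."""
    best = None
    for tau in (t / 4 for t in range(4, 61)):        # 1.0 .. 15.0 years
        for lam in (l / 50 for l in range(5, 76)):   # 0.10 .. 1.50 deg per W/m2
            resid = sum((x - y) ** 2
                        for x, y in zip(emulate(forcing, tau, lam), temps))
            if best is None or resid < best[0]:
                best = (resid, tau, lam)
    return best[1], best[2]

# Make a synthetic "model output" with known parameters, then recover them.
forcing = [0.02 * y for y in range(80)]              # ramped forcing, W/m2
temps = emulate(forcing, tau=5.0, lam=0.6)
tau_fit, lam_fit = fit(forcing, temps)
```

Here the fit recovers exactly the tau and lambda that generated the synthetic series; with real model output the same machinery simply returns the best-fit pair.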
Now, I don’t have the ocean forcing data that was used by the models. But I do have Levitus ocean heat content data since 1950, poor as it might be. So I added that to each of the forcing datasets, to make new datasets that do include ocean data. As you might imagine, when some of the recent forcing goes into heating the ocean, the trend of the forcing dataset drops … and as we would expect, the trend ratio (and thus the climate sensitivity) increases. This effect is most pronounced where the forcing dataset has a smaller trend (CM2.1) and less visible at the other end of the scale (CCSM3). Figure 3 shows the same five datasets as in Figure 2, plus the same five datasets with the ocean forcings added. Note that when the forcing dataset contains the heat into/out of the ocean, lambda is the equilibrium climate sensitivity (ECS), and when the dataset is just radiative forcing alone, lambda is transient climate response. So the blue dots in Figure 3 are ECS, and the red dots are TCR. The average change (ECS/TCR) is 1.25, which fits with the estimate given in the Otto paper of ~ 1.3.
Figure 3. Red dots show the models as in Figure 2. Blue dots show the same models, with the addition of the Levitus heat content data to each forcing dataset. Resulting sensitivities are higher for the equilibrium condition than for the transient condition, as would be expected. Blue dots show equilibrium climate sensitivity (ECS), while red dots (as in Fig. 2) show the corresponding transient climate response (TCR).
Finally, I ran the five different forcing datasets, with and without ocean forcing, against three actual temperature datasets—HadCRUT4, BEST, and GISS LOTI. I took the data from all of those, and here are the results from the analysis of those 29 individual runs:
Figure 4. Large red and blue dots are as in Figure 3. The light blue dots are the result of running the forcings and subsets of the forcings, with and without ocean forcing, and with and without volcano forcing, against actual datasets. Error shown is one sigma.
So … my new finding is that the climate sensitivity of the models, both individually and on average, is equal to the ratio of the trends of the forcing and the resulting temperatures. This is true whether or not the changes in ocean heat content are included in the calculation. It is true for forcings run against model temperature results, as well as for forcings run against actual temperature datasets. It is also true for subsets of the forcing, such as volcanoes alone, or greenhouse gases alone.
And not only did I find this relationship experimentally, by looking at the results of using the one-line equation on models and model results. I then found that I can derive this relationship mathematically from the one-line equation (see Appendix D for details).
This is a clear confirmation of an observation first made by Kiehl in 2007, when he suggested an inverse relationship between forcing and sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy? Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available [here]) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work, and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.
However, Kiehl ascribed the variation in sensitivity to a difference in total forcing, rather than to the trend ratio, and as a result his graph of the results is much more scattered.
Figure 5. Kiehl results, comparing climate sensitivity (ECS) and total forcing. Note that unlike Kiehl, my results cover both equilibrium climate sensitivity (ECS) and transient climate response (TCR).
Anyhow, there’s a bunch more I could write about this finding, but I gotta just get this off my head and get back to my day job. A final comment.
Since I began this investigation, the commenter Paul_K has written two outstanding posts on the subject over at Lucia’s marvelous blog, The Blackboard (Part 1, Part 2). In those posts, he proves mathematically that, given what we know about the equation that replicates the climate models, we cannot … well, I’ll let him tell it in his own words:
The Question: Can you or can you not estimate Equilibrium Climate Sensitivity (ECS) from 120 years of temperature and OHC data (even) if the forcings are known?
The Answer is: No. You cannot. Not unless other information is used to constrain the estimate.
An important corollary to this is:- The fact that a GCM can match temperature and heat data tells us nothing about the validity of that GCM’s estimate of Equilibrium Climate Sensitivity.
Note that this is not an opinion of Paul_K’s. It is a mathematical result of the fact that even if we use a more complex “two-box” model, we can’t constrain the sensitivity estimates. This is a stunning and largely unappreciated conclusion. The essential problem is that for any given climate model, we have more unknowns than we have fundamental equations to constrain them.
CONCLUSIONS
Well, it was obvious from my earlier work that the models were useless for either hindcasting or forecasting the climate. They function indistinguishably from a simple one-line equation.
On top of that, Paul_K has shown that they can’t tell us anything about the sensitivity, because the equation itself is poorly constrained.
Finally, in this work I’ve shown that the climate sensitivity “lambda” that the models do exhibit, whether it represents equilibrium climate sensitivity (ECS) or transient climate response (TCR), is nothing but the ratio of the trends of the input and the output. The choice of forcings, models and datasets is quite immaterial. All the models give the same result for lambda, and that result is the ratio of the trends of the forcing and the response. This most recent finding completely explains the inability of the modelers to narrow the range of possible climate sensitivities despite thirty years of modeling.
You can draw your own conclusions from that, I’m sure …
My regards to all,
w.
Appendix A : The One-Line Equation
The equation that Paul_K, Isaac Held, and I have used to replicate the climate models is as follows:

T1 = T0 + λ ∆F1 (1 – a) + ∆T0 a     (Equation 1)
Let me break this into four chunks, separated by the equals sign and the plus signs, and translate each chunk from math into English. Equation 1 means:
This year’s temperature (T1) is equal to
Last year’s temperature (T0) plus
Climate sensitivity (λ) times this year’s forcing change (∆F1) times (one minus the lag factor) (1-a) plus
Last year’s temperature change (∆T0) times the same lag factor (a)
Or to put it another way, it looks like this:
T1 = <— This year’s temperature [ T1 ] equals
T0 + <— Last year’s temperature [ T0 ] plus
λ ∆F1 (1-a) + <— How much radiative forcing is applied this year [ ∆F1 (1-a) ], times climate sensitivity lambda ( λ ), plus
∆T0 a <— Last year’s temperature change [ ∆T0 ] times the lag factor ( a ), which carries the remainder of earlier forcing forward over time
The lag factor “a” is a function of the time constant “tau” ( τ ), and is given by

a = exp( −1 / τ )
This factor “a” is just a constant number for a given calculation. For example, when the time constant “tau” is four years, the constant “a” is 0.78. Since 1 – a = 0.22, when tau is four years about 22% of the incoming forcing is added immediately to last year’s temperature, and the rest of the input pulse is expressed over time.
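For anyone who wants to check the arithmetic, here is the lag-factor calculation in Python (tau = 4 years is just the example from the text):

```python
import math

# The lag factor a = exp(-1/tau): (1 - a) of this year's forcing change
# lands immediately, and the remaining share a is spread over later years.
def lag_factor(tau):
    return math.exp(-1.0 / tau)

a = lag_factor(4.0)       # tau = 4 years -> a is about 0.78
immediate = 1.0 - a       # fraction applied in the current year, about 0.22
```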
Appendix B: Physical Meaning
So what does all of that mean in the real world? The equation merely reflects that when you apply heat to something big, it takes a while for it to come up to temperature. For example, suppose we have a big block of steel in a domestic oven at, say, 200°C. Suppose further that we suddenly turn the oven up to 400°C for an hour, and then turn it back down to 200°C. What happens to the temperature of the big block of steel?

If we plot temperature against time, we see that initially the block of steel heats fairly rapidly. However, as time goes on it heats less and less per unit of time, until eventually it reaches 400°C. Figure B2 shows this change of temperature with time, as simulated in my spreadsheet for a climate forcing of plus/minus one watt per square metre. Now, how big is the lag? Well, in part that depends on how big the block is. The larger the block, the longer the time lag will be. In the real planet, of course, the ocean plays the part of the block, soaking up heat and releasing it only slowly.
The basic idea of the one-line equation is the same tired claim of the modelers. This is the claim that the changing temperature of the surface of the planet is linearly dependent on the size of the change in the forcing. I happen to think that this is only generally the rule, and that the temperature is actually set by the exceptions to the rule. The exceptions to this rule are the emergent phenomena of the climate—thunderstorms, El Niño/La Niña effects and the like. But I digress, let’s follow their claim for the sake of argument and see what their models have to say. It turns out that the results of the climate models can be described to 99% accuracy by the setting of two parameters—”tau”, or the time constant, and “lambda”, or the climate sensitivity. Lambda can represent either transient sensitivity, called TCR for “transient climate response”, or equilibrium sensitivity, called ECS for “equilibrium climate sensitivity”.
Figure B2. One-line equation applied to a square-wave pulse of forcing. In this example, the sensitivity “lambda” is set to unity (output amplitude equals the input amplitude), and the time constant “tau” is set at five years.
Note that the lagging does not change the amount of energy in the forcing pulse. It merely lags it, so that it doesn’t appear until a later date.
So that is all the one-line equation is doing. It simply applies the given forcing, using the climate sensitivity to determine the amount of the temperature change, and using the time constant “tau” to determine the lag of the temperature change. That’s it. That’s all.
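As a sketch of that behaviour, here is the one-line equation in Python driven by a simple unit step in forcing. The lambda and tau values are chosen to match the Figure B2 example; this is an illustration, not the spreadsheet itself. The point to notice is that the forcing is not lost, only deferred: the temperature climbs toward lambda times the forcing.

```python
import math

# Step response of the one-line equation: lambda = 1, tau = 5 years,
# 1 W/m2 step in forcing applied at year 1.
tau, lam = 5.0, 1.0
a = math.exp(-1.0 / tau)

forcing = [0.0] + [1.0] * 100
T = [0.0] * len(forcing)
for k in range(1, len(forcing)):
    dF = forcing[k] - forcing[k - 1]
    dT_prev = T[k - 1] - (T[k - 2] if k >= 2 else 0.0)
    T[k] = T[k - 1] + lam * dF * (1 - a) + dT_prev * a

first_year = T[1]     # the immediate share, lam * (1 - a), about 0.18
final = T[-1]         # approaches the equilibrium value lam * F = 1.0
```

After the first year's share of about 18%, each later year expresses a fraction of what remains, so the response converges on the full lambda-times-forcing value.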
The difference between ECS (climate sensitivity) and TCR (transient response) is whether slow heating and cooling of the ocean is taken into account in the calculations. If the slow heating and cooling of the ocean is taken into account, then lambda is equilibrium climate sensitivity. If the ocean doesn’t enter into the calculations, if the forcing is only the radiative forcing, then lambda is transient climate response.
Appendix C. The Spreadsheet
In order to be able to easily compare the various forcings and responses, I made myself up an Excel spreadsheet. It has a couple drop-down lists that let me select from various forcing datasets and various response datasets. Then I use the built-in Excel function “Solver” to iteratively calculate the best combination of the two parameters, sensitivity and time constant, so that the result matches the response. This makes it quite simple to experiment with various combinations of forcing and responses. You can see the difference, for example, between the GISS E model with and without volcanoes. It also has a button which automatically stores the current set of results in a dataset which is slowly expanding as I do more experiments.
In a previous post called Retroactive Volcanoes (link), I discussed the fact that Otto et al. had smoothed the Forster forcings dataset using a centered three-point average. In addition, they had added a trend from the beginning to the end of the dataset of 0.3 W per square meter. In that post I had said that the effect of that was unknown, although it might be large. My new spreadsheet allows me to determine what the effect actually is.
It turns out that the effect of those two small changes is to take the indicated climate sensitivity from 2.8 degrees/doubling to 2.3° per doubling.
One of the strangest findings to come out of this spreadsheet was that when the climate models are compared each to their own results, the climate sensitivity is a simple linear function of the ratio of the trends of the forcing and the response. This was true of both the individual models and the average of the 19 models studied by Forster. The relationship is extremely simple. The climate sensitivity lambda is 1.07 times the trend ratio when the models are compared to their own results, and equal to the trend ratio when compared across all of the results. This is true for all of the models without adding in the ocean heat content data, and also for all of the models including the ocean heat content data.
In any case, I’m going to have to convert all this to the computer language R. Thanks to Stephen McIntyre, I learned R and have never regretted it. However, I still do much of my initial exploratory forays in Excel. I can make Excel do just about anything, so for quick and dirty analyses like the results above I use Excel.
So as an invitation to people to continue and expand this analysis, my spreadsheet is available here. Note that it contains a macro to record the data from a given analysis. At present it contains the following data sets:
IMPULSES
Pinatubo in 1900
Step Change
Pulse
FORCINGS
Forster No Volcano
Forster N/V-Ocean
Otto Forcing
Otto-Ocean ∆
Levitus watts Ocean Heat Content ∆
GISS Forcing
GISS-Ocean ∆
Forster Forcing
Forster-Ocean ∆
DVIS
CM2.1 Forcing
CM2.1-Ocean ∆
GISS No Volcano
GISS GHGs
GISS Ozone
GISS Strat_H20
GISS Solar
GISS Landuse
GISS Snow Albedo
GISS Volcano
GISS Black Carb
GISS Refl Aer
GISS Aer Indir Eff
RESPONSES
CCSM3 Model Temp
CM2.1 Model Temp
GISSE ModelE Temp
BEST Temp
Forster Model Temps
Forster Model Temps No Volc
Flat
GISS Temp
HadCRUT4
You can insert your own data as well, or make up combinations of any of the forcings. I’ve included a variety of forcings and responses. This one-line equation model has forcing datasets, subsets of those (such as volcanoes only or aerosols only), and simple impulses such as a square step.
Now, while this spreadsheet is by no means user-friendly, I’ve tried to make it at least not user-aggressive.
Appendix D: The Mathematical Derivation of the Relationship between Climate Sensitivity and the Trend Ratio.
I have stated that the climate sensitivity is equal to the ratio between the trends of the forcing and response datasets. Here is the derivation of that relationship.
We start with the one-line equation:

T1 = T0 + λ ∆F1 (1 – a) + ∆T0 a
Let us consider the situation of a linear trend in the forcing, where the forcing is ramped up by a certain amount every year. Here are lagged results from that kind of forcing.
Figure B1. A steady increase in forcing over time (red line), along with the situation with the time constant (tau) equal to zero, and also a time constant of 20 years. The residual is offset -0.6 degrees for clarity.
Note that the only difference that tau (the lag time constant) makes is how long it takes to come to equilibrium. After that the results stabilize, with the same change each year in both the forcing and the temperature (∆F and ∆T). So let’s consider that equilibrium situation.
Subtracting T0 from both sides gives

T1 – T0 = λ ∆F1 (1 – a) + ∆T0 a     (Equation 2)
Now, T1 minus T0 is simply ∆T1. But since at equilibrium all the annual temperature changes are the same, ∆T1 = ∆T0 = ∆T, and the same is true for the forcing. So equation 2 simplifies to

∆T = λ ∆F (1 – a) + ∆T a
Dividing by ∆F gives us

∆T / ∆F = λ (1 – a) + (∆T / ∆F) a
Collecting terms, we get

(∆T / ∆F) (1 – a) = λ (1 – a)
And dividing through by (1 – a) yields

λ = ∆T / ∆F
Now, out in the equilibrium area on the right side of Figure B1, ∆T/∆F is the actual trend ratio. So we have shown that at equilibrium

λ = ∆T / ∆F = the trend ratio
But if we include the entire dataset, you’ll see from Figure B1 that the measured trend will be slightly less than the trend at equilibrium.
And as a result, we would expect to find that lambda is slightly larger than the actual trend ratio. And indeed, this is what we found for the models when compared to their own results, lambda = 1.07 times the trend ratio.
When the forcings are run against real datasets, however, it appears that the greater variability of the actual temperature datasets averages out the small effect of tau on the results, and on average we end up with the situation shown in Figure 4 above, where lambda is experimentally determined to be equal to the trend ratio.
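For the curious, the Appendix D result is easy to check numerically. The sketch below (in Python, with made-up values tau = 20 years and lambda = 0.5) ramps the forcing, runs it through the one-line equation, and compares lambda with the whole-record trend ratio.

```python
import math

def trend(y):
    """Ordinary least-squares slope of y against 0, 1, 2, ..."""
    n = len(y)
    xbar, ybar = (n - 1) / 2.0, sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Ramped forcing driven through the one-line equation.
tau, lam = 20.0, 0.5
a = math.exp(-1.0 / tau)
F = [0.02 * y for y in range(200)]     # linear ramp in forcing, W/m2
T = [0.0] * len(F)
for k in range(1, len(F)):
    dT_prev = T[k - 1] - (T[k - 2] if k >= 2 else 0.0)
    T[k] = T[k - 1] + lam * (F[k] - F[k - 1]) * (1 - a) + dT_prev * a

ratio = trend(T) / trend(F)            # comes out a bit under lam = 0.5
```

With these illustrative values lambda comes out a few percent above the whole-record trend ratio, because of the spin-up period set by tau, which is the same direction as the 1.07 factor found above for the models.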
Appendix E: The Underlying Math
The best explanation of the derivation of the math used in the spreadsheet is an appendix to Paul_K’s post here. Paul has contributed hugely to my analysis by correcting my mistakes as I revealed them, and has my great thanks.
Climate Modeling – Abstracting the Input Signal by Paul_K
I will start with the (linear) feedback equation applied to a single capacity system—essentially the mixed layer plus fast-connected capacity:
C dT/dt = F(t) – λ *T Equ. A1
Where:-
C is the heat capacity of the mixed layer plus fast-connected capacity (Watt-years.m-2.degK-1)
T is the change in temperature from time zero (degrees K)
T(k) is the change in temperature from time zero to the end of the kth year
t is time (years)
F(t) is the cumulative radiative and non-radiative flux “forcing” applied to the single capacity system (Watts.m-2)
λ is the first order feedback parameter (Watts.m-2.deg K-1)
We can solve Equ A1 using superposition. I am going to use timesteps of one year.
Let the forcing increment applicable to the jth year be defined as fj. We can therefore write
F(t=k ) = Fk = Σ fj for j = 1 to k Equ. A2
The temperature contribution from the forcing increment fj at the end of the kth
year is given by
ΔTj(t=k) = fj(1 – exp(-(k+1-j)/τ))/λ Equ.A3
where τ is set equal to C/λ .
By superposition, the total temperature change at time t=k is given by the summation of all such forcing increments. Thus
T(t=k) = Σ fj * (1 – exp(-(k+1-j)/τ))/ λ for j = 1 to k Equ.A4
Similarly, the total temperature change at time t= k-1 is given by
T(t=k-1) = Σ fj (1 – exp(-(k-j)/τ))/ λ for j = 1 to k-1 Equ.A5
Subtracting Equ. A5 from Equ. A4 we obtain:
T(k) – T(k-1) = fk*[1-exp(-1/τ)]/λ + ( [1 – exp(-1/τ)]/λ ) (Σfj*exp(-(k-j)/τ) for j = 1 to k-1) …Equ.A6
We note from Equ.A5 that
(Σfj*exp(-(k-j)/τ)/λ for j = 1 to k-1) = ( Σ(fj/λ ) for j = 1 to k-1) – T(k-1)
Making this substitution, Equ.A6 then becomes:
T(k) – T(k-1) = fk*[1-exp(-1/τ)]/λ + [1 – exp(-1/τ)]*[( Σ(fj/λ ) for j = 1 to k-1) – T(k-1)] …Equ.A7
If we now set α = 1-exp(-1/τ) and make use of Equ.A2, we can rewrite Equ A7 in the following simple form:
T(k) – T(k-1) = Fkα /λ – α * T(k-1) Equ.A8
Equ.A8 can be used for prediction of temperature from a known cumulative forcing series, or can be readily used to determine the cumulative forcing series from a known temperature dataset. From the cumulative forcing series, it is a trivial step to abstract the annual incremental forcing data by difference.
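As a check on Equ. A8, here is a short Python sketch that runs the recursion forward from a made-up cumulative forcing series and then inverts it to recover the forcing. The alpha and lambda values here are illustrative, not the GISS-ER-conditioned values.

```python
# Equ. A8 run in both directions: forward (cumulative forcing -> temperature)
# and inverse (temperature -> cumulative forcing).
alpha, lam = 0.28, 2.95    # illustrative parameter values

def forward(F):
    """T(k) - T(k-1) = F(k)*alpha/lam - alpha*T(k-1)   (Equ. A8)"""
    T = [0.0]
    for k in range(1, len(F)):
        T.append(T[-1] + F[k] * alpha / lam - alpha * T[-1])
    return T

def inverse(T):
    """Solve A8 for the forcing: F(k) = (T(k) - (1 - alpha)*T(k-1)) * lam / alpha"""
    F = [0.0]
    for k in range(1, len(T)):
        F.append((T[k] - (1 - alpha) * T[k - 1]) * lam / alpha)
    return F

F = [0.03 * k for k in range(100)]     # a made-up cumulative forcing series
T = forward(F)
F_back = inverse(T)                    # recovers F(k) for every k >= 1
```

The round trip is exact because A8 is linear in F(k), which is why the cumulative forcing can be abstracted from a known temperature dataset so easily.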
For the values of α and λ, I am going to use values which are conditioned to the same response sensitivity of temperature to flux changes as the GISS-ER Global Circulation Model (GCM).
These values are:-
α = 0.279563
λ = 2.94775
Shown below is a plot confirming that Equ. A8 with these values of alpha and lambda can reproduce the GISS-ER model results with good accuracy. The correlation is >0.99.

This same governing equation has been applied to at least two other GCMs (CCSM3 and GFDL) and, with similar parameter values, works equally well to emulate those model results. While changing the parameter values modifies slightly the values of the fluxes calculated from temperature, it does not significantly change the structural form of the input signal, nor can it change the primary conclusion of this article, which is that the AGW signal cannot be reliably extracted from the temperature series.
Equally, substituting a more generalised non-linear form for Equ A1 does not change the results at all, provided that the parameters chosen for the non-linear form are selected to show the same sensitivity over the actual observed temperature range. (See here for proof.)





Nick,
You seem like a nice guy, and I appreciate your insights. I also agree with your statement to stay very close to the physics. So looking now at the GISS forcing data page – http://data.giss.nasa.gov/modelforce/
– It looks like stratospheric aerosols are the candidate for fine tuning. Some of the references to the data sources used are themselves the result of other modeling exercises. Volcanic eruptions which apparently decay fast do affect climate over longer periods due to the tau (15 year) relaxation time. Willis’s argument for a climate rebound after volcanoes works only for low values of tau (~2.8 y).
– Likewise as far as I can see – the increasing negative offset from tropospheric aerosols is the result of more modeling exercises rather than using direct measurements.
– Finally I don’t understand why the “well mixed” greenhouse gases takes a downturn after 1990. CO2 emissions per year have actually increased since then.
Nick Stokes says:
June 4, 2013 at 1:06 am
Clive,
“Since the models have tuned F so as to correctly reproduce past temperatures”
I don’t believe they have, but I also don’t think that’s relevant.
=============
They have and it is why their past predictions have gone off the rails. It is why the model estimates of ECS are now falling. Something that would be impossible if the models were actually predicting the future. They aren’t. They are predicting what the model builders believe the future will be. If they weren’t, the models builders would think the models were in error and change them.
Nick Stokes says:
on June 4, 2013 at 3:27 am
Nick, that reminds me of an old dissident Soviet joke:
Clive Best:
“Volcanic eruptions which apparently decay fast do affect climate over longer periods due to the tau (15 year) relaxation time. Willis’s argument for a climate rebound after volcanoes works only for low values of tau (~ 2.8y)”
http://wattsupwiththat.com/2013/06/03/climate-sensitivity-deconstructed/#comment-1325354
This is not “Willis’ argument” it’s the data’s argument. In the face of the evidence (which maybe you missed if you have not read the thread) the idea of a 15 year relaxation time needs to be reassessed. Where did you find 15 years? You state it like a fact.
“- Finally I don’t understand why the “well mixed” greenhouse gases takes a downturn after 1990. CO2 emissions per year have actually increased since then.”
Then maybe you have been misinformed about what causes changes in atmospheric CO2 !
http://climategrog.wordpress.com/?attachment_id=233
Greg Goodman writes “Where did you find 15 years? You state it like a fact.”
I got the 15 years by fitting an old GISS model response to a sudden doubling of CO2 – see: http://clivebest.com/blog/?p=3729
Then taking tau=15 years and using the digitized average CMIP5 forcings from Gregory et al. I get the temperature response very similar to CMIP5 models for ECS = 2.5C. see : http://clivebest.com/blog/?p=4923
The longer the stalling of temperatures remains the lower ECS will fall. CO2 forcing alone suggests ECS ~ 2.0C
I agree that CO2 must depend on SST according to Henry’s law. Warm beer goes flat faster than cold beer.
I also have an intuition that the current “natural” value for CO2 in the atmosphere of ~ 300ppm is not a coincidence. Why is it not say 5000ppm ?
I once made a simple model of the greenhouse effect and discovered that the peak for atmospheric OLR occurs for ~ 300ppm which just happens to be that found on Earth naturally. Can this really be a coincidence ? It is almost as if convection and evaporation act to generate a lapse rate which maximizes radiative cooling of the atmosphere by CO2 to space. If this conjecture is true in general, then any surface warming due to a doubling of CO2 levels would be offset somewhat by a change in the average environmental lapse rate to restore the radiation losses in the main CO2 band. In this case the surface temperature would hardly change.
see: http://clivebest.com/blog/?p=4475
and also: http://clivebest.com/blog/?p=4597
clivebest:
Your remarks assume the existence of the equilibrium climate sensitivity (ECS). However, it is easy to show that, as a scientific concept, ECS does not exist.
By the definition of terms, ECS is the ratio of the change in the equilibrium temperature to the change in the logarithm of the CO2 concentration. As the equilibrium temperature is not an observable, when it is asserted that ECS has a particular numerical value, this assertion is insusceptible to being tested.
TerryOldberg writes: “Your remarks assume the existence of the equilibrium climate sensitivity (ECS). However, it is easy to show that, as a scientific concept, ECS does not exist.”

I kind of agree with you. Climate sensitivity only makes sense on the differential level: it is the temperature response to an increment in forcing, which in the case of no “feedbacks” follows directly from the Stefan-Boltzmann law.
Confusingly however the term “Climate Sensitivity” is usually defined as the change in temperature after a doubling of CO2. This means that the assumed “cause” is built into the definition and linear calculus approximations are no longer valid. Perhaps climate sensitivity to CO2 forcing behaves more like quark confinement in the nucleon. The more you kick it the stronger the restoring force (negative feedback). That would mean negative feedbacks such as clouds start small but increase strongly with forcing. How else could the oceans have survived the last 4 billion years ?
Unfortunately ECS has been promoted by the “team” as the “bugle call” to action for the world’s political elite. Therefore we have to work with that in the short term.
clivebest:
Thanks for taking the time to respond. In AR4, IPCC Working Group 1 uses “climate sensitivity” and “equilibrium climate sensitivity” as synonyms. In each case, the quantity being referenced is the change in the equilibrium temperature per unit change in the logarithm to the base 2 of the CO2 concentration. The unit of measure is Celsius degrees per doubling of the CO2 concentration but the concept applies to concentration increases that are not doublings.
In an earlier message to you, I pointed out that the climate sensitivity does not exist as a scientific concept due to the non-observability of the equilibrium temperature. The non-observability has another consequence that is not often appreciated. This is that when the IPCC provides a policy maker with the magnitude which it estimates for the climate sensitivity it provides this policy maker with no information about the outcomes from his or her policy decisions; this conclusion follows from the definition of the “mutual information” as the measure of a relationship among observables. In view of the lack of mutual information between the increase in the logarithm of the CO2 concentration and the increase in the equilibrium temperature, to have the IPCC’s estimate of the magnitude is useless for the purpose of making policy. However, the IPCC has led policy makers to believe it is useful for this purpose.
clivebest says:
June 4, 2013 at 6:21 am
– Likewise as far as I can see – the increasing negative offset from tropospheric aerosols is the result of more modeling exercises rather than using direct measurements.
========
because without increased negative offsets one cannot account for the current stall in temperatures in the face of increased human emissions of CO2 and high estimates of CS.
So, rather than re-examine the high estimates of CS, which are mandatory if we are to believe CO2 is a danger, the only option is to assume that aerosols have a much bigger negative effect than was previously assumed.
The problem is that none of the models are attempting to solve for CS. They are attempting to solve for temperature, given a value of CS. The other parameters such as aerosols are used to train the hind-cast, with no attempt to validate the models using hidden data or similar methods. It is a gigantic curve fitting exercise. A pig wearing diamonds and a designer gown, all paid for by the taxpayers.
Colorado Wellington says:
…. that reminds me of an old dissident Soviet joke:
The future is inevitable and certain; it is only the past that is unpredictable.
… and that reminds me of a climate joke. Oh, hang on, I don’t think it was intended to be a joke.
Pretty much sums the last 20 years of mainstream climatology.
PS ~Clive Best http://climategrog.wordpress.com/?attachment_id=223
Willis Eschenbach: Figure B1 is just a theoretical situation I showed to clarify the math, nothing to do with the models directly other than it uses the one-line equation.
It did clarify the math. What it has to do with the models (directly or indirectly?) is that it is part of your model of the models, and your model fits the other models well.
Willis Eschenbach: Everyone’s suddenly a genius now, after the fact?
It is neither expected nor is it intuitively obvious.
True, it is not expected. Points for you on that. However, it is intuitively obvious to everyone who has studied calculus, once you clarified what exactly your assumptions were. “Equilibrium” was incorrect; “steady state” was incorrect; but linearly increasing (in time) F and T was correct, and with dF/dt and dT/dt assumed constant, the rest was intuitively obvious.
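That post-hoc intuition is easy to check numerically. A minimal sketch, assuming the one-line lagged linear response discussed in this thread, with purely illustrative parameter values (not taken from any GCM): drive it with a linearly increasing forcing, and the trend ratio of output to input recovers the sensitivity.

```python
import numpy as np

lam, tau = 0.5, 15.0            # illustrative sensitivity (K per W/m^2) and lag (years)
years = np.arange(200.0)
F = 0.04 * years                # linearly increasing forcing: dF/dt is constant

# One-line lagged response: T relaxes toward lam*F with time constant tau.
T = np.zeros_like(F)
for i in range(1, len(F)):
    T[i] = T[i - 1] + (lam * F[i] - T[i - 1]) / tau

# After the start-up transient dies away, trend(T)/trend(F) recovers lam:
trend_T = np.polyfit(years[100:], T[100:], 1)[0]
trend_F = np.polyfit(years[100:], F[100:], 1)[0]
print(round(trend_T / trend_F, 3))   # 0.5
```

This is Marler's point in executable form: with dF/dt and dT/dt both constant, the lag only shifts the output in time, so the trends stay in the ratio lam.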
clivebest says:
Greg Goodman writes “Where did you find 15 years? You state it like a fact.”
I got the 15 years by fitting an old GISS model response to a sudden doubling of CO2 – see: http://clivebest.com/blog/?p=3729
===
So what you found and blandly stated as though it were fact was a time constant from “an old GISS model”. Thanks for making that clear.
How you go on from that to explain that climate is controlled by CO2 rather than by water and water vapour leaves me in amazement.
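As an aside, the kind of fit Clive describes can be sketched as follows: generate a step-doubling response with a known time constant, then recover it the way one would from a model run. The 15-year value comes from the comment above; the 3 °C equilibrium response is purely illustrative.

```python
import numpy as np

tau, dT_eq = 15.0, 3.0          # lag (years) and an illustrative equilibrium warming
t = np.arange(100.0)

# Response of a lagged linear model to a sudden doubling of CO2:
T = dT_eq * (1.0 - np.exp(-t / tau))

# Recover tau by fitting the log of the remaining disequilibrium:
slope = np.polyfit(t[:60], np.log(1.0 - T[:60] / dT_eq), 1)[0]
tau_fit = -1.0 / slope
print(round(tau_fit, 1))        # 15.0
```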
“I agree that CO2 must depend on SST according to Henry’s law. ”
No, this is Henry’s law. We see it in action post-2000, when the long-term trend in temp is flat.
http://climategrog.wordpress.com/?attachment_id=259
Now that I’ve pointed out how CO2 changes with both temperature and air pressure in the real climate, perhaps you can come up with a novel explanation or criticism of the true climate reaction to volcanism.
http://climategrog.wordpress.com/?attachment_id=286
http://climategrog.wordpress.com/?attachment_id=285
http://climategrog.wordpress.com/?attachment_id=278
Not many takers on that one yet, apart from Nick not being “enthusiastic” about that kind of plot because he “believes” it does something that it does not.
I’d expected a vigorous response to something so fundamentally important.
oops, forgot the too many links trip wire.
Matthew R Marler says:
June 4, 2013 at 10:19 am
Matt, you are about the fifth person to make this claim.
If you think my results are so obvious, then assuredly you can point out several other people who have demonstrated both experimentally and theoretically that what the modelers call “climate sensitivity” is nothing more than the trend ratio of the input and output datasets.
And if you can’t demonstrate that, then why are you trying to bust me?
Roy Spencer made the same claim, that my results were nothing new, and I made the same invitation to him, saying:
Roy did not come with a damn thing, which saddened me greatly, as he is one of my heroes. Then Mosher took up the same BS, and I made the same invitation to him, saying:
He has not replied to this point either. Then jorgekafkazar tried the same nonsense, and I replied, saying:
Then KR tried the same cr*p, and I replied:
Now you want to start up with the same claim?
Wonderful.
I make you the same invitation I made to the others. If it’s so damn obvious that the climate sensitivities displayed by the models are nothing but the trend ratio of input and output, please provide me with someone making that claim in the past (and preferably supporting the claim both experimentally and mathematically, as I have done). Kiehl tried, but I guess it wasn’t so dang obvious to him, because he came up with the wrong answer … where were you? You could have pointed out the “obvious” to him, and his paper wouldn’t have been incorrect …
w.
Nick Stokes says:
Clive,
How do we know they are bad at predicting the future? Have you been there?
Nick, you make it too easy. Verifying whether models can predict is called hindcasting.
Why not just admit you’re on the wrong track? Would it kill you to admit that Willis is right?
Clive Best: “The more you kick it the stronger the restoring force (negative feedback). That would mean negative feedbacks such as clouds start small but increase strongly with forcing. How else could the oceans have survived the last 4 billion years ?”
Yes, a strongly non-linear negative feedback is what is needed to explain the plots I posted.
I pointed out to Willis some time ago that the tropical storm is a negative feedback with internal positive feedback making it strong and non-linear. In view of the cumulative integral plots, I think it is clear that it is an even more powerful control than a “governor” in that, at least in the tropics, it is restoring the degree-day sum as well.
That makes it more like a PID controller, as “onlyme” pointed out recently. I think that description merits further development.
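For readers who haven't met the analogy, a minimal discrete PID controller looks like this; the gains, setpoint, and toy first-order plant are illustrative, and nothing here is fitted to climate data.

```python
# Minimal discrete PID controller (proportional-integral-derivative).
def make_pid(kp, ki, kd, setpoint, dt=1.0):
    state = {"integral": 0.0, "prev_err": 0.0}
    def step(measured):
        err = setpoint - measured
        state["integral"] += err * dt               # I: accumulates past error
        deriv = (err - state["prev_err"]) / dt      # D: reacts to rate of change
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv
    return step

# Drive a toy first-order system toward the setpoint:
pid = make_pid(kp=0.8, ki=0.1, kd=0.05, setpoint=25.0)
temp = 20.0
for _ in range(200):
    temp += 0.1 * pid(temp)
print(round(temp, 2))           # settles near 25.0
```

The P term alone leaves a steady offset; the I term removes it, which is why the "restoring the degree-day sum" observation above points beyond a simple proportional governor.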
Greg Goodman says:
June 4, 2013 at 11:25 am
I don’t know if you saw this, but here’s the evidence in the surface station record that it is regulated.
Night time cooling: basically, what I found is that there’s no loss of cooling in the temperature record, even though CO2 has almost doubled.
This implies that the overnight cooling rate (over land) has not changed in over 60 years. At night solar radiation is zero and CO2 levels are constant. Only H2O can maintain a constant cooling rate. So the long-term change in water vapour content of the upper atmosphere is crucial to understanding what is meant by “climate sensitivity”.
PID controllers are under some criteria optimum controllers, frequently used in industrial process control.
Like many things we invent, it looks like Mother Nature got there first.
Willis Eschenbach says:
June 4, 2013 at 11:01 am
I did, but you have it backwards: CS drives the models’ output. They made the models respond to CO2 because they believe it’s the “control knob”. Not to make light of all of your work, but you “reverse engineered” this relationship.
My statement:
I have read somewhere (and I can’t find it now, it was years ago) that GCMs didn’t create rising temps while CO2 went up, and the modelers didn’t know why. They then linked CO2 to water vapor, either directly to temp or with a Climate Sensitivity factor. I’m trying to find this “proof”.
In the meantime, you can go to EdGCM.edu for your own GCM, or http://www.giss.nasa.gov/tools/modelE/ (there’s a link about 3/4 of the way down for the Model E1 source code). You can also probably get Model I & II at the same tools link. If you really want to understand what they’re doing, the code is available for review or even to run.
MiCro says:
June 4, 2013 at 12:12 pm
Since on that page you only mention climate sensitivity once in passing, and you don’t mention either the input datasets or the outputs datasets of the climate models … no, you didn’t.
w.
clivebest says:
June 4, 2013 at 12:39 pm
I was a little sloppy with what I wrote: nightly cooling matches daytime warming, with some years showing slightly larger daily warming and others slightly larger cooling; taken in its entirety, cooling is slightly larger than warming.
But yes, if CO2 isn’t regulating temperature, water vapor must be. Surface data makes this clear (at least to me).
Willis Eschenbach says:
June 4, 2013 at 11:01 am
“I make you the same invitation I made to the others. If it’s so damn obvious that the climate sensitivities displayed by the models are nothing but the trend ratio of input and output, please provide me with someone making that claim in the past (and preferably supporting the claim both experimentally and mathematically as I have done).”
There is a confusion that is common to those who are criticizing Willis’ claim that his result is important. The confusion is between “the result” and “the fact that the result can be deduced from the model formalism.”
The confusion is the basis of critics’ claim that Willis’ result is obvious. Willis’ “result” is not just that the climate sensitivities displayed by the models are nothing but the trend ratio of input and output but includes the fact that it can be deduced mathematically from the formalism that is the model. The fact of deduction is part of Willis’ result.
By contrast, the claim that “the climate sensitivities displayed by the models are nothing but the trend ratio of input and output” is an ideal standard that models are evaluated against. The fact that the claim is a standard is what makes it seem obvious.
Now we must put together two facts, the fact of the standard and the fact of the deduction. What we get is the fact that the standard can be deduced from the model formalism. Such a deduction shows that the standard is embedded in the model formalism and, thereby, that the model is a circular argument to that standard.
What should happen in the real world is that the model formalism should yield an equation whose instantiations approximate the standard. That equation must contain the term “climate sensitivity” and the terms that are scientifically necessary to give a meaning to “climate sensitivity.” Presumably, those additional terms would include a term for “water vapor forcing/feedback,” a term for “cloud forcing/feedback,” and so on for whatever ineliminable terms are found in climate theory. In Trenberth’s case, there will be a term for “deep ocean sequester.” But climate science has offered us no such equation and we are left asking what role climate theory has to play in climate computer models.
Willis has answered our question. If the ideal standard can be deduced (this is the key word) from the model formalism then the ideal standard is found in the model formalism. The model formalism and the models amount to one grand circular argument.
Willis Eschenbach: If you think my results are so obvious, then assuredly you can point out several other people who have demonstrated both experimentally and theoretically that what the modelers call “climate sensitivity” is nothing more than the trend ratio of the input and output datasets.
And if you can’t demonstrate that, then why are you trying to bust me?
Bust you? don’t be absurd, thin-skinned and all that. I have at least 3 times written that you have discovered something interesting. It is “intuitively obvious” post-hoc, like the relativity of motion, the chain rule of differentiation, or Newton’s 3 laws of motion — but only to people who have studied, in this case people who have studied calculus. Your result does depend on the counterfactual assumption that dF/dt and dT/dt are both constant, which you misattributed to equilibrium and then steady-state, before stating it as a bald assumption compatible with what you have found.
In this post you are batting 1 for 2, so to speak.
Theo Goodwin says:
June 4, 2013 at 1:09 pm
This is a modeling issue: is the modeler modeling the system in question, or how he/she thinks the system behaves? Only by comparing model results to actual results can you tell. In electronics, which is where my modeling experience comes from, you can drag a real thing into a lab and test it. You can even test things outside a lab if you can isolate their inputs. Climatologists can’t do this, and have to rely on statistics to compare two non-deterministic systems: a model vs. Earth’s climate.
Earth is still poorly sampled spatially, and models still can’t simulate accurate results, so they average parameters to get some kind of result that matches.
I don’t have an issue with this as a scientific endeavor; I do have an issue when it’s used for policy.
Willis at 11:51 pm on 6/03 says:
” I doubt if the ‘more accurate models’ (whatever that may be) would be any different”
Willis, thanks for the response. I went in search of the spaghetti graphs I had remembered; I found an example at realclimate/2008/05/what-the-ipcc-models-really-say.
Apparently I was using the wrong terminology (never happened to me before). The individual runs of a given model are referred to as “simulations”, a term which seems to be interchangeable with “individual realization” of the model. A number of simulations are run, and the ensemble of simulations is averaged to produce the mean for the model.
Interestingly, though most of the 55 shown simulations project T increasing over time, a few show flat or falling Ts, closer to what has been observed. Now I don’t know how this variability among simulations is generated; perhaps they merely insert random variations of the forcings.
My original question was whether there is some fundamental difference between the small number of more accurate simulations and the large number of inaccurate simulations. Are there any systematic differences between them? Does anyone out there know how the variation among simulations is generated? Nick Stokes? Anyone?
And would these differences among simulations in any way affect the results which Willis has found?
MiCro says:
June 4, 2013 at 1:29 pm
Anyone who can contribute substantially to the creation of a professional grade model is going to be highly concerned by the number of terms in the model. The number of terms has a great impact on what must be done to solve the model and to do so as efficiently as possible. My point is that professionals are highly aware of the number of terms that they must use. It is a matter of first importance to them.
A model that reduces to three terms is a non-starter. By “reduces,” I mean that it can be shown deductively that input and output are related through one term. No honest person would agree to create such a model.
Greg Goodman says:
June 4, 2013 at 12:16 am
Nick Stokes says:
June 3, 2013 at 9:40 pm
No, these statements do not appear anywhere. Forcings of course are supplied. But feedbacks and sensitivity are our mental constructs to understand the results. The computer does not need them. It just balances forces and fluxes, conserves mass and momentum etc.
===
“Nick, that would be true if ALL the inputs were known and measured and the only thing in the models was basic physics laws. In reality neither is true. There are quantities, like cloud amount, that are “parametrised” (aka guesstimated). What should be an output becomes an input, and a fairly flexible and subjective one.
From your comments I think you know enough about climate models to realise this, so don’t try to snow us all with the idea that this is all known physical relationships of the “resistors and capacitors” of climate and the feedbacks naturally pop out free of any influence from the modellers, their biases and expectations.”
Greg, good answer. Nick, I have to agree with Greg that your response might not be worth a reply. Something to remember.
Theo Goodwin says:
June 4, 2013 at 2:09 pm
And GCMs have more than three terms, but we’re also comparing the values for the entire surface of the Earth averaged to a single value; all of the effects of those terms are compressed to a single value.
Here’s an entry-level GCM model doc.