Climate Sensitivity Deconstructed

Guest Post by Willis Eschenbach

I haven’t commented much on my most recent posts, for the usual reasons: a day job, and the unending lure of doing more research, my true passion. To be precise, recently I’ve been frying my synapses trying to twist my head around the implications of the finding that the global temperature forecasts of the climate models are mechanically and accurately predictable by a one-line equation. It’s a salutary warning: kids, don’t try climate science at home.

Figure 1. What happens when I twist my head too hard around climate models.

Three years ago, inspired by Lucia Liljegren’s ultra-simple climate model that she called “Lumpy”, and with the indispensable assistance of the math-fu of commenters Paul_K and Joe Born, I made what to me was a very surprising discovery. The GISSE climate model could be accurately replicated by a one-line equation. In other words, the global temperature output of the GISSE model is described almost exactly by a lagged linear transformation of the input to the models (the “forcings” in climatespeak, from the sun, volcanoes, CO2 and the like).  The correlation between the actual GISSE model results and my emulation of those results is 0.98 … doesn’t get much better than that. Well, actually, you can do better than that, I found you can get 99+% correlation by noting that they’ve somehow decreased the effects of forcing due to volcanoes. But either way, it was to me a very surprising result. I never guessed that the output of the incredibly complex climate models would follow their inputs that slavishly.

Since then, Isaac Held has replicated the result using a third model, the CM2.1 climate model. I have gotten the CM2.1 forcings and data, and replicated his results. The same analysis has also been done on the GFDL model, with the same outcome. And I did the same analysis on the Forster data, which is an average of 19 model forcings and temperature outputs. That makes four individual models plus the average of 19 climate models, and all of the results have been the same, so the surprising conclusion is inescapable—the climate model global average surface temperature results, individually or en masse, can be replicated with over 99% fidelity by a simple, one-line equation.

However, the result of my most recent “black box” type analysis of the climate models was even more surprising to me, and more far-reaching.

Here’s what happened. I built a spreadsheet, in order to make it simple to pull up various forcing and temperature datasets and calculate their properties. It uses “Solver” to iteratively select the values of tau (the time constant) and lambda (the sensitivity constant) to best fit the predicted outcome. After looking at a number of results, with widely varying sensitivities, I wondered what it was about the two datasets (model forcings, and model predicted temperatures) that determined the resulting sensitivity. I wondered if there were some simple relationship between the climate sensitivity, and the basic statistical properties of the two datasets (trends, standard deviations, ranges, and the like). I looked at the five forcing datasets that I have (GISSE, CCSM3, CM2.1, Forster, and Otto) along with the associated temperature results. To my total surprise, the correlation between the trend ratio (temperature dataset trend divided by forcing dataset trend) and the climate sensitivity (lambda) was 1.00. My jaw dropped. Perfect correlation? Say what? So I graphed the scatterplot.
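The fitting step can be sketched in a few lines of Python. This is a stand-in for the spreadsheet's Solver, run here on synthetic data rather than the actual model forcings; the function names and the crude grid search are my own illustration, not the spreadsheet's method:

```python
import math

def emulate(forcings, lam, tau):
    """One-line equation: T1 = T0 + lam * dF1 * (1 - a) + dT0 * a."""
    a = math.exp(-1.0 / tau)
    temps, dT = [0.0], 0.0
    for prev_F, F in zip(forcings, forcings[1:]):
        dT = lam * (F - prev_F) * (1 - a) + dT * a
        temps.append(temps[-1] + dT)
    return temps

def fit(forcings, target):
    """Crude stand-in for Excel's Solver: pick (lam, tau) minimizing squared error."""
    best = (None, None, float("inf"))
    for lam in (x / 10 for x in range(1, 31)):          # lambda from 0.1 to 3.0
        for tau in (x / 10 for x in range(5, 101, 5)):  # tau from 0.5 to 10 years
            err = sum((e - t) ** 2
                      for e, t in zip(emulate(forcings, lam, tau), target))
            if err < best[2]:
                best = (lam, tau, err)
    return best[:2]

# Sanity check: a ramp forcing pushed through known parameters is recovered.
forcings = [0.02 * year for year in range(120)]
target = emulate(forcings, 0.5, 4.0)
print(fit(forcings, target))   # -> (0.5, 4.0)
```

On real data a proper optimizer would replace the grid search, but the structure of the fit is the same: two free parameters driving one recursion.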

Figure 2. Scatterplot showing the relationship of lambda and the ratio of the output trend over the input trend. Forster is the Forster 19-model average. Otto is the Forster input data as modified by Otto, including the addition of a 0.3 W/m2 trend over the length of the dataset. Because this analysis only uses radiative forcings and not ocean forcings, lambda is the transient climate response (TCR). If the data included ocean forcings, lambda would be the equilibrium climate sensitivity (ECS). Lambda is in degrees per W/m2 of forcing. To convert to degrees per doubling of CO2, multiply lambda by 3.7.

Dang, you don’t see that kind of correlation very often, R^2 = 1.00 to two decimal places … works for me.

Let me repeat the caveat that this is not talking about real world temperatures. This is another “black box” comparison of the model inputs (presumably sort-of-real-world “forcings” from the sun and volcanoes and aerosols and black carbon and the rest) and the model results. I’m trying to understand what the models do, not how they do it.

Now, I don’t have the ocean forcing data that was used by the models. But I do have Levitus ocean heat content data since 1950, poor as it might be. So I added that to each of the forcing datasets, to make new datasets that do include ocean data. As you might imagine, when some of the recent forcing goes into heating the ocean, the trend of the forcing dataset drops … and as we would expect, the trend ratio (and thus the climate sensitivity) increases. This effect is most pronounced where the forcing dataset has a smaller trend (CM2.1) and less visible at the other end of the scale (CCSM3). Figure 3 shows the same five datasets as in Figure 2, plus the same five datasets with the ocean forcings added. Note that when the forcing dataset contains the heat into/out of the ocean, lambda is the equilibrium climate sensitivity (ECS), and when the dataset is just radiative forcing alone, lambda is transient climate response. So the blue dots in Figure 3 are ECS, and the red dots are TCR. The average change (ECS/TCR) is 1.25, which fits with the estimate given in the Otto paper of ~ 1.3.

Figure 3. Red dots show the models as in Figure 2. Blue dots show the same models, with the addition of the Levitus heat content data to each forcing dataset. Resulting sensitivities are higher for the equilibrium condition than for the transient condition, as would be expected. Blue dots show equilibrium climate sensitivity (ECS), while red dots (as in Fig. 2) show the corresponding transient climate response (TCR).

Finally, I ran the five different forcing datasets, with and without ocean forcing, against three actual temperature datasets—HadCRUT4, BEST, and GISS LOTI. I took the data from all of those, and here are the results from the analysis of those 29 individual runs:

Figure 4. Large red and blue dots are as in Figure 3. The light blue dots are the result of running the forcings and subsets of the forcings, with and without ocean forcing, and with and without volcano forcing, against actual temperature datasets. Error shown is one sigma.

So … my new finding is that the climate sensitivity of the models, both for individual models and on average, is equal to the ratio of the trends of the forcing and the resulting temperatures. This is true whether or not the changes in ocean heat content are included in the calculation. It is true for forcings run against model temperature results, as well as for forcings run against actual temperature datasets. It is also true for subsets of the forcing, such as volcanoes alone, or greenhouse gases alone.

And I did not just find this relationship experimentally, by looking at the results of using the one-line equation on models and model results. I then found that I can derive this relationship mathematically from the one-line equation (see Appendix D for details).

This is a clear confirmation of an observation first made by Kiehl in 2007, when he suggested an inverse relationship between forcing and sensitivity.

The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy? Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available [here]) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work, and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

However, Kiehl ascribed the variation in sensitivity to a difference in total forcing, rather than to the trend ratio, and as a result his graph of the results is much more scattered.

Figure 5. Kiehl results, comparing climate sensitivity (ECS) and total forcing. Note that unlike Kiehl, my results cover both equilibrium climate sensitivity (ECS) and transient climate response (TCR).

Anyhow, there’s a bunch more I could write about this finding, but I gotta just get this off my head and get back to my day job. A final comment.

Since I began this investigation, the commenter Paul_K has written two outstanding posts on the subject over at Lucia’s marvelous blog, The Blackboard (Part 1, Part 2). In those posts, he proves mathematically that given what we know about the equation that replicates the climate models, we cannot … well, I’ll let him tell it in his own words:

The Question:  Can you or can you not estimate Equilibrium Climate Sensitivity (ECS) from  120 years of temperature and OHC data  (even) if the forcings are known?

The Answer is:  No.  You cannot.  Not unless other information is used to constrain the estimate.

An important corollary to this is: The fact that a GCM can match temperature and heat data tells us nothing about the validity of that GCM’s estimate of Equilibrium Climate Sensitivity.

Note that this is not an opinion of Paul_K’s. It is a mathematical result of the fact that even if we use a more complex “two-box” model, we can’t constrain the sensitivity estimates. This is a stunning and largely unappreciated conclusion. The essential problem is that for any given climate model, we have more unknowns than we have fundamental equations to constrain them.

CONCLUSIONS

Well, it was obvious from my earlier work that the models were useless for either hindcasting or forecasting the climate. They function indistinguishably from a simple one-line equation.

On top of that, Paul_K has shown that they can’t tell us anything about the sensitivity, because the equation itself is poorly constrained.

Finally, in this work I’ve shown that the climate sensitivity “lambda” that the models do exhibit, whether it represents equilibrium climate sensitivity (ECS) or transient climate response (TCR), is nothing but the ratio of the trends of the input and the output. The choice of forcings, models and datasets is quite immaterial. All the models give the same result for lambda, and that result is the ratio of the trends of the forcing and the response. This most recent finding completely explains the inability of the modelers to narrow the range of possible climate sensitivities despite thirty years of modeling.

You can draw your own conclusions from that, I’m sure …

My regards to all,

w.

Appendix A : The One-Line Equation

The equation that Paul_K, Isaac Held, and I have used to replicate the climate models is as follows:

T1 = T0 + λ ∆F1 (1 - a) + ∆T0 a                                Equation 1

Let me break this into four chunks, separated by the equals sign and the plus signs, and translate each chunk from math into English. Equation 1 means:

This year’s temperature (T1) is equal to

Last year’s temperature (T0) plus

Climate sensitivity (λ) times this year’s forcing change (∆F1) times (one minus the lag factor) (1-a) plus

Last year’s temperature change (∆T0) times the same lag factor (a)

Or to put it another way, it looks like this:

T1 =                      <—  This year’s temperature [ T1 ] equals

    T0 +                  <—  Last year’s temperature [ T0 ] plus

    λ  ∆F1  (1-a) +    <— How much radiative forcing is applied this year [ ∆F1 (1-a) ],  times climate sensitivity lambda ( λ ), plus

    ∆T0  a                 <— Last year’s temperature change [ ∆T0 ], times the lag factor “a”; this is the remainder of the forcing, lagged out over time

The lag factor “a” is a function of the time constant “tau” ( τ ), and is given by

a = exp( -1 / τ )                                Equation 1a

This factor “a” is just a constant number for a given calculation. For example, when the time constant “tau” is four years, the constant “a” is 0.78. Since 1 - a = 0.22, when tau is four years, about 22% of the incoming forcing is added immediately to last year’s temperature, and the rest of the input pulse is expressed over time.
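As a quick check of those numbers, here is the tau = 4 example in Python (a sketch; `lag_factor` is just my name for the exp(-1/tau) relationship given above):

```python
import math

def lag_factor(tau):
    """The lag factor a = exp(-1/tau)."""
    return math.exp(-1.0 / tau)

a = lag_factor(4.0)
print(round(a, 2))      # -> 0.78, the fraction of the response deferred to later years
print(round(1 - a, 2))  # -> 0.22, the fraction expressed immediately
```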

Appendix B: Physical Meaning

So what does all of that mean in the real world? The equation merely reflects that when you apply heat to something big, it takes a while for it to come up to temperature. For example, suppose we have a big block of steel in a domestic oven at say 200°C. Suppose further that we turn the oven heat up suddenly to 400°C for an hour, and then turn the oven back down to 200°C. What happens to the temperature of the big block of steel?

If we plot temperature against time, we see that initially the block of steel starts to heat fairly rapidly. However, as time goes on it heats less and less per unit of time; if the heat stayed on, it would eventually reach 400°C. Figure B2 shows this change of temperature with time, as simulated in my spreadsheet for a climate forcing of plus/minus one watt/square metre. Now, how big is the lag? Well, in part that depends on how big the block is. The larger the block, the longer the time lag will be. In the real planet, of course, the ocean plays the part of the block of steel, soaking up heat and delaying the temperature response.

The basic idea of the one-line equation is the same tired claim of the modelers: that the changing temperature of the surface of the planet is linearly dependent on the size of the change in the forcing. I happen to think that this is only generally the rule, and that the temperature is actually set by the exceptions to the rule. The exceptions are the emergent phenomena of the climate—thunderstorms, El Niño/La Niña effects and the like. But I digress; let’s follow their claim for the sake of argument and see what their models have to say. It turns out that the results of the climate models can be described to 99% accuracy by the setting of two parameters—”tau”, the time constant, and “lambda”, the climate sensitivity. Lambda can represent either transient sensitivity, called TCR for “transient climate response”, or equilibrium sensitivity, called ECS for “equilibrium climate sensitivity”.

Figure B2. One-line equation applied to a square-wave pulse of forcing. In this example, the sensitivity “lambda” is set to unity (output amplitude equals the input amplitude), and the time constant “tau” is set at five years.

Note that the lagging does not change the amount of energy in the forcing pulse. It merely lags it, so that it doesn’t appear until a later date.

So that is all the one-line equation is doing. It simply applies the given forcing, using the climate sensitivity to determine the amount of the temperature change, and using the time constant “tau” to determine the lag of the temperature change. That’s it. That’s all.
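Here is a minimal simulation of that behaviour for the square-wave pulse of Figure B2 (a sketch, with lambda = 1 and tau = 5 as in the figure; the forcing series is synthetic):

```python
import math

def one_line(forcings, lam=1.0, tau=5.0):
    """Apply the one-line equation to an annual forcing series."""
    a = math.exp(-1.0 / tau)
    T, dT, temps = 0.0, 0.0, []
    prev_F = forcings[0]
    for F in forcings:
        dT = lam * (F - prev_F) * (1 - a) + dT * a
        T += dT
        temps.append(T)
        prev_F = F
    return temps

# A +1 W/m2 square pulse lasting from year 10 through year 19 (cf. Figure B2)
forcing = [1.0 if 10 <= y < 20 else 0.0 for y in range(100)]
temps = one_line(forcing)
print(round(max(temps), 2))   # -> 0.86: the lagged peak, short of the full 1.0
print(round(temps[-1], 4))    # -> 0.0: afterwards the temperature relaxes back
```

The peak falls short of the full lambda times forcing value because part of the response is still in the pipeline, and once the pulse is over the temperature relaxes back to where it started: the lag defers the response but neither adds nor removes anything.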

The difference between ECS (climate sensitivity) and TCR (transient response) is whether slow heating and cooling of the ocean is taken into account in the calculations. If the slow heating and cooling of the ocean is taken into account, then lambda is equilibrium climate sensitivity. If  the ocean doesn’t enter into the calculations, if the forcing is only the radiative forcing, then lambda is transient climate response.

Appendix C. The Spreadsheet

In order to be able to easily compare the various forcings and responses, I made myself an Excel spreadsheet. It has a couple of drop-down lists that let me select from various forcing datasets and various response datasets. Then I use the built-in Excel function “Solver” to iteratively calculate the best combination of the two parameters, sensitivity and time constant, so that the result matches the response. This makes it quite simple to experiment with various combinations of forcings and responses. You can see the difference, for example, between the GISS E model with and without volcanoes. It also has a button which automatically stores the current set of results in a dataset which is slowly expanding as I do more experiments.

In a previous post called Retroactive Volcanoes, (link) I discussed the fact that Otto et al. had smoothed the Forster forcings dataset using a centered three-point average. In addition, they had added a trend from the beginning to the end of the dataset of 0.3 W per square metre. In that post I had said that the effect of that was unknown, although it might be large. My new spreadsheet allows me to determine what the effect actually is.

It turns out that the effect of those two small changes is to take the indicated climate sensitivity from 2.8 degrees/doubling to 2.3° per doubling.
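For reference, the two Otto modifications are easy to express in code (a sketch of the operations only, on made-up numbers; reproducing the 2.8 to 2.3 shift requires the actual Forster forcing data):

```python
def smooth3(series):
    """Centered three-point average, endpoints left unchanged."""
    out = list(series)
    for i in range(1, len(series) - 1):
        out[i] = (series[i - 1] + series[i] + series[i + 1]) / 3.0
    return out

def add_trend(series, total):
    """Add a linear trend summing to `total` over the length of the dataset."""
    n = len(series) - 1
    return [x + total * i / n for i, x in enumerate(series)]

# Made-up stand-in forcing values, in W/m2
forcing = [0.0, 0.3, 0.1, 0.4, 0.2, 0.5]
modified = add_trend(smooth3(forcing), 0.3)   # smooth, then add 0.3 W/m2 of trend
print(round(modified[-1], 2))                 # -> 0.8: last point gets the full trend
```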

One of the strangest findings to come out of this spreadsheet was that when the climate models are compared each to their own results, the climate sensitivity is a simple linear function of the ratio of the trends of the forcing and the response. This was true of both the individual models, and the average of the 19 models studied by Forster. The relationship is extremely simple. The climate sensitivity lambda is 1.07 times the trend ratio for the models alone, and equal to the trend ratio when compared against all the results. This is true for all of the models without adding in the ocean heat content data, and also for all of the models including the ocean heat content data.

In any case, I’m going to have to convert all this to the computer language R. Thanks to Stephen McIntyre, I learned R and have never regretted it. However, I still do much of my initial exploratory forays in Excel. I can make Excel do just about anything, so for quick and dirty analyses like the results above, I use Excel.

So as an invitation to people to continue and expand this analysis, my spreadsheet is available here. Note that it contains a macro to record the data from a given analysis. At present it contains the following data sets:

IMPULSES

Pinatubo in 1900

Step Change

Pulse

FORCINGS

Forster No Volcano

Forster N/V-Ocean

Otto Forcing

Otto-Ocean ∆

Levitus watts Ocean Heat Content ∆

GISS Forcing

GISS-Ocean ∆

Forster Forcing

Forster-Ocean ∆

DVIS

CM2.1 Forcing

CM2.1-Ocean ∆

GISS No Volcano

GISS GHGs

GISS Ozone

GISS Strat_H20

GISS Solar

GISS Landuse

GISS Snow Albedo

GISS Volcano

GISS Black Carb

GISS Refl Aer

GISS Aer Indir Eff

RESPONSES

CCSM3 Model Temp

CM2.1 Model Temp

GISSE ModelE Temp

BEST Temp

Forster Model Temps

Forster Model Temps No Volc

Flat

GISS Temp

HadCRUT4

You can insert your own data as well, or make up combinations of any of the forcings. I’ve included a variety of forcings and responses. This one-line equation model has forcing datasets, subsets of those such as volcanoes only or aerosols only, and simple impulses such as a square step.

Now, while this spreadsheet is by no means user-friendly, I’ve tried to make it at least not user-aggressive.

Appendix D: The Mathematical Derivation of the Relationship between Climate Sensitivity and the Trend Ratio.

I have stated that the climate sensitivity is equal to the ratio between the trends of the forcing and response datasets. Here is the derivation of that relationship.

We start with the one-line equation:

T1 = T0 + λ ∆F1 (1 - a) + ∆T0 a                                Equation 1

Let us consider the situation of a linear trend in the forcing, where the forcing is ramped up by a certain amount every year. Here are lagged results from that kind of forcing.


Figure B1. A steady increase in forcing over time (red line), along with the response for a time constant (tau) of zero, and for a time constant of 20 years. The residual is offset -0.6 degrees for clarity.

Note that the only difference that tau (the lag time constant) makes is how long it takes to come to equilibrium. After that the results stabilize, with the same change each year in both the forcing and the temperature (∆F and ∆T). So let’s consider that equilibrium situation.
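That equilibrium behaviour is easy to confirm numerically before doing the algebra (a sketch; lambda = 0.5, tau = 20, and the ramp increment are arbitrary illustrative choices):

```python
import math

lam, tau = 0.5, 20.0          # arbitrary illustrative values
a = math.exp(-1.0 / tau)

dF = 0.04                     # constant annual forcing increment (the ramp)
dT = 0.0
for year in range(500):       # long spin-up so the transient dies away
    dT = lam * dF * (1 - a) + dT * a

print(round(dT / dF, 6))      # -> 0.5: at equilibrium, the annual dT/dF equals lambda
```

Whatever tau is, the iteration converges to the same fixed point, where the annual temperature step divided by the annual forcing step is exactly lambda.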

Subtracting T0 from both sides gives

T1 - T0 = λ ∆F1 (1 - a) + ∆T0 a                                Equation 2

Now, T1 minus T0 is simply ∆T1. But since at equilibrium all the annual temperature changes are the same, ∆T1 = ∆T0 = ∆T, and the same is true for the forcing. So equation 2 simplifies to

∆T = λ ∆F (1 - a) + ∆T a                                Equation 3

Dividing by ∆F gives us

∆T / ∆F = λ (1 - a) + (∆T / ∆F) a                                Equation 4

Collecting terms, we get

(∆T / ∆F) (1 - a) = λ (1 - a)                                Equation 5

And dividing through by (1-a) yields

∆T / ∆F = λ                                Equation 6

Now, out in the equilibrium area on the right side of Figure B1, ∆T/∆F is the actual trend ratio. So we have shown that at equilibrium

λ = ∆T / ∆F = trend ratio                                Equation 7

But if we include the entire dataset, you’ll see from Figure B1 that the measured trend will be slightly less than the trend at equilibrium.

And as a result, we would expect to find that lambda is slightly larger than the actual trend ratio. And indeed, this is what we found for the models when compared to their own results, lambda = 1.07 times the trend ratio.

When the forcings are run against real datasets, however, it appears that the greater variability of the actual temperature datasets averages out the small effect of tau on the results, and on average we end up with the situation shown in Figure 4 above, where lambda is experimentally determined to be equal to the trend ratio.
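The full-dataset effect can also be checked numerically. This sketch fits nothing; it simply runs a known lambda through the one-line equation and compares it with the ordinary least-squares trend ratio over the whole record, spin-up included (the parameter values are arbitrary):

```python
import math

def emulate(forcings, lam, tau):
    """Run a forcing series through the one-line equation."""
    a = math.exp(-1.0 / tau)
    T, dT, out = 0.0, 0.0, []
    prev = forcings[0]
    for F in forcings:
        dT = lam * (F - prev) * (1 - a) + dT * a
        T += dT
        out.append(T)
        prev = F
    return out

def ols_slope(ys):
    """Ordinary least-squares trend of a series against its time index."""
    n = len(ys)
    xbar, ybar = (n - 1) / 2.0, sum(ys) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
    return num / sum((i - xbar) ** 2 for i in range(n))

lam, tau = 0.5, 20.0
forcing = [0.03 * y for y in range(120)]        # 120-year ramp
temps = emulate(forcing, lam, tau)
ratio = ols_slope(temps) / ols_slope(forcing)   # whole-record trend ratio
print(lam / ratio > 1.0)                        # -> True: lambda exceeds the ratio
```

Because the early, curved spin-up years drag the whole-record temperature trend down, lambda comes out somewhat above the whole-record trend ratio, consistent in direction with the 1.07 factor found for the models.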

Appendix E: The Underlying Math

The best explanation of the derivation of the math used in the spreadsheet is an appendix to Paul_K’s post here. Paul has contributed hugely to my analysis by correcting my mistakes as I revealed them, and has my great thanks.

Climate Modeling – Abstracting the Input Signal by Paul_K

I will start with the (linear) feedback equation applied to a single capacity system—essentially the mixed layer plus fast-connected capacity:

C dT/dt = F(t) – λ *T                                                            Equ.  A1

Where:

C is the heat capacity of the mixed layer plus fast-connected capacity (Watt-years.m-2.degK-1)

T is the change in temperature from time zero  (degrees K)

T(k) is the change in temperature from time zero to the end of the kth year

t is time  (years)

F(t) is the cumulative radiative and non-radiative flux “forcing” applied to the single capacity system  (Watts.m-2)

λ  is the first order feedback parameter (Watts.m-2.deg K-1)

We can solve Equ A1 using superposition.  I am going to use  timesteps of one year.

Let the forcing increment applicable to the jth year be defined as fj.   We can therefore write

F(t=k )  = Fk =  Σ fj       for j = 1 to k                                Equ. A2

The temperature contribution from the forcing increment fj at the end of the kth year is given by

ΔTj(t=k) =  fj(1 – exp(-(k+1-j)/τ))/λ                                                     Equ.A3

where τ is set equal to C/λ   .

By superposition, the total temperature change at time t=k is given by the summation of all such forcing increments.  Thus

T(t=k) = Σ fj * (1 – exp(-(k+1-j)/τ))/ λ     for j = 1 to k                                   Equ.A4

Similarly, the total temperature change at time t= k-1 is given by

T(t=k-1) =  Σ fj (1 – exp(-(k-j)/τ))/ λ         for j = 1 to k-1                               Equ.A5

Subtracting Equ. A5 from Equ. A4 we obtain:

T(k) – T(k-1) = fk*[1-exp(-1/τ)]/λ    +  ( [1 – exp(-1/τ)]/λ ) (Σfj*exp(-(k-j)/τ) for j = 1 to k-1)     …Equ.A6

We note from Equ.A5 that

(Σfj*exp(-(k-j)/τ)/λ for j = 1 to k-1)  =  ( Σ(fj/λ ) for j = 1 to k-1)   – T(k-1)

Making this substitution, Equ.A6 then becomes:

T(k) – T(k-1) = fk*[1-exp(-1/τ)]/λ    + [1 – exp(-1/τ)]*[( Σ(fj/λ ) for j = 1 to k-1)   – T(k-1)]      …Equ.A7

If we now set α = 1-exp(-1/τ) and make use of Equ.A2, we can rewrite Equ A7 in the following simple form:

T(k) – T(k-1) = Fkα /λ   – α * T(k-1)                                          Equ.A8

Equ.A8 can be used for prediction of temperature from a known cumulative forcing series, or can be readily used to determine the cumulative forcing series from a known temperature dataset.  From the cumulative forcing series, it is a trivial step to abstract the annual incremental forcing data by difference.
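Both directions of Equ. A8 can be sketched as follows (a hypothetical illustration with round-number alpha and lambda, not Paul_K's fitted GISS-ER values):

```python
def forward(cum_forcings, alpha, lam):
    """Equ. A8: T(k) = T(k-1) + Fk * alpha / lam - alpha * T(k-1)."""
    temps, T = [], 0.0
    for F in cum_forcings:
        T = T + F * alpha / lam - alpha * T
        temps.append(T)
    return temps

def invert(temps, alpha, lam):
    """Rearrange Equ. A8 to recover the cumulative forcing series."""
    out, prev = [], 0.0
    for T in temps:
        out.append(lam * (T - prev + alpha * prev) / alpha)
        prev = T
    return out

alpha, lam = 0.25, 3.0                      # round illustrative numbers
F_in = [0.1 * k for k in range(1, 21)]      # a simple cumulative forcing ramp
T = forward(F_in, alpha, lam)
F_out = invert(T, alpha, lam)
print(max(abs(x - y) for x, y in zip(F_in, F_out)) < 1e-9)   # -> True
```

The inversion is exact because Equ. A8 is linear in Fk: given the temperatures and the two parameters, each year's cumulative forcing follows by simple rearrangement.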

For the values of α and λ, I am going to use values which are conditioned to the same response sensitivity of temperature to flux changes as the GISS-ER General Circulation Model (GCM).

These values are:

α  = 0.279563

λ    = 2.94775

Shown below is a plot confirming that Equ. A8 with these values of alpha and lambda can reproduce the GISS-ER model results with good accuracy. The correlation is >0.99.

This same governing equation has been applied to at least two other GCMs (CCSM3 and GFDL) and, with similar parameter values, works equally well to emulate those model results. While changing the parameter values slightly modifies the values of the fluxes calculated from temperature, it does not significantly change the structural form of the input signal, nor can it change the primary conclusion of this article, which is that the AGW signal cannot be reliably extracted from the temperature series.

Equally, substituting a more generalised non-linear form for Equ A1 does not change the results at all, provided that the parameters chosen for the non-linear form are selected to show the same sensitivity over the actual observed temperature range. (See here for proof.)

markx
June 3, 2013 10:48 pm

jorgekafkazar says: June 3, 2013 at 4:25 pm
The fact that you derive an r² of 1.00 should have told you something, Willis, something really important. As I understand it, climate models calculate temperature changes from forcing changes for individual cells, n° by n°, using the same algorithm everywhere.
Forcings are not measured, temperature is.
So forcings are simply derived from measured temperature changes?

Barry Elledge
June 3, 2013 11:13 pm

Willis at 6:54 on 6/3/2013 refers to Richard “Racehorse” Haynes & his over-the-top everything-plus-the-kitchen-sink defenses.
Many years ago I had a brief but revealing encounter with Haynes. Fully clothed and family friendly let me hasten to add.
I was a grad student in chemistry/biochemistry/pharmacology and the campus law school housed
the national college for criminal defense lawyers. The NCCDL held a summer training program for criminal defense lawyers which was heavily populated by very earnest public defenders along with a smattering of private attorneys with actual paying clients. In order to present a realistic program in white-powder criminal defense, the NCCDL recruited some of us grad students to impersonate police forensic chemists in mock trials. I did very well at the impersonation, and the grateful criminal defense lawyers invited me to the end-of-year banquet. The featured speaker was Racehorse.
That evening I stepped into the elevator to the top-floor restaurant, and to my surprise encountered Haynes; his face was unmistakable to anyone familiar with the Houston newspapers in the 1970s. He was perhaps 5’7″, small and wiry looking. I attempted to introduce myself, but he resolutely looked straight ahead and avoided eye contact.
Lots of non-Texans imagine that Texans are all larger than life. Certainly wasn’t true of Haynes, although he was wearing some nice boots. Haynes was more Kit Carson than Buffalo Bill.
And yet once he stepped in front of a jury he apparently found a different personality from the one he inhabited in the ordinary world.
The public defenders in his audience that evening were on the whole true believers. Even the ones who had been doing it for 30 years. I’m not revealing any secret to observe that most of the wreckage which washes onto the shore of the PD is guilty of something, though not necessarily the particular crime with which they are charged. Yet the PDs uniformly regard themselves as the last line of defense of civilization.
And here one may see a similarity between the PDs and the mainstream climate science types. Both are on a mission greater than themselves. An occasional tweaking of facts in the interests of a grander vision of justice is surely good.

Janice Moore
June 4, 2013 12:01 am

“… a similarity between the PDs and the mainstream climate science types. … .” [Barry Elledge]
Nicely put. I agree. I would say, though, that public defenders and climatologists do waaaay more than an “occasional tweaking of facts.” They regularly LIE.
Climatologists regularly tell blatant untruths, but at least the majority of the climatologists are rationally (though corruptly) motivated by greed and/or power or personal “prestige” (within their own slimy circle). A large part of the public criminal defense bar, on the other hand, is motivated solely by a misguided zealotry; they lie, as you pointed out, for their “cause.” Sickening. The P.D.’s correspond not so much to the climatologist “scientists” as to those in the pro-AGW movement who are the “true believers,” who shrilly vent their rage at “the rich” or “the religious right” or what-EVER, yelling, “Save the planet!” and “No blood for oil” and such nonsense.
Some, like Racehorse, are sick. They lie simply for the sport of it. They love deceiving people. If they had to choose between earning a good salary at an honest occupation or barely making ends meet by defrauding, they would choose to lie for a living.

Greg Goodman
June 4, 2013 12:16 am

Willis, “trend” usually means the slope of an OLS straight-line fit; it is the same as using OLS to fit a constant to dT/dt. This is exactly what you get if you divide each term by the time increment in your eqn 7. That equation was the solution from imposing the supplementary condition of constant deltaT on the linear model, so the instantaneous dT/dF is also the longer-term average once the transient response has settled (the condition you referred to as “equilibrium” in that context).
Nick Stokes says:
June 3, 2013 at 9:40 pm
Theo Goodwin
” Is the set of statements that represent the relationships between forcings and feedbacks buried deep within the model? What work does it do? What are statements that create the theoretical context that defines “climate sensitivity?” Where are they buried?”
No, these statements do not appear anywhere. Forcings of course are supplied. But feedbacks and sensitivity are our mental constructs to understand the results. The computer does not need them. It just balances forces and fluxes, conserves mass and momentum etc.
===
Nick, that would be true if ALL the inputs were known and measured and the only thing in the models was basic physics laws. In reality neither is true. There are quantities, like cloud amount, that are “parametrised” (aka guesstimated). What should be an output becomes an input, and a fairly flexible and subjective one.
From your comments I think you know enough about climate models to realise this, so don’t try to snow us all with the idea that this is all known physical relationships of the “resistors and capacitors” of climate and the feedbacks naturally pop out free of any influence from the modellers, their biases and expectations.
That is not the case.
Now, in view of what I posted here:
http://wattsupwiththat.com/2013/06/03/climate-sensitivity-deconstructed/#comment-1325354
the whole concept of a linear response to radiative forcing seems pretty much blown apart.
Maybe we need to address that issue before spending the next 20 years discussing the statistical robustness of the CS in a model that has no physical relevance.

June 4, 2013 12:19 am

Nick Stokes says: June 3, 2013 at 9:40 pm
An electrical circuit is a collection of resistors, capacitors, transistors etc. There is no box in there labelled underneath “feedback”.
June 3, 2013 at 9:56 pm
Well, Anthony, could you build that circuit from the diagram?
Nick
Here is an analogue electronics feedback circuit applied to climate change
http://www.vukcevic.talktalk.net/FB.htm
Electronic feedback circuits can be ‘modelled’ and built to great accuracy, due to the fact that the exact properties of every component are known, which unfortunately is not the case with the components controlling climate change.
If climate statisticians and model designers appreciated that, they would save themselves a great deal of embarrassment.

Greg Goodman
June 4, 2013 12:33 am

I should add that the non-linear response to a negative perturbation, which seems to be corrected by the tropics capturing a higher percentage of the (reduced) solar input, is not the same as the way it will handle a positive perturbation, which is dumping the excess surface heat to the troposphere.
The latter is not the end of the line. Some will radiate to space, some will go to the temperate zones through the Walker circulation and also end up affecting the polar regions.
Once we dump the erroneous assumption of a simple linear feedback we can look at that in more detail, but FIRST we have to dump that erroneous assumption.
We will then need to look at what is really causing the peaks in parameters like the Pacific wind data that Stuecker et al 2013 found (without reporting the values of the peaks).
As I pointed out, having extracted all the peaks from their graph, there is a lot of evidence there of lunar related periodicity.
http://wattsupwiththat.com/2013/05/26/new-el-nino-causal-pattern-discovered/#comment-1321186
http://wattsupwiththat.com/2013/05/26/new-el-nino-causal-pattern-discovered/#comment-1321374

June 4, 2013 12:44 am

Nick Stokes says
An electrical circuit is a collection of resistors, capacitors, transistors etc. There is no box in there labelled underneath “feedback”. But the circuit does what it does, and we use the notion of feedback to explain it.
You obviously have never designed an electrical circuit. A circuit does what it does because the designer wanted to implement a function. He has to explicitly calculate the feedback that he wants in the function of the circuit and put it into the “box” of his functional diagram.
Hardware, software, it is all the same: if you want something to work a certain way you have to establish the functionality and then implement it.
You continue to amaze me with the way you fling your BS. Racehorse indeed.
Hal

Nick Stokes
June 4, 2013 12:46 am

Greg Goodman,
“the “resistors and capacitors” of climate and the feedbacks naturally pop out free of any influence from the modellers, their biases and expectations.”
Greg, I didn’t say anything like that. I’m simply pointing out that a GCM doesn’t operate at the level of defining feedbacks and sensitivities as entities. They mainly define exchanges between gridcells. My analogy was with circuits, which consist of elements interacting according to Ohm’s Law etc. Feedback concepts are used to describe the circuit operation, but they are not present in the actual circuit elements. Despite AW’s curious notion of a circuit diagram, real ones do not specify feedback. It’s not something you can solder.
Do you think one can find feedbacks and sensitivities as entities in a GCM code?
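[For what it’s worth, Nick’s circuit analogy is easy to sketch numerically. Component values below are illustrative, not a real design: the element equations alone determine the output, and the “feedback fraction” is only computed afterwards as a description. -ed.]

```python
# A non-inverting amplifier solved purely from its element equations.
A = 1e5            # open-loop gain of the amplifier element (illustrative)
R1, R2 = 9e3, 1e3  # divider resistors (illustrative)

# "What the circuit is": solve Vout = A*(Vin - Vdiv), Vdiv = Vout*R2/(R1+R2)
Vin = 1.0
Vout = A * Vin / (1 + A * R2 / (R1 + R2))

# "How we describe it": the feedback fraction beta is an after-the-fact
# abstraction; it appears nowhere in the element equations above
beta = R2 / (R1 + R2)
closed_loop_gain = A / (1 + A * beta)   # classic feedback formula

print(Vout, closed_loop_gain)  # both ≈ 10: same answer, with or without "feedback"
```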

Nick Stokes
June 4, 2013 1:06 am

Clive,
“Since the models have tuned F so as to correctly reproduce past temperatures”
I don’t believe they have, but I also don’t think that’s relevant. I think the ratio you’ve calculated should have an exponential smooth in the denominator – my derivation is here.
co2fan says: June 4, 2013 at 12:44 am
“You obviously never designed an electrical circuit.”

I have in fact designed and built many electrical circuits. Electronic music was a youthful hobby. But I’m not talking about how they’re designed; I’m talking about what they are. Feedback is an abstraction, as it is in GCM’s.

Reply to  Nick Stokes
June 4, 2013 2:14 am

Nick,
1. To quote from the Met Office:

There has been a debate about why the decadal forecasts from 2012 are indicating a slower rate of warming in the next 5 years than the forecasts made in 2011. Such a change in the forecast is entirely possible from the natural variability in ocean ‘weather’ and the impact that has on global mean temperature.

In other words, data assimilation is being used to “guide” the models. If the stalling in global temperatures continues to 2030 (60-year cycle) then climate sensitivity will likewise continue to fall.
2. I agree that the net effect will be a smoothed exponential. However, the formula works fine if one assumes a single yearly pulse in forcing. The sum is then made annually from 1750 to 2012 using CO2 data from Mauna Loa, interpolated backwards to 280 ppm in 1750. This can then be compared to the result using the forcings published in Otto et al., kindly digitized for us by Willis.
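[Clive’s pulse-summation scheme can be sketched as follows. The sensitivity, response time and forcing increments are made up for illustration; they are not the Mauna Loa or Otto et al. numbers. -ed.]

```python
import numpy as np

lam, tau = 0.4, 5.0   # illustrative sensitivity (K per W/m^2) and response time (years)

def response(forcing_increments, years):
    """Sum the lagged response to each yearly pulse of forcing: each
    increment dF contributes lam*dF*(1 - exp(-(t - t_i)/tau)) thereafter."""
    T = np.zeros_like(years, dtype=float)
    for ti, dF in zip(years, forcing_increments):
        after = years >= ti
        T[after] += lam * dF * (1.0 - np.exp(-(years[after] - ti) / tau))
    return T

# Made-up steady forcing ramp: one 0.02 W/m^2 pulse per year, 1750-2012
years = np.arange(1750, 2013)
dF = np.full_like(years, 0.02, dtype=float)
T = response(dF, years)
print(T[-1])   # cumulative warming by 2012 under these toy numbers
```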

Greg Goodman
June 4, 2013 2:06 am

Nick Stokes says:
Clive,
“Since the models have tuned F so as to correctly reproduce past temperatures”
I don’t believe they have, but I also don’t think that’s relevant.
If that idea is still at the level of a belief you maybe need to look for some factual basis for forming an opinion.
Let me help. Search the comments in my article on Judith Curry site for the word “tuned”.
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2
It is precisely the term John Kennedy of Met. Office Hadley centre used to explain the process of how models were developed to reproduce past temperatures.
” I’m simply pointing out that a GCM doesn’t operate at the level of defining feedbacks and sensitivities as entities.” That is true in general and a valid point to make because several people here seem to think that is explicitly part of the models.
Which brings us back to what I said previously:
that would be true if ALL the inputs were known and measured and the only thing in the models was basic physics laws. In reality neither is true. There are quantities, like cloud amount, that are “parametrised” (aka guesstimated). What should be an output becomes an input, and a fairly flexible and subjective one.
Now perhaps, rather than continuing to get bogged down in pointless discussion about the workings of the erroneous linearity assumption that has led us down a blind alley for 20 years, you would care to comment on what looks like a clear proof that the assumption of a linear response is totally and fundamentally wrong:
http://wattsupwiththat.com/2013/06/03/climate-sensitivity-deconstructed/#comment-1325354
Until that is addressed, any further discussion of linear models is futile.
You seem competent and well informed. You also seem to be of an inclination to disprove such a conclusion. I’d be interested to see if you can find fault with it and explain as a linear reaction what the climate does following a major eruption.

Dudley Horscroft
June 4, 2013 2:12 am

This brings to mind the state of weather forecasting in the 1950s. Someone realized that the Met Office’s claim that its forecasting was 50% right was exactly equal to saying that it was 50% wrong, and therefore totally useless. It was pointed out that better results were obtained by looking out the window and saying that tomorrow’s weather would be the same as today’s, which from memory had a chance of being between 75% and 90% right. “Rain today = rain tomorrow! Fine today = fine tomorrow!” was a very good predictor. Which can be written as Wi = Wo + E, where Wi is the weather tomorrow, Wo is the weather today, and E is a variable error factor.
Seems to me that is what your equation (1) boils down to. And it appears you have shown that, effectively, that is what the climate models boil down to, but they have added factors C and T, which represent carbon dioxide in the atmosphere and temperature. They have put in a positive linkage so that as C increases so does T: T = kC, where k is some constant. I suggest better results would come from using T = aYC + (sine theta)kC, where a is a constant, Y is the year, and sine theta is a sine wave with a period of about 40 years. This should give the necessary results: as CO2 increases so does the temperature, and as the year increases so does the temperature, but subject to a periodic fluctuation, so that for 20 years the climate warms and then for 20 years the climate is near constant.
Take that for what you wish to make of it!
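[Dudley’s persistence rule Wi = Wo + E is easy to score on synthetic data. The 80% day-to-day persistence below is made up for illustration, chosen to land in his 75–90% range. -ed.]

```python
import random

random.seed(42)

# Made-up rain/fine sequence with day-to-day persistence (illustrative only)
weather = ["fine"]
for _ in range(999):
    prev = weather[-1]
    # 80% chance tomorrow repeats today, 20% chance it flips
    weather.append(prev if random.random() < 0.8 else
                   ("rain" if prev == "fine" else "fine"))

# Persistence forecast: "tomorrow = today" (Wi = Wo)
hits = sum(weather[i] == weather[i + 1] for i in range(len(weather) - 1))
print(hits / (len(weather) - 1))  # ≈ 0.8, far better than a 50% coin flip
```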

Nick Stokes
June 4, 2013 2:44 am

Greg and Clive,
Clive’s proposition was specific – the forcings have been tuned to match the response. I believe that is quite wrong – you both then talk about something quite different.
Forcings are published. Those of GISS are here. They change infrequently, usually following some published research (apart from each year’s new data).
“It is precisely the term John Kennedy of Met. Office Hadley centre used to explain the process of how models were developed to reproduce past temperatures.”
Greg, I cannot see that there. He said
“Your later explanation that the models have been tuned to fit the global temperature curve (reiterated in a comment by Greg Goodman on March 23, 2012 at 3:30 pm), is likewise incorrect.” Later on some specific issue, he said he wasn’t expert and would ask. That’s all I could see.
Of course people test their models against observation, and go back to check their assumptions if they are going astray. That’s how progress is made. But it isn’t tuning parameters.
And Clive, I simply can’t see what you claim in what you have quoted. Obviously, forecasts change because there is another year of forcing data. And every model run starts from an observed state. For a decadal forecast, this would be a recent state. But that doesn’t mean they are tweaking model parameters. It’s a data based initial condition, which you have to have.

June 4, 2013 3:18 am

Nick,
If what you say is correct, then why are the models so good at predicting the past and yet so bad at predicting the future?

June 4, 2013 3:27 am

Clive,
How do we know they are bad at predicting the future? Have you been there?

RCSaumarez
June 4, 2013 3:28 am

I have one problem with treating the temperature signal basically as an autoregressive, single-pole digital filter:
The autocorrelation function of temperature definitely does not conform to this model. The arguments about persistence in the climate system creating trends have looked at this in some depth, and the persistence in temperature appears either to have power-law dynamics or to be represented by a multicompartment model.
There is no doubt that a simple linear model will reproduce the major features of a temperature record, but this is simply a description. It does not prove that the system is physically represented by this model, because it has not been perturbed sufficiently to make the deviations clear.
However, looking at the last figure in the post (eq 8 vs GISS), there is significant overshoot of the linear model at inflections. Although these would not affect the crude correlation between the signals by much, it is nevertheless a systematic error term.
Moreover, other models may give a better fit to the temperature record: if you have been following the controversy over the statistical model used by the Met Office in determining the likelihood of the temperature trends being natural fluctuations, in general higher-order ARMA models are used. In fact a first-order model such as this does not produce long trends in response to random inputs.
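[The distinction RCSaumarez draws can be seen in a simulation: a single-pole (AR(1)) filter driven by white noise has an autocorrelation that decays geometrically, rho(k) = a**k, not as a power law. The coefficient below is illustrative. -ed.]

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.9          # single-pole (AR(1)) coefficient, illustrative
n = 50_000

# AR(1) "single-pole digital filter" driven by white noise
e = rng.standard_normal(n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = a * x[i - 1] + e[i]

def acf(x, k):
    """Sample autocorrelation at lag k."""
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

# Geometric decay: each extra lag multiplies the correlation by a.
# Power-law ("long memory") persistence would decay far more slowly.
for k in (1, 5, 10):
    print(k, acf(x, k), a**k)
```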

Nick Stokes
June 4, 2013 3:59 am

Greg,
” I’d be interested to see if you can find fault with it and explain as a linear reaction what the climate does following a major eruption.”
I don’t share your enthusiasm for degree-days. I think they exaggerate fairly minor effects. I also don’t have much enthusiasm for stacking, even with your greater accuracy. Too much else is going on. I was sympathetic to Willis’ dropping El Chichon, because it was immediately followed by a big El Nino. But that is the hazard of this approach.
So I’m agnostic. I think there’s more to be gained by looking at more variables as in the Pinatubo paper I linked. But that means even fewer volcanoes available.

Nigel Harris
June 4, 2013 4:16 am

Willis
“nor anyone else has noticed what I noticed, which is that the climate sensitivity displayed by any of the models is nothing more than the ratio of the input and output trends. Not only that, but this relationship is common to all of the models as well as to the average of the models.”
But surely climate sensitivity is defined as the ratio of the input and output trends! It is the change in surface temperature that results from a unit change in forcing. So if forcing is increasing with a trend of 1, temperature will increase with a trend of 1 × sensitivity.
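[Nigel’s point can be checked numerically: feed a steadily rising forcing through a lagged linear response, and the ratio of the OLS trends recovers the sensitivity. The sensitivity lam, lag tau and forcing trend below are illustrative. -ed.]

```python
import numpy as np

lam, tau = 0.5, 3.0            # illustrative sensitivity and lag (time steps)
t = np.arange(300.0)
F = 0.02 * t                   # forcing rising with trend 0.02 per step

# Lagged linear response: exponential relaxation toward lam*F each step
alpha = 1.0 - np.exp(-1.0 / tau)
T = np.zeros_like(F)
for i in range(1, len(F)):
    T[i] = T[i - 1] + alpha * (lam * F[i] - T[i - 1])

trend_F = np.polyfit(t, F, 1)[0]   # OLS trend of the input
trend_T = np.polyfit(t, T, 1)[0]   # OLS trend of the output
print(trend_T / trend_F)           # ≈ 0.5: the trend ratio recovers lam
```

The lag only shifts the response; once the transient settles, input and output trends differ by exactly the factor lam.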

A C Osborn
June 4, 2013 4:24 am

Nick Stokes says:
June 4, 2013 at 3:27 am
Clive,
How do we know they are bad at predicting the future? Have you been there?
One of Nick’s more stupid statements; he obviously doesn’t realise that today is the future of yesterday, and of last year, and the year before that, etc.
How long have the models been around?

David
June 4, 2013 4:24 am

Nick Stokes says…
Clive,
How do we know they are bad at predicting the future? Have you been there?
================================================
We are there now…. http://suyts.wordpress.com/2013/06/01/a-repost-of-dr-john-christys-testimony/

Greg Goodman
June 4, 2013 4:30 am

“I was sympathetic to Willis’ dropping El Chichon, because it was immediately followed by a big El Nino.”
That is called selection bias, nothing else. You can possibly dismiss a point if it is so much of an outlier that it is clear there is an experimental error or a data recording/transcription error or similar. You do not remove data because you don’t like where it lies.
The cumulative integral, like all integrals, is a kind of low-pass filter. I used it precisely because it removes “fairly minor effects”. If you wish to object to the technique, please show evidence of how it can exaggerate anything, rather than just stating your level of personal “enthusiasm” for it.
Stacking is a means of averaging out other effects, which is precisely why we must not arbitrarily remove El Chichon. The stacking is crude because we only have six large eruptions to work with, but it is better than looking at one or two and falsely concluding cooling because you did not notice that it was already happening beforehand.
The fact that the stacking reveals an underlying circa 2.75 periodicity is in itself remarkable and unexpected. But in such cases it is our expectations that should be questioned first, not the data.
What those graphs show is fundamentally important: it kicks the legs out from under the whole linear feedback / climate sensitivity paradigm. Now, there may be something in there that is questionable or invalid, and nobody is well placed to see the defects in their own work. So I hope you will be able to come up with something more concrete than your “enthusiasm” to criticise it with.
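[Greg’s claim that cumulative integration acts as a low-pass filter is straightforward to demonstrate with synthetic sinusoids: integrating sin(ωt) scales its amplitude by 1/ω, so fast components are attenuated relative to slow ones. Frequencies below are illustrative. -ed.]

```python
import numpy as np

t = np.linspace(0.0, 10.0, 10_000)
dt = t[1] - t[0]

slow = np.sin(2 * np.pi * 0.2 * t)   # low-frequency component
fast = np.sin(2 * np.pi * 5.0 * t)   # high-frequency "minor effect"

# Cumulative integration scales each sinusoid's amplitude by 1/omega,
# so the 25x-faster component comes out 25x smaller
I_slow = np.cumsum(slow) * dt
I_fast = np.cumsum(fast) * dt

ratio_before = fast.max() / slow.max()   # ≈ 1: equal amplitudes going in
ratio_after = (I_fast.max() - I_fast.min()) / (I_slow.max() - I_slow.min())
print(ratio_before, ratio_after)         # the fast component is strongly attenuated
```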

Nick Stokes
June 4, 2013 4:37 am

A C Osborn says: June 4, 2013 at 4:24 am
“he obviously doesn’t realise that today is the future of yesterday and last year and the year before that etc.”

Alas, I do – I have too much future behind me. But I was responding to a charge that models predict the past well by tuning (something) but fail in the future. But where’s that past, then? If that was happening, they would be doing well right now.
My background is computational fluid dynamics, and one thing I learnt very strongly was, stay very close to the physics. Anything else is far too complicated. Getting the physics right is the only thing that will make the program work at all.

June 4, 2013 5:16 am

Nick Stokes, you are behaving like Ken Ham.

June 4, 2013 5:47 am

Nick Stokes says:
June 4, 2013 at 2:44 am
Forcings are published. Those of GISS are here. They change infrequently, usually following some published research (apart from each year’s new data).
=======
The forcings are changed in response to model inaccuracies. The changes are used to bring the models back into line with observation. If you use the model to calculate the forcings, then feed these forcings back into the model, it is statistical nonsense, a circular argument. It is the models that are making the forcings appear correct, not the underlying physics. GIGO.
