Climate Sensitivity Deconstructed

Guest Post by Willis Eschenbach

I haven’t commented much on my most recent posts, for the usual reasons: a day job, and the unending lure of doing more research, my true passion. To be precise, recently I’ve been frying my synapses trying to twist my head around the implications of the finding that the global temperature forecasts of the climate models are mechanically and accurately predictable by a one-line equation. It’s a salutary warning: kids, don’t try climate science at home.

Figure 1. What happens when I twist my head too hard around climate models.

Three years ago, inspired by Lucia Liljegren’s ultra-simple climate model that she called “Lumpy”, and with the indispensable math-fu of commenters Paul_K and Joe Born, I made what was to me a very surprising discovery: the GISSE climate model can be accurately replicated by a one-line equation. In other words, the global temperature output of the GISSE model is described almost exactly by a lagged linear transformation of the inputs to the model (the “forcings” in climatespeak, from the sun, volcanoes, CO2 and the like). The correlation between the actual GISSE model results and my emulation of those results is 0.98 … doesn’t get much better than that. Well, actually, you can do better than that: I found you can get 99+% correlation by noting that the modelers have somehow decreased the effects of the volcanic forcing. But either way, it was to me a very surprising result. I never guessed that the output of the incredibly complex climate models would follow their inputs that slavishly.

Since then, Isaac Held has replicated the result using a third model, the CM2.1 climate model. I have gotten the CM2.1 forcings and data, and replicated his results. The same analysis has also been done on the GFDL model, with the same outcome. And I did the same analysis on the Forster data, which is an average of 19 model forcings and temperature outputs. That makes four individual models plus the average of 19 climate models, and all of the results have been the same, so the surprising conclusion is inescapable—the climate model global average surface temperature results, individually or en masse, can be replicated with over 99% fidelity by a simple, one-line equation.

However, the result of my most recent “black box” type analysis of the climate models was even more surprising to me, and more far-reaching.

Here’s what happened. I built a spreadsheet to make it simple to pull up various forcing and temperature datasets and calculate their properties. It uses “Solver” to iteratively select the values of tau (the time constant) and lambda (the sensitivity constant) that best fit the target temperatures. After looking at a number of results, with widely varying sensitivities, I wondered what it was about the two datasets (model forcings, and model predicted temperatures) that determined the resulting sensitivity. Was there some simple relationship between the climate sensitivity and the basic statistical properties of the two datasets (trends, standard deviations, ranges, and the like)? I looked at the five forcing datasets that I have (GISSE, CCSM3, CM2.1, Forster, and Otto) along with the associated temperature results. To my total surprise, the correlation between the trend ratio (temperature dataset trend divided by forcing dataset trend) and the climate sensitivity (lambda) was 1.00. My jaw dropped. Perfect correlation? Say what? So I graphed the scatterplot.
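The trend ratio itself is nothing more exotic than a quotient of two least-squares slopes. In R terms, with hypothetical annual vectors year, forcing, and temp standing in for any of the dataset pairs:

trend <- function(y) unname(coef(lm(y ~ year))[2])  # least-squares slope per year
trend(temp) / trend(forcing)                        # temperature trend over forcing trend

It is this ratio, plotted against the fitted lambda, that produced the scatterplot below.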

Figure 2. Scatterplot showing the relationship of lambda and the ratio of the output trend over the input trend. Forster is the Forster 19-model average. Otto is the Forster input data as modified by Otto, including the addition of a 0.3 W/m2 trend over the length of the dataset. Because this analysis only uses radiative forcings and not ocean forcings, lambda is the transient climate response (TCR). If the data included ocean forcings, lambda would be the equilibrium climate sensitivity (ECS). Lambda is in degrees per W/m2 of forcing. To convert to degrees per doubling of CO2, multiply lambda by 3.7.

Dang, you don’t see that kind of correlation very often, R^2 = 1.00 to two decimal places … works for me.

Let me repeat the caveat that this is not talking about real world temperatures. This is another “black box” comparison of the model inputs (presumably sort-of-real-world “forcings” from the sun and volcanoes and aerosols and black carbon and the rest) and the model results. I’m trying to understand what the models do, not how they do it.

Now, I don’t have the ocean forcing data that was used by the models. But I do have Levitus ocean heat content data since 1950, poor as it might be. So I added that to each of the forcing datasets, to make new datasets that do include ocean data. As you might imagine, when some of the recent forcing goes into heating the ocean, the trend of the forcing dataset drops … and as we would expect, the trend ratio (and thus the climate sensitivity) increases. This effect is most pronounced where the forcing dataset has a smaller trend (CM2.1) and less visible at the other end of the scale (CCSM3). Figure 3 shows the same five datasets as in Figure 2, plus the same five datasets with the ocean forcings added. Note that when the forcing dataset contains the heat into/out of the ocean, lambda is the equilibrium climate sensitivity (ECS), and when the dataset is just radiative forcing alone, lambda is the transient climate response (TCR). So the blue dots in Figure 3 are ECS, and the red dots are TCR. The average ratio (ECS/TCR) is 1.25, which fits with the estimate of ~1.3 given in the Otto paper.
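In R terms, the ocean adjustment looks something like the following sketch. The sign convention here is my assumption: heat flowing into the ocean is treated as forcing that is not available to warm the surface, with ohc_uptake a hypothetical annual series in W/m2 derived from the Levitus data:

# Build the "-Ocean" forcing variant used for the equilibrium (ECS) runs
forcing_ecs <- forcing - ohc_uptake  # both in W/m2; rising ocean uptake lowers the trend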

Figure 3. Red dots show the models as in Figure 2. Blue dots show the same models, with the addition of the Levitus heat content data to each forcing dataset. Resulting sensitivities are higher for the equilibrium condition than for the transient condition, as would be expected. Blue dots show equilibrium climate sensitivity (ECS), while red dots (as in Fig. 2) show the corresponding transient climate response (TCR).

Finally, I ran the five different forcing datasets, with and without ocean forcing, against three actual temperature datasets—HadCRUT4, BEST, and GISS LOTI. I took the data from all of those, and here are the results from the analysis of those 29 individual runs:

Figure 4. Large red and blue dots are as in Figure 3. The light blue dots are the result of running the forcings and subsets of the forcings, with and without ocean forcing, and with and without volcano forcing, against actual datasets. Error shown is one sigma.

So … my new finding is that the climate sensitivity of the models, both individually and on average, is equal to the ratio of the trends of the forcing and the resulting temperatures. This is true whether or not the changes in ocean heat content are included in the calculation. It is true both for forcings run against model temperature results and for forcings run against actual temperature datasets. It is also true for subsets of the forcing, such as volcanoes alone, or greenhouse gases alone.

And not only did I find this relationship experimentally, by applying the one-line equation to the models and their results. I then found that I can derive this relationship mathematically from the one-line equation itself (see Appendix D for details).

This is a clear confirmation of an observation first made by Kiehl in 2007, when he suggested an inverse relationship between forcing and sensitivity.

The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy? Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available [here]) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work, and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

However, Kiehl ascribed the variation in sensitivity to a difference in total forcing, rather than to the trend ratio, and as a result his graph of the results is much more scattered.

Figure 5. Kiehl results, comparing climate sensitivity (ECS) and total forcing. Note that unlike Kiehl, my results cover both equilibrium climate sensitivity (ECS) and transient climate response (TCR).

Anyhow, there’s a bunch more I could write about this finding, but I gotta just get this off my head and get back to my day job. A final comment.

Since I began this investigation, the commenter Paul_K has written two outstanding posts on the subject over at Lucia’s marvelous blog, The Blackboard (Part 1, Part 2). In those posts, he proves mathematically that, given what we know about the equation that replicates the climate models, we cannot … well, I’ll let him tell it in his own words:

The Question: Can you or can you not estimate Equilibrium Climate Sensitivity (ECS) from 120 years of temperature and OHC data (even) if the forcings are known?

The Answer is: No. You cannot. Not unless other information is used to constrain the estimate.

An important corollary to this is: The fact that a GCM can match temperature and heat data tells us nothing about the validity of that GCM’s estimate of Equilibrium Climate Sensitivity.

Note that this is not an opinion of Paul_K’s. It is a mathematical result of the fact that even if we use a more complex “two-box” model, we can’t constrain the sensitivity estimates. This is a stunning and largely unappreciated conclusion. The essential problem is that for any given climate model, we have more unknowns than we have fundamental equations to constrain them.

CONCLUSIONS

Well, it was obvious from my earlier work that the models were useless for either hindcasting or forecasting the climate. They function indistinguishably from a simple one-line equation.

On top of that, Paul_K has shown that they can’t tell us anything about the sensitivity, because the equation itself is poorly constrained.

Finally, in this work I’ve shown that the climate sensitivity “lambda” that the models do exhibit, whether it represents equilibrium climate sensitivity (ECS) or transient climate response (TCR), is nothing but the ratio of the trends of the input and the output. The choice of forcings, models and datasets is quite immaterial. All the models give the same result for lambda, and that result is the ratio of the trends of the forcing and the response. This most recent finding completely explains the inability of the modelers to narrow the range of possible climate sensitivities despite thirty years of modeling.

You can draw your own conclusions from that, I’m sure …

My regards to all,

w.

Appendix A: The One-Line Equation

The equation that Paul_K, Isaac Held, and I have used to replicate the climate models is as follows:

T1 = T0 + λ ∆F1 (1 - a) + ∆T0 a        Equation 1

Let me break this into four chunks, separated by the equals sign and the plus signs, and translate each chunk from math into English. Equation 1 means:

This year’s temperature (T1) is equal to

Last year’s temperature (T0) plus

Climate sensitivity (λ) times this year’s forcing change (∆F1) times (one minus the lag factor) (1-a) plus

Last year’s temperature change (∆T0) times the same lag factor (a)

Or to put it another way, it looks like this:

T1 =                      <—  This year’s temperature [ T1 ] equals

    T0 +                  <—  Last year’s temperature [ T0 ] plus

    λ  ∆F1  (1-a) +    <— The fraction of this year’s forcing change that takes effect immediately [ ∆F1 (1-a) ], times climate sensitivity lambda ( λ ), plus

    ∆T0  a                 <— Last year’s temperature change [ ∆T0 ] times the lag factor “a”, which spreads the remainder of the forcing out over time

The lag factor “a” is a function of the time constant “tau” ( τ ), and is given by

a = exp(-1/τ)        Equation 1a

This factor “a” is just a constant number for a given calculation. For example, when the time constant “tau” is four years, the constant “a” is 0.78. Since 1 - a = 0.22, when tau is four years about 22% of the incoming forcing is added immediately to last year’s temperature, and the rest of the input pulse is expressed over time.
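For those who would rather read code than cell formulas, here is a minimal sketch of Equation 1 in R (my own translation, not the spreadsheet itself; it assumes annual timesteps and a temperature anomaly that starts at zero):

# One-line equation:  T1 = T0 + lambda * dF1 * (1 - a) + dT0 * a,  with a = exp(-1/tau)
ole_emulate <- function(forcing, lambda, tau) {
  a    <- exp(-1 / tau)             # lag factor "a"; tau = 4 gives a of about 0.78
  dF   <- c(0, diff(forcing))       # this year's change in forcing
  temp <- numeric(length(forcing))  # temperature anomaly, starting at zero
  for (i in 2:length(forcing)) {
    dT0     <- if (i > 2) temp[i - 1] - temp[i - 2] else 0  # last year's change
    temp[i] <- temp[i - 1] + lambda * dF[i] * (1 - a) + dT0 * a
  }
  temp
}
exp(-1 / 4)  # 0.78: with tau = 4, about 22% of a forcing change takes effect at once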

Appendix B: Physical Meaning

So what does all of that mean in the real world? The equation merely reflects that when you apply heat to something big, it takes a while for it to come up to temperature. For example, suppose we have a big brick in a domestic oven at say 200°C. Suppose further that we turn the oven heat up suddenly to 400°C for an hour, and then turn the oven back down to 200°C. What happens to the temperature of the brick?

If we plot temperature against time, we see that initially the brick starts to heat fairly rapidly. However, as time goes on it heats less and less per unit of time, approaching 400°C. Figure B2 shows this change of temperature with time, as simulated in my spreadsheet for a climate forcing of plus/minus one watt per square metre. Now, how big is the lag? Well, in part that depends on how big the brick is. The larger the brick, the longer the time lag will be. In the real planet, of course, the ocean plays the part of the brick, soaking up the applied heat and releasing it only slowly.

The basic idea of the one-line equation is the same tired claim of the modelers: that the changing temperature of the surface of the planet is linearly dependent on the size of the change in the forcing. I happen to think that this is only generally the rule, and that the temperature is actually set by the exceptions to the rule. The exceptions are the emergent phenomena of the climate—thunderstorms, El Niño/La Niña effects and the like. But I digress; let’s follow their claim for the sake of argument and see what their models have to say. It turns out that the results of the climate models can be described to 99% accuracy by the setting of two parameters—“tau”, the time constant, and “lambda”, the climate sensitivity. Lambda can represent either transient sensitivity, called TCR for “transient climate response”, or equilibrium sensitivity, called ECS for “equilibrium climate sensitivity”.

Figure B2. One-line equation applied to a square-wave pulse of forcing. In this example, the sensitivity “lambda” is set to unity (output amplitude equals the input amplitude), and the time constant “tau” is set at five years.

Note that the lagging does not change the amount of energy in the forcing pulse. It merely lags it, so that it doesn’t appear until a later date.

So that is all the one-line equation is doing. It simply applies the given forcing, using the climate sensitivity to determine the amount of the temperature change, and using the time constant “tau” to determine the lag of the temperature change. That’s it. That’s all.
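To make that concrete, here is the Figure B2 experiment run through the ole_emulate() sketch from Appendix A above (again, a sketch of the calculation, not the spreadsheet):

forcing <- c(rep(0, 10), rep(1, 50), rep(0, 60))      # square-wave pulse of 1 W/m2
temp    <- ole_emulate(forcing, lambda = 1, tau = 5)  # sensitivity 1, time constant 5
round(max(temp), 2)      # approaches 1: output amplitude matches input when lambda = 1
round(tail(temp, 1), 2)  # and it decays back toward 0 once the pulse is removed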

The difference between ECS (equilibrium climate sensitivity) and TCR (transient climate response) is whether the slow heating and cooling of the ocean is taken into account in the calculations. If the slow heating and cooling of the ocean is taken into account, then lambda is the equilibrium climate sensitivity. If the ocean doesn’t enter into the calculations, that is, if the forcing is the radiative forcing alone, then lambda is the transient climate response.

Appendix C: The Spreadsheet

In order to be able to easily compare the various forcings and responses, I made myself an Excel spreadsheet. It has a couple of drop-down lists that let me select from various forcing datasets and various response datasets. Then I use the built-in Excel add-in “Solver” to iteratively calculate the best combination of the two parameters, sensitivity and time constant, so that the result matches the response. This makes it quite simple to experiment with various combinations of forcings and responses. You can see the difference, for example, between the GISS E model with and without volcanoes. It also has a button which automatically stores the current set of results in a dataset that is slowly expanding as I do more experiments.
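For those following along in R rather than Excel, the Solver step has a straightforward analogue in the built-in optimizer. A sketch only, reusing the ole_emulate() function from the Appendix A sketch, with forcing and response standing in for whichever aligned annual series are selected:

fit_ole <- function(forcing, response) {
  sse <- function(p) sum((ole_emulate(forcing, p[1], p[2]) - response)^2)
  fit <- optim(c(0.5, 3), sse, method = "L-BFGS-B",    # minimize squared error
               lower = c(0.01, 0.5), upper = c(3, 30)) # keep lambda and tau sane
  setNames(fit$par, c("lambda", "tau"))
}

Called as fit_ole(forcing, response), it returns the best-fit sensitivity and time constant, which is all Solver is doing in the spreadsheet.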

In a previous post called Retroactive Volcanoes (link), I discussed the fact that Otto et al. had smoothed the Forster forcings dataset using a centered three-point average. In addition, they had added a trend of 0.3 W per square meter from the beginning to the end of the dataset. In that post I said that the effect of that was unknown, although it might be large. My new spreadsheet allows me to determine what that effect actually is.

It turns out that the effect of those two small changes is to take the indicated climate sensitivity from 2.8 degrees per doubling to 2.3 degrees per doubling.

One of the strangest findings to come out of this spreadsheet was that when the climate models are compared each to their own results, the climate sensitivity is a simple linear function of the ratio of the trends of the forcing and the response. This was true both of the individual models and of the average of the 19 models studied by Forster. The relationship is extremely simple: the climate sensitivity lambda is 1.07 times the trend ratio when the models are run against their own results, and equal to the trend ratio when run against all of the results. This holds for all of the models, both with and without the ocean heat content data added in.

In any case I’m going to have to convert all this to the computer language R. Thanks to Stephen McIntyre, I learned the computer language R and have never regretted it. However, I still do much of my initial exploratory forays in Excel. I can make Excel do just about anything, so for quick and dirty analyses like the results above I use Excel.

So as an invitation to people to continue and expand this analysis, my spreadsheet is available here. Note that it contains a macro to record the data from a given analysis. At present it contains the following data sets:

IMPULSES

Pinatubo in 1900

Step Change

Pulse

FORCINGS

Forster No Volcano

Forster N/V-Ocean

Otto Forcing

Otto-Ocean ∆

Levitus watts Ocean Heat Content ∆

GISS Forcing

GISS-Ocean ∆

Forster Forcing

Forster-Ocean ∆

DVIS

CM2.1 Forcing

CM2.1-Ocean ∆

GISS No Volcano

GISS GHGs

GISS Ozone

GISS Strat_H20

GISS Solar

GISS Landuse

GISS Snow Albedo

GISS Volcano

GISS Black Carb

GISS Refl Aer

GISS Aer Indir Eff

RESPONSES

CCSM3 Model Temp

CM2.1 Model Temp

GISSE ModelE Temp

BEST Temp

Forster Model Temps

Forster Model Temps No Volc

Flat

GISS Temp

HadCRUT4

You can insert your own data as well, or make up combinations of any of the forcings. I’ve included a variety of forcings and responses. This one-line equation model has forcing datasets, subsets of those such as volcanoes only or aerosols only, and simple impulses such as a square step.

Now, while this spreadsheet is by no means user-friendly, I’ve tried to make it at least not user-aggressive.

Appendix D: The Mathematical Derivation of the Relationship between Climate Sensitivity and the Trend Ratio

I have stated that the climate sensitivity is equal to the ratio of the trends of the response and forcing datasets. Here is the derivation of that relationship.

We start with the one-line equation:

T1 = T0 + λ ∆F1 (1 - a) + ∆T0 a        Equation 1

Let us consider the situation of a linear trend in the forcing, where the forcing is ramped up by a certain amount every year. Here are lagged results from that kind of forcing.

Figure B1. A steady increase in forcing over time (red line), along with the lagged response for a time constant (tau) of zero, and for a time constant of 20 years. The residual is offset -0.6 degrees for clarity.

Note that the only difference that tau (the lag time constant) makes is how long it takes to come to equilibrium. After that the results stabilize, with the same change each year in both the forcing and the temperature (∆F and ∆T). So let’s consider that equilibrium situation.

Subtracting T0 from both sides gives

T1 - T0 = λ ∆F1 (1 - a) + ∆T0 a        Equation 2

Now, T1 minus T0 is simply ∆T1. But since at equilibrium all the annual temperature changes are the same, ∆T1 = ∆T0 = ∆T, and the same is true for the forcing. So equation 2 simplifies to

∆T = λ ∆F (1 - a) + ∆T a        Equation 3

Dividing by ∆F gives us

∆T / ∆F = λ (1 - a) + (∆T / ∆F) a        Equation 4

Collecting terms, we get

(∆T / ∆F) (1 - a) = λ (1 - a)        Equation 5

And dividing through by (1-a) yields

∆T / ∆F = λ        Equation 6

Now, out in the equilibrium area on the right side of Figure B1, ∆T/∆F is the actual trend ratio. So we have shown that at equilibrium

λ = ∆T / ∆F = the trend ratio        Equation 7

But if we include the entire dataset, you’ll see from Figure B1 that the measured trend will be slightly less than the trend at equilibrium.

And as a result, we would expect to find that lambda is slightly larger than the actual trend ratio. And indeed, this is what we found for the models when compared to their own results: lambda = 1.07 times the trend ratio.
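You can check this numerically with the one-line equation itself. A sketch, reusing ole_emulate() from the Appendix A sketch, with an invented ramp and a known lambda:

year    <- 1:120
forcing <- 0.03 * year                                   # steady ramp in W/m2
temp    <- ole_emulate(forcing, lambda = 0.4, tau = 20)  # the "true" lambda is 0.4
slope   <- function(y) unname(coef(lm(y ~ year))[2])
0.4 / (slope(temp) / slope(forcing))  # a bit above 1, consistent with the 1.07 factor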

When the forcings are run against real datasets, however, it appears that the greater variability of the actual temperature datasets averages out the small effect of tau on the results, and on average we end up with the situation shown in Figure 4 above, where lambda is experimentally determined to be equal to the trend ratio.

Appendix E: The Underlying Math

The best explanation of the derivation of the math used in the spreadsheet is an appendix to Paul_K’s post here. Paul has contributed hugely to my analysis by correcting my mistakes as I revealed them, and has my great thanks.

Climate Modeling – Abstracting the Input Signal by Paul_K

I will start with the (linear) feedback equation applied to a single capacity system—essentially the mixed layer plus fast-connected capacity:

C dT/dt = F(t) - λ T        Equ. A1

Where:

C is the heat capacity of the mixed layer plus fast-connected capacity (watt-years m⁻² K⁻¹)

T is the change in temperature from time zero (degrees K)

T(k) is the change in temperature from time zero to the end of the kth year

t is time (years)

F(t) is the cumulative radiative and non-radiative flux “forcing” applied to the single capacity system (W m⁻²)

λ is the first order feedback parameter (W m⁻² K⁻¹)

We can solve Equ. A1 using superposition. I am going to use timesteps of one year.

Let the forcing increment applicable to the jth year be defined as fj. We can therefore write

F(t=k) = Fk = Σ fj   for j = 1 to k        Equ. A2

The temperature contribution from the forcing increment fj at the end of the kth year is given by

ΔTj(t=k) = fj (1 - exp(-(k+1-j)/τ)) / λ        Equ. A3

where τ is set equal to C/λ.

By superposition, the total temperature change at time t=k is given by the summation of all such forcing increments. Thus

T(t=k) = Σ fj (1 - exp(-(k+1-j)/τ)) / λ   for j = 1 to k        Equ. A4

Similarly, the total temperature change at time t=k-1 is given by

T(t=k-1) = Σ fj (1 - exp(-(k-j)/τ)) / λ   for j = 1 to k-1        Equ. A5

Subtracting Equ. A5 from Equ. A4, we obtain:

T(k) - T(k-1) = fk (1 - exp(-1/τ)) / λ + ((1 - exp(-1/τ)) / λ) (Σ fj exp(-(k-j)/τ) for j = 1 to k-1)        Equ. A6

We note from Equ. A5 that

(Σ fj exp(-(k-j)/τ) / λ for j = 1 to k-1) = (Σ (fj/λ) for j = 1 to k-1) - T(k-1)

Making this substitution, Equ. A6 then becomes:

T(k) - T(k-1) = fk (1 - exp(-1/τ)) / λ + (1 - exp(-1/τ)) ((Σ (fj/λ) for j = 1 to k-1) - T(k-1))        Equ. A7

If we now set α = 1 - exp(-1/τ) and make use of Equ. A2, we can rewrite Equ. A7 in the following simple form:

T(k) - T(k-1) = Fk α / λ - α T(k-1)        Equ. A8

Equ.A8 can be used for prediction of temperature from a known cumulative forcing series, or can be readily used to determine the cumulative forcing series from a known temperature dataset.  From the cumulative forcing series, it is a trivial step to abstract the annual incremental forcing data by difference.

For the values of α and λ, I am going to use values which are conditioned to the same response sensitivity of temperature to flux changes as the GISS-ER General Circulation Model (GCM).

These values are:

α = 0.279563

λ = 2.94775

Shown below is a plot confirming that Equ. A8 with these values of alpha and lambda can reproduce the GISS-ER model results with good accuracy. The correlation is >0.99.

This same governing equation has been applied to at least two other GCMs (CCSM3 and GFDL) and, with similar parameter values, works equally well to emulate those model results. While changing the parameter values slightly modifies the values of the fluxes calculated from temperature, it does not significantly change the structural form of the input signal, nor can it change the primary conclusion of this article, which is that the AGW signal cannot be reliably extracted from the temperature series.

Equally, substituting a more generalised non-linear form for Equ. A1 does not change the results at all, provided that the parameters chosen for the non-linear form are selected to show the same sensitivity over the actual observed temperature range. (See here for proof.)
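As a quick cross-check on Paul_K’s algebra, here is a small R verification (my sketch, using random forcing increments) that the recursion of Equ. A8 exactly matches the superposition solution of Equ. A4:

set.seed(1)
f      <- rnorm(100, mean = 0.03, sd = 0.2)  # hypothetical annual forcing increments
alpha  <- 0.279563
lambda <- 2.94775
ea     <- 1 - alpha                          # = exp(-1/tau)
Fk     <- cumsum(f)                          # cumulative forcing, Equ. A2
T_sup  <- sapply(seq_along(f), function(k)   # superposition, Equ. A4
  sum(f[1:k] * (1 - ea^(k + 1 - (1:k))) / lambda))
T_rec    <- numeric(length(f))               # recursion, Equ. A8
T_rec[1] <- Fk[1] * alpha / lambda
for (k in 2:length(f))
  T_rec[k] <- T_rec[k - 1] + Fk[k] * alpha / lambda - alpha * T_rec[k - 1]
max(abs(T_sup - T_rec))                      # zero to machine precision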
