An Observational Estimate of Climate Sensitivity

Guest Post by Willis Eschenbach

“Climate sensitivity” is the name for the measure of how much the earth’s surface is supposed to warm for a given change in what is called “forcing”. A change in forcing means a change in the net downwelling radiation at the top of the atmosphere, which includes both shortwave (solar) and longwave (“greenhouse”) radiation.

There is an interesting study of the earth’s radiation budget called “Long-term global distribution of Earth’s shortwave radiation budget at the top of atmosphere“, by N. Hatzianastassiou et al. Among other things it contains a look at the albedo by hemisphere for the period 1984-1998. I realized today that I could use that data, along with the NASA solar data, to calculate an observational estimate of equilibrium climate sensitivity.

Now, you can’t just look at the direct change in solar forcing versus the change in temperature to get the long-term sensitivity. All that will give you is the “instantaneous” climate sensitivity. The reason is that it takes a while for the earth to warm up or cool down, so the immediate change from an increase in forcing will be smaller than the eventual equilibrium change if that same forcing change is sustained over a long time period.

However, all is not lost. Figure 1 shows the annual cycle of solar forcing changes and temperature changes.

Figure 1. Lissajous figure of the change in solar forcing (horizontal axis) versus the change in temperature (vertical axis) on an annual average basis.

So … what are we looking at in Figure 1?

I began by combining the NASA solar data, which shows month-by-month changes in the solar energy hitting the earth, with the albedo data. The solar forcing in watts per square metre (W/m2) times (1 minus albedo) gives us the amount of incoming solar energy that actually makes it into the system. This is the actual net solar forcing, month by month.

Then I plotted the changes in that net solar forcing (after albedo reflections) against the corresponding changes in temperature, by hemisphere. First, a couple of comments about that plot.

The Northern Hemisphere (NH) has larger temperature swings (vertical axis) than does the Southern Hemisphere (SH). This is because more of the NH is land and more of the SH is ocean … and the ocean has a much larger specific heat, meaning it takes more energy to warm the ocean by a given amount than it does to warm the land.

We can also see the same thing reflected in the slope of the ovals. The slope of the ovals is a measure of the “lag” in the system. The harder it is to warm or cool the hemisphere, the larger the lag, and the flatter the slope.

So that explains the red and the blue lines, which are the actual data for the NH and the SH respectively.

For the “lagged model”, I used the simplest of models. This uses an exponential function to approximate the lag, along with a variable “lambda_0” which is the instantaneous climate sensitivity. It models the process in which an object is warmed by incoming radiation. At first the warming is fairly fast, but then as time goes on the warming is slower and slower, until it finally reaches equilibrium. The length of time it takes to warm up is governed by a “time constant” called “tau”. I used the following formula:

ΔT(n+1) = λ ΔF(n+1)/τ + ΔT(n) exp(-1/τ)

where ΔT is the change in temperature, ΔF is the change in forcing, lambda_0 (λ) is the instantaneous climate sensitivity, “n” and “n + 1” are the times of the observations, and tau (τ) is the time constant. I used Excel’s “Solver” tool to find the values of lambda_0 and tau that give the best fit for both the NH and the SH. The fit is actually quite good, with RMS errors of only 0.2°C and 0.1°C for the NH and the SH respectively.
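For anyone who wants to reproduce the fit outside of Excel, here is a minimal sketch of the same procedure in Python. The monthly series below are synthetic placeholders (the real inputs are the NASA solar data, the Hatzianastassiou et al. albedo data, and the hemispheric temperatures), and scipy’s optimizer stands in for Excel’s Solver.

import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in data: a 12-month forcing cycle and a lagged temperature
# cycle. The real inputs are the NASA solar series, the hemispheric albedo
# from Hatzianastassiou et al., and the hemispheric temperature record.
months = np.arange(12)
solar = 340 + 20 * np.sin(2 * np.pi * months / 12)        # W/m2, illustrative
albedo = 0.30 + 0.02 * np.cos(2 * np.pi * months / 12)    # fraction, illustrative
temp = 14 + 6 * np.sin(2 * np.pi * (months - 1) / 12)     # °C, lags the forcing

net_forcing = solar * (1.0 - albedo)     # solar input remaining after albedo reflection
dF = net_forcing - net_forcing.mean()    # forcing anomaly, W/m2
dT_obs = temp - temp.mean()              # temperature anomaly, °C

def lagged_model(lam, tau, dF):
    # dT(n+1) = lam * dF(n+1) / tau + dT(n) * exp(-1/tau), per the formula above
    dT = np.zeros_like(dF)
    for n in range(1, len(dF)):
        dT[n] = lam * dF[n] / tau + dT[n - 1] * np.exp(-1.0 / tau)
    return dT

def rms_error(params):
    lam, tau = params
    if tau <= 0.1:               # keep the optimizer away from unphysical lags
        return 1e6
    return np.sqrt(np.mean((lagged_model(lam, tau, dF) - dT_obs) ** 2))

# The same job Solver does: find the lambda_0 and tau that minimize the RMS error
fit = minimize(rms_error, x0=[0.1, 2.0], method="Nelder-Mead")
lam_0, tau = fit.x
print(f"lambda_0 = {lam_0:.2f} °C per W/m2, tau = {tau:.1f} months")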

Now, as you might expect, we get different numbers for both lambda_0 and tau for the NH and the SH, as follows:

Hemisphere    lambda_0 (°C per W/m2)    tau (months)
NH            0.08                      1.9
SH            0.04                      2.4

Note that (as expected) it takes longer for the SH to warm or cool than for the NH (tau is larger for the SH). In addition, as expected, the SH changes less with a given amount of heating.

Now, bear in mind that lambda_0 is the instantaneous climate sensitivity. However, since we also know the time constant, we can use that to calculate the equilibrium sensitivity. I’m sure there is some easy way to do that, but I just used the same spreadsheet. To simulate a doubling of CO2, I gave it a one-time jump of 3.7 W/m2 of forcing.
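Here is a minimal sketch of that step calculation, using the fitted values from the table above: hold the forcing anomaly at 3.7 W/m2 and iterate the recursion until the temperature change levels off. Whether this matches the spreadsheet step for step is an assumption on my part, but it reproduces the numbers below.

import numpy as np

def equilibrium_warming(lam, tau, dF=3.7, months=600):
    # Iterate dT(n+1) = lam*dF/tau + dT(n)*exp(-1/tau) with the forcing held
    # at dF until dT stops changing; the closed form of the limit is
    # dT_eq = lam*dF/tau / (1 - exp(-1/tau)).
    dT = 0.0
    for _ in range(months):
        dT = lam * dF / tau + dT * np.exp(-1.0 / tau)
    return dT

print("NH:", round(equilibrium_warming(0.08, 1.9), 1), "°C")   # ≈ 0.4
print("SH:", round(equilibrium_warming(0.04, 2.4), 1), "°C")   # ≈ 0.2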

The result is that the equilibrium climate sensitivity to the change in forcing from a doubling of CO2 (3.7 W/m2) is 0.4°C in the Northern Hemisphere and 0.2°C in the Southern Hemisphere. This gives an overall average global equilibrium climate sensitivity of 0.3°C for a doubling of CO2.

Comments and criticisms gladly accepted, this is how science works. I put my ideas out there, and y’all try to find holes in them.

w.

NOTE: The spreadsheet used to do the calculations and generate the graph is here.

NOTE: I also looked at modeling the change using the entire dataset which covers from 1984 to 1998, rather than just using the annual averages (not shown). The answers for lambda_0 and tau for the NH and the SH came out the same (to the accuracy reported above), despite the general warming over the time period. I am aware that the time constant “tau”, at only a few months, is shorter than other studies have shown. However … I’m just reporting what I found. When I try modeling it with a larger time constant, the angle comes out all wrong, much flatter.

While it is certainly possible that there are much longer-term periods for the warming, they are not evident in either of my analyses on this data. If such longer-term time lags exist, it appears that they are not significant enough to lengthen the lags shown in my analysis above. The details of the long-term analysis (as opposed to using the average as above) are shown in the spreadsheet.

252 Comments
George E. Smith;
May 31, 2012 9:48 am

“”””” Phil. says:
May 31, 2012 at 8:53 am
George E. Smith; says:
May 29, 2012 at 6:27 pm
The logarithmic “climate sensitivity” is a point where Phil and I part company……”””””
Now you’ve got me all confused, Phil.
Over what region of the mathematical function y = log(x) does it have a constant slope, y = m(x) + C, and in what region does it follow y = a(x)^0.5?
I always thought y = log(x) tracked y = log(x) for all values of (x).
Or are you saying climate sensitivity is digital; it is the Temperature rise for CO2 going from 280 ppm to 560 ppm (doubling) and for no other values of CO2?
Can we just say it is “non-linear”? The experimental data certainly agrees with that; and it is too noisy to confirm any actual mathematical formula over any range of values.

May 31, 2012 9:51 am

Magic Turtle says:
May 30, 2012 at 1:02 pm
Sorry, but I am not following you. How does the optical thickness of CO2 absorption prevent one from deriving a valid formula for radiation-absorption by CO2 from the Beer-Lambert law? I cannot understand an argument that you are not putting, Phil!

Because the B-L law doesn’t hold for the optical thickness of the CO2 absorption in our present atmosphere. Your citation explicitly says so too!
MT: My approach does not focus on one particular frequency of absorption as yours does, but focuses on the sizes of the absorption wavebands instead. I can imagine that the optical thickening effect would broaden that waveband to a degree and render it more shallow to a degree as well, but I doubt that such modifications would be significant.
Your imagination and doubts are rendered moot by the available experimental evidence!
From the Wiki article cited by you:
“The derivation assumes that every absorbing particle behaves independently with respect to the light and is not affected by other particles. Error is introduced when particles are lying along the same optical path such that some particles are in the shadow of others. This occurs in highly concentrated solutions. In practice, when large absorption values are measured, dilution is required to achieve accurate results. Measurements of absorption in the range of Iℓ/Io=0.1 to 1 are less affected by shadowing than other sources of random error. In this range, the ODE model developed above is a good approximation; measurements of absorption in this range are linearly related to concentration. At higher absorbances, concentrations will be underestimated due to this shadow effect unless one employs a more sophisticated model that describes the non-linear relationship between absorption and concentration.”
As shown in the absorption spectra for current atmospheric conditions and doubled CO2 concentration which I linked before:
http://i302.photobucket.com/albums/nn107/Sprintstar400/CO2spectra-1.gif
the majority of the band has Iℓ/Io much less than 0.1; thus, as your own cite states, “a more sophisticated model that describes the non-linear relationship between absorption and concentration” is needed. The curve of growth which I supplied the derivation for is exactly that: at current concentrations in the atmosphere, the CO2 band is in the logarithmic regime. The Q-branch saturates even at 1 ppm.
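To put the saturation argument in concrete terms, here is a toy Beer–Lambert calculation for a single grey absorption line. The absorption coefficient is made up, and real bands have wings and pressure broadening (the full curve-of-growth treatment referred to above), so this only illustrates why the linear regime breaks down.

import numpy as np

k = 0.05                      # illustrative absorption coefficient, per ppm
for C in (1, 280, 560):       # CO2 concentration, ppm
    transmitted = np.exp(-k * C)          # I/I0 from the Beer-Lambert law
    print(f"{C:4d} ppm: I/I0 = {transmitted:.3g}, absorbed = {1 - transmitted:.4f}")
# Once I/I0 is far below 0.1 the band centre is effectively saturated, so
# doubling the concentration adds almost nothing there; further growth in
# absorption has to come from the band wings.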

May 31, 2012 10:29 am

George E. Smith; says:
May 31, 2012 at 9:48 am
“”””” Phil. says:
May 31, 2012 at 8:53 am
George E. Smith; says:
May 29, 2012 at 6:27 pm
The logarithmic “climate sensitivity” is a point where Phil and I part company……”””””
Now you’ve got me all confused, Phil.
Over what region of the mathematical function y = log(x) does it have a constant slope, y = m(x) + C, and in what region does it follow y = a(x)^0.5?

y = fn(x) goes as y = m(x) as x ➔ 0 and as y = a√x as x ➔ ∞; in a transition region in between, y ≅ log(x).
I always thought y = log(x) tracked y = log(x) for all values of (x).
Or are you saying climate sensitivity is digital; it is the Temperature rise for CO2 going from 280 ppm to 560 ppm (doubling) and for no other values of CO2?

As shown above there is a range of values for which the dependence is logarithmic; the present conditions fall in that range for the CO2 absorption band.
I suggest you read the note I referenced George, it’s explained there.
Can we just say it is “non-linear”? The experimental data certainly agrees with that; and it is too noisy to confirm any actual mathematical formula over any range of values.
The data for the CO2 absorption isn’t noisy at all, George.

joeldshore
May 31, 2012 10:43 am

George E. Smith says:

Over what region of the mathematical function y = log(x) does it have a constant slope, y = m(x) + C, and in what region does it follow y = a(x)^0.5?
I always thought y = log(x) tracked y = log(x) for all values of (x).

George, I have to admit I am a bit puzzled by your hangup on this. We’ve had discussions about it several times and the basic points are not that complicated. The log(x) is not a fundamental law of nature; it is an approximation that holds pretty well in a certain regime (that regime being, as I understand it, when the primary absorption band contributing to the radiative forcing is saturated in the middle but not in the wings). It turns out that this is the regime that we are currently in with CO2 (and remain in over a fairly broad range of going down or up in temperature).
However, in the dilute regime, a better approximation is that the forcing is linear in the concentration. And, then there is a high concentration regime where the forcing is roughly proportional to the square root of the concentration.
These are, of course, approximations, and there are deviations from them. In fact, the deviations are such that if you use the logarithmic formula with the values that hold for the regime we are in and extend it up past doubling, then it does underestimate the forcing a bit at these higher values, but the underestimate is not that bad and, relative to other uncertainties, such as the climate sensitivity, it is not worth making too big a fuss about.
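For reference, the three regimes described above are commonly summarized as follows; the logarithmic expression with the 5.35 coefficient is the widely quoted simplified fit of Myhre et al. (1998), not something derived in this thread:

ΔF ∝ C                      (dilute, weak-line regime)
ΔF ≈ 5.35 · ln(C/C0) W/m2   (saturated-centre regime, roughly where CO2 is now)
ΔF ∝ √C                     (strong-line regime)

so that a doubling gives ΔF ≈ 5.35 · ln 2 ≈ 3.7 W/m2, the figure used in the head post.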

KR
May 31, 2012 10:47 am

Willis Eschenbach – The issues I have with single-box models such as the one you fitted in this thread are that they give different answers for different time scales (http://wattsupwiththat.com/2010/12/19/model-charged-with-excessive-use-of-forcing/, http://wattsupwiththat.com/2011/01/17/zero-point-three-times-the-forcing/, here), they require arbitrary scaling to fit (http://wattsupwiththat.com/2011/05/14/life-is-like-a-black-box-of-chocolates/ where you iteratively modified the volcanic forcing), and they do not represent known and fairly simple physics – that the atmosphere with very little thermal mass warms and cools at a different rate than the oceans.
A two-box model is a huge improvement in fitting forcings of different temporal variation, and doesn’t need arbitrary scaling of those forcings with different time scales. I will note that it is, as well, likely insufficient – I recall an article (can’t lay my hands on the reference at the moment) wherein a five-box model was the minimum complexity for ocean thermal diffusion in a 1D thermal model (constrained by observed temperatures), as only at that complexity or higher did the variation from model to model become small WRT the values.
But the two-box model (http://tinyurl.com/7t8rs8b, http://tinyurl.com/6tvk3wt) is a huge improvement upon the single-box model in all regards, in all categories of fit and residuals. And hence sensitivity estimated from that model fit (~2.5C/doubling of CO2) is going to be correspondingly more accurate than from your single-box fit.
“Nor does it reduce the implications of my work to point out that the volcanic adjustment increases the accuracy. That is simply more evidence that my model is accurate, and that somewhere in the bowels of the models they treat volcanic forcing slightly differently than radiative forcing …”
That’s evidence if and only if (IFF) such volcanic-specific adjustments are made in other models – otherwise special-case treatments such as those are evidence that your model is less accurate. The two-box model (here: http://tinyurl.com/7t8rs8b) does not need to treat volcanic forcing differently, and fits the data better, which I feel is solid evidence that it’s a better model. Given that code for many of the commonly used models is directly available, can you point to such exceptional treatment and adjustment in those models? Over and above the observationally supported forcing efficiencies primarily related to forcing locations?
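For readers unfamiliar with the term, here is a minimal generic two-box sketch: a low-heat-capacity “fast” box (atmosphere and mixed layer) coupled to a high-heat-capacity “slow” box (deep ocean). The parameter values are illustrative placeholders, and this is not Tamino’s actual code.

import numpy as np

def two_box(forcing, dt=1.0 / 12,
            c_fast=8.0, c_slow=100.0,   # heat capacities, W·yr/m2/K (illustrative)
            lam=1.2,                    # feedback parameter, W/m2/K (illustrative)
            gamma=0.7):                 # fast-slow heat exchange, W/m2/K (illustrative)
    # The fast box sees the forcing, radiates to space, and exchanges heat with
    # the slow box; the slow box only exchanges heat with the fast box.
    Tf = Ts = 0.0
    out = []
    for F in forcing:
        dTf = (F - lam * Tf - gamma * (Tf - Ts)) * dt / c_fast
        dTs = gamma * (Tf - Ts) * dt / c_slow
        Tf, Ts = Tf + dTf, Ts + dTs
        out.append(Tf)
    return np.array(out)

# Sustained doubled-CO2 forcing: the equilibrium warming is F/lam (about 3.1 K
# with these made-up numbers), but the fast box gets most of the way there in
# a few years while the slow box drags the final approach out over centuries.
F = np.full(500 * 12, 3.7)              # W/m2, monthly steps for 500 years
T = two_box(F)
print("after 10 yr:", round(T[10 * 12 - 1], 2), "K; after 500 yr:", round(T[-1], 2), "K")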

joeldshore
May 31, 2012 10:57 am

Willis says:

Thanks, KR. I don’t understand why you (and others) keep saying my results are “unrealistic” or “wrong”. My results fit the observations quite closely. So what do you mean when you say that they are “unrealistic”? Unrealistic (in my world) means that they don’t agree with reality … but my results agree with reality.

Willis: When you are critiquing other people’s work, whether it be Nikolov and Zeller or climate scientists, you seem to understand that being able to fit one piece of data does not indicate that a model is realistic in all respects. Not surprisingly, these same principles hold true even for your models!

So you’ll have to make your objection clearer. My model reproduces the annual cycle, so no “two-box” model is needed … but why not? If you say a “two-box” model is necessary, why is it not necessary for the annual cycle?

Because the annual cycle has a single frequency in it. However, one hint that your model would fail with multiple frequencies is that you got a very different climate sensitivity when you fit to the GISS model emulation of the instrumental temperature record than you got when you fit to the annual cycle. Another hint is how in that situation, you had to introduce a fudge factor to correctly model the response to the volcanic forcing, which came in at a shorter timescale (higher frequency).

So yes, I can emulate the model, with a correlation well over 0.90, with or without the volcanic adjustment. The volcanic adjustment is trivial, a 7% reduction.

That is simply because the volcanic eruptions are rare enough and short-lived enough that doing a bad job modeling them does not penalize you very much.

You still don’t seem to have grasped the implications of that finding. It means that the models are NOT doing some kind of thing with two timescales as you assume. If they were, I couldn’t emulate the models with any accuracy at all … but I can emulate them with high accuracy with only one timescale.

That is because you are sticking to modeling phenomena occurring in one narrow range of frequency. In such a case, climate sensitivity and relaxation time can trade off with each other. I.e., you can get a good fit to higher-frequency phenomena by using an artificially-low climate sensitivity.

Nor does it reduce the implications of my work to point out that the volcanic adjustment increases the accuracy. That is simply more evidence that my model is accurate, and that somewhere in the bowels of the models they treat volcanic forcing slightly differently than radiative forcing … which is no surprise at all, they depend on very different mechanisms.

No … it is evidence of how your model fails when you have two very disparate time frequencies. The evidence of your model’s limitations is clear if you are willing to see it. If it was someone else’s model reaching a conclusion that you didn’t like (be it completely denying the greenhouse effect or arguing for a climate sensitivity in the range that the IPCC says it is), I don’t think you would have any difficulty seeing the limitations. I guess it is always harder to see the problems in one’s own work.

richardscourtney
May 31, 2012 11:09 am

Joe Born:
Thank you for your answer to me at May 31, 2012 at 9:14 am.
That is more than I asked for, and I appreciate it.
Richard

Reply to  Willis Eschenbach
May 31, 2012 12:48 pm

Willis:
Thank you for taking the time to reply. When a model is legitimately scientific, it provides information to the users of this model about the unobserved but observable outcomes of statistical events; that they have this information gives these users a basis for controlling the outcomes. The equilibrium temperature fails the test of observability. Thus, while public officials appear to believe that models such as yours provide them with a basis for controlling Earth’s surface temperature, this belief is mistaken.
Terry.

joeldshore
May 31, 2012 11:31 am

Willis says (to Jim D):

I asked for a citation to your claim. It seems you have none, which is fine, but in that case you need to present your actual calculations.

JimD presented the basic calculation here: http://wattsupwiththat.com/2012/05/29/an-observational-estimate-of-climate-sensitivity/#comment-997506 You could argue that his way of estimating the climate sensitivity is a little more naive than yours, but it is likely not that different in the result it gets: I just tried estimating the climate sensitivity from the seasonal cycle using his naive approach of just looking at the ratio of the amplitude of the temperature cycle over the amplitude of the forcing cycle, and I get estimates of the climate sensitivity for CO2 doubling of 0.11 C and 0.29 C from this for the Southern and Northern hemispheres, which are not far off from your estimates of 0.2 and 0.4 C, respectively.
His basic point is in line with what all of us are saying: The higher frequency the data that you use to estimate climate sensitivity using a model with a single relaxation time scale, the lower the value of climate sensitivity you will get. So, for the diurnal cycle over the ocean, as Jim points out, you’d get an estimate on the order of 0.01 C per CO2 doubling. Using the annual cycle, you get an estimate on the order of 0.3 C per doubling. Using the instrumental temperature record, you get an estimate on the order of 1 C or so per doubling.
Do you see the pattern here?
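The pattern can be checked directly with the standard frequency response of a one-box model: for sinusoidal forcing of angular frequency ω, the apparent sensitivity recovered from the amplitude ratio is the true sensitivity divided by sqrt(1 + (ωτ)²). A small sketch, with made-up parameter values:

import numpy as np

lam = 0.8     # °C per W/m2, illustrative "true" equilibrium sensitivity
tau = 36.0    # months, illustrative time constant

for name, period in [("diurnal", 1.0 / 30), ("annual", 12.0), ("50-year", 600.0)]:
    omega = 2 * np.pi / period                       # radians per month
    apparent = lam / np.sqrt(1 + (omega * tau) ** 2)
    print(f"{name:8s} cycle: apparent sensitivity ≈ {apparent:.3f} °C per W/m2")
# The shorter the cycle, the smaller the apparent sensitivity, which is why
# fits to the diurnal, annual, and instrumental records give increasing values.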

richardscourtney
May 31, 2012 12:05 pm

KR:
I am grateful for your taking the trouble to answer me in your post at May 31, 2012 at 8:33 am.
Unfortunately, I fail to see how you have refuted my point which was

the data permits almost any interpretation one wants to make.

Indeed, despite your claim to the contrary, you have provided another illustration of my point.
Perhaps as you say, Willis’ model “produces widely different results when looking at forcings of different time scales”. But so what? It works over the time scale he assessed.
The worst one could say from that is that Willis’ model has limited usefulness because it only works over short time scales.
But you say much more than that. You assert

On the other hand, using a two-box model (http://tinyurl.com/7t8rs8b – an analysis from Tamino, not mine) no arbitrary forcing scale constants are needed, dropping (or using alone) any single forcing shows clear statistical fit issues (which is what should be expected), but using all forcings matches behaviors quite well.

I have an important observation about anything “Tamino” posts on his blog which I mention below. Before that, I observe that you and Tamino fail to show Tamino’s model is validated for all time scales.
So, why is a “two-box” model right? You say it addresses annual effects and oceanic thermal delay. OK, let us assume that is true. In that case why is a ‘three-box’ model which assesses biological response delay not right? And why is a “four-box” model which …. etc.
I understand you are likely to say Willis’ model only assesses the first order effect and Tamino’s model also assesses the second order effect. But there is no determination of the relative magnitudes of the third to n order effects.
There is no way to determine that Tamino’s model is more useful than Willis’ model when the relative magnitudes of the third to n order effects are not known. All one can say is that both models fit your criterion that “The data and the physics constrain the interpretations, and the strength of your conclusions”.
As Willis says to you at May 31, 2012 at 10:01 am

I don’t understand why you (and others) keep saying my results are “unrealistic” or “wrong”. My results fit the observations quite closely. So what do you mean when you say that they are “unrealistic”? Unrealistic (in my world) means that they don’t agree with reality … but my results agree with reality.

Indeed, Terry Oldberg states the real difficulty when he says to me at May 31, 2012 at 8:15 am

That the data permits almost any interpretation one cares to make can be traced to a fundamental error in the construction of modern climatology. This is that the models reference no statistical population. Absent the associated population, the models cannot be tested.

Subsequent to his having seen your post that I am answering at May 31, 2012 at 9:50 am Terry Oldberg says to you:

As you point out, a model may be useful. However, among the useful models is not the one that is the focus of Willis’s article. It is useless in the sense of conveying no information to a policy maker about the outcomes from his/her policy decisions. The lack of information is a consequence from the unobservability of the equilibrium temperature.

He is saying the same as me; i.e. the data does not constrain the model to an adequate degree for it to be useful “to a policymaker” because much, much too little is known. Or, as I succinctly phrase it,
the data permits almost any interpretation one wants to make.
As an addendum, I add my observation about anything Tamino posts on his blog.
“Tamino” is an academic and, therefore, his career benefits from anything he publishes in the peer-reviewed literature: it increases his publication count. But anything published under a false name on his blog cannot be published in the peer-reviewed literature. So, when “Tamino” posts work on his blog he is declaring that he has decided the work is so unworthy that it cannot be raised to a standard worthy of publication. The possibility exists that he may have made a misjudgement about the value of a particular item he has chosen to post on his blog, but I need some encouragement to bother evaluating his stuff that he (n.b. himself) has thrown away as being worthless.
Richard

KR
May 31, 2012 1:04 pm

richardscourtney – As I said above, the data doesn’t permit any interpretation, but rather constrains interpretations. Which is a major point in developing models – using observational data to find the best model available, and rejecting models that perform poorly in terms of that data.
A two-box model certainly isn’t perfect (see my comment at http://wattsupwiththat.com/2012/05/29/an-observational-estimate-of-climate-sensitivity/#comment-998046 regarding ocean thermal models, and I believe most radiative atmospheric models run to about twenty levels [twenty boxes], at which point they are usefully and asymptotically approaching the behavior of an infinite level model), but it’s far better than a one-box model – the tendency of a one-box model to produce different results at different time scales makes all of those results less certain.

Regarding Tamino and his blog – the vast majority of his blog posts are applications of standard textbook statistical analysis to various issues, and I believe are sub-LPU in size (http://en.wikipedia.org/wiki/Least_publishable_unit 🙂 ). I would consider pointing out bad statistical practices a public service – I’m pleased that he feels strongly enough about his field of study to make such posts. I’ll also note that he has on occasion expanded his blog posts into published peer-reviewed work (http://iopscience.iop.org/1748-9326/6/4/044022), and on that basis I’m just going to have to disagree with your valuation.

Capo
May 31, 2012 1:56 pm

Joe Born:
Take a look at the one-box model Isaac Held describes here:
http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/05/2-linearity-of-the-forced-response/
(Be careful, the lambda in Held’s equation is the inverse of Willis’ lambda)
If I try to bring it to Willis’ form, I get
dT/dt = lambda*F(t)/tau - T(t)/tau
Linearisation of the differential equation gives
T_(i+1) - T_i = (lambda*F_i/tau - T_i/tau) * delta_t
The left side is Willis’ delta T_(n+1); the problem on the right side is F_i. As far as I understand Willis’ Excel sheet, he didn’t use F_i (the total forcing), but only the monthly increment in forcing.
So he does something brutal: each monthly increment in forcing is applied in one single step (!!?) of one month; in the next month this forcing vanishes and the next monthly increment comes into play. I think this is wrong, because a monthly increment has an effect for a much longer time.
Hence my conclusion that Willis’ lambda is not the usual climate sensitivity; it’s rather a fit parameter. The same goes for Willis’ tau, which fits the data quite well but is not the usual time constant of the one-box model.
Next problem:
In Held’s simple one-box model, c*dT/dt is the energy flux into the ocean. I’m skeptical whether this model can describe the situation considered by Willis. For example, if the northern hemisphere warms, there is a further energy flux from the northern to the southern hemisphere, so I think one should use a two-box model as the simplest one.

Matthew R Marler
May 31, 2012 2:43 pm

Willis Eschenbach: [UPDATE—ERROR] I erroneously stated above that the climate sensitivity found in my analysis of the climate models was 0.3 for a doubling of CO2. In fact, that was the sensitivity in degrees per W/m2, which means that the sensitivity for a doubling of CO2 is 1.1°C. -w.
What is the standard error of the estimate?

richardscourtney
May 31, 2012 3:29 pm

KR:
I appreciate your taking the trouble to provide your answer to me that you have posted at May 31, 2012 at 1:04 pm.
You say a two-box model is “better” than a one-box model. Perhaps.
The point I (and I think also Terry Oldberg) am making is that there is no way to define what is an adequate model.
It is a fact that – as Willis says – Willis’ “one box” model DOES work. It DOES emulate physical reality. For the sake of argument, I will accept that Tamino’s “two box” model also works (but I have not checked that). And, as I explained, other models are also possible by adding more “boxes”.
Which of the many models (each constrained by the available data) is preferable and for what?
The range of published values of climate sensitivity shows that the data allows interpretation such that the determinations of climate sensitivity vary by an order of magnitude. In my terms, that says the data permits almost any interpretation one wants to make.
At present, Willis’ determination of climate sensitivity is as valid as any other. Arm-waving about “add another box” or boxes does not change that.
Richard

May 31, 2012 3:38 pm

Capo:
I understand your problem with Mr. Eschenbach’s approach; by presenting it in the way he did, he made it much harder to follow than necessary.
Be that as it may, it turns out that he isn’t really throwing away the forcing history at each step, or at least not any history that matters. Implicitly, the previous-temperature-change term ΔT(n) captures all the forcing history you need. If there was no previous-period temperature change, then the temperature has reached the steady-state value dictated by the cumulative forcing value, so there’s no need to concern oneself with that value anymore; only changes from it are of interest. If there was a temperature change last period, on the other hand, then we know the change was partial execution of an exponential approach to steady-state, and, if we know the time constant, we in essence thereby know the steady-state value to which that forcing history is driving the temperature–and what the forcing history’s contribution to the next temperature change has to be.
So, opaque as it is, his approach is basically sound. And, by limiting himself to changes, he automatically confines himself to a quasi-linear regime in a decidedly non-linear system; he doesn’t have to subtract an average before doing the linear operations, for example.
As to the n-box discussion, I’m afraid I can’t contribute much. Certainly it should be possible to obtain more-accurate results with more-complicated models. And it would be trivial to conjure up a system in which Mr. Eschenbach’s simple approach would seem to find low sensitivity in the presence of a large high-frequency stimulus even though that system’s sensitivity to low-frequency stimuli is actually high. When I do that, though, I’m struck with a sense of unreality. Remember, it is the clouds’ response to surface heating that is supposed to provide the positive feedback on which the grant-financed models’ calamity scenarios are based, and a low-frequency (multi-year) path for such a cloud-response mechanism exceeds my powers of imagination. So I’ve spared myself thinking about that much. Sorry I can’t help.

Matthew R Marler
May 31, 2012 3:46 pm

capo: The left side is Willis’ delta T_n+1, the problem on the right side is F_i. As far as I understand Willis’ excel sheet, he didn’t use F_i (the total Forcing), but only the monthly increment in forcing.
So he does something brutal: Each monthly increment in forcing is calculated in one single step (!!?) of 1 month, in the next month this forcing vanishes and the next monthly increment comes into play. I think, this is wrong, because a monthly increment has an effect for a much longer time.

that’s not much of a problem. Say that the CO2 concentration were doubled over a period of 70 years, 840 months: then instead of a 1-time increment of 3.7W/m^2 in the forcing there would be 840 increments of (3.7/840) W/m^2; in the model these effects simply summate (or sum) to the total forcing change, which works out to 1.1K over 70 years.
Willis’ model can now be used to predict the increase in global mean temp over the next 20 years (conditional on measured increases in CO2), or to predict the 2012 mean temp of any month starting with the temp at any previous month.

Reply to  Willis Eschenbach
May 31, 2012 4:31 pm

Willis (May 31, 2012 at 3:59 pm):
I’m thinking of your assignment of a numerical value to the equilibrium climate sensitivity (TECS). TECS is the proportionality constant in a functional relation that maps a change in the logarithm of the CO2 concentration to a change in the equilibrium temperature. From the form of this relation, it might appear that one can control the equilibrium temperature by controlling the CO2 concentration.

Matthew R Marler
May 31, 2012 4:18 pm

Willis: Since it is the result of an iterative procedure, one which is simultaneously fitting two values (tau and lambda), I’m not sure how to even go about calculating it … suggestions?
You have a vector autoregressive model of low dimension. The math is described in a number of books, of which I opened “Time Series Analysis and Its Applications, with R Examples” by Robert Shumway and David Stoffer, p 303. Their R software is available at CRAN. You probably have to estimate a and b in my notation, transform to lambda and tau in your notation, and use the multivariate delta-method to get the asymptotic normal theory approximate variance-covariance matrix for lambda and tau. That’s some matrix multiplications, which you can look up or I can show you in a letter. Or you could just bootstrap it: simulate 1000 realizations that have the same sample statistics as your data, and look at the distribution of the estimates.
Matt
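A rough sketch of the bootstrap Matthew suggests, written against the hypothetical lagged_model() and fitting routine from the sketch near the top of the post: refit (lambda_0, tau) on many resampled series and take the spread of the refitted values as the standard errors. Residual resampling is one reasonable choice; the right scheme depends on the error structure of the real data.

import numpy as np

def bootstrap_se(dF, dT_obs, fit, lagged_model, n_boot=1000, seed=0):
    # fit(dF, dT) should return the best-fit (lambda_0, tau) pair, e.g. a
    # wrapper around the scipy minimization shown earlier (hypothetical name).
    rng = np.random.default_rng(seed)
    lam_hat, tau_hat = fit(dF, dT_obs)
    fitted = lagged_model(lam_hat, tau_hat, dF)
    resid = dT_obs - fitted                      # residuals of the fitted model
    draws = []
    for _ in range(n_boot):
        fake = fitted + rng.choice(resid, size=resid.size, replace=True)
        draws.append(fit(dF, fake))              # refit on the resampled series
    return np.array(draws).std(axis=0)           # approx. standard errors (lambda_0, tau)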
