Guest Post by Willis Eschenbach
After I published my previous post, “An Observational Estimate of Climate Sensitivity”, a number of people objected that I was just looking at the average annual cycle. On a time scale of decades, they said, things are very different, and the climate sensitivity is much larger. So I decided to repeat my analysis without using the annual averages that I used in my last post. Figure 1 shows that result for the Northern Hemisphere (NH) and the Southern Hemisphere (SH):
Figure 1. Temperatures calculated using solely the variations in solar input (net solar energy after albedo reflections). The observations are so well matched by the calculations that you cannot see the lines showing the observations, because they are hidden by the lines showing the calculations. The two hemispheres have different time constants (tau) and climate sensitivities (lambda). For the NH, the time constant is 1.9 months, and the climate sensitivity is 0.30°C for a doubling of CO2. The corresponding figures for the SH are 2.4 months and 0.14°C for a doubling of CO2.
I did this using the same lagged model as in my previous post, but applied to the actual data rather than the averages. Please see that post and the associated spreadsheet for the calculation details. Now, there are a number of interesting things about this graph.
First, despite the nay-sayers, the climate sensitivities I used in my previous post do an excellent job of calculating the temperature changes over a decade and a half. Over the period of record the NH temperature rose by 0.4°C, and the model calculated that quite exactly. In the SH, there was almost no rise at all, and the model calculated that very accurately as well.
Second, the sun plus the albedo were all that were necessary to make these calculations. I did not use aerosols, volcanic forcing, methane, CO2, black carbon, aerosol indirect effect, land use, snow and ice albedo, or any of the other things that the modelers claim to rule the temperature. Sunlight and albedo seem to be necessary and sufficient variables to explain the temperature changes over that time period.
Third, the greenhouse gases are generally considered to be “well-mixed”, so a variety of explanations have been put forward to explain the differences in hemispherical temperature trends … when in fact, the albedo and the sun explain the different trends very well.
Fourth, there is no statistically significant trend in the residuals (calculated minus observations) for either the NH or the SH.
Fifth, I have been saying for many years now that the climate responds to disturbances and changes in the forcing by counteracting them. For example, I have held that the effect of volcanoes on the climate is wildly overestimated in the climate models, because the albedo changes to balance things back out.
We are fortunate in that this dataset encompasses one of the largest volcanic eruptions in modern times, that of Pinatubo … can you pick it out in the record shown in Figure 1? I can’t, and I say that the reason is that the clouds respond immediately to such a disturbance in a thermostatic fashion.
Sixth, if there were actually a longer time constant (tau), or a larger climate sensitivity (lambda) over decade-long periods, then it would show up in the NH residuals but not the SH residuals. This is because there is a trend in the NH and basically no trend in the SH. But the calculations using the given time constants and sensitivities were able to capture both hemispheres very accurately. The RMS error of the residuals is only a couple tenths of a degree.
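The residual checks in points four and six amount to an ordinary least-squares trend test: a trend is “not statistically significant” when the fitted slope is smaller than roughly two of its standard errors. As a minimal sketch of that test – a generic illustration with a made-up series, not Willis’s spreadsheet – it looks like this:

```python
import math

def trend_with_se(y):
    """OLS slope of y against its time index, plus the slope's
    standard error; a trend is 'significant' (roughly the 95% level)
    when |slope| exceeds about 2 standard errors."""
    n = len(y)
    x = range(n)
    mx = (n - 1) / 2.0
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    fitted = [my + slope * (xi - mx) for xi in x]
    s2 = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted)) / (n - 2)
    return slope, math.sqrt(s2 / sxx)

# A made-up residual series with a real drift of 0.01 degC per month:
slope, se = trend_with_se([0.01 * k for k in range(24)])
significant = abs(slope) > 2 * se  # True here: se is ~0 for a pure line
```

Applied to the actual calculated-minus-observed residuals, the same check is what the claim of “no statistically significant trend” rests on.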
OK, folks, there it is, tear it apart … but please remember that this is science, and that the game is to attack the science, not the person doing the science.
Also, note that it is meaningless to say my results are a “joke” or are “nonsense”. The results fit the observations extremely well. If you don’t like that, well, you need to find, identify, and point out the errors in my data, my logic, or my mathematics.
All the best,
w.
PS—I’ve been told many times, as though it settled the argument, that nobody has ever produced a model that explains the temperature rise without including anthropogenic contributions from CO2 and the like … well, the model above explains a 0.5°C/decade rise in the ’80s and ’90s, the very rise people are worried about, without any anthropogenic contribution at all.
[UPDATE: My thanks to Stephen Rasey who alertly noted below that my calculation of the trend was being thrown off slightly by end-point effects. I have corrected the graphic and related references to the trend. It makes no difference to the calculations or my conclusions. -w.]
[UPDATE: My thanks to Paul_K, who pointed out that my formula was slightly wrong. I was using
∆T(k) = λ ∆F(k)/τ + ∆T(k-1) * exp(-1/τ)
when the correct formula is
∆T(k) = λ ∆F(k) * (1 – exp(-1/τ)) + ∆T(k-1) * exp(-1/τ)
The result of the error is that I have underestimated the sensitivity slightly, while everything else remains the same. Instead of the sensitivities for the SH and the NH being 0.04°C per W/m2 and 0.08°C per W/m2 respectively, the correct sensitivities should have been 0.05°C per W/m2 and 0.10°C per W/m2.
-w.]
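For readers who want to see the corrected update rule in action, here is a minimal sketch (an illustration with a made-up constant forcing, not Willis’s spreadsheet). The key property of the corrected form is that under a constant forcing F the temperature settles at exactly λF:

```python
import math

def one_box(forcing, lam, tau):
    """Corrected one-box recursion with a 1-month time step:
    T(k) = lam*F(k)*(1 - exp(-1/tau)) + T(k-1)*exp(-1/tau).
    lam is in degC per W/m2, tau is in months."""
    a = math.exp(-1.0 / tau)
    t, out = 0.0, []
    for f in forcing:
        t = lam * f * (1.0 - a) + t * a
        out.append(t)
    return out

# Constant 1 W/m2 forcing with the corrected NH values from the update:
temps = one_box([1.0] * 240, lam=0.10, tau=1.9)
# After many time constants the response equals lam * F = 0.10 degC
```

The uncorrected form, λ∆F/τ + ∆T(k-1)·exp(-1/τ), settles at a slightly different level, which is why the fitted sensitivities shifted when the formula was fixed.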
Willis, looking at your datasheet it appears to still have the term exp(-1/tau). As one commentator pointed out, this is impermissible–exponents cannot have a dimension. Usually these sorts of terms appear as exp(-t/tau). Can you fix your datasheet properly?
Willis..I’m reminded of the time I was given a paper on a variational method in transient heat transfer…with 1/4 of the semester grade contingent upon elucidating the method shown in the 4-page paper…and then applying it to a “sample” problem.
I walked into my “night” in the graduate heat transfer course, and started with the 38 pages of “overheads” (yes, I’m dating myself, I hate that..I have to use a mirror…) and one hour and 10 minutes later I concluded. I apologized to the professor, Dr. Lu (R.I.P.), and said, “I had all I could do to figure out what this fellow had condensed into these 4 pages…I did NOT have time to apply this method to a ‘sample’ problem.”
Dr. Lu used it as an object lesson. ‘Papers are very condensed, they oft times lack many details to truly explain the method they present.’…He went on to say, ‘They can contain valuable information, but it may take much effort to find out WHAT that information is!’..
HA! I think the same of your presentation. To really understand your method takes more than looking at the write-up for half an hour and spouting out an instant judgment based on one’s EMOTIONS rather than intellect.
I just ask that you FORMALIZE this concept in a longer and more detailed writing, perhaps a 150 page PDF, with citations!
Max
Willis–
Regarding the need for providing a standard error–if you are working in Excel and used Solver to determine tau, there is a software package called SolverAid that will supply uncertainty estimates in addition to the solution. The program was developed by Robert de Levie and described in his book Advanced Excel. I have found the 2nd Edition to be very helpful. There is a Third Edition now out and information is here: http://www.bowdoin.edu/~rdelevie/excellaneous/
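For readers working outside Excel, the same idea SolverAid implements – standard errors obtained from the curvature of the sum-of-squares surface at the fitted minimum – can be sketched in a few lines. This is a generic one-parameter illustration with invented data, not Willis’s actual tau/lambda fit:

```python
import math

def sse(theta, xs, ys):
    """Sum of squared errors for the toy model y = theta * x."""
    return sum((y - theta * x) ** 2 for x, y in zip(xs, ys))

def std_error(theta_hat, xs, ys, h=1e-4):
    """Standard error from the numerical curvature of SSE at the
    minimum: se = sqrt(2 * s^2 / SSE''), with s^2 = SSE_min/(n - 1)."""
    n = len(xs)
    s2 = sse(theta_hat, xs, ys) / (n - 1)
    d2 = (sse(theta_hat + h, xs, ys) - 2 * sse(theta_hat, xs, ys)
          + sse(theta_hat - h, xs, ys)) / h ** 2
    return math.sqrt(2.0 * s2 / d2)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 7.9]                       # invented, roughly y = 2x
theta_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
se = std_error(theta_hat, xs, ys)               # matches the OLS formula
```

For a least-squares problem the SSE surface is locally quadratic at the minimum, so the second-difference estimate of the curvature is essentially exact, and the result agrees with the textbook OLS standard error.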
BarryW says:
May 31, 2012 at 7:50 pm
Well, since my numbers fit the data very well with no CO2 involved, and furthermore there is no significant trend in the residuals, I’d say about zero is attributable to CO2 …
w.
Lance Wallace says:
May 31, 2012 at 8:01 pm
You are correct, 1998 is not included, the data runs to December 1997. Sorry for the confusion.
w.
Lance Wallace says:
May 31, 2012 at 8:04 pm
Thanks, Lance. The “1” simply means that “t” = 1 month, so it cancels the units of tau (months) and the exponent is dimensionless.
w.
I’m not a big fan of a “one-box” constant tau with an instantaneous factor added in. In a retarded attempt to come up with an “infinite box” model, I chose to have the *apparent* tau being a log function of time. That is, tau ~ 4/3 ln(1+time). My *apparent* tau is based on the lag given different cycle periods. Doing some contrived curve fitting, I come up with a lag of just under 6hrs on a daily cycle, 2 months on a yearly cycle, 5 months on the ENSO cycle, and 6.5 months on the 11yr Solar cycle.
It’s a bit cryptic, but here are my calculations:
https://docs.google.com/spreadsheet/ccc?key=0AiP3g3LokjjZdEh5bURCN1F5VGgxejRlcVlNU1JvcWc
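A related point worth making explicit: even a single fixed time constant produces different apparent lags at different cycle periods. For a one-box system driven by a sinusoid of period P, the standard linear-response result gives a lag of (P/2π)·arctan(2πτ/P), which grows with P and saturates at τ. A quick sketch (an illustration of that standard result, not ferd’s spreadsheet):

```python
import math

def apparent_lag(tau, period):
    """Phase lag (same time units as tau and period) of a one-box
    system with time constant tau forced at the given period."""
    return period / (2 * math.pi) * math.atan(2 * math.pi * tau / period)

# With tau fixed at 2 months, the lag still varies with the cycle:
lag_annual = apparent_lag(2.0, 12.0)    # 12-month cycle
lag_solar = apparent_lag(2.0, 132.0)    # ~11-year cycle, in months
# lag_annual < lag_solar < tau: longer cycles show lags closer to tau
```

So period-dependent lags by themselves do not require tau to change with time; a fixed tau already produces them.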
Playing Devil’s Advocate, CO2 can affect the albedo, so it’s included. The theory is that an increase in temperature due to CO2 will result in increased water vapour being held in the atmosphere, which can manifest as clouds, altering the albedo.
So this model doesn’t impact on AGW concepts – it’s a much more empirical analysis of temperature based on how much energy from the input source (the sun) gets absorbed by the Earth (determined by albedo). Conceptually, it is straightforward and hard to refute. The next level of discussion is then to determine what impact various factors (such as CO2, aerosols, etc.) have on the Earth’s albedo. It would be interesting to see if there are any existing studies on that subject.
Willis,
Again, thanks for the work you’ve put in. I don’t think I’m going to have time to go through the details. However, one of the comments indicated that the albedo information ended in 1997. There is another source of albedo information that might help expand that a bit. Goode and Palle’ have some papers for their Earth Shine project that determines Earth’s albedo from Earth Shine light reflecting off the Moon (unlit area) and as I recall, they infer some albedo information from cloud cover records to extend albedo records beyond the very limited measurements available.
Willis you said …
“Not in any way, shape, or form. I cannot be strong enough in saying that this work has absolutely nothing to do with the bogus theories and claims of N&Z. It doesn’t confirm them, nor does it falsify them, it has no connection with them at any point.
If that’s not clear enough, let me know …”
It is certainly clear to me who is doing the ranting …
Willis,
One issue that immediately springs to mind is your exclusion of factors that we know affect the climate, such as aerosols. This has been pointed out to me before by others when I made an attempt to determine climate sensitivity without taking them into account. If we know from physics that something has an effect, then excluding that from a model is a potential problem for that model.
It might mean – for example – that it is entirely a coincidence that your model explains the temperature changes. Or it might not be a coincidence: it might mean that albedo changes are effects that are a close proxy for the temperature changes, meaning that you will get a pretty good match. But if this were the case it would render your model invalid for its purpose – all you would be matching would be temperature change versus temperature change.
I am not saying that that is what has happened here; just that excluding things that we know have physical effects from models of those physical effects can be problematic.
David Gould says:
May 31, 2012 at 9:38 pm
One issue that immediately springs to mind is your exclusion of factors that we know affect the climate, such as aerosols.
=====
It is accounted for in its effect on albedo, or there would be residuals during aerosol peaks such as volcanoes.
In effect they are being counted twice if you also include them separately with a non-zero coefficient, which will lead to an induced error in the model.
Willis,
If you have a look at the GISS annual snow albedo data and plot it against the GISS temperature data, you get an r^2 value of 0.8. Depending on which you plot as the dependent variable, either the snow albedo changes explain much of the annual temperature variation over the last 130 years or the temperature variation explains much of the snow albedo change.
Data: http://data.giss.nasa.gov/modelforce/RadF.txt and http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
Snow albedo is not the only albedo factor, of course.
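The direction-of-causation ambiguity noted above is baked into r² itself: it is symmetric in the two series, so the same value comes out whichever variable you treat as dependent. A small sketch with invented numbers (not the GISS data):

```python
def r_squared(x, y):
    """Squared Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

albedo = [0.32, 0.31, 0.33, 0.30, 0.29]   # invented numbers
temp = [0.1, 0.2, 0.05, 0.3, 0.4]         # invented numbers
r2_forward = r_squared(albedo, temp)
r2_reverse = r_squared(temp, albedo)       # identical by symmetry
```

So an r² of 0.8 between snow albedo and temperature is evidence of association, but it cannot by itself say which drives which.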
I agree with the conclusion, namely that:
“the climate responds to disturbances and changes in the forcing by counteracting them.”
and that it is all down to albedo variations affecting the amount of solar energy able to penetrate the oceans.
The difficulty is that cloudiness decreased during the late 20th century warming period and is now increasing with a cessation of that warming whereas Willis’s proposition requires more clouds when it is warming so that the warming is offset by the increased albedo.
That is a serious problem which must be addressed.
My solution is to propose that instead of increased cloudiness from warming such warming comes from reduced cloudiness caused by an expansion of the tropical air masses allowing more solar energy into the oceans by intensifying the subtropical high pressure cells, pushing the entire air circulation poleward to give more zonal / poleward jets and reduced cloudiness overall.
That reduction of cloudiness and consequent warming from more energy getting into the oceans is then offset by an increase in the hydrological cycle via my GLOBAL version of Willis’s own Thermostat Hypothesis whereby there is an increase of convective overturning along the ITCZ AND an intensification of cyclogenesis along the more zonal but also more vigorous mid latitude jet streams.
The thing is that such intensification of the hydrological cycle is accompanied by reduced GLOBAL cloudiness because the increased activity is compressed into smaller surface areas by the more zonal air flow configuration.
Thus more energy getting into the oceans is caused by the expanded tropical air masses with a reduction of cloudiness but the poleward shifting of the entire air circulation pattern then increases the rate of energy transfer from surface to space for a zero or near zero effect on the equilibrium temperature of the entire system.
Expansion of the tropical air masses can result either from faster energy release by the oceans OR changes in the vertical temperature profile of the atmosphere caused by variations in the mix of particles and wavelengths from the sun influencing ozone concentrations differently at different levels.
Climate variability is just a consequence of the continual interplay between the solar and oceanic influences as each of them varies in relation to the other all the time. The positions, sizes and intensities of the permanent climate zones shift about over time as part of the balancing process. All observed climate variations can be explained by that mechanism.
Changes in GHG amounts would have a similar effect but too small to measure compared to solar and oceanic variability.
The proposition of Nikolov and Zeller is relevant in that it provides a basic physical mechanism for the necessary redistribution of the surface air pressure pattern but I am aware of Willis’s disagreement on that issue which can be left for another time so as to avoid derailing this thread.
Willis I remove my earlier objection to the seasonal lag in your earlier work. The longer lag and lower sensitivity in the SH are consistent with the reduced land mass and antarctic ice cap.
Coupled with the earlier work, this is an extremely powerful result because it shows a consistency between seasonal and annual sensitivity, in spite of different lags. What is truly remarkable is the closeness of the fit, as it suggests that climate forcings are nowhere near as complicated as suspected.
To predict future global average temperatures one need only be able to predict:
1. CO2 levels
2. Solar output
3. Global albedo
Thus, if one wants to control climate change, it may be much more cost effective to control the earth’s albedo than CO2 levels. Especially as the sun’s output is largely unpredictable and outside our control, and CO2 levels are directly tied to economic performance and thus hard to modify without also affecting the economic climate.
Kevan Hashemi says:
May 31, 2012 at 7:19 pm
Perhaps you and your collaborators could do something similar, and see if you come up with the same answers.
=====
It would appear this is what Willis has done. Rather than calculating what change the clouds have in albedo, he has simply taken the observed change in albedo as the calculated effect. Not only for clouds, but for the net sum of all effects such as clouds, volcanoes, carbon black, and so on. Since the underlying assumption of climate science is that net W/m2 is the ultimate driver of global temperature, he has reduced the problem to its simplest terms to enable a solution.
Stephen Wilde says:
May 31, 2012 at 10:22 pm
Willis’s proposition requires more clouds when it is warming so that the warming is offset by the increased albedo.
====
I don’t see that anywhere in the model. Willis makes no stipulations as to the cause of the change in albedo. He has simply asked whether the observed changes in temperature, CO2, and albedo are closely related. And it would appear they are very closely related, with a modest amount of warming predicted for a doubling of CO2.
The lack of residuals is compelling evidence that there are no significant hidden variables. In other words, we don’t need to add any factors or assumptions to improve the fit, because the fit is already better than anything yet found in any climate model.
The proof of the pudding will come when the model moves outside of the “training” area. It could be that Willis is simply curve fitting in which case the model will have no predictive ability. However, if it maintains the low residuals as it moves into the unknown, then Willis has the making of a new scientific theory.
Potentially this is the breakthrough in understanding that is missing in climate science. Or, like the climate models that predicted accelerated warming at the exact point warming leveled off, it could prove to be useless.
What I like about Willis’s approach is that it holds true to Occam’s Razor.
from wikibible
Occam’s razor (also written as Ockham’s razor, Latin lex parsimoniae) is the law of parsimony, economy or succinctness. It is a principle urging one to select among competing hypotheses that which makes the fewest assumptions and thereby offers the simplest explanation of the effect.
I think this is a ‘spoof’ post.
You can manipulate data to give a desirable result. Here is one I produced to show that ‘doubling CO2 will lead to 3°C of warming’
with graph Sp.giff on http://www.vukcevic.talktalk.net/00f.htm
I doubt that it is trivial to detect long time constants in a record where nearly all the variance is at intra-annual scales. It is practically certain that Eschenbach’s method severely underestimates climate sensitivity. There are several possible ways to demonstrate this:
• A sensitivity analysis. If realistic time constants are added to the analysis, do they appreciably change the results? If not, you cannot exclude them.
• Simulated data. If you repeat the analysis with simulated climate data from a model with known climate sensitivity, is the model’s sensitivity underestimated? This could be done with anything from an energy balance model to a fully coupled GCM, and does not assume that the models are correct (it is axiomatic that they, like all models, are wrong but possibly useful).
• Paleoclimate. You could try to reconcile your estimate of climate sensitivity with the climate changes over the last 100,000 years. Is it possible to generate a glacial maximum several degrees colder than modern with so low a climate sensitivity?
I predict Eschenbach will do none of these, preferring to amuse the crowd with cheap numerical tricks. If he could show a climate sensitivity with robust methods, it would be published immediately in Science.
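Telford’s simulated-data suggestion can be sketched directly. Below, synthetic temperatures are generated from a two-box model with a known equilibrium sensitivity, and a one-box fit of the kind used in the post is applied to a record dominated by annual-scale variance. Everything here (the toy two-box “truth”, the grid-search fit, the made-up forcing) is an illustration of the proposed test, not anyone’s actual analysis:

```python
import math

def two_box(forcing, lam, tau_fast, tau_slow, frac_fast):
    """Synthetic 'truth': fast and slow boxes sharing equilibrium
    sensitivity lam (degC per W/m2); time step is 1 month."""
    af, asl = math.exp(-1 / tau_fast), math.exp(-1 / tau_slow)
    tf = ts = 0.0
    out = []
    for f in forcing:
        tf = frac_fast * lam * f * (1 - af) + tf * af
        ts = (1 - frac_fast) * lam * f * (1 - asl) + ts * asl
        out.append(tf + ts)
    return out

def fit_one_box(forcing, temps):
    """Grid-search one-box fit: for each tau, lam follows from a
    least-squares projection onto the unit-sensitivity response."""
    best = (None, None, float("inf"))
    for tau in (k / 10 for k in range(5, 101)):   # tau: 0.5..10 months
        a = math.exp(-1 / tau)
        u, resp = 0.0, []
        for f in forcing:
            u = f * (1 - a) + u * a
            resp.append(u)
        lam = sum(r * t for r, t in zip(resp, temps)) / sum(r * r for r in resp)
        err = sum((t - lam * r) ** 2 for r, t in zip(resp, temps))
        if err < best[2]:
            best = (tau, lam, err)
    return best[0], best[1]

# 15 years of monthly forcing: a big annual cycle plus a weak trend
forcing = [10 * math.sin(2 * math.pi * k / 12) + 0.01 * k for k in range(180)]
truth = two_box(forcing, lam=0.8, tau_fast=2.0, tau_slow=120.0, frac_fast=0.4)
tau_hat, lam_hat = fit_one_box(forcing, truth)
# lam_hat comes out well below the true 0.8: the slow box is nearly
# invisible at annual scales, so the one-box fit underestimates it
```

In this toy setup the fitted sensitivity is indeed biased low, which is exactly the failure mode Telford predicts; whether the real climate has a comparable slow component is the point under dispute.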
Hi Willis
Thanks for your work, I really don’t know where you find the time.
What I find interesting about all your work is that it points to a very stable climate system. Now, when you think about it, that fact is, in effect, a necessary conclusion of 4.5 billion years of catastrophes – and we and the planet are still here.
So, how does an ice age form? From all your work there would seem to be only one possibility: THE SUN. You have shown, really clearly, that the climate is so stable that only changing its fundamental energy source could radically change it cyclically.
I’m wondering about something here. It seems to me that this model is time independent, so that you will get the same constant trend forever, and hindcasting you will also get the same constant trend in the past. Which we didn’t have.
So if you do explain the current warming, you will have to explain the previous lack of warming.
Maybe I misunderstand the model.
Willis:
Thank you for this. It is a direct rebuttal of KR and Latimer Alder in the previous thread, who claimed the need for a “two-box” model to emulate longer-term changes than the instantaneous changes. They did not have an argument, as is demonstrated by – at the end of that thread – their abandoning rational argument and resorting to insult and abuse.
You now show by demonstration that they were wrong when they claimed your “one-box” model does not emulate changes other than only instantaneous changes. As you say
I note that detractors have responded in this thread by saying you need to assess very long time periods. That was predictable. This response is most clearly expressed by Richard Telford who says (at May 31, 2012 at 11:54 pm)
His assertion is wrong: you have obtained a realistic result by excluding them so his assertion is a falsehood. It is his duty to justify testing of “realistic time constants” (whatever they are). He is ‘blowing smoke’ until he shows the effect of adding such “realistic time constants” and explains the need for them.
And you disproved the arm-waving by your detractors who claimed NH and SH were different so your model “must” be wrong when you report
You state the important indication of your model when you say
Yes, you have demonstrated that
Sunlight and albedo seem to be necessary and sufficient variables to explain the temperature changes over that time period.
However, an ability to attribute a factor as a cause of a change only demonstrates the possibility that the factor is the cause of the change. An ability to attribute a factor as a cause of a change does NOT demonstrate that the factor is the true cause in part or in whole.
As I argued in the previous thread, there are other determinations of climate sensitivity than yours and – at present – there is no way to determine which is ‘right’. But your determination does not agree with what some people want to think is ‘true’.
So, you now find yourself in the same situation I have been in for a decade.
• I have been showing that the recent rise in atmospheric CO2 concentration can be attributed to factors other than anthropogenic CO2 (and have been vilified for it).
• You are showing the recent rise in global temperature can be attributed to factors other than the rise in atmospheric CO2 concentration (and probably will be vilified for it).
I advise that you fasten your seat belt: you are in for a bumpy ride.
Richard
ferd berple says:
May 31, 2012 at 10:50 pm
Actually we can do some of that now. Below I have calculated the standard deviation of the residuals. The first row shows when all of the data is used for the training.

The next row shows the “in sample” results when the first half is used for training, along with the “out of sample” results when that fit is used to evaluate the second half.
The final row shows when the second half is used for the training.
As you can see, there’s not much difference in the size of the residuals whether you use all, just the first half, or just the second half for the training.
w.
Moderator:
My post seems to have gone into the bin. Please find it. Thanking you in anticipation.
Richard