Guest Post by Willis Eschenbach
After I published my previous post, “An Observational Estimate of Climate Sensitivity”, a number of people objected that I was just looking at the average annual cycle. On a time scale of decades, they said, things are very different, and the climate sensitivity is much larger. So I decided to repeat my analysis without using the annual averages that I used in my last post. Figure 1 shows that result for the Northern Hemisphere (NH) and the Southern Hemisphere (SH):
Figure 1. Temperatures calculated using solely the variations in solar input (net solar energy after albedo reflections). The observations are so well matched by the calculations that you cannot see the lines showing the observations, because they are hidden by the lines showing the calculations. The two hemispheres have different time constants (tau) and climate sensitivities (lambda). For the NH, the time constant is 1.9 months, and the climate sensitivity is 0.30°C for a doubling of CO2. The corresponding figures for the SH are 2.4 months and 0.14°C for a doubling of CO2.
I did this using the same lagged model as in my previous post, but applied to the actual data rather than the averages. Please see that post and the associated spreadsheet for the calculation details. Now, there are a number of interesting things about this graph.
First, despite the nay-sayers, the climate sensitivities I used in my previous post do an excellent job of calculating the temperature changes over a decade and a half. Over the period of record the NH temperature rose by 0.4°C, and the model calculated that quite exactly. In the SH, there was almost no rise at all, and the model calculated that very accurately as well.
Second, the sun plus the albedo were all that were necessary to make these calculations. I did not use aerosols, volcanic forcing, methane, CO2, black carbon, aerosol indirect effect, land use, snow and ice albedo, or any of the other things that the modelers claim to rule the temperature. Sunlight and albedo seem to be necessary and sufficient variables to explain the temperature changes over that time period.
Third, the greenhouse gases are generally considered to be “well-mixed”, so a variety of explanations have been put forward to explain the differences in hemispherical temperature trends … when in fact, the albedo and the sun explain the different trends very well.
Fourth, there is no statistically significant trend in the residuals (calculated minus observations) for either the NH or the SH.
Fifth, I have been saying for many years now that the climate responds to disturbances and changes in the forcing by counteracting them. For example, I have held that the effect of volcanoes on the climate is wildly overestimated in the climate models, because the albedo changes to balance things back out.
We are fortunate in that this dataset encompasses one of the largest volcanic eruptions in modern times, that of Pinatubo … can you pick it out in the record shown in Figure 1? I can’t, and I say that the reason is that the clouds respond immediately to such a disturbance in a thermostatic fashion.
Sixth, if there were actually a longer time constant (tau), or a larger climate sensitivity (lambda) over decade-long periods, then it would show up in the NH residuals but not the SH residuals. This is because there is a trend in the NH and basically no trend in the SH. But the calculations using the given time constants and sensitivities were able to capture both hemispheres very accurately. The RMS error of the residuals is only a couple tenths of a degree.
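For readers who want to reproduce the residual-trend check, here is a hypothetical sketch of the standard OLS trend t-test (my own illustration, not the spreadsheet’s actual calculation):

```python
import numpy as np

def trend_t_stat(resid):
    """OLS linear trend (per time step) and its t-statistic, the usual
    check for whether a residual series drifts significantly over time."""
    n = len(resid)
    x = np.arange(n, dtype=float)
    xm = x.mean()
    sxx = ((x - xm) ** 2).sum()
    slope = ((x - xm) * (resid - resid.mean())).sum() / sxx
    fitted = resid.mean() + slope * (x - xm)
    s2 = ((resid - fitted) ** 2).sum() / (n - 2)   # residual variance, n-2 dof
    se = np.sqrt(s2 / sxx)                          # standard error of the slope
    return slope, slope / se
```

A |t| below roughly 2 is the conventional threshold for “no statistically significant trend” (ignoring autocorrelation, which would widen the error bars further).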
OK, folks, there it is, tear it apart … but please remember that this is science, and that the game is to attack the science, not the person doing the science.
Also, note that it is meaningless to say my results are a “joke” or are “nonsense”. The results fit the observations extremely well. If you don’t like that, well, you need to find, identify, and point out the errors in my data, my logic, or my mathematics.
All the best,
w.
PS—I’ve been told many times, as though it settled the argument, that nobody has ever produced a model that explains the temperature rise without including anthropogenic contributions from CO2 and the like … well, the model above explains a 0.5°C/decade rise in the ’80s and ’90s, the very rise people are worried about, without any anthropogenic contribution at all.
[UPDATE: My thanks to Stephen Rasey who alertly noted below that my calculation of the trend was being thrown off slightly by end-point effects. I have corrected the graphic and related references to the trend. It makes no difference to the calculations or my conclusions. -w.]
[UPDATE: My thanks to Paul_K, who pointed out that my formula was slightly wrong. I was using
∆T(k) = λ ∆F(k)/τ + ∆T(k-1) * exp(-1 / τ)
when the correct formula is
∆T(k) = λ ∆F(k)(1 – exp(-1/ τ)) + ∆T(k-1) * exp(-1 / τ)
The result of the error is that I have underestimated the sensitivity slightly, while everything else remains the same. Instead of the sensitivities for the SH and the NH being 0.04°C per W/m2 and 0.08°C per W/m2 respectively, the correct sensitivities should have been 0.05°C per W/m2 and 0.10°C per W/m2.
-w.]
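The corrected recursion in the update above can be sketched in a few lines of Python (an illustrative re-implementation, not Willis’s spreadsheet; the NH values λ = 0.10°C per W/m2 and τ = 1.9 months are taken from the text):

```python
import numpy as np

def lagged_model(dF, lam, tau):
    """Corrected one-box recursion:
    dT[k] = lam * dF[k] * (1 - exp(-1/tau)) + dT[k-1] * exp(-1/tau)
    dF: monthly forcing anomalies (W/m2); lam: sensitivity (degC per W/m2);
    tau: time constant (months)."""
    decay = np.exp(-1.0 / tau)
    dT = np.zeros(len(dF))
    for k in range(len(dF)):
        prev = dT[k - 1] if k > 0 else 0.0
        dT[k] = lam * dF[k] * (1.0 - decay) + prev * decay
    return dT
```

Under a sustained forcing the response settles at λ·∆F, so λ = 0.10°C per W/m2 corresponds to 0.10 × 3.7 ≈ 0.37°C per doubling of CO2.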
I see the legend on the chart you are referring to, and concede the point regarding the trend line. However, the data that Willis analyzed in this case happens to encompass the steepest part of the late 20th century warming. His model explains this warming using albedo and TSI. Now, CO2 effects that alter albedo are thus folded into his model, but it is incorrect to a priori ascribe all the changes in albedo to CO2. He also specifically eschews making any predictions, so extrapolating the slope over the next 100 years is making claims for the model that the model maker does not. Plus, if you look at the last 15 years of temperature, there is significant flattening already. Granted, “past performance is no guarantee of future results”, but if you think it’s OK to extrapolate, I can play that game too.
In addition, in the note below Figure 1, you can see Willis’ conclusion regarding a doubling of CO2 and its effects. I looked at your response in his previous post, and I’ve seen the elements thereof elsewhere. The problem is that all those elements, IIRC, are “estimates” of the “consensus” and not directly measured values. They are WAGs and nothing more. “He who builds his house on sand,” and all that. As one note, global humidity is more or less the same over the last 40 years, so the primary feedback mechanism of the AGW thesis is conspicuously absent. I think you need a new epicycle on your model.
D. J. Hawkins says:
Ah…No it’s not:
http://www.sciencemag.org/content/310/5749/841.abstract
http://www.sciencemag.org/content/323/5917/1020.summary
The water vapor feedback is now well-verified. Those who desire a low climate sensitivity basically have to put all their hopes in a strongly-negative cloud feedback.
Well, I would expect that you might be able to get the trend fit better if, as I said, you varied the sensitivity and the time constant. Admittedly, the fit to the phase of the annual cycle will still be off…but that is because we are trying to approximate multiple time scales by a single time scale.
But also, what are your error bars on those trends? In particular, what is mainly responsible for the trend in the forcing…Is it the albedo…and how accurate are those measurements? I’m a bit skeptical that you are not stretching the data beyond its capabilities in looking at these small trends over time. (I think the data is fine for the annual cycle, where the forcing will clearly be dominated by the change in solar angle.)
You really don’t need albedo data…The sun either being there in the sky or being below the horizon is what is going to dominate the forcing for the diurnal cycle. And, you can estimate temperature variations or find some data somewhere. The point is to just get a rough estimate, which in fact Jim D already provided for you in the other thread. You could quibble about his numbers but not enough to escape the fact that the estimate one would get for the climate sensitivity using the diurnal cycle would be considerably lower than the estimate that one gets using the annual cycle, just as one would expect from basic physical principles. There is no great mystery here…I think that we all understand that the reason we don’t get frigidly cold at night is because of the considerable damping of the solar forcing that occurs due to the thermal inertia of the atmosphere and oceans.
The reason for the difference, or most of it anyway, is pretty obvious. Your estimate is more like an estimate of the transient climate response than the equilibrium climate sensitivity. The equilibrium climate sensitivity is obtained by doubling CO2 and then allowing the model to run long enough that it actually equilibrates. As you can see in Table 8.2 of the IPCC AR4 Working Group 1 report, transient climate responses and equilibrium climate sensitivities are a fair bit different. For example, the GISS Model ER has an ECS of 2.7 degC but a TCR of only 1.5 degC. So, really the only part of things that is any mystery at all is why you got 1.1 deg C instead of 1.5 degC…but that’s not too big a difference (and what you have computed is indeed not exactly a transient climate response, which has a specific definition…but it is much closer to being that than to being an equilibrium climate sensitivity).
The whole point is that a simple lagged model (or simple one-box model) does not correctly diagnose the climate sensitivity. It is because the climate system is complicated and operates on a variety of different time scales. You need to include AT LEAST 2 independent time scales to do any justice to this.
You seem to be making the mistake that skeptics often accuse climate scientists of making of putting too much faith in a model. You are putting too much faith in your very simple model, which is too simple to describe even full-scale climate models and is certainly too simple to describe reality. The equilibrium climate sensitivity that you diagnose with the model is not correct because neither the real world nor the climate models obey the simple picture of having a single relaxation time.
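The commenter’s two-time-scale point can be made concrete by summing two one-box responses with different time constants (a minimal sketch; the parameter values below are purely illustrative, not fitted to anything):

```python
import numpy as np

def one_box(dF, lam, tau):
    # Same recursion as the post's lagged model
    decay = np.exp(-1.0 / tau)
    T, out = 0.0, []
    for f in dF:
        T = lam * f * (1.0 - decay) + T * decay
        out.append(T)
    return np.array(out)

def two_box(dF, lam_fast, tau_fast, lam_slow, tau_slow):
    """Sum of a fast and a slow box. The fast box dominates the annual
    cycle; the slow box only shows up in multi-decade trends."""
    return one_box(dF, lam_fast, tau_fast) + one_box(dF, lam_slow, tau_slow)
```

Fitting only short-period variations would recover the fast box’s sensitivity and miss the slow box entirely, which is the substance of the objection.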
Willis Eschenbach says:
June 1, 2012 at 12:53 am
As you can see, there’s not much difference in the size of the residuals whether you use all, just the first half, or just the second half for the training.
=========
Willis, I’m impressed by these results. My caution is that the residuals are almost too good, so I would be very careful that there isn’t a math error. This will reveal itself quickly enough going forward.
What is going to bother many people is what bothers all experts. Once you have seen the answer, it looks so simple that you wonder why someone didn’t think of it earlier. But of course all advances are like that: once you have been shown the answer, it looks obvious.
However, if what you have discovered holds, then billions of dollars in computer models and climate science is going to be shown to be worthless rubbish. In which case a lot of people’s jobs are on the line and they are going to fight tooth and nail to try and rubbish your result.
Not because the result is wrong, but because it is a direct threat to the welfare of themselves, their families and their standing and prestige in the community. They will fight.
Gird your loins and formalize this work. Make the prediction going forward – that is the only true test – and if it turns out to be correct there can be no arguments. Propose three tests, as Einstein did for GR. The rule of three. http://en.wikipedia.org/wiki/Rule_of_three_%28writing%29
Looking up the rule of three, I came across a statistical rule of three which would appear to apply to climate science. It is interesting that climate is weather averaged over 30 years, which is the same interval as in the rule of three.
http://en.wikipedia.org/wiki/Rule_of_three_%28medicine%29
In the statistical analysis of clinical trials, the rule of three states that if no major adverse events occurred in a group of n people, then the interval from 0 to 3/n can be used as a 95% confidence interval for the probability that a corresponding major event will arise in a single new individual. This is an approximate result, but it is a very good approximation when n is greater than 30.
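The 3/n approximation can be checked against the exact binomial bound; a small sketch (my own illustration, not from the comment):

```python
import math

def rule_of_three(n):
    """Approximate 95% upper bound on the event probability
    after n event-free trials."""
    return 3.0 / n

def exact_upper(n, alpha=0.05):
    """Exact one-sided bound: the p at which (1 - p)**n = alpha,
    i.e. the largest event probability still consistent with seeing
    zero events in n trials at the 95% level."""
    return 1.0 - alpha ** (1.0 / n)
```

For n = 30 the exact bound is about 0.095 versus 0.100 from the rule, which is why the approximation is considered good once n exceeds 30.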
Willis, I have tried to think of an internal control for your analysis.
Would it be a lot of work to just analyze the belly of the beast between the Tropics of Cancer and Capricorn? Here I would expect the time constant to be smaller than when comparing the hemispheres. In this band one should get the least swing between winter and summer.
Stephen Wilde (June 1, 2012 at 2:22 pm) wrote:
“Since I first promulgated such ideas there have been numerous papers which appear supportive, and many contributors here and elsewhere have been setting out similar if less complete formulations.
A few years ago my propositions were ‘way out there’. Now, not so much.”
I’m digging through some older articles on solar-terrestrial circulatory morphology and it’s evident that some had remarkably clear vision at least 3 decades ago. Did their peers sufficiently understand & appreciate? Possibly not. The publicly projected narrative seems to be that lack of atmospheric angular momentum records for the early 20th century leaves experts with some particularly nagging worries. I see one – possibly 2 – workaround(s) – (details sometime down the road….)
Thanks for your regular contributions Stephen.
Best Regards.
Willis, this is very interesting. It does suggest that climate is inherently stable and dominated by strong negative feedbacks, not the artificially invented positive feedbacks fed into the models.
One correction in your text:
“But the calculations using the given time constants and sensitivities were able to capture both hemispheres very accurately. The RMS error of the residuals is only a couple tenths of a percent.”
Err, wasn’t that “a couple of tenths of a degree” ?! Not quite the same level of accuracy.
“The fit is actually quite good, with an RMS error of only 0.2°C and 0.1°C for the NH and the SH respectively.”
One thing you do need to add for this to be meaningful is some uncertainty calculation. What is the uncertainty of the source data and how do changes within that range affect your results.
In fact, someone else pointed out that using 3.2 W/m2 gave a significantly higher climate sensitivity ( 1.6 C IIRC). So what is the uncertainty in that figure and how does its range affect your result?
This is one of the biggest problems in climate “science”, there is a total lack of regard for uncertainty evaluation ( or totally fictitious ones are often provided when they are given).
One other thing you could try is to separate out the tropics rather than doing a simple NH/SH split. The tropics have a six-month cycle in irradiance, not an annual one. This may account for the way your fits have noticeable deviations near the ends of the ellipses.
It would also seem to be a significant omission that you do not seem to state anywhere just what “temperature” you are using in all this.
Overall this is a great article. The astounding simplicity and the very small residuals suggests you are on the right track.
One more point, forcing due to doubling CO2 is referring to pre-industrial levels. The next doubling will have less effect. Atmospheric CO2 conc is well above the “linear” log relation to gas concentration where each doubling causes the same effect.
Nice post.
Joeldshore:
At June 1, 2012 at 6:31 pm you write:
No and no.
The ‘tropospheric hotspot’ is missing. That missing elevated temperature at altitude in the tropics is the anticipated result of a water vapour feedback.
You have linked to abstracts of papers which claim to have determined increased water vapour at altitude in the tropics. But there has been no accelerated warming at altitude relative to the surface in the tropics. In other words, the increased humidity has NOT provided a positive feedback.
So, if the papers you cite are right then you have cited evidence which shows
The water vapour feedback is now REFUTED.
Richard
“Matthew, the standard form for exponential decay over time is exp(-t/tau), where “t” is the elapsed time. ”
For those having trouble grasping this, it should be written exp(-delta_t / tau), since in this case the formula is given in iterative form, so delta_t is the time interval from T(n) to T(n+1), i.e., in this case one month. The exponential is dimensionless, as it should be, and tau is in the same time unit as the “1”.
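For reference, the iterative formula is just the exact one-step solution of the one-box energy-balance equation when the forcing is held constant over the step (a standard derivation, sketched here):

```latex
\frac{dT}{dt} = \frac{\lambda\,\Delta F - T}{\tau}
\;\;\Longrightarrow\;\;
T(t+\Delta t) = \lambda\,\Delta F\,\bigl(1 - e^{-\Delta t/\tau}\bigr) + T(t)\,e^{-\Delta t/\tau}
```

With Δt equal to one month, e^(-Δt/τ) reduces to the exp(-1/τ) in the recursion, and τ must carry the same unit as Δt.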
Willis
Net Sun (NH+SH) is 2* Net Sun Global in your spreadsheet.
D. J. Hawkins says:
June 1, 2012 at 6:14 pm
“As one note, global humidity is more or less the same over the last 40 years…”
Can you back that claim with sources? I can and it’s the opposite of what you are saying.
– Dai 2006 – Recent Climatology, Variability, and Trends in Global Surface Humidity.
Take a good look at Figure 11. It shows global humidity. What’s the trend?
– Willett et al 2008 – Recent Changes in Surface Humidity: Development of the HadCRUH Dataset.
“Between 1973 and 2003 surface specific humidity has increased significantly over the globe, tropics, and Northern Hemisphere.”
Willis,
Accepting the fact that solar radiation and albedo explain the data, it would seem that the next logical step would be to examine the linkage, if any, between albedo and the rest of the “forcing agents” used by the AGW models?
Bean
Willis Eschenbach says:
June 1, 2012 at 6:04 pm
“I see that you don’t like them, and I see that you think if you say that very loudly and with great vehemence, it will make you right … unfortunately, your passion is not relevant.
In the other thread you refer to, you gave the standard explanation, which is that water vapor will be the dominant feedback, and it is strongly positive. Me, I think that the dominant feedback is clouds and thunderstorms, and they are strongly negative.”
First the evidence:
Schmidt et al 2010 – Attribution of the present‐day total greenhouse effect.
“The actual mean surface temperature is larger (by around 33°C, assuming a constant planetary albedo) due to the absorption and emission of long‐wave (LW) radiation in the atmosphere by a number of different “greenhouse” substances.”
That’s all we need: 33°C. Without “greenhouse” substances Earth would be -18°C. It’s not; it’s 15°C. Every well-respected climate scientist accepts that.
Here is another graph coming from Roy Spencer’s Climate Confusion
http://www.klimaatgek.nl/klimaatimg/lapse%20rate.jpg
It simply states that if clouds were removed from the atmosphere, and everything else stayed equal, the temperature would rise to 60°C due to the greenhouse effect. That fact is known in the climate world and accepted by many, if not all, climate scientists.
So clouds do cause a 58% cooling ((60-15)/78 (total greenhouse warming) x 100%), not the 45% I claimed; I made a calculation error there. Yes, clouds cause cooling (58%), which I accept, but the net effect will be warming on top of the 2xCO2 warming effect, or rather the total warming from 2xCO2 and water vapor together. Nobody knows that exactly. (Schmidt et al 2010: “For instance, one cannot simply take the attribution to CO2 of the total greenhouse effect (20% of 33°C) and project that onto a 2 × CO2 scenario.”) The extra clouds won’t cause the huge negative feedback you believe in. At least not in the real world.
Now to some of your quotes: “Sunlight and albedo seem to be necessary and sufficient variables to explain the temperature changes over that time period.”
Just to name a few papers: Solanki 2003 – Can solar variability explain global warming since 1970?, Lockwood 2008 – Recent changes in solar outputs and the global mean surface temperature. III. Analysis of contributions to global mean air surface temperature rise, Benestad 2009 – Solar trends and global warming, Pittock 2009 – Can solar variations explain variations in the Earth’s climate?, etc. If you want more, just ask. The list is so long that one can rule out the Sun as sufficient to cause the temperature changes over that time period.
And: “For example, I have held that the effect of volcanoes on the climate is wildly overestimated in the climate models, because the albedo changes to balance things back out.”
What causes the albedo to change? I can see Pinatubo very clearly here:
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_April_2012.png
but not in cloud variations here:
http://isccp.giss.nasa.gov/zD2BASICS/B8glbp.anomdevs.jpg
Furthermore: “I can’t, and I say that the reason is that the clouds respond immediately to such a disturbance in a thermostatic fashion.”
Name your source for that statement please. I cannot see the cloud response in my presented graph. Can you?
How can clouds respond to such a disturbance when you have just ruled out aerosols, volcanic forcing and indirect aerosols? It is aerosols emitted by volcanoes which cause volcanic cooling.
http://vulcan.wr.usgs.gov/Glossary/VolcWeather/description_volcanoes_and_weather.html
“increases in volcanism that could have thrown more airborne volcanic material into the stratosphere, thereby creating a dust veil and lowered temperatures.”
Where are your sources/evidence for your claims? No computer models please.
Robbie says: June 2, 2012 at 5:49 am
Atmospheric RH% is going down. http://i38.tinypic.com/30bedtg.jpg
richardscourtney says:
That is very impressive amount of incorrect science to pack into just a few sentences!
(1) It is quite debatable that the “hotspot” is missing. The analyses / re-analyses of the different radiosonde and satellite data sets show quite different results for the multidecadal trends in the tropical troposphere. This is because both the satellite and radiosonde data have serious issues that can produce artifacts in these long term trends. And, in fact, for fluctuations in temperature on monthly to yearly time scales where artifacts are not an issue in the data, the tropical tropospheric amplification is well-confirmed.
(2) The “hotspot” (enhanced warming at altitude predicted for the tropical troposphere) is not a result of the water vapor feedback, i.e., it has absolutely nothing to do with water vapor absorbing additional radiation. Rather, it is just a result of the basic physics of the lapse rate, as long as it follows the moist adiabatic lapse rate. Hence, your reasoning that this “refutes” the water vapor feedback is completely specious.
(3) As Isaac Held has explained, although the predicted enhancement of warming at altitude is predicted to increase the water vapor feedback somewhat over its value in the absence of this effect, it also produces the lapse rate feedback, a negative feedback in the climate models that occurs because the “hotspot” means the surface does not need to warm as much as it otherwise would in order to increase the radiation emitted back out into space and restore radiative balance. And, in fact, it turns out that this lapse rate feedback produced is predicted to be a bit larger in magnitude than the enhancement of the water vapor feedback. Hence, the net effect of the purported absence of the “hotspot”, were it to prove real, would be that the climate models are probably slightly underestimating, not overestimating, the climate sensitivity.
Steve Keohane says:
(1) Relative humidity going down is not incompatible with absolute humidity going up. (Most climate models predict relative humidity to stay about constant or decrease a bit overall as the climate warms.)
(2) You give no source or other information for the data set you show but I believe you have cherry-picked a particular re-analysis of radiosonde data with known severe problems. This data does not agree with the much better satellite data available (and I think it even disagrees with other re-analyses of the radiosonde data).
joeldshore says:
June 2, 2012 at 7:41 am
“That is very impressive amount of incorrect science to pack into just a few sentences!”
Followed by three paragraphs of incorrect science.
Well Joel, today in the high desert plains of Oregon, it is freaking cold! Snow is predicted at pass level by Tuesday. So I tell you what, when the day comes that that hot spot starts to warm my lily-white —–, I will agree with you. Till then, you are arguing for a signal that my tomato plants have no knowledge of as they sit sulking in their pots on my porch…she said kindly.
joeldshore says:
June 2, 2012 at 7:41 am
Hence, the net effect of the purported absence of the “hotspot”, were it to prove real, would be that the climate models are probably slightly underestimating, not overestimating, the climate sensitivity.
========
In which case, temperatures would not have flat-lined post WWII and post 2000, when industrialization skyrocketed. They would not have increased sharply during the ’20s and ’30s and during the ’80s and ’90s, when there was nothing remarkable happening with CO2.
What we do have today is a world in which we are feeding 7 billion people with only minor problems with famine, as compared to 50 years ago, when we had a constant struggle to feed 3 billion. Over this same period food prices have dropped in real dollar terms. This is completely at odds with the predictions of virtually all “experts” in high places.
The problem is that “doom and gloom” forecasts sell, so they get more publicity than the facts. The facts are simple. Mainstream Climate Science predicted an accelerating warming post 2000. It didn’t happen and there are no signs that it will be happening anytime in the near future. In science this is graded as a FAIL. The theory failed in its prediction, thus the theory as stated is wrong.
Now we see the situation where climate science is trying to rationalize the failed prediction after the facts. All scientific theories are equally correct in those circumstances. Every theory can be adjusted to fit the past. It is a meaningless exercise. The one and only test that has meaning is for a theory to predict something that is unexpected and thus hard to predict.
P. Solar says:
June 2, 2012 at 12:49 am
Thanks, fixed. Please note that a couple tenths of a degree RMS error in a system running at ~ 288K means my calculation of the temperature based solely on available sunlight is accurate to about 7 hundredths of one percent … just sayin’ …
w.
P. Solar says:
June 2, 2012 at 12:49 am
The source doesn’t give any uncertainty figures for the albedo, so I’m unable to do that.
I assume you are talking about this comment:
The gentleman is totally confused. First, 3.7 W/m2 is not the “equilibrium climate sensitivity” as he claims; it is the IPCC value for the expected change in DLR from a doubling of CO2. Unfortunately, I’ve never seen any uncertainty figure for that either. Next, there is no “normally accepted conversion of 3.2 W/m2/C”; that makes no sense. Finally, I don’t have a clue how he gets from that to “the value of 1.16°C per CO2e doubling”; that’s totally opaque. The IPCC gives the expected warming from a doubling of CO2 as being 1.5°C—4.5°C, or more recently as 2°C—4.5°C, but I’ve never seen 1.16.
So I don’t have a clue what either of you are trying to say. I use 3.7 W/m2 per doubling purely so my numbers can be compared to the results from other studies, which also use 3.7 W/m2 per doubling … so there’s no need for an uncertainty figure on that.
I couldn’t agree more.
I love how people always tell me that I should try this and that … how many times do I have to say I don’t have the data? What I’m using is the only albedo dataset I can find that is split by hemisphere. I know of no such dataset for just the tropics. I know of no such dataset for other time periods. If anyone comes up with one I’ll be very happy to analyze it, but until then …
You are the first person to comment on the error at the end of the ellipses, which has been puzzling me as well. The deviation at the end of the ellipses actually occurs mostly at the cold end, and not the hot end. I suspect that there are a couple reasons for the error. One is the effect of the melting/freezing of ice/snow, which involves the transfer of energy with no change in temperature. The other is that I suspect that the climate sensitivity (lambda) is a function of absolute temperatures, rather than being a constant. Always more to investigate … and little time in which to do it.
My bad, I’m using the HadCRUT hemispherical temperature dataset.
Thank you kindly. In the rush to find errors, many folks seem to have overlooked the fact that as far as I know, this is clearly both the most accurate and the simplest emulation of earth’s temperature that has been done to date. Sure it has limitations, and as many have pointed out it may fail outside the temporal range (1 month to 14 years) of my study, but within that range it is shockingly accurate.
Actually, it makes no difference where you refer to as the starting point for the CO2 doubling. You get the same numbers regardless of where you start. Also, you say “the next doubling will have less effect”. This is not true. Since the changes are logarithmic, each doubling (within the range seen on Earth) will have the same effect. I know of no evidence to support your claim that “Atmospheric CO2 conc is well above the “linear” log relation to gas concentration where each doubling causes the same effect” … cite? Let me say that MODTRAN disagrees with you …
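The point that each doubling has the same effect, regardless of starting point, follows directly from the logarithmic form of the forcing. A quick sketch using the simplified Myhre et al. expression (my choice of formula for illustration, not something the post itself uses):

```python
import math

def forcing(c, c0):
    """Simplified CO2 radiative forcing: dF = 5.35 * ln(C/C0) W/m2
    (the widely used Myhre et al. 1998 approximation)."""
    return 5.35 * math.log(c / c0)

# The logarithm turns any doubling into the same increment:
# forcing(560, 280) and forcing(1120, 560) are both 5.35 * ln(2),
# roughly 3.7 W/m2.
```

This is why the choice of pre-industrial versus present-day as the baseline drops out of the per-doubling numbers.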
My thanks again,
w.
ferd berple says:
June 2, 2012 at 9:08 am
“The problem is that “doom and gloom” forecasts sell, so they get more publicity than the facts. The facts are simple. Mainstream Climate Science predicted an accelerating warming post 2000. It didn’t happen and there are no signs that it will be happening anytime in the near future. In science this is graded as a FAIL. The theory failed in its prediction, thus the theory as stated is wrong.
Now we see the situation where climate science is trying to rationalize the failed prediction after the facts. All scientific theories are equally correct in those circumstances. Every theory can be adjusted to fit the past. It is a meaningless exercise. The one and only test that has meaning is for a theory to predict something that is unexpected and thus hard to predict.”
Furthermore, they care not one whit whether their ‘predictions’ are correct or not. The issue has been politicized. The US could be energy self-sufficient tomorrow, but in doing so the left would lose “Devil Oil” as a scapegoat, and they can’t allow that to happen. ‘Forcing’ is BS. It exists in an equation and in computer programs, but there is no way to experimentally increase CO2 in a vessel containing air that results in an increase in the temperature inside the vessel, which is what they claim exists in the atmosphere. I vote we not give them one damn penny more and throw the SOBs out on their arses (SOB is Texan for “Sweet Old Boy”) until they can produce, verify, and reproduce an experiment that proves their fundamental claim that an increase in atmospheric CO2 concentration leads to an increase in the Earth’s atmosphere’s temperature.
Willis has found an interesting correlation that tends to show that climate forcings are constrained by very simple observational parameters. The correlation itself says nothing about the cause of the correlation.
The correlation shows that solar activity, albedo and CO2 may be highly accurate predictive proxies for temperature. We see this for example in the earth’s tides. The tides are driven by gravity, but we use the position of the sun, moon and jupiter as predictive proxies for the effects of gravity on earth’s oceans.
Based on these observational parameters, this gives us a highly accurate prediction of the tides, much more accurate than can be achieved by using any theory of gravity to predict the tides. Even though we “know” that gravity is forcing the tides, it does a very poor job of predicting them.
Nothing in Willis’s work says that solar activity, albedo or CO2 are “directly” the forcing mechanisms. Rather, what Willis is showing is that whatever the forcing mechanisms, temperature is constrained by simple observational parameters. Exactly like we see with the tides. Gravity is the forcing mechanism, but tidal height is constrained by the position of the sun and planets, not by gravitational forcings.
We can argue and speculate all day about “why” such constraints might exists, but this will not change the numbers. This really is the crucial point. “Why” something happens can never be completely answered in science. However, even without understanding the “why”, science can answer “what”, “when” and “where”. This is how science is validated. Not because it satisfies human curiosity to know “why”. Rather, theory is validated based on its ability to predict “what”, “when” and “where”.
In contrast, pseudo science is based largely on explaining the “why”, with little or no ability to predict “what”, “when” and “where” better than a toss of a coin.
Willis: I don’t get why you’d want to do that. What’s the advantage?
There is no advantage either way: an infinite number of two-parameter models are equivalent. To put it differently, you have a non-linear lagged regression model where an equivalent linear lagged regression model will do.
It may be that a and b estimates have smaller correlations than lambda and tau estimates; I’ll know that later.
If I understand your simulations correctly, from your spreadsheet, almost 100% of the effect of a step change in forcing occurs in just 10 months after the step change. If that is so, then the long-term sensitivity equals the short-term sensitivity. Is that a fair interpretation of your model output?