Anomalies are an unsuitable measure of global temperature trends
Guest post by David M. Hoffer
An anomaly is simply a value that is arrived at by comparing the current measurement to some average measurement. So, if the average temperature over the last 30 years is 15 degrees C, and this year’s average is 16 degrees, that gives us an anomaly of one degree. Of what value are anomalies? Are they a suitable method for discussing temperature data as it applies to the climate debate?
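As a minimal sketch of that definition in Python (the 15 degrees C baseline and 16 degrees C current value are the figures from the example above; the individual baseline years are made up):

```python
# An anomaly is just the current value minus a baseline average.
baseline_years = [14.8, 15.1, 15.0, 15.2, 14.9]        # hypothetical baseline averages, degrees C
baseline = sum(baseline_years) / len(baseline_years)   # comes out to 15.0 C

current = 16.0                 # this year's average, degrees C
anomaly = current - baseline   # +1.0 degree
print(f"baseline = {baseline:.1f} C, anomaly = {anomaly:+.1f} C")
```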
On the surface, anomalies seem to have some use. But the answer to the second question is rather simple.
No.
If the whole earth were at a single uniform temperature, we’d have no need of anomalies. But the fact is that temperatures don’t vary all that much in the tropics, while variations in the high temperate zones are frequently as much as 80 degrees over the course of a year. How does one compare the temperatures of, say, Khartoum, which on a monthly basis ranges from an average of 25 degrees to 35 degrees C, to, say, Winnipeg, which might range from -40 in the winter to +40 in the summer?
Enter anomalies. By establishing a base line average, usually over 30 years, it is possible to see how much temperatures have changed in (for example) winter in Winnipeg Canada versus Khartoum in summer. On the surface, this makes sense. But does the physics itself support this method of comparison?
It absolutely does NOT.
The theory of CO2’s direct effects on earth’s surface temperature is not terribly difficult to understand. For the purposes of this discussion, let us ignore the details of the exact physical mechanisms as well as the order and magnitude of feedback responses. Let us instead assume that the IPCC and other warmist literature is correct on that matter, and then see if it is logical to analyze that theory via anomaly data.
The “consensus” literature proposes that direct effects of CO2 result in a downward energy flux of 3.7 watts/m2 for a doubling of CO2. Let’s accept that. Then they propose that this in turn results in a temperature increase of one degree. That proposal cannot be supported.
Let us start with the one degree calculation itself. How do we convert watts/m2 into degrees?
The answer can be found in any textbook that deals with radiative physics. The derivation of the formula requires some in-depth understanding of the matter, and for those who are interested, Wikipedia has as good an explanation as we need:
http://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law
For the purposes of this discussion however, all we need to understand is the formula itself, which is:
P = 5.67*10^-8 * T^4 (P in w/m2, T in degrees K)
It took Nobel Prize winning work in physics to come up with that formula, but all we need to use it is a calculator.
For the mathematically inclined, the problem ought to be immediately obvious. There is no linear relationship between w/m2 and temperature. Power varies with T raised to the power of 4. That brings up an obvious question: at what temperature does the doubling of CO2 cause a rise in temperature of one degree? If we use the accepted average surface temperature of earth, +15 degrees C (288 degrees K), simply applying the formula shows that it is NOT at the average surface temperature of earth:
For T = 288K:
P = 5.67*10^-8 * 288^4 = 390.1 w/m2
For T = 289K (plus one degree):
P = 5.67*10^-8 * 289^4 = 395.5 w/m2
That’s a difference of 5.4 w/m2, not 3.7 w/m2!
So, how does the IPCC justify their claim? As seen from space, the earth’s temperature is not defined at earth surface, nor can it be defined at the TOA (Top of Atmosphere). Photons escaping from earth to space can originate at any altitude, and it is the average of these that defines the “effective black body temperature of earth” which turns out to be about -20 C (253 K), much colder than average temperatures at earth surface. If we plug that value into the equation we get:
For T = 253K: P = 232.3 w/m2
For T = 254K: P = 236.0 w/m2
236.0 – 232.3 = 3.7 w/m2
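For anyone who wants to check these numbers on a calculator or in code, here is a short Python sketch of the Stefan-Boltzmann arithmetic used above (288 K/289 K for the surface case, 253 K/254 K for the effective black-body case):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, w/m2 per K^4

def flux(t_kelvin):
    """Black-body radiative flux in w/m2 for a temperature in kelvin."""
    return SIGMA * t_kelvin ** 4

# One degree of warming at the average surface temperature (288 K -> 289 K)
print(flux(289) - flux(288))   # about 5.4 w/m2

# One degree of warming at the effective emission temperature (253 K -> 254 K)
print(flux(254) - flux(253))   # about 3.7 w/m2
```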
There’s the elusive 3.7 w/m2 = 1 degree! It has nothing to do with surface temperatures! But if we take this analysis a step further, it gets even worse. The purpose of temperature anomalies in the first place was supposedly to compare temperature changes at different temperature ranges. As we can see from the analysis above, since w/m2 means very different things at different temperature ranges, this method is completely useless for understanding changes in earth’s energy balance due to doubling of CO2.
To illustrate the point further, at any given time some parts of earth are actually in cooling trends while others are in warming trends. By averaging temperature anomalies across the globe, the IPCC and “consensus” science have concluded that there is an overall positive warming trend. The following is a simple example of how easily anomaly data can report not just a misleading result but, worse, in some cases a result that is the OPPOSITE of what is happening from an energy balance perspective. To illustrate, let’s take four different temperatures and consider their value when converted to w/m2 as calculated by the Stefan-Boltzmann law:
-38 C = 235K = 172.9 w/m2
-40 C = 233K = 167.1 w/m2
+35 C = 318K = 579.8 w/m2
+34 C = 317K = 587.1 w/m2
Now let us suppose that we have two equal areas, one of which has an anomaly of +2 due to warming from -40 C to -38 C. The other area at the same time posts an anomaly of -1 due to cooling from +35 to +34.
-38 C anomaly of +2 degrees = +5.8 w/m2
+35 C anomaly of -1 degree = -7.3 w/m2
“averaged” temperature anomaly = +0.5 degrees
“averaged” w/m2 anomaly = -0.75 w/m2
The temperature went up but the energy balance went down? The fact is that because temperature and power do not vary directly with one another, averaging anomaly data from dramatically different temperature ranges provides a meaningless result.
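The same arithmetic can be run directly from the Stefan-Boltzmann law. A short sketch (it computes the fluxes straight from the stated temperatures, which also sidesteps the transcription slip in the +35/+34 C figures pointed out in the comments below; the sign of the result is unchanged):

```python
SIGMA = 5.67e-8

def flux(t_celsius):
    """Black-body flux in w/m2 for a temperature given in degrees C."""
    return SIGMA * (t_celsius + 273.15) ** 4

# Region A warms from -40 C to -38 C (anomaly +2); region B cools from +35 C to +34 C (anomaly -1)
dT_a, dF_a = +2.0, flux(-38) - flux(-40)   # about +5.8 w/m2
dT_b, dF_b = -1.0, flux(34) - flux(35)     # about -6.6 w/m2

print((dT_a + dT_b) / 2)   # "averaged" temperature anomaly: +0.5 degrees
print((dF_a + dF_b) / 2)   # "averaged" flux change: about -0.4 w/m2, i.e. still negative
```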
Long story short, if the goal of measuring temperature anomalies is to quantify the effects of CO2 doubling on earth’s energy balance at the surface, anomalies from winter in Winnipeg and summer in Khartoum simply are not comparable. Trying to average them and draw conclusions about CO2’s effects in w/m2 makes no sense and produces a global anomaly that is meaningless.
The climate and weather numbers are always hashed, sliced, diced and cherry-picked in order to paint the most alarming picture possible. It’s been that way for decades now. Obfuscation seems to be an important secondary goal as well.
John Finn: “A simple energy balance model demonstrates the figures.”
Indeed. For example we have the following from http://answers.yahoo.com/question/index?qid=20090123131816AAeXRwL
“I have seen asphalt on the apron of a race track get to 138 degrees
Source(s):
Race engineer using an infrared gun at Bristol Motor Speedway in 2008”
Which, if you care to do the math, gives an emission of 689 W/m^2. That this is vastly higher than the estimated 390 W/m^2 is obviously a consequence of all the Mad-Max petrol burners making lap times.
This is, of course, farcical. But then we note that Bristol, TN has a latitude, and the Earth has an axial tilt and albedo. Assume that the axial tilt of the Earth was pointed directly at the sun at the time of the measurement, and that the measurement was taken at solar noon. Then if we use the Bond albedo we find that the inbound radiation is 922 W/m^2. And we might wish to thank our lucky stars that we have all that evil CO2 blocking inbound radiation. As otherwise the given black-body temperature would be 183 degrees (Fahrenheit obviously).
This is a little bit stupid of course. And mostly for the reason that we’re using the Bond albedo, which defaults to the geometric albedo corrected for scattering and other issues across the illuminated hemisphere of the sphere, and is then corrected by all other manner and means based on actual measurement. The problem is that, for this example, we really want the geometric albedo of the Earth on a day of a given cloudiness over a square meter of asphalt.
And the reason we want this is that the Bond albedo is farcically high and inappropriate for something like a simple radiation balance model for a patch of land in Tennessee. But if we simply take the other boundary condition and treat only the albedo as being that of asphalt, at 0.1, then we find the inbound insolation on the apron is 1196.94 W/m^2. Which dictates that we ought to see a temperature of 226 degrees.
So our simple radiation balance model can do no more than state that GHGs and geometry are responsible for reducing the insolation received by the apron somewhere in the range of 25 to 42%. Making any statement about predicted temperature is a nonsense for something so trivial as a bit of tarmac, as we have a predicted range of 43 degrees; the least point of that prediction being still 45 degrees higher than empirical measurement. And the actual temperature being 79 degrees warmer than the 59 degree temperature we would predict based on the wondrous notion of 390 W/m^2.
So yes, depending on your assumptions we can make statements about a 3.7 W/m^2 forcing within a window of modelling errors spanning a 806 W/m^2 range of uncertainty in a simple energy balance model.
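(For anyone who wants to reproduce the numbers in the comment above, they are consistent with roughly the following sketch. The solar constant of 1366 W/m^2, the noon zenith angle of latitude minus tilt, the Bond albedo of 0.307 and the asphalt albedo of 0.1 are my guesses at the inputs used; treat it as an illustration, not a reconstruction.)

```python
import math

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
S0 = 1366.0              # assumed solar constant, W/m^2
LAT, TILT = 36.6, 23.4   # Bristol, TN latitude and Earth's axial tilt, degrees

def f_to_k(f): return (f - 32) / 1.8 + 273.15
def k_to_f(k): return (k - 273.15) * 1.8 + 32

# 138 F asphalt emits about 689 W/m^2
print(SIGMA * f_to_k(138) ** 4)

# Noon insolation with the tilt pointed at the sun
noon = S0 * math.cos(math.radians(LAT - TILT))

# Bond-albedo case: about 922 W/m^2 absorbed, black-body temperature near 183 F
bond = noon * (1 - 0.307)
print(bond, k_to_f((bond / SIGMA) ** 0.25))

# Asphalt-albedo case: about 1197 W/m^2 absorbed, black-body temperature near 226 F
asphalt = noon * (1 - 0.1)
print(asphalt, k_to_f((asphalt / SIGMA) ** 0.25))
```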
CO2 -> Control -> Money
SUN -> NO Control -> No Money
Rain is Weather. Sun is Climate.
Unfortunately, both of the “words” CO2 and Sun have three letters. No wonder politicians get confused!!
Baa Humbug;
+35 C = 318K = 579.8 w/m2
+34 C = 317K = 587.1 w/m2
The 579.8 w/m2 and 587.1 w/m2 in the above are backassward.
>>>>>>>>>>>>>>>>>>>>
CEH;
+34 C = 317K = 587.1 w/m2 is not correct
+34 C = 317K = 572.5 w/m2
>>>>>>>>>>>>>>>>>>>>>>>
Gents, you are both correct. I used temps of +44 and +45, and transposed the results to boot. Good catch, both of you.
That said, the point doesn’t change.
This post appears to have a clear objective (to ridicule what the author calls “warmist literature”) but carries with it absolutely no substance that I can detect.
The author holds forth with use of the Stefan-Boltzmann equation, only, – which tells us primarily what the effective temperature of the Earth’s upper atmosphere must be when viewed from space – while the temperature “anoma-lies” (as the author so cleverly calls anomalies) concern surface temperatures – which, of course, are effectively hidden from space view by the greenhouse effect.
So Kasuh seems to also have this post figured out when he correctly says:
“For measuring the effect of greenhouse gases on the temperature, Stefan-Boltzmann equation is irrelevant. Greenhouse gases act as an insulator in the atmosphere and to measure their effect, we need to measure thermal conductivity of the atmosphere.”
I have seen similar “work” by DavidMHoffer on two previous threads. He seems to put “stuff” out there that might possibly look impressive to the lay public – but is often way off the main point and seems to be intended only to confuse rather than explain. In this case, the big question concerning the surface temperatures of the Earth is the GH effect. The bit he has provided here about the S-B equation and the effective T as viewed from space has been known for what, about 150 years?, but has little to do with surface temperatures of our planet.
If there is a scientific point that DH is trying to make here concerning surface temperature anomalies, perhaps he could make it clearer? As DH will very gladly tell you, I am not the brightest bulb on the tree and certainly am not bright enough to see any relevant or significant scientific point whatsoever in this post.
John Finn;
Yes – the 3.7 w/m2 for a doubling is the forcing at the TOA.
>>>>>>>>>>>>>>>>>>>>>>>
No, it is not. I refer you to the definition in IPCC AR4 WG1 Chapter 2:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-2.html
Which reads:
“The definition of RF from the TAR and earlier IPCC assessment reports is retained. Ramaswamy et al. (2001) define it as ‘the change in net (down minus up) irradiance (solar plus longwave; in W m–2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values’.”
So for starters, the definition is NOT at TOA, it is at the tropopause, and it is the sum total of change in net (down minus up). You then have to contemplate what they mean by “net”. The supposed 3.7 w/m2 doesn’t happen at TOA or at any other single point in the atmosphere, it is a value derived from the change in energy flux from top to bottom of the tropospheric air column. No single altitude would yield that number, only a tiny fraction of it.
Jeff Norman says:
August 27, 2012 at 6:07 am
For there to be a net increase of 3.7 W/m^2 at the Earth’s surface, there has to be a net decrease of 3.7 W/m^2 into space (assuming a constant influx (that isn’t actually constant)). Kasuha is correct, David M. Hoffer’s analysis and conclusion is wrong.
>>>>>>>>>>>>>>>>>>>>>>>
As I said in the article itself, my point was not to debate the physical processes themselves, but to demonstrate that temperature anomalies as measured at the surface are unsuitable for tracking those processes. That said, you may be interested to know that doubling of CO2, at equilibrium, changes neither the amount of energy absorbed from the sun, nor does it change the amount of energy radiated by earth back out to space. What it changes is the average altitude at which energy is radiated to space. This, combined with the lapse rate, changes the temperature gradient from earth surface to TOA.
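A rough illustration of that last point (the 255 K effective emission temperature, 6.5 K/km lapse rate and ~5 km mean emission altitude are generic textbook round numbers, not figures from the post):

```python
T_EFF = 255.0    # effective emission temperature, K (illustrative)
LAPSE = 6.5      # mean tropospheric lapse rate, K per km (illustrative)
H_EMIT = 5.1     # assumed mean emission altitude, km

# Surface temperature implied by emission temperature plus lapse rate times emission altitude
print(T_EFF + LAPSE * H_EMIT)   # roughly 288 K

# If the mean emission altitude rises by about 150 m with the lapse rate unchanged,
# the surface warms by roughly 1 K while the flux radiated to space stays the same.
print(LAPSE * 0.15)             # about 1 K
```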
In an earlier WUWT post, it was noted (I think by Dr. Brown of Duke U) that, since the T term is raised to the 4th power, taking an average of day and night temps is invalid. Changes in night temps yield a different power per degree than the same changes in daytime temps – working with their averages is just wrong. I recall he was explaining why the relation gives the wrong answer for the moon’s temps.
The apparent difficulty raised by Mr. Hoffer’s head posting may be resolved as follows.
To determine temperature change, the radiative forcing per CO2 doubling of 3.71 Watts per square meter is multiplied by some climate-sensitivity parameter.
In the absence of temperature feedbacks, or where they sum to zero, that parameter is – to a first approximation – the first differential of the fundamental equation of radiative transfer at the characteristic-emission altitude, which is that altitude (varying inversely with latitude) at which incoming and outgoing fluxes of radiation are in balance.
The incoming flux is measured by satellites at 1362 Watts per square meter, which is divided by 4 to allow for the ratio of (the area of the disk the Earth presents to the Sun) to the Earth’s spherical surface area. Thus, the downward flux at the mid-troposphere is about 340.5 Watts per square meter, which is multiplied by (1 minus the Earth’s reflectance or albedo of 0.3) to give 238.35 Watts per square meter.
Plugging this value into the Stefan-Boltzmann equation, assuming emissivity is unity, gives the characteristic-emission temperature of 254.6 K.
To a first approximation, then, the zero-feedback or “Planck” climate-sensitivity parameter is T / (4 F) = 254.63 / (4 x 238.35) = 0.267 Kelvin per Watt per square meter.
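The arithmetic of the preceding paragraphs in a few lines of Python (1362 W/m^2 and an albedo of 0.3 are the values quoted above):

```python
SIGMA = 5.67e-8
S0, ALBEDO = 1362.0, 0.3

F = S0 / 4 * (1 - ALBEDO)     # absorbed flux: about 238.35 W/m^2
T = (F / SIGMA) ** 0.25       # characteristic-emission temperature: about 254.6 K
planck = T / (4 * F)          # zero-feedback sensitivity: about 0.267 K per W/m^2

print(F, T, planck)
```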
However, as the head posting rightly points out, it is necessary to make adjustments to allow for the effect of the fourth-power Stefan-Boltzmann equation on the non-uniform distribution of surface temperatures by latitude.
In fact, the models relied upon by the IPCC do make the appropriate adjustment, and the IPCC’s estimate of the Planck parameter is 0.313 Kelvin per Watt per square meter, a little over one-sixth greater than the first approximation. The IPCC’s estimate will be found in a footnote on p. 631 of the Fourth Assessment Report (2007), where its reciprocal is expressed as 3.2 Watts per square meter per Kelvin.
I have verified the IPCC’s value using 30 years of mid-troposphere temperature-anomaly data kindly supplied by John Christy of the University of Alabama at Huntsville.
The value of the Planck parameter is crucial, because not only the direct warming of about 1.16 K caused by a CO2 doubling but also (and separately) the value of the overall feedback gain factor is dependent upon it.
Therefore, I have additionally determined various values of the Planck parameter, which varies over time depending upon the magnitude of the temperature feedbacks that it triggers. Here is a summary of these values:
Planck or instantaneous parameter: 0.3 Kelvin per Watt per square meter (determined as above).
Bicentennial parameter: 0.5 Kelvin per Watt per square meter (this value is deduced on each of the six SRES emissions scenarios by close inspection of Fig. 10.26 on p. 803 of IPCC (2007)).
Equilibrium parameter, when the climate has finished responding to a given forcing (typically after 1000-3000 years): 0.9 Kelvin per Watt per square meter (the IPCC’s current 3.26 K multi-model mean sensitivity to a CO2 doubling, divided by the 3.71 Watts per square meter CO2 radiative forcing).
From these values, one may deduce that an appropriate climate-sensitivity parameter for a CO2-mitigation policy designed to operate over ten years is about 0.33 Kelvin per Watt per square meter; a 50-year parameter is about 0.36; and a centennial-scale parameter is about 0.4 Kelvin per Watt per square meter. From Table 10.26, taken with Table SPM.3, it is possible to estimate that the IPCC’s estimate of the centennial-scale climate-sensitivity parameter is approximately 0.435 Kelvin per Watt per square meter.
However, if – as Drs. Lindzen, Choi, Spencer, Braswell, Shaviv, etc., etc. have found – the temperature feedbacks acting in response to a forcing are net-negative, then the appropriate sensitivity parameter will be 0.2.
So, to second-guess the IPCC and make your own forecasts of CO2-driven warming, just pick your parameter and multiply it by 5.35 times the natural logarithm of the proportionate change in CO2 concentration.
An example. The IPCC’s central estimate of CO2 concentration in 2100, taken as the mean of all six emissions scenarios shown in Fig. 10.26, is 713 ppmv (though at present the concentration is rising very much more slowly than necessary to reach that value). And today’s concentration is 393 ppmv. So, assuming the net-negative feedbacks posited by Lindzen & Choi (2009, 2011) and by Spencer & Braswell (2011, 2011), the CO2-driven warming of the 21st century will be 0.2[5.35 ln (713 / 393)] = 0.6 K. Even if one adds a bit for other greenhouse gases, we shall not cause much more than 1 K warming this century.
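The closing example as a two-line check (0.2 K per W/m^2 is the net-negative-feedback parameter assumed above; 393 and 713 ppmv are the stated present and projected concentrations):

```python
import math

def warming(sensitivity, c_final, c_initial):
    """Delta-T = sensitivity * 5.35 * ln(C / C0), the forcing formula used in the comment above."""
    return sensitivity * 5.35 * math.log(c_final / c_initial)

print(warming(0.2, 713, 393))   # about 0.6 K by 2100 under these assumptions
print(warming(0.9, 713, 393))   # about 2.9 K using the equilibrium parameter instead
```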
Very well written post (wish I was smart enough to absorb it).
You even drew Kate into the discussion 🙂
I don’t want to get into a debate about the physical processes in this thread; my purpose was only to show that the use of anomaly data is not suitable for tracking them. But since the issue has arisen, I’ll refer readers to this link also:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-8.html
Which clearly states:
“It should be noted that a perturbation to the surface energy budget involves sensible and latent heat fluxes besides solar and longwave irradiance; therefore, it can quantitatively be very different from the RF, which is calculated at the tropopause, and thus is not representative of the energy balance perturbation to the surface-troposphere (climate) system.”
RF in the above quote stands for Radiative Forcing. In brief, even the “consensus” literature of the IPCC makes it clear that a simple extrapolation of radiative forcing to surface temperatures is not possible, and for a multitude of reasons. Despite plainly saying that radiative forcing and surface temperatures cannot be directly linked, the literature goes on to present surface temperatures as if they are.
Great article. I learned a lot. And I enjoy the comments section as well. The back and forth exchange of ideas, all in a civilized, adult manner is wonderful. Watts Up With That? is the best climate science site on the internet. Everyone who has contributed should be proud of themselves. I look forward to learning much more about climate science from people like David and others who don’t have to do this, but do it anyways. Thanks !
A bit confusing, but I fully agree that attempting to monitor an average global surface temperature is indeed a completely meaningless concept. It tells us nothing about the radiative balance. Monitoring of surface stations is a complete and utter waste of time for assessing the global radiative balance. Surface stations are useful ONLY for monitoring local weather conditions. The only thing I can see as being useful for the radiative balance is a satellite!
Global Warming science based on the analysis of surface station data is not science, it is FRAUD. Real scientists would never endorse such an approach. The ONLY way to explain this is that there is money and jobs to be had by playing with this data and drawing conclusions that serve a political agenda. (Alternatively, one must accept the unlikely scenario whereby all true scientists have exited from atmospheric sciences and that this entire academic domain has become dominated by geography majors.)
How many apples in a barrel of grapes? Exactly… there are some “averages” which are meaningless. It is possible for some “average” temperature to be going up while, at the same time, the earth could be losing net heat [and vice versa].
Essex, McKitrick & Andresen
http://www.uoguelph.ca/~rmckitri/research/globaltemp/GlobTemp.JNET.pdf
gives a good explanation [the first part of the paper can be read separately from the second, more mathematical, part.]
Monckton of Brenchley;
In fact, the models relied upon by the IPCC do make the appropriate adjustment,
>>>>>>>>>>>>>>>>>>>>>
There’s little in your comment that I would disagree with. My point in this essay, however, is not in regard to the models, for that is a different discussion. The question at hand is the value of surface temperature anomaly data to confirm the warming that the models claim should be happening. Use of anomaly data simply averaged without regard to the 4th power relationship, and depicted as a general rise in global temperatures by HadCrut and GISS, inadvertently biases the calculated average such that cold temperatures (high latitudes, winter seasons, night time lows) are over-represented and warm temperatures (low latitudes, summer seasons, day time highs) are under-represented.
David M. Hoffer,
If “that” was all you wanted to say about surface temperature anomalies, then you could have simply said the heat content of the Earth’s atmosphere is a function of the energy fluxes and the chemical makeup of the Earth’s atmosphere. This affects the heat content of the atmosphere near the surface, which, when combined with the chemical makeup of the atmosphere near the Earth’s surface, can be used to estimate temperatures. Which is all trivial compared to the heat content of the oceans, as someone indicated above.
Sorry, I agree with ericgrimsrud above.
David M Hoffer says: “That said, the point doesn’t change”
That is correct.
“doubling of CO2, at equilibrium, changes neither the amount of energy absorbed from the sun, nor does it change the amount of energy radiated by earth back out to space. What it changes is the average altitude at which energy is radiated to space. This, combined with the lapse rate, changes the temperature gradient from earth surface to TOA.”
Correct in my view. With varying effects on the temperature gradient through the various atmospheric layers.
The change in atmospheric heights results in an air circulation response but an infinitesimal change compared to the solar induced (modulated by the oceans) natural changes such as from MWP to LIA to date.
The atmosphere always reconfigures itself to ensure that energy in equals energy out if anything other than surface pressure and insolation seek to disturb the energy budget.
Peter Miller says: August 27, 2012 at 1:26 am
“At least we now know the ‘science’ is settled.”
Well, the more I read and learn about it, I would dare to say that the science is septic.
Thank you, David Hoffer, for a very clear essay on how tenuous is the link between temperature anomaly analysis and the reality of heat. Indeed, to do the science properly, it is important to measure anomalies of heat over the earth, not temperature. It is clear that average[K*T^4] is not equal to K*(average[T])^4.
Not only does temperature vary by latitude from equator to pole, it varies by time of day and night at every location. Even taking the simplest average of (Tmax+Tmin)/2 by location is committing the error described by David.
But how big is the error? Take Denver, predicted 8/27/12: Tmax = 93F = 34C, Tmin = 68F = 20C. If you average temps, you will get 300.10 deg K. But if you convert to heat via Stefan-Boltzmann (503.99, 419.73), you get an average of 461.36 W/m2 and an equivalent temperature of 300.34 deg K. So there is only a difference of a quarter of one degree Kelvin between the average of the temps and the temp of the average heat. This is a pretty standard difference over time at any one location, at least on a seasonal basis. So maybe we can live with the average of temps, at least within each location.
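Something like this sketch reproduces the Denver numbers (93 F and 68 F are the forecast values quoted above; small rounding differences will shift the last decimal):

```python
SIGMA = 5.67e-8

def f_to_k(f):
    return (f - 32) / 1.8 + 273.15

tmax, tmin = f_to_k(93), f_to_k(68)                   # about 307.0 K and 293.2 K

avg_of_temps = (tmax + tmin) / 2                      # about 300.1 K
avg_flux = (SIGMA * tmax**4 + SIGMA * tmin**4) / 2    # about 461 W/m2
temp_of_avg_flux = (avg_flux / SIGMA) ** 0.25         # about 300.3 K

print(avg_of_temps, avg_flux, temp_of_avg_flux)
print(temp_of_avg_flux - avg_of_temps)                # roughly a quarter of a degree
```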
Where I think there is an underappreciated element of error is that we forget that Tave at each location is never a measured value. It is a calculated “average” of the min and max values. Forget about TOB issues. The simple fact that you “average” 34 and 20 to get Tave = 27 C implies that the uncertainty of that average is up to 7 deg K.
Take 30 of those daily Tave estimates to get a Tave for a month, and the uncertainty on that monthly Tave is 1.3 deg K. That is quite an error bar; the 80% confidence on the Tave is plus or minus 2.0 deg K, with its own weak simplifying assumptions about the shape of the curve between the min and max non-randomly sampled points.
Now, you attempt to string 30 years * 12 months/yr of Tave looking for a trend y=mx+b, hoping to find m ≠ 0 at statistical significance for each location. Sometimes you find it, sometimes it is insignificant. But when you add a 1.3 deg K uncertainty to each Tave point, even over 30 years, it will be very difficult to disprove a slope m = 0.0 deg K/decade at statistical significance. Even when you can disprove 0.0, the uncertainty on m will still be uselessly large.
The statistical abuses do not end there. When the modelers grid their temperature data by month and grid cell, each and every data point contains the uncertainty of the Tave derived from the min-max original data points. The resulting weighted gridded Tave must contain an uncertainty derived from the control points, with added uncertainty from the weighting method. Every adjustment to a data point adds to uncertainty, because there is uncertainty in the adjustment and statistical variances at least add (unless you have SOLID evidence that sources of error negatively correlate in nature).
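The 1.3 deg K monthly figure follows from ordinary standard-error scaling under the commenter's assumption that half the min-max range stands in for the daily uncertainty:

```python
import math

daily_uncertainty = (34 - 20) / 2     # 7 deg, half the min-max range from the Denver example
days_per_month = 30
monthly_uncertainty = daily_uncertainty / math.sqrt(days_per_month)
print(monthly_uncertainty)            # about 1.3 deg K
```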
David,
Don’t get too hung up about whether ‘radiative forcing’ applies at the Top of the Atmosphere (TOA) or at the Tropopause. If the stratosphere is allowed to respond to this forcing (as stated in your link) then these become the same thing. This was made clear in the first IPCC assessment report:
“The definition of radiative forcing requires some clarification. Strictly speaking, it is defined as the change in net downward radiative flux at the tropopause, so that for an instantaneous doubling of CO2 this is approximately 4 Wm^2 and constitutes the radiative heating of the surface-troposphere system. If the stratosphere is allowed to respond to this forcing, while the climate parameters of the surface-troposphere system are held fixed, then this 4 W/m^2 flux change also applies at the top of the atmosphere. It is in this context that radiative forcing is used in this section.”
http://www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_chapter_03.pdf (Page 78)
So both you and John Finn are correct, for the purpose of calculating the surface temperature response.
MikeB;
So both you and John Finn are correct, for the purpose of calculating the surface temperature response.
>>>>>>>>>>>>>>>>>>
Technically, neither of us is correct, because the IPCC not only admits that RF and surface temperature response cannot be directly linked (see my comment upthread) but they actually consider multiple models of surface temperature response, as depicted here:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-2-2.html
@Monckton: when the climate has finished responding to a given forcing (typically after 1000-3000 years):
Whoa! Where did “1000-3000 years” to equilibrate come from?
When we have a KE=8 volcanic eruption, we get “a YEAR without summer”. A year — not a millennium. On a planet with day-night temperature swings and with seasonal cycles, any instantaneous forcing is quickly blended into the atmosphere. The evidence from volcanoes is that the relaxation from the impulse is on the order of a couple of years, not millennia. Speculation that forcings are carried into the deep ocean for 1000 years is just that: speculation by disparate advocates trying to square the circle of their own making.
Use actual values from a very large series of readings (let’s say N).
Add the actual values and find their average (Nav) by dividing by N
Raise the individual values to the power of four and add them together.
Divide by N
Take this new value and extract the 0.25 root to get a new average (Nr)
You will find Nr much higher than Nav.
I would suggest that Nr is the more physically realistic value
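The procedure above, sketched in Python with made-up readings (the power-mean inequality guarantees Nr >= Nav; how much higher depends on how spread out the readings are):

```python
readings = [233.0, 253.0, 273.0, 288.0, 303.0, 318.0]   # made-up sample of N temperature readings, K

N = len(readings)
Nav = sum(readings) / N                            # ordinary arithmetic average
Nr = (sum(t ** 4 for t in readings) / N) ** 0.25   # fourth-power ("radiative") average

print(Nav, Nr, Nr - Nav)   # Nr comes out higher than Nav
```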
For the lay people out there, it is impossible to know if something is an anomaly if you have no understanding of what is “normal”. Hubert Lamb was trying to establish what is “normal” for the earth, thereby having an understanding of “unusual” or “man-made”. The baseline has NOT been established (despite claims to the contrary), so to claim “anomaly” is rubbish.
Well written article, Mr. Hoffer. Glad you chose Winnipeg. Spent the majority of my 50 years there. You can still enjoy life at 40 below, but it is MUCH easier at 40 above.