*Anomalies are an unsuitable measure of global temperature trends*

**Guest post by David M. Hoffer**

An anomaly is simply a value that is arrived at by comparing the current measurement to some average measurement. So, if the average temperature over the last 30 years is 15 degrees C, and this year’s average is 16 degrees, that gives us an anomaly of one degree. Of what value are anomalies? Are they a suitable method for discussing temperature data as it applies to the climate debate?

On the surface, anomalies seem to have some use. But the answer to the second question is rather simple.

No.

If the whole earth were at a single uniform temperature, we’d have no need of anomalies. But the fact is that temperatures don’t vary all that much in the tropics, while variations in the high temperate zones are frequently as much as 80 degrees over the course of a year. How does one compare the temperatures of, say, Khartoum, which on a monthly basis ranges from an average of 25 degrees to 35 degrees C, to, say, Winnipeg, which might range from -40 in the winter to +40 in the summer?

Enter anomalies. By establishing a baseline average, usually over 30 years, it is possible to see how much temperatures have changed in (for example) winter in Winnipeg, Canada versus summer in Khartoum. On the surface, this makes sense. But does the physics itself support this method of comparison?

It absolutely does NOT.

The theory of CO2’s direct effects on earth’s surface temperature is not terribly difficult to understand. For the purposes of this discussion, let us ignore the details of the exact physical mechanisms as well as the order and magnitude of feedback responses. Let us instead assume that the IPCC and other warmist literature is correct on that matter, and then see if it is logical to analyze that theory via anomaly data.

The “consensus” literature proposes that direct effects of CO2 result in a downward energy flux of 3.7 watts/m2 for a doubling of CO2. Let’s accept that. Then they propose that this in turn results in a temperature increase of one degree. That proposal cannot be supported.

Let us start with the one degree calculation itself. How do we convert watts/m2 into degrees?

The answer can be found in any textbook that deals with radiative physics. The derivation of the formula requires some in-depth understanding of the matter, and for those who are interested, Wikipedia has as good an explanation as we need:

http://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law

For the purposes of this discussion however, all we need to understand is the formula itself, which is:

P = 5.67*10^-8 * T^4 (where P is in w/m2 and T is in kelvin)

It took landmark nineteenth-century work in physics to come up with that formula, but all we need to use it is a calculator.

For the mathematically inclined, the problem ought to be immediately obvious: there is no linear relationship between w/m2 and temperature. Power varies with T raised to the fourth power. That brings up an obvious question: at what temperature does the doubling of CO2 cause a rise in temperature of one degree? If we use the accepted average surface temperature of earth, +15 degrees C (288 K), simply applying the formula shows that it is NOT at the average surface temperature of earth:

For T = 288K

P = 5.67*10^-8*288^4 = 390.1

For T = 289K (plus one degree)

P = 5.67*10^-8*289^4 = 395.5

That’s a difference of 5.4 w/m2, not 3.7 w/m2!
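The arithmetic is easy to verify. A few lines of Python (my own illustration, not part of the original post) reproduce both fluxes and their difference:

```python
# Stefan-Boltzmann law: P = sigma * T^4, with T in kelvin and P in W/m^2
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def flux(t_kelvin):
    """Black-body radiant flux (W/m^2) at absolute temperature t_kelvin."""
    return SIGMA * t_kelvin ** 4

p288 = flux(288.0)  # ~390.1 W/m^2
p289 = flux(289.0)  # ~395.5 W/m^2
print(round(p288, 1), round(p289, 1), round(p289 - p288, 1))  # prints: 390.1 395.5 5.4
```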

So, how does the IPCC justify their claim? As seen from space, the earth’s temperature is not defined at earth surface, nor can it be defined at the TOA (Top of Atmosphere). Photons escaping from earth to space can originate at any altitude, and it is the average of these that defines the “effective black body temperature of earth” which turns out to be about -20 C (253 K), much colder than average temperatures at earth surface. If we plug that value into the equation we get:

253K = 232.3 w/m2

254K = 236.0 w/m2

236.0 – 232.3 = 3.7

There’s the elusive 3.7 w/m2 = 1 degree! It has nothing to do with surface temperatures! But if we take this analysis a step further, it gets even worse. The purpose of temperature anomalies in the first place was supposedly to compare temperature changes at different temperature ranges. As we can see from the analysis above, since w/m2 means very different things at different temperature ranges, this method is completely useless for understanding changes in earth’s energy balance due to doubling of CO2.
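The same quick check reproduces the 3.7 w/m2 figure at the 253 K effective emission temperature:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

# Flux difference between 254 K and 253 K, near the effective emission temperature
delta = SIGMA * 254.0 ** 4 - SIGMA * 253.0 ** 4
print(round(delta, 1))  # prints: 3.7
```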

To illustrate the point further, at any given time, some parts of earth are actually in cooling trends while others are in warming trends. By averaging temperature anomalies across the globe, the IPCC and “consensus” science has concluded that there is an overall positive warming trend. The following is a simple example of how easily anomaly data can report not only a misleading result, but worse, in some cases it can report a result the OPPOSITE of what is happening from an energy balance perspective. To illustrate, let’s take four different temperatures and consider their value when converted to w/m2 as calculated by Stefan-Boltzmann Law:

-38 C = 235K = 172.9 w/m2

-40 C = 233K = 167.1 w/m2

+35 C = 308K = 510.3 w/m2

+34 C = 307K = 503.7 w/m2

Now let us suppose that we have two equal areas, one of which has an anomaly of +2 due to warming from -40 C to -38 C. The other area at the same time posts an anomaly of -1 due to cooling from +35 to +34.

-38 C anomaly of +2 degrees = +5.8 w/m2

+35 C anomaly of -1 degree = -6.6 w/m2

“averaged” temperature anomaly = +0.5 degrees

“averaged” w/m2 anomaly = -0.4 w/m2

The temperature went up but the energy balance went down? The fact is that because temperature and power do not vary directly with one another, averaging anomaly data from dramatically different temperature ranges provides a meaningless result.
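A short script of the two-region example (using T(K) = T(C) + 273, so +35 C is 308 K and +34 C is 307 K) shows the averaged temperature anomaly and the averaged flux anomaly taking opposite signs:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def flux(t_celsius):
    """Black-body flux (W/m^2) for a temperature given in Celsius."""
    return SIGMA * (t_celsius + 273.0) ** 4

cold_flux_change = flux(-38) - flux(-40)  # region warming by 2 C: ~ +5.8 W/m^2
hot_flux_change = flux(34) - flux(35)     # region cooling by 1 C: ~ -6.6 W/m^2

mean_temp_anomaly = (2.0 + -1.0) / 2                           # +0.5 degrees
mean_flux_anomaly = (cold_flux_change + hot_flux_change) / 2   # ~ -0.4 W/m^2

print(round(mean_temp_anomaly, 2), round(mean_flux_anomaly, 2))  # prints: 0.5 -0.39
```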

Long story short, if the goal of measuring temperature anomalies is to try and quantify the effects of CO2 doubling on earth’s energy balance at surface, anomalies from winter in Winnipeg and summer in Khartoum simply are not comparable. Trying to average them and draw conclusions about CO2’s effects in w/m2 simply makes no sense and produces a global anomaly that is meaningless.

“An anomaly is simply a value that is arrived at by comparing the current measurement to some average measurement.”

I would agree, to a degree, because comparing to an average is a weak method; more generally, an anomaly is an abnormal value when compared to a history of values. There are many definitions of “abnormal”: distance from a distribution of values (for example n-sigma from the mean), distance from n nearest neighbors (where n may be 1 or the entire sample set), violation of a rule, frequency-based (near the mean but rarely occurring, as in multi-modal distributions), outside a normal range, and so on. Anomalies may be computed in one dimension or many, and to truly determine an anomaly one may need to look at it in many ways, not just “distance” from “average”.

Greenhouse warming theory is like a Russian matryoshka doll; there is always a deeper fraud inside.

Speaking of which, when did the word “average” get replaced with “normal” by our weather overlords?

Excellent analysis, David. This corresponds with my understanding as well.

The rest of this derives from the assumption that the anomalies live in L2 and that parametric statistics can describe them. Weather and climate are highly coupled non-linear processes. Such processes are by their nature chaotic and live in a dimension less than L2. Second moments just plain do not exist. Parametric correlations do not exist. You can do rank correlations and that sort of non-parametric measure, but that is about all you can say with certainty. The tails of the distribution are way too fat. The hundred-year flood comes way too often. To get hysterical about those hundred-year floods is an admission that the person who does so does not know statistics.

What if the average is itself an anomaly?

wot jocky scot said…

And that begs the question: Is an anomaly of an anomaly an anomaly?

Happens often when you manipulate intensive variables in isolation.

I take some heart from your essay, David… and I expect others will, too… those who have asked from time to time on WUWT why anomalies are used at all.

The explanations have always been profound, but have always left me with a deeper furrow on my brow than before the explanation was given… that and a sense that I must be rather dense.

Perhaps I am brighter than I thought?

Whichever; I now feel I am at least in good company.

A while back, I think it was on the Blackboard, I had expressed my “hair on the back of the neck up” reaction when people talk anomalies. Anomalous to what and when is my first question. Secondly, why is it that ALL anomalies are bad? History is chock-full of anomalies.

Anyhow, Zeke H. explained to me, in many, many, many words how an anomaly is a normalization of a data-set, and I fell asleep and gave up (I am not sure in which order).

This post will be in my bookmarks. Thank you David!

“The temperature went up but the energy balance went down?”

FOOLS. Don’t you know that the IPCC, the Useless Nations, NASA, NOAA, Mann, etc., and the eco-cultists have re-written the laws of thermodynamics? Sheesh. Get with the pogro….er…..program.

And where the simple anomaly number really makes a difference is in the Arctic. Suppose for example that the Arctic warmed by 2 C over the last 60 years. It would make a huge difference if the low and dry temperature of 245 K warmed by 2 C and the high and more moist temperature of 275 K also warmed by 2 C, or if the lower, dryer temperature warmed by 4 C and the high temperature was unchanged. I believe it is the latter that is closer to the truth. And if the warming of cold and dry air is driving global anomalies, we must take these with a grain of salt.
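The comment’s point can be quantified with the same Stefan-Boltzmann arithmetic (humidity is ignored here, which, as the comment notes, is itself a large caveat): an identical 2 C warming implies quite different flux changes at the two starting temperatures.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def flux_change(t_kelvin, dt=2.0):
    """Flux change (W/m^2) for a dt-degree warming starting at t_kelvin."""
    return SIGMA * (t_kelvin + dt) ** 4 - SIGMA * t_kelvin ** 4

print(round(flux_change(245.0), 1))  # prints: 6.8  (cold, dry Arctic air)
print(round(flux_change(275.0), 1))  # prints: 9.5  (warmer, moister air)
```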

That reminds me of this quotation:

I get the point of the post, and I agree that the method is lacking.

My comment won’t exactly be on point. But, is it really settled that co2 increases will cause temperature increases? I don’t think it is.

Reginald Newell, MIT, NASA, IAMAP, co2 and possible cooling

1 minute video

“Trying to average them and draw conclusions about CO2′s effects in w/m2 simply makes no sense and produces a global anomaly that is meaningless.”

Agreed. Saying adding co2 to the atmosphere will cause warming and then seeing manmade co2 is on the rise and temperature is on the rise is not enough to prove that proposed guess. The price of milk went up at the same time. So can I also conclude the price of milk controls temperature?

Short and sweet David.

Just think how the discrepancies multiply if you consider that the actual range of temperatures on earth’s surface is much greater than the numbers you chose for your example locations.

Vostok Station has been measured officially below -128 deg F, with anecdotal evidence of nearby higher altitude points as low as -130 deg F or -90 deg C (183 K), while at the exact same time, tropical desert locations have air temperatures over 135 deg F, and surface Temperatures of maybe 140 F or +60 deg C (333 K).

The S-B Total radiant emittance then covers a range of more than 11:1, and Temperatures near those extremes can and do exist at exactly the same moment, since the dark of the Antarctic winter night is the northern summertime. Even more dramatic is the result if you calculate the peak of the spectral radiant emittance over that temperature range, which is where the bulk of the radiant energy is emitted.

That goes as T^5, not T^4, which gives almost a 20:1 ratio. And at those highest Temperatures, the wavelength of the peak emission has moved from around 10.1 microns at 288 K to around 8.75 microns, which is further inside the “atmospheric window”, making CO2 even less important.

There is also another issue. The disciples of the greenhouse gremlin are willing to swear on a stack of bibles that earth’s atmosphere DOES NOT radiate ANY “black body” radiation, since gases cannot do so; only solids (rocks) and liquids (oceans) do. Well, clouds are water or ice, so clearly clouds can and must emit thermal continuum radiation depending on the cloud Temperature per the S-B law. Otherwise only the surface can emit thermal radiation, so the non-cloud part of the atmosphere can only emit the LWIR spectra of the various GHGs, which is NOT spectrally dependent on Temperature to first order.

So earth’s external emission spectrum, should reflect the earth surface Temperatures, with their higher value and shorter wavelength range; not some 253 K signature spectrum.

Well of course, the non-cloud atmosphere does radiate a thermal continuum spectrum, characteristic of its Temperature and quite independent of its composition; but its density is so low that even the total atmospheric optical absorption density doesn’t come close to the total absorption required of a “black” body. The deep oceans absorb 97% in about 700 metres, with around 3% surface Fresnel reflection, so they make a pretty good imitation of a black body radiator.

So when James Hansen re-adjusts the historic baseline Temperatures for “discrepancies”, as he seems fond of doing, does he apply the exact same fudge factor to the actual historic Temperature readings taken back then? That would seem to be essential, or else a fictitious discrepancy-generated anomaly change would appear every time he discrepancizes the baseline reference.

From an exploration geochemistry perspective (my background), an anomaly is a value significantly different to those historically or geographically around it, expressed in the same, correct units and with due consideration to noise and distribution of values.

Whereas the occasional anomaly in climate science tends to be smoothed in long time series, it can be a pity to downplay the anomaly (expressed this way), because it is information-rich. The “What” in “What made it different?” can unravel puzzles.

The use of “anomaly” to denote a property like temperature that has been elevated to an artificial baseline is relatively new to me, but I don’t like it. Once the statistician starts to truncate number sets, many types of possible analysis are subsequently invalid. So, David, the arguments you present are both correct and needed. The examples are quite well chosen, thank you.

I have some residual confusion about the mechanics of calculating anomalies in the climate change sense. If there is a single site with a single thermometer, it is easy to select and to adopt a 30-year reference period. (it certainly should not be called a normal. It is an artifice). The math to reduce the set to ‘anomalies’ is then trivial – until someone makes an adjustment. Pardon my ignorance, but for an area with many sites, is the ‘normal’ the average of all of the sites taken over that time, a constant that is then subtracted one-by-one from each site; or is each site first converted to anomaly form, then the composite calculated by averaging the residuals?

In any event, any change to the number of observations in the reference term, as happens frequently, will produce a new normal and a new anomaly string. Given that people like Ross McKitrick have published about the change in the number of sites used in the CONUS, has the ‘normal’ been recalculated day after day as the number of observations changes, or are we free of errors from this type of mechanism?

What is the situation on normals when, as Australia had earlier this year, a completely different, revised time series named Acorn is adopted? If it becomes the official, accepted version, does this mean that uses such as proxy calibration have to be recalculated, because the time-temperature response in the calibration period has been changed?

Reblogged this on Climate Ponderings and commented:

” averaging anomaly data from dramaticaly different temperature ranges provides a meaningless result.”

You knits deviation with mean power to a tee. Fore!

Absolutely spot on – I’ve been aware of this irreconcilable “anomaly” for several years. Average temperature does not give average radiation, and vice-versa. The inherent error over the surface of the Earth may well be equal to (or greater than) the “warming effect” of a doubling of CO2.

However, Kiehl & Trenberth 2009 didn’t calculate outgoing surface radiation from an average surface temperature, as is often supposed. They used gridded temperature data over the surface to calculate outgoing radiation for each grid cell, averaging the results (though that has its own problems, too detailed to go into here).

The 579.8 w/m2 and 587.1 w/m2 in the above are backassward.

Werner Brozek says: “…It would make a huge difference if the low and dry temperature of 245 K warmed by 2 C and the high and more moist temperature of 275 K also warmed by 2 C, or if the lower, dryer temperature warmed by 4 C and the high temperature was unchanged….”

Right. Temperatures don’t tell the whole story. Net heat flux is what counts, and that requires knowing what the humidity is in every cell. The models don’t simulate real clouds rigorously, so they don’t do net heat flux well, either. “Global temperature” is an almost meaningless concept.

Roger Pielke Sr. has another critique of the average global surface temperature anomaly as a measure of warming:

http://pielkeclimatesci.wordpress.com/2012/05/07/a-summary-of-why-the-global-average-surface-temperature-is-a-poor-metric-to-diagnose-global-warming/

I assume David M. Hoffer’s argument would also apply to the UAH and RSS atmospheric temperature anomalies?

The “consensus” literature proposes that direct effects of CO2 result in a downward energy flux of 3.7 watts/m2 for a doubling of CO2.

So, how does the IPCC justify their claim? As seen from space, the earth’s temperature is not defined at earth surface, nor can it be defined at the TOA (Top of Atmosphere). Photons escaping from earth to space can originate at any altitude, and it is the average of these that defines the “effective black body temperature of earth” which turns out to be about -20 C (253 K), much colder than average temperatures at earth surface. If we plug that value into the equation we get:

253K = 232.3 w/m2

254K = 236.0 w/m2

236.0 – 232.3 = 3.7

——————————————————————————-

So the direct effect (before feedbacks) of doubling CO2 raises the earth’s surface T how much less than 1 degree?

Great article BTW. Also it should be pointed out that CO2 in the atmosphere does, as the IPCC shows, increase the residence time of LWIR energy. However, it DECREASES the residence time of conducted energy, which must stay in the atmosphere longer if it cannot radiate away. Our atmosphere is full of both conducted and radiated energy; additional CO2 accelerates the escape to space of conducted atmospheric energy.

For measuring the effect of greenhouse gases on the temperature, the Stefan-Boltzmann equation is irrelevant. Greenhouse gases act as an insulator in the atmosphere, and to measure their effect we need to measure the thermal conductivity of the atmosphere. It’s no simple matter, but the question is: if we do it right, how different will the results be from the average anomaly?

For a start we’d need to calculate differences between lower troposphere and stratosphere temperatures, and note that the stratosphere has actually been cooling over the past decades, which enhances the effect rather than diminishing it.

+34 C = 317K = 587.1 w/m2 is not correct

+34 C = 317K = 572.5 w/m2

“I would agree, to a degree, because comparing to an average is a weak method, but more generally an anomaly is an abnormal value when compared to a history of values.”

“something that deviates from the normal” (Collins English Dictionary). How do you know what the “normal” is? How do you measure it? What are historical values? I asked at RC some years ago and got a Gavin reply. It was that reply that persuaded me of their utter stupidity.

At least we now know the ‘science’ is settled.

I think the main implication of this piece is that Climate Scientists are going to need MUCH bigger super computers.

You’ve got it, David. What you are basically saying is that climate “science” is not using proper scientific methods at all; they are using improper statistical tricks… and you are correct, but most here already know that well by now. To me, all global figures tossed about above 0.3 - 0.4°C since 1900 are just a “scientific” illusion; it was solar based, and anyone viewing SOHO regularly throughout the 90’s and into the very early 2000’s realizes this, and the tick upward occurred starting during the 1930’s.

In Richard Feynman’s book “Surely You’re Joking, Mr. Feynman!” he gives the following story:

“Anyhow, I’m looking at all these books, all these books, and none of them has said anything about using arithmetic in science. ..

Finally I come to a book that says, “Mathematics is used in science in many ways. We will give you an example from astronomy, which is the science of stars.” I turn the page, and it says, “Red stars have a temperature of four thousand degrees, yellow stars have a temperature of five thousand degrees . . .” — so far, so good. It continues: “Green stars have a temperature of seven thousand degrees, blue stars have a temperature of ten thousand degrees, and violet stars have a temperature of . . . (some big number).” There are no green or violet stars, but the figures for the others are roughly correct. … Then comes the list of problems. It says, “John and his father go out to look at the stars. John sees two blue stars and a red star. His father sees a green star, a violet star, and two yellow stars. What is the total temperature of the stars seen by John and his father?” — and I would explode in horror.”

Are anomalies the same as this but with minus signs instead of plus signs?

Yes – the 3.7 w/m2 for a doubling is the forcing at the TOA. The forcing at the surface will be greater. Remember (using averages again) the earth emits about 390 w/m2 from the surface – but only about 240 w/m2 from the TOA, i.e. the equivalent of the incoming solar energy. There is, therefore, a factor of 390/240 or ~1.6 in the “surface to TOA” ratio. If the flow of outgoing energy is reduced by 3.7 w/m2 (at TOA) due to an increase in greenhouse gases, then the surface temperature will need to increase by about 1 deg C (assuming no feedback) in order that equilibrium is re-established (i.e. incoming solar energy = outgoing LW energy).

A simple energy balance model demonstrates the figures.
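A minimal sketch of such a model, using the round numbers quoted above (390, 240 and 3.7 w/m2 are the comment’s figures, not measured values):

```python
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W/(m^2 K^4)
SURFACE_FLUX = 390.0  # W/m^2 emitted at the surface (~288 K average)
TOA_FLUX = 240.0      # W/m^2 leaving at the top of atmosphere
TOA_FORCING = 3.7     # W/m^2 for a doubling of CO2

# Scale the TOA forcing up to the surface by the ~1.6 flux ratio
surface_forcing = TOA_FORCING * SURFACE_FLUX / TOA_FLUX  # ~6.0 W/m^2

# Linearized Stefan-Boltzmann response at 288 K: dP/dT = 4 * sigma * T^3
dp_dt = 4 * SIGMA * 288.0 ** 3  # ~5.4 W/m^2 per degree

print(round(surface_forcing / dp_dt, 1))  # prints: 1.1 (degrees, no feedbacks)
```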

Having defined your problem with anomalies, please describe “a suitable method for discussing temperature data as it applies to the climate debate?”

“The data doesn’t matter. We’re not basing our recommendations

on the data. We’re basing them on the climate models.”

– Prof. Chris Folland,

Hadley Centre for Climate Prediction and Research

I’m not sure I agree. While there is significant regional variation, global temperature variation remains remarkably small. For example, only about half a degree covers temperature readings across all years for Aug 24th (i.e. the most recent UAH CH 5 readings). The same is true for all other days in the year. UAH (and RSS, GISS and Hadley) appear to be using sufficiently robust statistical sampling methods such that the anomalies are meaningful.

Using anomalies in describing the time series requires stationarity. Does the temperature record pass a unit root test? I would be very surprised if it did.

Entropic man says:

August 27, 2012 at 2:49 am

“It feels cold today.” “I believe this summer has been fine, overall.”

Touchy-feely stuff like that?

Followed by, “Send me your first born so I may save your grandchildren. Or else.”

For those mathematically inclined (college maths), this phenomenon is just a translation of a rather trivial truth known for two thousand years: “The average of a power is NOT the power of the average.”

Example, let us take 1 and 2. The average is 1.5.

Now let us take a power law like a 4th power.

Then average ^4 = (1.5)^4 = 5.06 (Formula 1)

While the average of the 4th powers of these numbers is (1^4 + 2^4) /2 = 8.5 (Formula 2)

So there is a huge difference between the 4th power of an average and an average of 4th powers.

Getting back to physics, it means that the Earth does NOT radiate at its average temperature (Formula 1), as it is usually computed, but its radiation is the sum of places which are at different temperatures (Formula 2). The difference between these two values is of course important.

The only mathematical complication to get from this simple example to the Earth is to replace the arithmetic averaging by a surface integral of temperatures (1/S Integral(T.dS) ) but the result is the same.

The global average is irrelevant and the Earth radiation is very different from the “simple” k.(Global average)^4. But because temperature averages don’t give correct answers about energy flows, anomalies (which are just differences to an average) don’t give correct answers either.
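The two-number example above in code form:

```python
temps = [1.0, 2.0]
mean = sum(temps) / len(temps)  # 1.5

power_of_mean = mean ** 4                                 # Formula 1: 5.0625
mean_of_powers = sum(t ** 4 for t in temps) / len(temps)  # Formula 2: 8.5

print(power_of_mean, mean_of_powers)  # prints: 5.0625 8.5
```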

Having defined your problem with anomalies, please describe “a suitable method for discussing temperature data as it applies to the climate debate?”

I will not speak for the author of the post.

But the only “suitable” way to deal with the dependencies between the temperatures and the energy flows is to take the Temperature field as it is in reality with all its spatial dependencies.

In other words you can obtain correct flows only by first computing then integrating T^4(x,y,z) where x, y and z are the spatial variables.

As soon as you begin with spatial averagings of temperatures of any kind (e.g. making the x, y and z disappear) BEFORE you compute the flows, you fall into the problem I described above, and your answers about energy flows will be wrong.

The maximal “wrongness” will be realized when the spatial averaging of temperatures takes place over the whole globe, which is called the global temperature average.

To measure total downward energy flux from the temperature anomalies, the P = 5.67*10^-8 * T^4 formula can be differentiated (dP/dT = 4 * 5.67*10^-8 * T^3) to produce a “scaling set” for the energy fluxes – so 1°C up or down from a -10°C mean (climatology) is scaled by 4.13 w/m2, at a 0°C mean the scaling is 4.62 w/m2, at 10°C it is 5.15 w/m2, etc. – the scaled fluxes can then be integrated spatially. Doesn’t this deal with the problem – and doesn’t it practically happen anyway?
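A sketch of that scaling-set idea, assuming T(K) = T(C) + 273.15 (the function name is my own):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def flux_scaling(mean_t_celsius):
    """dP/dT = 4 * sigma * T^3: W/m^2 per degree of anomaly at a given mean."""
    t_kelvin = mean_t_celsius + 273.15
    return 4 * SIGMA * t_kelvin ** 3

for mean in (-10, 0, 10):
    print(mean, round(flux_scaling(mean), 2))  # prints 4.13, 4.62, 5.15 in turn
```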

More fundamentally, I agree that global annual-average surface temperature is not a very good metric by which to measure global warming or cooling – but what is to replace it?

You can also think of it in terms of 1 Watt/m2 = 1 Joule/second/m2

Now we have “time” in the equation. We have 1 joule of energy moving through an infinitely thin area of a metre by a metre each Second.

And 1 joule is the equivalent of 3,018,412,315,122,250,000 solar photons and 15,092,061,575,611,200,000 Earth temperature long-wave photons (per second).

Now we have to start thinking about energy accumulation/loss per second.

On average, the Earth warms up or cools down by only 0.0075 joules/m2/second over any period of time (Day, Night, or Seasonal change throughout the year). At the height of the day, 960 joules/m2/second are coming in from the Sun, but the air temperature is only recording a change in energy of 0.008 joules/m2/second.

So, at the height of noon-day Sun, 2,897,675,822,517,360,000,000 solar photons are coming in per second but only 24,147,298,520,978,000 worth of those solar photons are accumulating in the air temperature. 99.9992% of the energy in the solar photons is going somewhere else per second. Either building up in the land, soil, vegetation or they are being re-emitted back to the upper atmosphere or space – basically as fast as they are coming in.

This is, of course, an infinitely thin layer or area of 1 m2. Now we have to start thinking of the volume (rather than area) of air that is being recorded by a thermometer. Now it gets even more bizarre.

Just a different take on it. Not something a climate model thinks about.

Entropic man asks [for someone to] please describe “a suitable method for discussing temperature data as it applies to the climate debate[?]”

Sure. A suitable method for discussing temperature data as it applies to the climate debate would start with the recognition that most, especially average, temperature data is worthless.

I am no scientist. What struck me about this (excellent) piece is that there has been no warming for the last 13+ years.

Even using their shonky anomaly methods, they still cannot rustle up any warming. It makes me wonder what the true situation is.

Probably better to stick to simple measurements.

http://tamino.files.wordpress.com/2012/08/piomas1.gif

Entropic:

It’s really very simple: you have an offset to absolute temperature that only depends on the mean annual cycle of the period that you baselined over. So, assuming e.g. a monthly series, just add that back in at the end to get the annualized version of the series.

Anomalization/deannualization of time series is common in many fields, e.g., most famously econometrics. Anomalization is just a mechanism for removing the short-period fluctuations in order to reveal the otherwise visually obscured longer-term forcings, and a convenient technique for reconstruction.

I decided not to comment on the quality of the paper other than to say I don’t agree with the general assessment made in the comments.
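A toy illustration of that add-back step (the month names and values here are invented for illustration):

```python
# Hypothetical 30-year baseline means and one year of observations (deg C)
baseline = {"Jun": 15.0, "Jul": 18.0, "Aug": 17.0}
observed = {"Jun": 15.4, "Jul": 18.1, "Aug": 16.8}

# Anomalize: subtract the baseline climatology month by month
anomaly = {m: round(observed[m] - baseline[m], 1) for m in observed}

# De-anomalize: add the same baseline back to recover the absolute series
restored = {m: round(anomaly[m] + baseline[m], 1) for m in anomaly}

print(anomaly)               # prints: {'Jun': 0.4, 'Jul': 0.1, 'Aug': -0.2}
print(restored == observed)  # prints: True
```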

And… a brilliant headline!

Lies, Damn Lies, and Anoma-Lies

give “a suitable method for discussing temperature data as it applies to the climate debate?”

answer:

Use only ocean heat content data.

For there to be a net increase of 3.7 W/m^2 at the Earth’s surface, there has to be a net decrease of 3.7 W/m^2 into space (assuming a constant influx, which isn’t actually constant). Kasuha is correct; David M. Hoffer’s analysis and conclusion are wrong.

Uh, no… that model is derived from a static view of the earth’s atmosphere. I nearly fell off my chair laughing as I watched a U of Chicago lecture on “radiative physics” and the derivation of that result. I could not believe that the professor actually ignored thunderstorms lifting heat above most of the CO2 “blanket”, not to mention hurricane Isaac, which can lift all the man-made heat of the last twenty years and dump it above the CO2 “blanket”. The atmosphere ain’t static!

/HT to Willis for his thermostat hypothesis paper. I suspect he is right with 0.3C or so AGW.