How reliable are global temperature "anomalies"?

Guest post by Clive Best

Perhaps like me you have wondered why “global warming” is always measured using temperature “anomalies” rather than by directly measuring the absolute temperatures?

Why can’t we simply average the surface station data together to get one global temperature for the Earth each year? The main argument for working with anomalies (quoting from the CRU website) is: “Stations on land are at different elevations, and different countries estimate average monthly temperatures using different methods and formulae. To avoid biases that could result from these problems, monthly average temperatures are reduced to anomalies from the period with best coverage (1961-90)….” In other words, although measuring an average temperature is “biased”, measuring an average anomaly (delta T) is not. Each monthly station anomaly is the difference between the measured monthly temperature and a so-called “normal” monthly value. In the case of Hadley CRU, the normals are the twelve monthly averages over 1961-1990.
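To make the arithmetic concrete, here is a minimal sketch in Perl with invented temperatures (it is not the CRU code): each monthly reading is compared with that calendar month's 1961-1990 average for the same station.

```perl
#!/usr/bin/perl
# Minimal sketch (invented numbers, not the CRU code): turn one station's
# monthly mean temperatures into anomalies against its 1961-1990 normals.
use strict;
use warnings;
use List::Util qw(sum);

# $temp{year} = twelve monthly means in degrees C for this station
my %temp = (
    1961 => [ 3.1, 4.0, 6.5, 9.2, 12.8, 15.9, 17.6, 17.2, 14.5, 10.8, 6.4, 4.0 ],
    # ... one entry per year of record ...
    1990 => [ 4.0, 4.9, 7.3, 9.8, 13.1, 16.2, 18.0, 17.8, 15.0, 11.2, 7.0, 4.6 ],
    2010 => [ 1.9, 3.5, 6.9, 9.9, 13.0, 16.5, 18.9, 17.1, 14.8, 10.9, 5.6, 2.1 ],
);

# Twelve monthly "normals" = averages over the 1961-1990 baseline years
my @normal;
for my $m ( 0 .. 11 ) {
    my @base = map { $temp{$_}[$m] } grep { $_ >= 1961 && $_ <= 1990 } keys %temp;
    $normal[$m] = sum(@base) / @base;
}

# Anomaly = measured monthly value minus that month's normal
for my $year ( sort { $a <=> $b } keys %temp ) {
    my @anom = map { sprintf "%+.2f", $temp{$year}[$_] - $normal[$_] } 0 .. 11;
    print "$year: @anom\n";
}
```

The published CRUTEM3 series is then built by gridding and area-averaging these per-station anomalies.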

The basic assumption is that global warming is a universal, location-independent phenomenon which can be measured by averaging all station anomalies wherever they happen to be distributed. Underlying all this, of course, is the belief that CO2 forcing, and hence warming, is everywhere the same. In principle this also implies that global warming could be measured by just one station alone. How reasonable is this assumption, and could the anomalies themselves depend on the way the monthly “normals” are derived?

Despite temperatures in Tibet being far lower than, say, the Canary Islands at similar latitudes, a local average temperature for each place on Earth must exist. The temperature anomalies are themselves calculated using an area-weighted yearly average over a 5×5 degree (lat,lon) grid. Exactly the same calculation can be made for the temperature measurements in the same 5×5 grid, which then reflects the average surface temperature over the Earth’s topography. In fact, the assumption that it is possible to measure a globally averaged temperature “anomaly” (delta T) also implies that there must be a globally averaged surface temperature to which this anomaly refers. The result calculated in this way for the CRUTEM3 data is shown below:


Fig 1: Globally averaged temperatures based on CRUTEM3 station data

So why is this never shown?

The main reason, I believe, is that averaged temperatures highlight something different about the station data: they reflect an evolving bias in the geographic sampling of the stations used over the last 160 years. To look into this I have been working with all the station data available here, adapting the Perl programs kindly included. The two figures below show the locations of all stations compared with those whose data date back before 1860.


Fig 2: Location of all stations in the Hadley CRU set. Stations with long time series are marked with slightly larger red dots.


Fig 3: Stations with data back before 1860

Note how in Figure 1 there is a step rise in temperatures for both hemispheres around 1952. This coincides with a sudden expansion in the included land station data, as shown below. Only after this time do the data properly cover the warmer tropical regions, although gaps still remain in some areas. The average temperature rises because gaps at grid points in tropical areas are now filled. (No allowance is made in the averaging for empty grid points, either for average anomalies or for temperatures.) The conclusion is that systematic problems due to poor geographic coverage of stations affect average temperature measurements prior to around 1950.


Fig 4: Percentage of points on a 5×5 degree grid with at least one station. 30% corresponds roughly to the land fraction of the Earth’s surface.

Can empty grid points similarly affect the anomalies? The argument against this, as discussed above, is that we measure just the changes in temperature, and these should be independent of any location bias, i.e. CO2 concentrations rise the same everywhere! However, it is still possible that the monthly averaging itself introduces biases. To look into this I calculated a new set of monthly normals and then recalculated all the global anomalies. The new monthly normals are calculated by taking the monthly averages of all the stations within the same (lat,lon) grid point. These represent the local means of monthly temperatures over the full period, and each station then contributes to its near neighbours. The anomalies are area-weighted and averaged in the same way as before. The new results are shown below and compared to the standard CRUTEM3 result.


Fig 5: Comparison of standard CRUTEM3 anomalies (black) and anomalies calculated using monthly normals averaged per grid point rather than per station (blue).

The anomalies are significantly warmer for early years (before about 1920), changing the apparent trend. Systematic errors due to the normalisation method for temperature anomalies are therefore of the order of 0.4 degrees in the 19th century. These errors originate in the poor geographic coverage of early station data and the method used to normalise the monthly dependences. Using monthly normals averaged per (lat,lon) grid point instead of per station makes the resulting temperature anomalies warmer before 1920. Early stations are concentrated in Europe and North America, with poor coverage in Africa and the tropics. After about 1920 these systematic effects disappear. My conclusion is that anomaly measurements before 1920 are unreliable, while those after 1920 are reliable and independent of the normalisation method. This reduces the evidence of AGW since 1850 from a quoted 0.8 +- 0.1 degrees to about 0.4 +- 0.2 degrees.
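For anyone who wants to reproduce the comparison in Fig 5, the essential difference between the two normalisation choices can be written down in a few lines. The following Perl sketch uses invented station values (it is not the modified CRU scripts used for the figure); it simply shows the same monthly reading acquiring a different anomaly depending on whether the normal is taken per station or per grid cell.

```perl
#!/usr/bin/perl
# Sketch of the two normalisation choices (invented numbers, not the
# modified CRU scripts used for Fig 5). Each station carries its grid-cell
# key and, per calendar month, a list of monthly mean temperatures.
use strict;
use warnings;
use List::Util qw(sum);

sub mean { return sum(@_) / @_; }

my @stations = (
    { cell => "50N_0E", temps => { 1 => [ 3.0, 3.4, 2.9 ], 7 => [ 17.0, 17.5, 16.8 ] } },
    { cell => "50N_0E", temps => { 1 => [ 1.0, 1.2 ],      7 => [ 15.0, 15.4 ]       } },
);

# (a) Per-station normals: each station referenced to its own monthly mean
for my $s (@stations) {
    $s->{normal}{$_} = mean( @{ $s->{temps}{$_} } ) for keys %{ $s->{temps} };
}

# (b) Per-grid-cell normals: pool every reading from every station in the cell
my ( %pool, %cell_normal );
for my $s (@stations) {
    for my $m ( keys %{ $s->{temps} } ) {
        push @{ $pool{ $s->{cell} }{$m} }, @{ $s->{temps}{$m} };
    }
}
for my $cell ( keys %pool ) {
    $cell_normal{$cell}{$_} = mean( @{ $pool{$cell}{$_} } ) for keys %{ $pool{$cell} };
}

# The anomaly assigned to one reading then depends on the choice of normal:
my ( $s, $m, $t ) = ( $stations[0], 1, 3.4 );   # a January reading of 3.4 C at station 1
printf "per-station normal: anomaly = %+.2f C\n", $t - $s->{normal}{$m};                # +0.30
printf "per-cell normal   : anomaly = %+.2f C\n", $t - $cell_normal{ $s->{cell} }{$m};  # +1.10
```

With a warm and a cold station sharing a cell, the per-cell normal sits between the two, so each station’s readings pick up an offset of opposite sign. As long as the mix of reporting stations is stable the offsets wash out of the average, but when the mix changes over time, as it does in the sparsely covered years before 1920, the two methods can drift apart.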

Note: You can view all the station data through a single interface here or in 3 time slices starting here. Click on a station to see the data. Drag a rectangle to zoom in.

February 7, 2012 10:03 pm

Hi, I’d love to interview you on how mean global temperatures are recorded/determined (and by whom). Also global cooling – which I’m starting to read about. It would be an in ‘layman’s terms’ chat. Karyn Wood, ABC Coast FM (in Australia)

DirkH
February 7, 2012 10:04 pm

I take it that once CAGW raises the temperatures of the Northern Hemisphere by, say, 4 deg C (which would be possible given the high end of the IPCC climate sensitivity estimates for a doubling of CO2), we will have the same apocalyptic temperatures in Germany as, say, Rio de Janeiro?
Oh the horrors! Quick, somebody erect me a wind turbine.

Philip Peake
February 7, 2012 10:11 pm

So the argument is that “the great dying of the thermometers” led to the rise in temperature anomalies.
This is quite believable. I think if you care to look closely, the current temperature plateau coincides with the cessation of the thermometers dying off.
They will have to start kicking out more stations (the lower reading ones, of course) sometime soon, to get the curve to continue its upwards trend.

Jos Verhulst
February 7, 2012 10:22 pm

But fig.1 shows no warming at all for the SH, at least from 1952 onwards.
How can this be?
Is ‘global warming’ a phenomenon that only affects the Northern hemisphere?

Michael Schaefer
February 7, 2012 10:42 pm

Cool!
Dissecting the IPCC-report – one station at a time.
Well Done.
Mike

BioBob
February 7, 2012 10:49 pm

I appreciate the effort you put into this article BUT note the lack of standard error/deviation bars. You should know better. Provide some sort of statistical measure of sampling error, at least, no matter how bogus its basis. We need not delve into the other sources of error in this data since data reliability is quite tenuous as it is.
The fact remains that a non-random sample of 1 (N=1) will NEVER provide a “reliable and independent” sample from ANY sort of population, let alone something as complex as temperature.
GIGO – no “scientist” with any background in sampling and basic stats would ever rely on this kind of crap data to draw any sort of accurate conclusion. Certainly reporting results to tenths and hundredths of a degree is totally absurd.

pat
February 7, 2012 10:52 pm

I have been saying this for years. The reportage of anomalies is a way of not reporting actual temperatures. It not only accentuates the so-called anomalies by a factor of 100, but they are also impossible to verify. There is no substantiating data.
If current climatology were a science that needed detailed data, it would be a disaster. If it were a weather report, it would be off the air as an incompetent charade.

John
February 7, 2012 10:57 pm

A couple of questions:
When deriving the global average temperatures above, how is the ocean area taken into account? Or is the temperature a land surface temperature only?
Also, how does this sort of data compare to the satellite data?
Another question which comes to mind relates to the averaging calculation. How are the different stations weighted? Are all stations within a grid (5×5 degree) cell given the same weighting? This can be important as the data looks highly clustered and could be biasing the calculation (either up or down) due to more stations in certain areas. In geostatistics (I’m a geologist involved in mineral resource calculations) we typically employ Kriging algorithms to provide an unbiased (or as unbiased as possible!) estimate of metal concentrations based on spatial data….I think the concept could be adapted for climate analysis.

MangoChutney
February 7, 2012 11:13 pm

@Jos Verhulst
That was my first thought too, but if GW only affects NH, then the cause of GW has to be something else. If CO2 was the real culprit, then the warming should be more or less uniform over the globe (presumably subject to lag from the redistribution of CO2 from the NH to the SH).
Also, if the alarmists are correct, shouldn’t the rise in temperature be easily predictable from the individual temperature record and the known warming caused by CO2?
For instance, take the CET and measure the CO2 level, then calculate the difference in temperature between 1750 (280ppm CO2) and 2010 (390ppm CO2), and compare against what the rise in CO2 should have caused.

Nylo
February 7, 2012 11:22 pm

Actual temperature is not used because you cannot assume that the reading of one thermometer in one location is a valid temperature representative of a large area. Not even in the surrounding kilometer. Local conditions affect the temperature too much, so the absolute data is quite meaningless. However, the temperature anomaly CAN be assumed to be about the same in a large area. Because local conditions change in space, but remain quite the same in time. So if one spot is 2C higher than normal, it is quite reasonable to assume that, whatever the temperature is 1 kilometer further, it will be about 2C higher than normal as well for the corresponding location.
In other words, it is not possible to recreate a global temperature with a few thousand thermometers (what you would get is an average temperature of the specific places where you have thermometers, and not of the Globe), but it IS possible to do the same with the global temperature anomaly. It may not be perfectly accurate, but it is a very good approximation (well, at least it would be, if they didn’t move the thermometers, and their surroundings didn’t change, and the temperature records were long…)
And more to the point, we don’t care what the global temperature is, but how it is changing. And for that, the anomaly is all the information you need.

Robert Clemenzi
February 7, 2012 11:42 pm

So, how do you compute the averages? Linear or using the fourth power?
(15 + 25)/2 = 20 C
or
(((15+273)^4 + (25+273)^4)/2)^0.25 – 273 = 20.13 C
This makes a big difference. For example, there are several ways to increase the linear average by only 1 C. But the computed average using the fourth power is different.
if 15 goes to 17, then dT = 0.95
if 25 goes to 27, then dT = 1.06
If they both increase by 1, then dT = 1.00
This may look insignificant, but when averaged over the whole planet for a full year, the two methods produce significantly different values.
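Here is a short Perl sketch of the two rules using the example values above (just the arithmetic, not code from any of the datasets):

```perl
#!/usr/bin/perl
# Quick check of the two averaging rules. A "radiative" mean averages T^4
# in kelvin (as a black-body flux would), then converts back; a linear
# mean just averages the Celsius values.
use strict;
use warnings;
use List::Util qw(sum);

sub linear_mean { return sum(@_) / @_; }

sub radiative_mean {
    my @k4 = map { ( $_ + 273.0 )**4 } @_;           # kelvin, raised to the 4th power
    return ( sum(@k4) / @k4 )**0.25 - 273.0;         # back to Celsius
}

my @t = ( 15, 25 );
printf "linear    : %.2f C\n", linear_mean(@t);       # 20.00
printf "radiative : %.2f C\n", radiative_mean(@t);    # ~20.13

# The two rules respond differently to the same +1 C change in the linear average:
printf "15 -> 17  : dT = %.2f\n", radiative_mean( 17, 25 ) - radiative_mean(@t);  # ~0.95
printf "25 -> 27  : dT = %.2f\n", radiative_mean( 15, 27 ) - radiative_mean(@t);  # ~1.06
```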

Other_Andy
February 7, 2012 11:57 pm

Nylo nailed it. An interesting exercise to ‘construct’ a global temperature, but not very useful. However, the upwards trend in the 20s and 50s caused by an increase in measuring stations IS an interesting observation. Is there, as some commenters have asked above, a different temperature trend between the Southern and the Northern Hemisphere? Is this because the Southern Hemisphere has significantly more ocean and much less land?
Water heats up and cools down more slowly than land, but would this affect temperatures in the long term?

Jordan
February 8, 2012 12:04 am

A fundamental question is whether the data series suffers from aliasing. Perhaps not so much in the time domain, but surely in the spatial domain with sparse or even non-existent coverage of large parts of the surface.
Until this question is fully addressed, I would recommend these series are treated as not reliable. Error bars produced from aliased data would be similarly unreliable.

February 8, 2012 12:20 am

Just give me 30 years. I will have a preliminary result in about 10.
The absolute value is meaningless but it does show if it’s gone up or down.
Just click on the Dave A above
Straight average from the same set of 2586 stations taken hourly, end of, no gridded extrapolation. No waiting for days while it is “worked out” – it just is what it is. As they “die off” I just strip them from the data and re-run every hour from day 1 again. 27,253,962 individual temperature measurements so far from (alphabetically) Afghanistan to Yemen and most places in between. Being NOAA data, there is a heavy bias towards North America. If the Globe really is going to be 6C warmer by the end of the century, on account of a trace gas responsible for the entire food chain of the planet, you will see it reflected in the graphs.
As sure as eggs are eggs
Watch an experiment in progress. The website will be developed further as the tabs allude to.
I promise never to let the dog eat my raw data.
Around 12.8C on our last trip around our local Star. What do you think the chances are of it being 18.8C by 2100?

Rob R
February 8, 2012 12:35 am

Nylo
It is possible, if you have enough nicely spaced thermometers, to get a pretty good average of the temperature of a defined region and the trend in that average. You have to correct or normalise for regional altitudinal and latitudinal effects but that is not too hard to do.
To get the bigger picture you can then combine regional averages, weighted by area if necessary, to get a global land temperature and trend. This does not require one to first anomalise the data. It allows one to use abundant data from numerous relatively short-lived climate stations in addition to the normally preferred long-lived stations. Naturally there would be plenty of issues to work through but it would not be impossible.
Note that there is more data out there in regional and national databases than is present in GHCN so any thought that the data are too sparse is basically untested.

Rhys Jaggar
February 8, 2012 12:42 am

Interesting and provocative piece.
Thank you for writing it.

Scottish Sceptic
February 8, 2012 12:58 am

local average temperatures for each place on Earth must exist
Global temperatures display a 1/f type noise. That is to say, the longer you wait, the higher the noise level. In principle, the noise level is infinite for an infinite period.
That means that there is not an average temperature for any place on the earth.
In other words, it means that no matter how long the period, the next equal length period will have a significant difference in temperature. Unlike gaussian noise where the noise decreases if you take a long enough sample period, 1/f noise just doesn’t go away, averaging doesn’t get rid of it … indeed in a real sense, the longer the period, the higher the noise!!

February 8, 2012 1:11 am

@BioBob I agree with you – all such plots should have error bars. However, none of them ever do! The statistical errors on the annual averaged values change each year as more stations are added and subtracted. The CRU people quote a statistical error of 0.05C after 1951 and 4 times that in 1850 (0.2C). However it is the systematic error that really matters. The systematic error depends on the way you calculate the anomalies. By calculating the normals in a different way I got up to a 0.4C shift in values in the 19th century. Furthermore the annual values are derived from a grid with 2592 cells. Many of these cells have no points at all, others have just one station. So the error per (lat,lon) cell is large.

HAS
February 8, 2012 1:24 am

As an interesting aside GCMs of course work in absolute temperatures (and anomalies are then calculated from them). The absolute temps show bias between models as shown in Slide 20 of the ppt off http://www.newton.ac.uk/programmes/CLP/seminars/082310001.html that using the anomalies tends to hide.

Lawrie Ayres
February 8, 2012 1:52 am

Clive,
An interesting observation and one I’m not qualified to comment on. Your first responder, Karyn Wood, would like an interview with you. I don’t know her work but I do know the ABC is a fervent flagwaver for AGW. I have never heard a sceptical view expressed on the ABC without some follow-up from a believer who strives to debunk the words of the sceptic. If you do respond, be aware that you may be set up. The ABC in Australia is the least trusted media by anyone who wants the truth.

TBear
February 8, 2012 1:53 am

Is this a fair summary: half of the suggested 20th Century `warming’ may be an artefact of trying to construct a long-term record from data taken from incomplete and incommensurable measuring systems?
And if the actual 20th Century temp rise is, in fact, just 0.4C, that the IPCC forcing assumptions are totally undermined?
So, the whole thing is (more likely than not) utter and total crap? The greatest make-work science-off-the-rails episode in all history?
Where are the hordes of indignant scientists, banging on the doors of their national governments, demanding the reputation of science be no longer trashed with this s%&t?
Can’t you guys get your act together? Trillions of dollars (as a proxy for wasted human effort) are at stake. Get it together, guys, and take these shysters down, please!!!!

jjm gommers
February 8, 2012 1:55 am

@JosVerhulst, Mango Chutney.
Your assumption concerning CO2 might be correct.
After WW2 the progress in the NH was substantial, changing the albedo and increasing the H2O emission. In particular, H2O emissions above the larger NH landmass grew through cooling towers, river surface water cooling, irrigation (for example the famous Soviet schemes, Egypt’s Lake Nasser and subsequent irrigation), combustion (the change from coal to natural gas), and population increase. More cloud cover might be expected in autumn and early winter.
So it will be interesting what the coming 5 to 10 years will bring in the expected cooling.
If it outweighs the increase or even goes lower in temperature, this will put the CO2 problem ad acta.

Bart
February 8, 2012 1:56 am

FTA: “The basic assumption is that global warming is a universal, location independent phenomenon which can be measured by averaging all station anomalies wherever they might be distributed.”
I see no basis for such an assumption. The change in temperature should be proportional to the local temperature, which in turn is dependent on latitude, wind currents, and local heat capacity, at the very least.
I have never seen a satisfactory explanation of what information is contained in the average of this intensive variable. What does it even mean?
Jos Verhulst says:
February 7, 2012 at 10:22 pm
“Is ‘global warming’ a phenomenon that only affects the Northern hemisphere?”
Sure looks that way, doesn’t it? Which, of course, appears immediately to falsify the assumption above.
Nylo says:
February 7, 2012 at 11:22 pm
“It may not be perfectly accurate, but it is a very good approximation.”
If I got the “strike” tag to work through the “very good” portion, I will have fixed that for you. How do we know how good it really is?
Jordan says:
February 8, 2012 at 12:04 am
“A fundamental question is whether the data series suffers from aliasing…Error bars produced from aliased data would be similarly unreliable.”
Or, at least, tentative. This is a big question with me, too. What is the spatial frequency distribution? Does anyone know? Maybe satellite data could determine this.
I have recommended doing a fit to a spherical harmonic expansion instead of all this ad-hoc area averaging. At least doing that, there is a little relief from the sampling theorem (Shannon-Nyquist is a sufficient condition for being able to retrieve a signal from sampled data, but not strictly necessary). Nick Stokes claims to have done it and seen little difference, but he was not forthcoming with the details of precisely what he did.
Scottish Sceptic says:
February 8, 2012 at 12:58 am
“That is to say, the longer you wait, the higher the noise level.”
True, that can happen. The question is, though, how much is signal, and how much is noise? Music tends to produce a 1/f frequency spread, too. But, aside from some of the grunge bands popular when I was in grad school, it’s not generally all noise.

February 8, 2012 1:58 am

.
1. The data I am using are just the land station data. This is called CRUTEM3 in the jargon. The combined data set HadCruT3 includes sea surface temperature data provided by Hadley centre. However, including the sea surface data changes the anomalies only slightly ( they are reduced). It is the land data which is the main driver for warming. You can see that here
2. The station data are first sorted into (lat,lon) grid points. The data may be either the anomalies or the actual temperatures. Then a simple average is made over any grid points with 2 or more stations in them. The result is a grid file with 72×36 points for each month from 1850 to 2010. Remember that many of these grid points are empty.
The monthly grid time series is then converted to an annual series by averaging the grid points over each 12-month period. The result is a grid series (36,72,160), i.e. 160 years of data.
Finally the yearly global temperatures/anomalies are calculated by taking an area-weighted average of only the populated grid points in each year. The formula for this is $Weight = cos( $Lat * PI/180 ), where $Lat is the latitude in degrees of the middle of each grid point. All empty grid points are excluded from this average.
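In code, that final step amounts to something like this Perl sketch (invented cell values, not the actual Hadley/CRU scripts):

```perl
#!/usr/bin/perl
# Minimal sketch of the area-weighted global average described above.
# %grid maps the centre of each populated 5x5 cell ("lat,lon") to its
# yearly mean value (temperature or anomaly); empty cells do not appear.
use strict;
use warnings;
use constant PI => 4 * atan2( 1, 1 );

my %grid = (
    "62.5,-2.5"   => 6.8,    # invented cell values in degrees C
    "2.5,17.5"    => 26.1,
    "-32.5,147.5" => 17.9,
);

my ( $wsum, $sum ) = ( 0, 0 );
for my $cell ( keys %grid ) {
    my ($lat) = split /,/, $cell;
    my $w = cos( $lat * PI / 180 );    # cell area shrinks as cos(latitude)
    $wsum += $w;
    $sum  += $w * $grid{$cell};
}
printf "area-weighted global mean = %.2f\n", $sum / $wsum;
```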
.
A good argument for using anomalies. But note the implicit assumption you are making: temperatures can’t rise (or fall) locally – they can only be part of a global phenomenon (caused by rising CO2) – and now I will go and measure it under this assumption.
One other problem with your argument is that the anomaly is also only measured at one place. In many cases we have a 5×5 degree grid point spanning 90,000 square miles containing just one station! So a change of 0.2 +- 0.2 degrees at one station in one year is interpreted as applying to a huge surrounding area. Perhaps that station is also in the middle of a town which grows over many decades.
I think it must be possible to define a single global temperature, since otherwise energy balance arguments become circular. There must be an effective average surface temperature of the Earth.
Clemenzi
I am using exactly the same procedure as the Hadley CRU team. I am actually using and extending their software. The averaging they use is very simple. The monthly temperature for each cell in the 5×5 degree grid is the simple average of all stations lying within the cell. They use your first equation.
However I like your second equation better – so I will try using that one and see what the result is!

Good work – I like it very much.

John Marshall
February 8, 2012 2:41 am

The ‘average’ temperature of Earth is an impossible concept even using satellites, though these give a better result than the surface system in use today. Taken over a few million years the average could be higher or lower than today’s, depending on the distance back in time chosen.
In fact there is no correct ‘average’ temperature for the planet. The one you have to suffer is the correct one for that time but not the average.

Mindert Eiting
February 8, 2012 2:44 am

Suggestion: “the great dying of the thermometers” meant that the remaining station time series correlated more highly with latitude-region time series. Many dissident stations were dropped and the remaining stations showed a more homogeneous result. Because I found a 27 sigma effect of mean correlations before and after, I’m quite sure that this happened. As a second suggestion I propose to compute the (missing-data) variance-covariance matrix of regional time series. If these cover the whole earth, the true variance of the global series can be estimated as the mean covariance in the matrix. I found for 1850-2010 a true variance of 0.0828. This determines the play-room for all temperature change in this period.

February 8, 2012 2:47 am

@Bart “FTA: “The basic assumption is that global warming is a universal,
location independent phenomenon which can be measured by averaging all
station anomalies wherever they might be distributed.”
I see no basis for such an assumption. The change in temperature should
be proportional to the local temperature, which in turn is dependent on
latitude, wind currents, and local heat capacity, at the very least.”

There is no basis for such an assumption! It could be that the Earth is covered by mini “El Niños” and local temperatures rise and fall over decades. Suppose that North America rises by 2 degrees while simultaneously a similar area in North Africa falls in temperature by 2 degrees, but globally the temperature remains static. North America is covered in weather stations whereas there are just a handful in the Sahara. The algorithm used to calculate the global anomaly will then result in a net global increase – simply due to the bias in spatial sampling.
“I have recommended doing a fit to a spherical harmonic expansion instead of all this ad-hoc area averaging. At least doing that, there is a little relief from the sampling theorem (Shannon-Nyquist is a sufficient condition for being able to retrieve a signal from sampled data, but not strictly necessary). Nick Stokes claims to have done it and seen little difference, but he was not forthcoming with the details of precisely what he did.”
Yes, this has occurred to me as well. What I was imagining was to derive a “normal” spatial temperature distribution. It would need to be a harmonic function of lat and lon. Then we could fit this function to the best station data we have, say between 1961 and 1990. The assumption would be that the function remains the same but the amplitude changes. Then the function would be sampled each year at fixed locations – out pops the time dependence of the amplitude. Of course this again assumes that “warming” is a global phenomenon and not a local phenomenon.

Geoff Sherrington
February 8, 2012 3:27 am

Nylo says: February 7, 2012 at 11:22 pm “And more to the point, we don’t care what the global temperature is, but how it is changing. And for that, the anomaly is all the information you need.”
Wrong. If you research phase changes like water vapour to rain or rain to ice, you need temperatures, not anomalies. If you research proxies like the growth of trees, or isotope-derived temperatures, you need temperatures, not anomalies, to construct calibrations.
Wrong again. There is also a mathematical problem. One can take a single station record and calculate its 1961-1990 average and so derive an anomaly. Next, you can do this to N stations in a grid cell, then simply combine the anomalies. OTOH, if you average the temperatures on the N stations in the grid cell, derive the 1961-1990 term and subtract it, you’ll get a different answer. The difference will flow through as you construct a global average.
Wrong again. It becomes more complicated when you find you are more interested in energy than in temperature, and in their relationship through equations that can involve raising to the fourth power. The sequence in which you do the conversion affects the outcome. The conversion of an anomaly temperature to watts per sq m would seem rather meaningless.
Wrong again. Even more complicated is the estimation of confidence. Are you justified in assessing errors when you have missing values in the 1961-1990 period? After all, different people might infill by using different methods. The resultant anomalies would be different and their error bars would be in different places.
And so on ad nauseam. You’ll find it’s best to use math and physics in the conventional way.
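The second point is easy to demonstrate with a toy example. In the following Perl sketch (invented numbers, with a three-year “baseline” standing in for 1961-1990) one of the two stations in a cell has a gap in the baseline, and the two orders of operation give different anomalies for the same year:

```perl
#!/usr/bin/perl
# Worked illustration: averaging station anomalies vs. taking the anomaly
# of the averaged temperature differ once a station has gaps in the
# baseline. Station A reports in all three baseline "years"; station B
# only in the last two. All numbers are invented.
use strict;
use warnings;
use List::Util qw(sum);
sub mean { return sum(@_) / @_; }

my %A = ( 1 => 10.0, 2 => 11.0, 3 => 12.0, 4 => 13.0 );   # year => temp (C)
my %B = ( 2 => 20.0, 3 => 21.0, 4 => 23.0 );              # missing year 1

# Method 1: anomalise each station against whatever baseline years it has,
# then average the anomalies available in the target year (year 4).
my $baseA = mean( @A{ 1, 2, 3 } );                        # 11.0
my $baseB = mean( @B{ 2, 3 } );                           # 20.5
my $anom1 = mean( $A{4} - $baseA, $B{4} - $baseB );       # (2.0 + 2.5)/2 = 2.25

# Method 2: average the available station temperatures first, then subtract
# the baseline of that averaged series.
my %cell = ( 1 => $A{1},
             2 => mean( $A{2}, $B{2} ),
             3 => mean( $A{3}, $B{3} ),
             4 => mean( $A{4}, $B{4} ) );
my $anom2 = $cell{4} - mean( @cell{ 1, 2, 3 } );          # 18.0 - 14.0 = 4.00

printf "average of anomalies : %.2f\n", $anom1;
printf "anomaly of average   : %.2f\n", $anom2;
```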

P. Solar
February 8, 2012 3:45 am

Excellent article, serious science. Reworking these sloppily produced (and probably rigged) temperature datasets is essential.
The marked difference you note brings the land data much closer to the general form of the SST data in that earlier period:
http://i41.tinypic.com/2s8k9ih.png

Jobrag
February 8, 2012 3:50 am

Slightly off topic, but I hope someone can help:
Q1 I heard/read somewhere that the three peak temperatures in a month were used as a benchmark for whether that month was warmer or cooler than the norm – is this correct?
Q2 If the answer to Q1 is yes, what do you get if you use the three coolest temperatures?

Kelvin Vaughan
February 8, 2012 3:51 am

That’s a coincidence that’s exactly the same warming I got from just using the Cambridge UK data.
0.4 degrees C.

P. Solar
February 8, 2012 3:55 am

Clive , could you post your revised dataset (and preferably your modified version of the perl script that produces it) ?
I’m doing some work analysing SST and other land data , it would be very interesting to pass this version of the data through the same processing. It could be quite illuminating.
You could call this new time series the “Best land surface temperature record” if no one’s thought of that yet 😉
Kudos.

Dr. John M. Ware
February 8, 2012 4:00 am

Just a quick note on the concept of a “normal” temperature for the earth. A norm is an established standard of what should be. Human body temperature (by long observation and experimentation) should be about 98.6 DF according to certain instruments used certain ways; a large deviation from that norm could kill you. We all know what our normal health looks like from how it feels to breathe to our bowel movements; we have established those parameters throughout our life and experience. Other norms exist or are established in other areas of life.
However: There is no norm for the temperature of the earth, nor for the stock market, nor for human population, nor for crop production, etc., etc. There are, to be sure, averages; but they may or may not be helpful (what, for example, is the Dow Jones stock average from its inception to the present?). Even if the average temperature for the earth’s surface could be reliably established–and I hope it can, someday–what use will be made of it? Scientific, or political? Whatever that average figure may turn out to be, let’s not mistake it for a norm; just because a current temperature can be determined, don’t say it should be exactly that temperature and must stay there. It won’t.

J. Fischer
February 8, 2012 4:01 am

“The basic assumption is that global warming is a universal, location independent phenomenon which can be measured by averaging all station anomalies wherever they might be distributed.”
Nonsense. No-one has ever said anything remotely resembling this.
“Underlying all this of course is the belief that CO2 forcing and hence warming is everywhere the same.”
Pure nonsense. Again, such a statement has never appeared anywhere in the climate-related literature.
“In principal this also implies that global warming could be measured by just one station alone.”
Utter rubbish. Seriously, what is your intention in peddling such complete fiction? You must know that what you are saying is completely untrue.

February 8, 2012 4:14 am

Hi Clive,
Thank you for all your hard work. No opinion, as a proper understanding would require considerably more time.
I was always comfortable with the concept of the anomaly, but much less so with the validity of Surface Temperature (ST) data, since the satellite Lower Troposphere (LT) data showed considerably less warming.
In 2008, I informally compared ~30 years of Hadcrut3 ST with UAH LT, and observed an incremental warming in the ST of about 0.2C, or 0.07C per decade.
http://icecap.us/images/uploads/CO2vsTMacRae.pdf
Whether this apparent ST “warming bias” can be extended back in time is perhaps a matter of subjective opinion.
Anthony Watts and his team have shown that location errors, etc. in ST measurement stations have contributed to a warming bias in the USA ST dataset. Whether due to poor ST measurement location, urban sprawl (UHI), land use change, and/or other causes, there does seem to be a warming bias in the ST data. Furthermore, there is no reason to believe this warming bias started in 1979, when the satellites were launched. The ST warming bias can probably be extended back to 1940, or even earlier.
You may find it helpful to run your own comparison of Hadcrut3 to UAH, to help calibrate your studies on ST.
Best regards, Allan
P.S.
When temperatures cooled in 2008, I made the following observation, again based on unadjusted Hadcrut3 ST and UAH LT:
“The best data shows no significant warming since ~1940. The lack of significant warming is evident in UAH Lower Troposphere temperature data from ~1980 to end April 2008, and Hadcrut3 Surface Temperature data from ~1940 to ~1980.”.
See the first graph at
http://www.iberica2000.org/Es/Articulo.asp?Id=3774
I know, it’s a complicated subject and I may be wrong. However, compared to the IPCC, my predictive track record is looking pretty good to date.
P.P.S.
On another subject, major conclusions have been made from CO2 and temperature data from ice cores. I accept that CO2 lags temperature in the ice core data, the lag being in the order of ~600-800 years (from memory) on a long time cycle. What I do question is the reliance on these numbers as absolute values as opposed to relative values. From the ice core data, we have concluded that pre-industrial atmospheric CO2 levels were about 275 ppm, and have now increased about 30% due to combustion of fossil fuels. Actual CO2 measurements at Mauna Loa started in 1958 at about 315 ppm, and now exceed 390 ppm. Earlier data from thousands of CO2 measurements compiled by the late Ernst Beck suggest that even higher CO2 levels were measured worldwide, but Ernst’s data has been dismissed, apparently brushed aside with little thought, because it disagreed with the modern paradigm. While Ernst may be wrong, I doubt that his data has been given a fair examination. I suggest that brushing aside thousands of contradictory data points because they do not fit one’s paradigm is poor science practice.

Tom in Florida
February 8, 2012 4:26 am

The problem with using anomalies is that the average person does not understand that the anomaly figure is representative of a difference from a baseline number. Unless one is aware of what baseline is being used, the anomaly carries no weight as changing baselines changes the anomaly. I believe that those using anomalies for their argument are fully aware of this ignorance by the average person and try to capitalize on that to win them over. I always look at anomalies and percentages with the following caveat: figures lie and liars figure.

A. C. Osborn
February 8, 2012 4:56 am

Nylo says:
February 7, 2012 at 11:22 pm
Actual temperature is not used because you cannot assume that the reading of one thermometer in one location is a valid temperature representative of a large area. Not even in the surrounding kilometer. Local conditions affect the temperature too much, so the absolute data is quite meaningless. However, the temperature anomaly CAN be assumed to be about the same in a large area. Because local conditions change in space, but remain quite the same in time. So if one spot is 2C higher than normal, it is quite reasonable to assume that, whatever the temperature is 1 kilometer further, it will be about 2C higher than normal as well for the corresponding location.
This is the biggest load of Warmist B**lsh*t ever; so much actual data is being lost through this stupid process it is unbelievable “scientists” would use it. This part in particular: “So if one spot is 2C higher than normal, it is quite reasonable to assume that, whatever the temperature is 1 kilometer further, it will be about 2C higher than normal as well for the corresponding location.”
It completely ignores wind direction, blocking, pressure, humidity, elevation, and wind off the sea.
I am sure that other posters can name quite a few other effects that make this an “averaging” too far.
The Temperatures are the Temperatures and reflect what the locality experiences.

February 8, 2012 4:58 am

More on Beck, etc.
Note the alleged Siple data 83 year time shift – amazing!
http://hidethedecline.eu/pages/posts/co2-carbon-dioxide-concentration-history-of-71.php
The well-known graph for CO2 is based on ice core data (“Siple”) and direct measurements from Hawaii (Mauna Loa). The Siple data ended with a CO2 concentration of 330 ppm in 1883. 330 ppm CO2 in 1883 is way too high; 330 ppm was first reached in the Mauna Loa data around 1960-70. The two graphs (Siple and Mauna Loa) were then united by moving the Siple data 83 years forward in time. The argument for doing this was that the atmospheric content of the ice was around 83 years younger than the ice. So rather “fresh” atmospheric air should be able to travel down through the snow and ice to the level of the 83-year-old ice – perhaps 50 metres down or probably more – and then the fresh air is locked into the 83-year-old ice. So a good ventilation down to 83-year-old ice, and then the ice closes. This hypothesis is still debated – but the classic Siple-Mauna Loa CO2 graph is widely used as solid fact.

Jordan
February 8, 2012 5:13 am

Bart (on February 8, 2012 at 1:56 am)
Imagine a data sampling system (time domain) where we stop taking readings whenever temperature drops below a certain level. Who could claim that such a system has a design which addresses the issue of aliasing?
Yet the spatial coverage of the surface record does something similar: we sample at convenient locations. That means not at the poles, not at the tops of mountain ranges, and so forth.
If we had started off with a design to avoid aliasing, we would have considered the properties of the signal at the outset. And used this to determine the sampling regime.
The issue is treated as a statistical problem, and studiously ignores the question of “quality” attaching to the sampling regime in the hope that it should be averaged-out. Even that is not guaranteed.
I know we are in agreement Bart, but I take a harder line on the issue of “sufficient or necessary”. This point relates (IMO) to the availability of other information which could assist us in designing a sampling regime. But isn’t that the heart of the issue – we don’t have any.

P. Solar
February 8, 2012 5:16 am

Clive Best says:
>>
Clemenzi
I am using exactly the same procedure as the Hadley CRU team. I am actually using and extending their software. The averaging they use is very simple. The monthly temperature for each cell in the 5×5 degree grid is the simple average of all stations lying within the cell. They use your first equation.
However I like your second equation better – so I will try and use that one and see what the result is !
>>
Taking an average of the T^4 value implies you are trying to take the average of the grey-body radiation. If you want to look at radiation do so, and take the average. Don’t look at temperature.
There is a lot of climate that depends on T not T^4 , such an average is not a better indication of average temperature.

Good work – I like it very much.
Dave. Your 30-day average is offset from your daily values. You are using a monthly mean and attributing it to the end of the period rather than the middle. I would guess from that error that you are also using a running mean. That is one god-awful frequency filter. Have a look at using a gaussian filter. It’s also a sliding average and can be done in much the same way, just by adding a weighting factor.

higley7
February 8, 2012 5:38 am

What bothers me about anomalies is that the raw temperatures are rarely shown. When the monthly highs and lows are shown, it is clear that the highs increase only slightly, with the rise in the lows being the main change. During a warming phase there is an increase in average temperatures, but it only means normal summers, slightly milder winters and higher low temperatures at night, which is good for growing plants.

February 8, 2012 5:49 am

Even more so, we should measure the energy given the change in temperature… in other words, the true measure has to be figuring out whether the measured energy is increasing in the lower troposphere, which would indicate trapping. A parcel of saturated air at 80 degrees contains far more energy than a parcel of dry air at -20. It takes very little energy change to raise arctic temps quite a bit, and it’s more than offset by a 1 degree fall in the tropics. If we can quantify that, we will simply see that “global warming” is merely a distortion of existing temp patterns against the normals, and there is no net change in the energy budget and hence no global warming, just natural oscillations back and forth!

1DandyTroll
February 8, 2012 5:51 am

“The argument against this, as discussed above, is that we measure just the changes in temperature and these should be independent of any location bias i.e. CO2 concentrations rise the same everywhere ! ”
As far as I’ve been told, the Met Office measures the temperatures and then calculates the changes. And you only need a gadget for measuring CO2 to know CO2 concentration doesn’t rise the same everywhere but fluctuates wildly between cities and rural places, fields and forests, and vice versa.
How can siting not be an issue? The densest coverage today is around the EU countries’ highway/freeway systems. That introduces problems, with more and far wider roads, and more stations concentrated around these road systems, compared to before 1990 and especially since the 1960s.

Jean Parisot
February 8, 2012 5:54 am

John, I agree. There are many techniques from a spatial stats perspective that could improve the temperature record and the modelling. Unfortunately, most of them would require the raw station data, which seems to be a bridge that has been burned.

February 8, 2012 5:57 am

@Clive Best
Perhaps like me you have wondered why “global warming” is always measured using temperature “anomalies” rather than by directly measuring the absolute temperatures ?
No.
In science there is a difference between accuracy and precision. Let me give an example. If 10 of 10 bowshots hit the centre of the target, the accuracy is high and the precision is high. If 10 of 10 bowshots hit to the right of the centre, exactly on the first ring, the accuracy is bad but the precision is still high. That means that the process is perfect; there is only a precise offset, maybe from a constant wind blowing to the right from the viewpoint of the bowman. This can be corrected by a calibration, if needed.
If you have two temperature tables, maybe from different latitudes, and there is an offset of a precise value, this means that the anomaly is equal for both latitudes. The conclusion is that the cause of the anomaly is independent of the latitude.
If you were to measure absolute temperatures in kelvin [K], it becomes clear that the values must be different at different latitudes. This tells you only that the measured temperatures are all precise and accurate, but you cannot conclude anything about the cause of the anomaly.
Why can’t we simply average the surface station data together to get one global temperature for the Earth each year ?
Sure you can do this, but I don’t see any scientific sense in ‘one global temperature for a calendar year’, because you flatten the high-frequency anomaly values from the twelve months.
If you look at the monthly UAH satellite data, there are about 6 to 7 soft peaks per calendar year in the GL, NH, SH and TR temperatures.
Now, the point is that the solar tide function of the synodic system of Mercury and Earth already affects the Sun 6.3040 times a year [period = 2 * ((1/0.24085) – (1/1.00002))], and this function is recognizable in the UAH GL temperature anomalies. Moreover the anomalies of the sea-level satellite data, where the (doubtful) linear increase of 3.2 mm per year is subtracted from the original data, show about 6.4 peaks per year [16 in 2.5 years] (red curve), nearly phase coherent with the solar tide frequency of Mercury and Earth.
That the sea-level spikes are phase coherent with the UAH GL temperature spikes is not a surprise, because the volume of the heated global oceans is greater than in cold phases. But it is surprising that these high-frequency functions are in phase with the solar tide functions, led by the synodic couple of Mercury and Earth.
Because a frequency analysis is possible for precise (but not necessarily accurate) data, especially for the original high-frequency monthly data, and can lead to the solar/terrestrial physics without requiring an absolutely accurate global temperature, it can be understood that an absolute “Temperature of The Year” has no real scientific value.
V.

February 8, 2012 6:23 am

Figure #1 and #2
So all we really know well is America, Western Europe and Eastern Australia????
From that we get “Global” anomalies???

climatebeagle
February 8, 2012 6:56 am

Nylo – “So if one spot is 2C higher than normal, it is quite reasonable to assume that, whatever the temperature is 1 kilometer further, it will be about 2C higher than normal as well for the corresponding location.”
Easy to disprove: take any region the size of the area used in the grids, which is way more than 1km, and then see on any day how much each location differs from its average. Living in the San Francisco Bay Area, such a statement definitely does not apply locally; temperatures can range from ~12C to 38C within less than 30 miles, and I’ve seen changes of around 10C across 3 miles. It’s hard to see how, with such ranges, a delta from normal would have the same value.

February 8, 2012 6:57 am

@J.Fischer
To try and answer your point: the way the annual global temperature anomaly is calculated assumes a global phenomenon because it uses a simple annual average of all (area-weighted) individual station anomalies. This value is then presented to politicians as hard evidence that the Earth has been warming for the last 160 years as a result of AGW.
Suppose the Earth’s climate is actually driven by a series of regional decadal oscillations such as the PDO, AMO etc. The geographic spread of stations is such that the averaging will bias the global average towards those regions with lots of stations. So if North America rises by 2 degrees while the Sahara falls 2 degrees, then the net result will be strongly positive. That is my main point. The result assumes global phenomena and rules out local phenomena.
@P.Solar
I have put the new normalised data for the annual temperature anomalies here. The 3 columns are 1) Global 2) North Hemisphere 3)South Hemisphere. You can also view the subroutine that calculates it here. If you use the Hadley PERL scripts then the “Field” passed to the routine should be Temperatures (not anomalies). Remember to respect MET Office copyrights.
MacRae
I agree that there is a tendency in the AGW mainstream to ignore data which doesn’t quite fit the story. I also find the whole paleoclimate debate fascinating. What really causes Ice Ages? Why have they been coincident with minima of orbital eccentricity for 1 million years? I have spent a long time thinking about this. The next Ice Age will begin in ~2000 years time. Can global warming save us?
@P. Solar, Robert Clemenzi
I just did the T**4 averaging and the results can be seen here. It is quite possible there could be a mistake as I did this in a hurry, but the values are massively weighted to the warm zones and high radiative terms.

February 8, 2012 7:03 am

@P.Solar Correction:
You can also view the subroutine that calculates it here

More Soylent Green!
February 8, 2012 7:28 am

I’m sure I’m not the first to post this thought, but using an anomaly as a measuring stick allows for an arbitrary baseline to be used and therefore the warming may be exaggerated. Exhibit A: The lowering of historical temperature data makes the late-20th century warming anomaly appear greater.

Paul Linsay
February 8, 2012 7:39 am

I think that Phil Jones recently admitted that 30% of all the stations showed a temperature decline. This strongly argues against any kind of global mean. Or to paraphrase a famous political saying, all climate is local.

oldtimer
February 8, 2012 8:07 am

EM Smith aka Chiefio has also analysed temperature records in depth. He adopted a different method. Along the way he discovered and revealed the changes in station counts and commented on their implications. They can be found here: http://chiefio.wordpress.com/category/dtdt/
Among other things, his approach identified what he called pivot points that marked significant changes in the stations used to record temperatures, notably around the 1990/91 period when the overall number dropped from c6000 to c1200, with only 200 of those stations common to the pre and post 1990/91 periods. He observed that scientists do not change their measuring equipment during the course of an experiment and then expect to get a valid, usable result. Yet that is a problem in the historical data sets used in climate science. It is a particular problem when the 1961-90 baseline includes c6000 stations, and the post-1990 period only includes c1200. As far as I am aware, no one knows what effect this has on the reported temperature anomalies.
A word of warning to those that visit his site. You need time to spare for he embarked on an epic journey analysing temperature records in every country that reported data, producing graphs for each one.

A. C. Osborn
February 8, 2012 8:27 am

oldtimer says:
February 8, 2012 at 8:07 am
EM Smith aka Chiefio has also analysed temperature records in depth. He adopted a different method. You need time to spare for he embarked on an epic journey analysing temperature records in every country that reported data, producing graphs for each one.
The best analysis I have seen to date, backed up by others in the Comments who have done similar analyses on individual areas.

pat
February 8, 2012 8:27 am

.
“A good argument for using anomalies. But note the implicit assumption you are making : Temperatures can’t rise (or fall) locally – they can only be part of a global phenomenon (caused by rising CO2) – now I will go and measure it under this assumption.”
Exactly. NIWA immediately comes to mind. 11 or so weather stations, all but 2 in the same location for many years. Yet the use of anomalies rather than real data covered up the fact that the real data was constantly ‘adjusted.’ NIWA seemed unaware that the historic readings were actually available to the public. A quick review of the data showed that the sharp rise in New Zealand’s temperature, as measured via the anomalies, was in fact nonexistent, and individual station increases were likely due to UHI.

Alan S. Blue
February 8, 2012 8:32 am

Nylo says at 11:22 pm
Local conditions affect the temperature too much, so the absolute data is quite meaningless. However, the temperature anomaly CAN be assumed to be about the same in a large area.

Nylo,
For the anomaly to remain about the same across the 5×5 gridcell, you are accepting an underlying assumption: That neither the weather nor the climate changed here.
You’re right about how bogus measuring ‘temperature’ with a ludicrously low number of thermometers is, but none of these thermometers have actually been ‘calibrated’ into determining gridcell anomalies. They’re used ‘as is’.
The instrumental error bars are not appropriate measures of either actual temperature or anomaly measurements for the gridcell.
The ‘anomaly method’ seems quite reasonable for getting -some- data, and the way the math all ends up with unknowables cancelling out is definitely fortuitous. But the omnipresent assumption that a given thermometer has a constant relationship with the true anomaly even in different weather and climate patterns, and the failure to calculate sane error bars are both still glaring and fatal flaws.

Editor
February 8, 2012 8:38 am

Clive
An interesting article, thank you. For those that prefer real temperatures rather than anomalies, clicking on my name will take you to my site where I collect (mostly) pre-1860 temperature data sets from around the world expressed in real terms. It’s also here:
http://climatereason.com/LittleIceAgeThermometers/
However, anomalies do help to better compare temperature changes between the various highly disparate data sets, so they do perform a worthwhile function. However they have had the side effect of becoming the basis for a meaningless (in my view) single ‘global’ temperature.
A global temperature has a number of problems, not the least of which is that it disguises regional nuances. Around one third of global stations have been cooling for some time, as we observed in the article below. Separating out the individual stations from the composite of stations used to create an ‘average global temperature’ shows that warming is by no means global, as there are many hundreds of locations around the world that have exhibited a cooling trend for at least 30 years – a statistically meaningful period in climate terms.
http://wattsupwiththat.com/2010/09/04/in-search-of-cooling-trends/
These general figures were confirmed by the recent BEST temperature reconstruction, which reckoned that 30% of all the stations they surveyed were cooling. Many of the rest (but by no means all) are in urban areas, which many of us believe do not reflect the full amount of localised warming caused by buildings/roads etc. Add in that many stations are not where they started out and have migrated to often warmer sites such as the local airport, and that many stations have been replaced by others or been deleted, and we start to see an immensely complex picture emerging where we are not comparing like with like.
There is a further complication with lack of historic context. For reasons best known to themselves GISS began their global temperatures at 1880 and as such do not differentiate themselves enough from Hadley, which began thirty years earlier. I suspect this date was chosen because this was when many of the US stations started to be established, but as regards a global reach, a start date around 1910 or so would bring in more global stations and have the advantage of greater consistency, as by that time the Stevenson screen was in almost universal use.
The start date of 1880 does not allow the context of the warmer period immediately preceding it to be seen, which means the subsequent decline and upward hockey stick effect is accentuated (the hockey stick uptick commenced with instrumental readings from 1900). I wrote about the 1880 start date here, where I link three long temperature records along the Hudson River in the USA.
http://noconsensus.wordpress.com/2009/11/25/triplets-on-the-hudson-river/#comment-13064
I think the most we can say with certainty is that we have warmed a little since the depths of the Little Ice age, which would surely come as a relief to most of us, but instead seems to be the source of much angst afflicting most of the Western World, who apparently have stopped learning history and are confused by statistics and context.
I hope you will be continuing your work and develop your ideas.
Tonyb

February 8, 2012 9:00 am

Clive MacRae
I agree that there is a tendency in the AGW mainstream to ignore data which doesn’t quite fit the story. I also find the whole paleoclimate debate fascinating. What really causes Ice Ages ? Why have they been coincident with minima of orbital eccentricity for 1 million years ? I have spent a long time thinking about this.
The next Ice age will begin in ~2000 years time. Can global warming save us ?
__________________________________________________
Hi Clive,
Re your question: Can global warming save us from the next Ice Age?
My best guess answer is no, not even close. Even if the mainstream argument is correct (that CO2 drives temperature), this will be no contest – like firing a peashooter into a hurricane.
Geo-engineering might work; don’t know.
Ironic isn’t it, that our society is obsessed with insignificant global warming, just prior to an Ice Age?
I recommend Jan Veizer and Nir Shaviv on the science.

J. Fischer
February 8, 2012 9:41 am

Clive Best: “The way the annual global temperature anomaly is calculated assumes a global phenomenon because it uses a simple annual average of all (area weighted) individual station anomalies.” – actually no, that’s not how it is done. Even if it were, that would not imply anything about the global or otherwise nature of the change measured.
“The geographic spread of stations is such that the averaging will bias the global average to those regions with lots of stations. So if North America rises by 2 degrees while the Sahara falls 2 degrees then the net result will be strong positive. That is my main point. The result assumes global phenomena and rules out local phenomena.”
I’m afraid you’re totally wrong. Biases could indeed be introduced by coverage issues, but this has absolutely no bearing on whether phenomena are local or global. And one can estimate the size of those biases, and this has of course been done. Why do you think GISS temperatures only start at 1880, even though there are temperature records which go back two centuries before that? Have you, in fact, read any of the papers describing the GISS and other ground-based temperature datasets?

Alan S. Blue
February 8, 2012 10:13 am

Clive,
One recurring question I have that you might be able to answer would involve an actual calibration with the satellite data.
That is:
Pick a gridcell.
Calculate the ‘average’ for that gridcell.
(You’ve done this for all the cells.)
Now:
Do a -calibration- between a single gridcell and the satellite data’s best estimate of surface temperature for that same cell.
There are regular reports that the surface stations ‘agree in general’ with the satellites. But these are -correlation- studies, not intended to be calibrations. And they tend to compare the end results (GMST) instead of comparing gridcell-by-gridcell.
This would arrive at an estimate of the actual error in a gridcell’s temperature measurement.
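Something like the rough Perl sketch below is what I have in mind; the two monthly series are made-up placeholders for a single 5x5 cell, one from the station average and one from a satellite product, so this is only an illustration of the calibration step, not a real result:

#!/usr/bin/perl
# Sketch only: calibrate one grid cell's station average against a satellite
# estimate for the same cell. Both monthly series below are made-up placeholders.
use strict;
use warnings;

my @station   = (14.2, 15.1, 16.8, 18.0, 19.5, 21.1, 22.4, 22.0, 20.3, 18.1, 16.0, 14.6);
my @satellite = (13.9, 14.8, 16.2, 17.6, 19.0, 20.8, 22.1, 21.8, 19.9, 17.8, 15.7, 14.3);

# Ordinary least squares fit: satellite = a + b * station
my $n = scalar @station;
my ($sx, $sy, $sxx, $sxy) = (0, 0, 0, 0);
for my $i (0 .. $n - 1) {
    $sx  += $station[$i];
    $sy  += $satellite[$i];
    $sxx += $station[$i] ** 2;
    $sxy += $station[$i] * $satellite[$i];
}
my $b = ($n * $sxy - $sx * $sy) / ($n * $sxx - $sx ** 2);   # slope
my $a = ($sy - $b * $sx) / $n;                              # intercept

# Residual standard error = an error estimate for this cell's surface record
my $ss = 0;
for my $i (0 .. $n - 1) {
    my $resid = $satellite[$i] - ($a + $b * $station[$i]);
    $ss += $resid ** 2;
}
my $rse = sqrt($ss / ($n - 2));

printf "slope %.3f, intercept %.3f C, residual std error %.3f C\n", $b, $a, $rse;

Doing that cell by cell, rather than only at the GMST level, is the difference between a calibration and a correlation study.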

CoonAZ
February 8, 2012 10:22 am

Even if anomalies are calculated on a station-by-station basis, it seems to me that, in general, 50% of the data should be in the positive range and 50% should be in the negative range (unless the date range for the “mean” is different from the data range). So when I look at Figure 2 of Mann et al. (2012) it doesn’t seem to fit that impression, as less than 10% of the data is in the positive region.
http://i44.tinypic.com/r1zmhf.jpg

Paul
February 8, 2012 10:24 am

Clive Best asked about Fig1: Globally averaged temperatures based on CRUTEM3 Station Data,
“So why is this never shown ?” Well I don’t know, but if I had to guess it’s because it’s not scary looking enough.

henrythethird
February 8, 2012 10:46 am

“…So if one spot is 2C higher than normal, it is quite reasonable to assume that, whatever the temperature is 1 kilometer further, it will be about 2C higher than normal as well for the corresponding location…”
The problem with this assumption is whether we actually know what “normal” is. The charts are drawn so that everything is either above or below “zero”, with no explanation of how the “zero” was determined.
I’ve constantly harped on the fact that each database uses a different grouping of stations, adds or drops the Arctic, and uses a different averaging period.
Since we can all agree that the earth is warming, we can state that any past averaging period will be colder than today. As your averaging period moves closer to today, the “zero” will change.
They get around this by stating “well, it’s the TREND that’s more important, not the zero”.
This is a problem.
Let’s say that your accountant told you you’ve seen a 30 dollar rise in your income. Have you:
1. Started at 45 below, and are now 15 below,
2. Started at 30 below and are now at zero,
3. Started at 15 below and now at 15 above…
You can see that the trend is the same, and can continue on this track.
Add to that the fact that we’re only tracking trends of about 1 degree of rise, and the use of a “zero” on an anomaly chart becomes useless.
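A tiny Perl sketch with made-up numbers makes the point: changing the baseline moves the zero but leaves the trend untouched:

#!/usr/bin/perl
# Sketch only: the same made-up temperature series expressed as anomalies
# against three different hypothetical baselines. The zero moves, the trend doesn't.
use strict;
use warnings;

my @temps     = (13.9, 14.0, 14.1, 14.2, 14.3, 14.4);   # made-up annual means, deg C
my @baselines = (14.4, 14.1, 13.8);                     # three arbitrary "normal" values

for my $base (@baselines) {
    my @anom = map { sprintf "%+.1f", $_ - $base } @temps;
    # Total rise = last anomaly minus first anomaly; identical for every baseline
    my $rise = ($temps[-1] - $base) - ($temps[0] - $base);
    printf "baseline %.1f C: anomalies = %s, total rise = %+.1f C\n",
           $base, join(", ", @anom), $rise;
}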

February 8, 2012 10:56 am

Thanks to everyone for the comments.
@Volker Doorman
I agree with you that averaging over monthly and local data to get just one temperature per year is not that smart. However, a cynic might suspect this is exactly what is being done to produce “Hockey Stick”-type graphs for the public as hard evidence of AGW.

Also agree that the local effects are important. I once did a back-of-the-envelope calculation for the urban heating effect and came up with reasonably large values.
Total average world energy consumption rate (fossil, nuclear, hydro) = 15 TW (Wikipedia, for 2005, and increasing by 2%/year). My guess is that 80% of this is eventually converted to heat (second law of thermodynamics).
Land surface area of the Earth = 150 x 10**12 m2, of which urban areas are ~1.5%.
If we assume that this energy consumption is concentrated in these urban areas, then the human heating effect there works out at 5.5 watts/m2.
Radiant energy from the sun is distributed unevenly over the Earth’s surface, but the average absorbed energy globally is 288 watts/m2. This energy is then radiated from the surface as heat (infrared), so the human-generated heating at the surface can be compared directly with it.
Direct heating by Man in urban areas comes out at approximately 2% of direct solar radiant heating.
These are just ballpark figures and will depend on the latitude of the urban areas and their locations. Also, not all human energy consumption is released in urban areas, but counteracting this is the fact that most large cities are at high northern latitudes. The anthropogenic effect at night and in winter could be even higher.
What temperature increase does this lead to? We use the Stefan-Boltzmann law:
(T + DT)**4 = 1.02 T**4
(1 + DT/T)**4 = 1.02
For small DT/T, (1 + DT/T)**4 ≈ 1 + 4 DT/T, so
DT ≈ (0.02/4) T (with T = 285 K)
DT ≈ 1.4 degrees ! (i.e. the averaged temperature increase in urban areas)
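For what it is worth, here is the same chain of arithmetic as a small Perl script, taking the figures above at face value (with no rounding it comes out at roughly 5.3 W/m2 and 1.3 degrees rather than the rounded 5.5 and 1.4 quoted, but the order of magnitude is the point):

#!/usr/bin/perl
# Sketch only: reproduces the back-of-the-envelope urban heating estimate above,
# using the quoted figures at face value.
use strict;
use warnings;

my $world_power   = 15e12;          # W, total energy consumption rate
my $heat_fraction = 0.8;            # assumed fraction ending up as heat
my $land_area     = 150e12;         # m^2, land surface of the Earth
my $urban_frac    = 0.015;          # ~1.5 % of land is urban

my $urban_flux = $world_power * $heat_fraction / ($land_area * $urban_frac);
printf "human heat flux over urban areas ~ %.1f W/m^2\n", $urban_flux;

my $surface_flux = 288;             # W/m^2, figure used in the comment above
my $ratio        = $urban_flux / $surface_flux;
printf "fraction of surface radiative flux ~ %.1f %%\n", 100 * $ratio;

# Stefan-Boltzmann: (T+DT)^4 = (1+ratio)*T^4  =>  DT ~ (ratio/4)*T for small ratio
my $T  = 285;                       # K
my $DT = $ratio / 4 * $T;
printf "implied urban temperature increase DT ~ %.1f C\n", $DT;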

February 8, 2012 11:08 am

@Clive Best
Cheers – I am glad that you like it, I’ll keep going. Hopefully one day it will become a useful resource in this debate. I was appalled by the contents of the ClimateGate emails and decided I needed to know the truth. I grew tired of the “It’s warm in Wagga Wagga” jibes. Incidentally this is Wagga Wagga since the beginning of the year http://www.theglobalthermometer.com/igraphs/stations/YSWG.png – there’s plenty of ways to splice the data once collected 😉
@P Solar
“You are using monthly mean and attributing it to the end of the period not the middle. I would guess by that error that you will also be using a running mean. That is one god-awful frequency filter.”
Spot on – it’s the 30 day running mean updated hourly. It probably represents the average temperature of around a couple of weeks earlier. As I say, it is what it is, raw and unadulterated.

February 8, 2012 11:14 am

A third reason, already alluded to by a previous commenter: there has been no global warming in the Southern Hemisphere since 1952, as shown by Figure 1. The GAT is derived by COMBINING the NH and SH temperatures, hence through unethical (or incompetent) mathematical trickery the GAT is biased by the NH temperature trend. This is a violation of basic mathematical theory and statistical practice, which is why most engineers and meteorologists reject AGW. The entirety of the AGW hoax should have been unravelled years ago by focusing on this embarrassingly obvious fact. No need to calculate W/m^2, blah, blah, blah, as this is an obfuscation OF THE DATA. No matter how flawed the siting of the temperature recording system, or any of the other multitude of issues, the irreconcilable fact of ZERO positive SH temperature trend falsifies the AGW hypothesis, notwithstanding the hand-waving about water vapor.
AGW is not a theory, as it is not universally recognized by the scientific community; it never has been, given that such notable people were dissenting, e.g. Fred Singer, the father of the US weather satellite system, Bill Gray, the hurricane researcher, etc. That is why the AGW hoaxers have always played upon the imprimatur of “consensus”.

mwhite
February 8, 2012 11:26 am

“Perhaps like me you have wondered why “global warming” is always measured using temperature “anomalies” rather than by directly measuring the absolute temperatures ?”
http://junksciencearchive.com/MSU_Temps/NCDCabs.html
NCDC Global Absolute Monthly Mean Temperatures from 1978 to present.
http://junksciencearchive.com/MSU_Temps/NCDCanom.html
Now the Global Monthly Mean Temperature Anomalies.
Ask yourself: which graph would you publicise to promote the AGW cause?

February 8, 2012 11:39 am

@ J.Fischer
I don’t know about GISS, but I do know what the Hadley software does. Each station has 12 monthly normals, which are the average temperatures for each month (e.g. Jan, Feb, Mar…) between 1961 and 1990. Then:
1. Anomalies are defined for each station by subtracting the monthly normal from the measured value for that month. Stations without normals for 1961-1990, or where any anomaly is > 5 standard deviations, are excluded.
2. The world is divided into a 5×5 degree grid of 2592 points. For each month the grid is populated by averaging the anomalies of any stations present within each grid point. Most grid points are actually empty – especially in the early years. Furthermore, the distribution of stations with latitude is highly asymmetric, with over 80 percent of all stations outside the tropics.
3. The monthly grid time series is then converted to an annual series by averaging the grid points over each 12-month period. The result is a grid series (36, 72, 160), i.e. 160 years of data.
4. Finally, the yearly global temperature anomalies are calculated by taking an area-weighted average of all the populated grid points in each year. The weighting formula is $Weight = cos( $Lat * PI/180 ), where $Lat is the latitude in degrees of the middle of each grid point. All empty grid points are excluded from this average. (A rough code sketch of these four steps follows below.)
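Here is that rough Perl sketch – emphatically not the actual Hadley code. A single made-up station and one year of data stand in for the full CRUTEM3 archive, and the monthly/annual averaging order is simplified (for one station with a complete year the order makes no difference):

#!/usr/bin/perl
# Rough sketch of steps 1-4 above - NOT the actual Hadley code.
use strict;
use warnings;
use POSIX qw(floor);

my $PI = 4 * atan2(1, 1);

# Hypothetical station: location, 1961-90 monthly normals, one year of monthly means
my %station = (
    lat     => 52.2,
    lon     => 0.1,
    normals => [3.8, 4.0, 6.3, 8.8, 12.1, 15.0, 17.1, 16.9, 14.4, 10.8, 6.8, 4.7],
    temps   => [4.9, 5.2, 7.1, 9.5, 12.8, 15.9, 18.0, 17.6, 15.0, 11.5, 7.3, 5.4],
);

# Step 1: anomaly = measured monthly value minus the 1961-90 normal for that month
my @anom = map { $station{temps}[$_] - $station{normals}[$_] } 0 .. 11;

# Steps 2 and 3 (simplified): annual station anomaly, assigned to its 5x5 degree cell
my $annual = 0;
$annual += $_ for @anom;
$annual /= 12;

my $row = floor((90 - $station{lat}) / 5);     # 0..35, north to south
my $col = floor(($station{lon} + 180) / 5);    # 0..71, from 180W eastwards
my %grid;                                      # cell -> list of annual station anomalies
push @{ $grid{"$row,$col"} }, $annual;

# Step 4: area-weighted average over populated cells only, weight = cos(latitude)
my ($sum, $wsum) = (0, 0);
for my $cell (keys %grid) {
    my ($r) = split /,/, $cell;
    my $mid_lat = 90 - 5 * $r - 2.5;           # latitude of the cell centre
    my $weight  = cos($mid_lat * $PI / 180);
    my @vals    = @{ $grid{$cell} };
    my $mean    = 0;
    $mean += $_ for @vals;
    $mean /= scalar @vals;
    $sum  += $weight * $mean;
    $wsum += $weight;
}
printf "global annual anomaly (one station!) = %+.2f C\n", $sum / $wsum;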
You wrote: “I’m afraid you’re totally wrong. Biases could indeed be introduced by coverage issues, but this has absolutely no bearing on whether phenomena are local or global. And one can estimate the size of those biases, and this has of course been done. Why do you think GISS temperatures only start at 1880, even though there are temperature records which go back two centuries before that? Have you, in fact, read any of the papers describing the GISS and other ground-based temperature datasets?”
I think that starting at 1880 is better than starting at 1850, but even in 1880 only 8% of all grid points contain at least one station. I think the normalisation study shows that the data are consistent and free from bias only after 1900 (look at Figure 5). After 1920 the global averages are a reliable reflection of the station data.
On the local/global issue: there are essentially zero stations in Saudi Arabia and Yemen. How would GISS detect it if temperatures were to fall there by 2 degrees?

common sense
February 8, 2012 12:44 pm

This got me thinking… I am curious whether there is a sleight of hand in the relationship of Celsius to Fahrenheit when readings are kept only to whole degrees, given that one Celsius degree spans about 1.8 Fahrenheit degrees across the typical habitable temperature range, correct?
Perhaps it is nothing, or perhaps it is small, but I’m not just skeptical, I’m a pessimist, as I just do not trust people. If truth cannot stand up to scrutiny, then by default it is not true.

Steve Garcia
February 8, 2012 12:52 pm

Can empty grid points similarly affect the anomalies? The argument against this, as discussed above, is that we measure just the changes in temperature and these should be independent of any location bias i.e. CO2 concentrations rise the same everywhere ! [emphasis added]

I know Clive is being snarky, but this is almost certainly what the warmists do say.
But in jumping from temperatures to CO2 – and equating the two – they are jumping ahead to the conclusion/asserted correlation and using the conclusion to prove the conclusion. They don’t see the contradiction or the flawed logic, but it is as plain as the nose on your face. They would get an ‘F’ in Logic 101 with that kind of reasoning. Since they believe that rising CO2 concentrations equal increased temperatures, they have no trouble saying this. But that is exactly why there is a skeptical community in the first place – their conclusion came before the science, so it drives their science. They simply can’t do that and still maintain that it is scientific reasoning.

David A. Evans
February 8, 2012 1:31 pm

Joe Bastardi
Thanks Joe.
I, along with many others, have been saying similar things here for quite some time. If the temperature in low-latitude areas were to change by just -0.1°C, it would be possible to see many degrees of change in, say, the high Arctic. This lets them say “Look at x in the Arctic circle, it’s boiling!”.
Without taking humidity into account, temperature is meaningless in terms of energy!
DaveE.

1DandyTroll
February 8, 2012 1:53 pm

@More Soylent Green! says:
February 8, 2012 at 7:28 am
“I’m sure I’m not the first to post this thought, but using an anomaly as a measuring stick allows for an arbitrary baseline to be used and therefore the warming may be exaggerated. Exhibit A: The lowering of historical temperature data makes the late-20th century warming anomaly appear greater.”
Exactly.
The current WMO baseline is 1961-1990, and for a few years now – from around 2007 onwards – more and more EU countries, my own included, have adopted it for comparing national averages.
But if one compares to the previous WMO baseline: not much warming, if any.
If one compares to the nationally compiled baselines: not much warming, if any.
If compared to the satellite era: not much warming, if any.
The same applies to the supposed 4-inch sea-level rise in the Baltic Sea over the last 30 years. At all the diving spots I’ve been to, the land rise seems to have won, though.
Some EU countries, like Bulgaria, have their climate statistics compiled by the Hong Kong Observatory, which spells WMO, and apparently snow is something rare in Bulgaria – just ask the news stations of the northern EU countries, who are amazed at recent years’ “extreme weather” in Bulgaria.
Is it the weather or climate that is extreme and exaggerated, or is it the WMO data and the abuse of statistics, I wonder. For instance, how do you check how much of the data is missing if you’re not allowed to see the original, unmolested raw Swedish data? Apparently Swedes can access everybody else’s data, but nobody is allowed to access theirs, not even the Swedes themselves. :p

Nylo
February 8, 2012 2:01 pm

Wow, I’m astonished as to the number of responses my comment has received. A few clarifications:
1) I was explaining why anomalies are used instead of absolute temperatures. I explained why using anomalies is preferable. I never meant that it is perfect. When I say that you can get a very good approximation to the average anomaly, I mean in comparison with trying to do the same calculation of the average temperature of the planet using temperature values from thermometers. I would not believe any error bars of less than 5 full degrees for the latter, with the currently available thermometers. Trying to estimate the average temperature of a 500x500 km area from the reading of one thermometer? C’mon. I may believe, however, that IF that single thermometer shows an average temperature increase of half a degree over the previous 50 years, and IF its surroundings have not changed significantly, then it is very likely that the temperature in all of that 500x500 km area has also risen, somewhere close to that half a degree as well. I can believe that the error bars in calculating anomalies for a big area in that way are very probably less than half a degree. That is still quite poor performance given the small change that we are trying to measure, but it is about an order of magnitude better than the alternative.
2) That anomalies do not change much over distances of even a few hundred kilometers is an underlying assumption in calculating global temperature anomalies that I have not yet seen discredited. Please show me two good temperature records from two well placed thermometers not too far from each other in which the resulting anomalies are very different from each other.
3) Even if it turns out that this is not just a bad assumption but a completely wrong one, to use absolute temperatures instead of temperature anomalies you would need to demonstrate that you get a BETTER result, a better representation of reality. And there is no way you can demonstrate that.
All of this is common sense. I live in Madrid. I know that the temperature in Retiro Park is 1 or 2 degrees colder than the temperature in the surrounding streets, always. If I have a thermometer in Retiro and another one 1 km into the city, I can use NEITHER of them to represent the temperature of the city. It would be wrong in both cases. However, they are both very likely to agree that today’s temperature is X degrees hotter or cooler than normal for the city. So I can look at EITHER of them for that information. And not only that: if the Retiro thermometer says that the city is 5 degrees colder than normal, it is very likely colder than normal as well in all the towns around Madrid, even at distances of hundreds of kilometers. The farther away, the less likely they are to agree, of course. But absolute temperature? It could differ by more than 20 degrees due to different local conditions. So when you have a sampling problem, and it is not possible not to have one on this big planet, the only reasonable thing to use is anomalies. Over big areas the assumption may be wrong, but there is no reason to believe that it will be wrong in a particular direction (warm or cold). So you can put bigger error bars on it if you want, but not claim bias of any kind.

clipe
February 8, 2012 4:48 pm
sky
February 8, 2012 6:20 pm

Nylo says:
February 8, 2012 at 2:01 pm
“So when you have a sampling problem, and it is not possible not to have one on this big planet, the only reasonable thing to use is anomalies. Over big areas the assumption may be wrong, but there is no reason to believe that it will be wrong in a particular direction (warm or cold). So you can put bigger error bars on it if you want, but not claim bias of any kind.”
You are correct that resorting to anomalies per se doesn’t introduce bias. However, the common assumption that anomalies at stations within the same region are coherent enough to be interchangeable certainly can. Look at the anomaly discrepancies between Retiro Park and Barajas airport (14 km away), as well as those at Valladolid (166 km), Zaragoza Aero (266 km) and Badajoz (316 km). You will find not only appreciable differences in the year-by-year anomalies, but significant differences in multi-decadal “trends.” Bias arises from the UHI-affected station-shuffle that is endemic throughout “climate science” in synthesizing “average anomaly” series over time scales of a century or longer.
Only intact century-long records at relatively UHI-free stations can provide unbiased results. Unfortunately, such records are very much in short supply throughout much of the globe, leaving many grid boxes totally devoid of unbiased data series. Neither blind anomaly averaging nor the statistical splicing of fragmented actual temperature series done by BEST overcomes that fundamental deficit of high-quality data.
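Setting up that comparison takes only a few lines of Perl. The two anomaly series below are invented placeholders, not the real Retiro Park or Barajas records; the point is the kind of output (year-by-year differences and per-station trends) one would inspect:

#!/usr/bin/perl
# Sketch only: year-by-year anomaly differences and simple linear trends for
# two stations. The series are made-up placeholders, not real station data.
use strict;
use warnings;

my @years  = (2001 .. 2010);
my @anom_a = ( 0.10, 0.25, -0.05, 0.30, 0.45, 0.20, 0.55, 0.40, 0.60, 0.70);  # station A anomalies, C
my @anom_b = (-0.05, 0.30,  0.10, 0.15, 0.60, 0.10, 0.35, 0.65, 0.45, 0.95);  # station B anomalies, C

sub trend {                         # least-squares slope, converted to C per decade
    my ($x, $y) = @_;
    my $n = scalar @$x;
    my ($sx, $sy, $sxx, $sxy) = (0, 0, 0, 0);
    for my $i (0 .. $n - 1) {
        $sx  += $x->[$i];
        $sy  += $y->[$i];
        $sxx += $x->[$i] ** 2;
        $sxy += $x->[$i] * $y->[$i];
    }
    return 10 * ($n * $sxy - $sx * $sy) / ($n * $sxx - $sx ** 2);
}

for my $i (0 .. $#years) {
    printf "%d  A %+.2f  B %+.2f  difference %+.2f\n",
           $years[$i], $anom_a[$i], $anom_b[$i], $anom_a[$i] - $anom_b[$i];
}
printf "trend A %+.2f C/decade, trend B %+.2f C/decade\n",
       trend(\@years, \@anom_a), trend(\@years, \@anom_b);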

February 9, 2012 6:06 am

Without taking humidity into account, temperature is meaningless in terms of energy!
DaveE.

Exactly – they are not measuring total heat in Btu/lb or kcal/kg, they are only measuring partial heat in the form of sensible heat.
Additionally, the lower the RH% of the air, the greater the swing in sensible temperature in response to a given heat input. A 5-degree rise in dry air is NOT the same energy increase as a 5-degree rise in humid air. Hence using the anomaly method introduces a bias dependent on RH%. So if the prevalent number of monitoring stations are at high altitudes, high latitudes and in areas of consistently low rainfall, i.e. areas of low RH%, these will bias the anomaly UP. This is probably why the SH anomaly trend shows NO INCREASE in the past 30 years, it being a water-dominated region. And no, the decrease is NOT going to be proportional on the down side, as you are more apt to encounter the DEW POINT on the way down, skewing/minimizing the temperature decrease due to the energy released while condensation is occurring, i.e. the latent heat of condensation. It’s called basic physical science. Anyone who knows the psychrometric chart and the mechanics of water’s changes of state (meteorologists and engineers) realizes how truly scientifically ignorant the AGW cultists are. It is stunning to me as an engineer that any scientist is fooled by the AGW argument. If I were said scientist I would demand my money back from the degree-issuing institution.
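To put rough numbers on the dry-air versus humid-air point, here is a small Perl sketch using the standard textbook psychrometric approximations (Tetens saturation vapour pressure and moist-air enthalpy h = 1.006*T + W*(2501 + 1.86*T) kJ per kg of dry air). The 25 C to 30 C at constant 50% RH example is mine, chosen only for illustration:

#!/usr/bin/perl
# Sketch only: energy needed to warm dry air 5 C versus warming moist air 5 C
# while its relative humidity stays constant. Standard textbook approximations.
use strict;
use warnings;

my $P = 101.325;                                    # kPa, sea-level pressure

sub sat_vp    { my $t = shift; return 0.6108 * exp(17.27 * $t / ($t + 237.3)); }   # kPa (Tetens)
sub hum_ratio { my ($t, $rh) = @_; my $e = $rh * sat_vp($t); return 0.622 * $e / ($P - $e); }
sub enthalpy  { my ($t, $w) = @_; return 1.006 * $t + $w * (2501 + 1.86 * $t); }    # kJ/kg dry air

my ($t1, $t2) = (25, 30);                           # C, an arbitrary 5 C warming

my $dh_dry = enthalpy($t2, 0) - enthalpy($t1, 0);
my $dh_wet = enthalpy($t2, hum_ratio($t2, 0.5))
           - enthalpy($t1, hum_ratio($t1, 0.5));    # constant 50 % RH

printf "dry air:     %.1f kJ/kg for a 5 C rise\n", $dh_dry;
printf "50%% RH air:  %.1f kJ/kg for the same 5 C rise\n", $dh_wet;

With these assumptions the humid case needs roughly two to three times the energy of the dry case for the same temperature rise, which is the sensible-versus-total-heat distinction being made above.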

Robert Clemenzi
February 9, 2012 10:16 pm

Clive, I like your T**4 plots.
Question – why is the temperature so much higher? Is that the actual data, or did you add an offset? I expected the data to cluster around 15°C, not 23°C.
I find it very interesting that the change in temperature using T**4 is about one fourth the official value. Do you have any idea why the discontinuity around 1951 disappeared? Very strange.

Reply to  Robert Clemenzi
February 10, 2012 3:03 am

Robert,
Sorry – I did it too fast and made a mistake in the coding!! I actually calculated ((sum(T^4))^0.25 - 273)/N, and it should be (sum(T^4)/N)^0.25 - 273. The correct result can now be seen here.
The red curves are the T^4 averaging. The 1951 step up is still there. What is interesting is that the Southern Hemisphere results are almost identical; it is the NH which changes. From 1951 onwards there is no evidence of warming in the Southern Hemisphere.
Apologies for the mistake.
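For anyone following along, the difference between the buggy and corrected formulas is easy to see with a few made-up Kelvin values:

#!/usr/bin/perl
# Sketch only: the buggy and the corrected T^4 averaging, applied to a few
# made-up absolute temperatures (Kelvin).
use strict;
use warnings;

my @T = (288.6, 259.3, 301.2, 275.0, 294.8);   # hypothetical gridpoint temperatures, K
my $N = scalar @T;

my $sum4 = 0;
$sum4 += $_ ** 4 for @T;

my $buggy     = (($sum4 ** 0.25) - 273) / $N;  # ((sum T^4)^0.25 - 273)/N  (the mistake)
my $corrected = ($sum4 / $N) ** 0.25 - 273;    # (sum T^4 / N)^0.25 - 273  (the fix)

printf "buggy:     %.2f C\n", $buggy;
printf "corrected: %.2f C\n", $corrected;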

George E. Smith;
February 12, 2012 12:52 pm

So the warmest that the Northern Hemisphere has ever been, as right now, is 16 deg C, and the coldest that the Southern Hemisphere has ever been, as in 1865, is also 16 degrees.
Now we know that the Earth is farthest from the sun during the southern winter and closest during the northern winter, so maybe that is why the Northern Hemisphere has never been as cold as the Southern Hemisphere.
Maybe something is wrong with this picture.

George E. Smith;
February 12, 2012 1:50 pm

Well, any global temperature data based on a 5 x 5 degree grid cell is bound to give erroneous impressions. That sort of gridding makes the SF Bay Area temperature the same as the Mojave Desert.
A lot of this total silliness can be traced right back to Trenberth et al’s global energy budget. He has an average of 342 W/m^2 all over the planet arriving, and 390 W/m^2 emitted from the surface.
Well the fatal error is in that 342 W/m^2 arrival rate.
WATTS ARE POWER, NOT ENERGY!!!
The current official value of TSI released recently by NASA is (roughly) 1362 W/m^2. It is NOT 342 W/m^2.
If you have 342 W/m^2 arrival power density, and 342 W/m^2 exit power density, then basically nothing happens; it’s roughly an equilibrium situation; and the earth IS NOT in thermal equilibrium.
The arrival power density on Earth, averaged over the year for the radial orbit variation, is 1362 W/m^2, not 342, which is only about one quarter as much.
That means that the point directly in line with the sun has a net Insolation over exit power density of maybe 3/4 of that 1362 value or about 1020 W/m^2.
Actually, it will be a bit less than that, because the sunlit portion of the Earth will be substantially hotter than the average of 288 K from which Trenberth’s 390 W/m^2 surface power density emission derives, and some hot desert areas can actually emit as much as twice the power rate corresponding to the global average temperature.
So the Earth is absorbing far more solar energy than Trenberth gives credit for, because the incoming power density is four times his number, and a large portion of that high power density goes right into the deep oceans, which never reach the very high radiant emittance of the tropical deserts. As the Earth rotates, each portion of the surface that comes under sunlight receives an incoming solar power density that is much higher than Trenberth’s numbers. That 1362 is of course reduced by atmospheric losses such as cloud isotropic scattering (of sunlight), blue-sky scattering losses, and GHG atmospheric trapping losses – at least by H2O, O3 and CO2, all of which have absorption bands within the solar spectrum, where at least 99.9 percent of the solar energy resides.
Any cook knows that you do not get the same result if, instead of what is stated in the recipe book, you supply four times the power for one quarter of the time to your soufflé.
And you don’t get the same result in meteorology or climatism either.
Need I repeat it? TSI is the POWER DENSITY of arriving solar energy; you cannot simply integrate POWER into an averaged total ENERGY accumulation and expect that physical phenomena will respond exactly the same to those changed conditions.
Trenberth’s cartoon global energy budget is at the heart of the phony story that climatists spread, and Trenberth is one of those members of the Climate Science Rapid Response Team that has been dreamed up by the desperados to counteract the effects of people like Chris de Freitas, Roy Spencer, Fred Singer, Willie Soon, and Sallie Baliunas, all of whom are known to be in the deep pockets of BIG OIL. WELL, THAT’S WHAT THE PARTY LINE CLAIMS.
The February 2012 issue of Physics Today carries a piece of tripe by one Toni Feder about the “harassment” of climate scientists; no doubt (s)he is thinking of the arrest of James Hansen for his public antics.
No, the article doesn’t say a word about the hounding of Soon/Baliunas or Chris de Freitas, or the much-publicized threat to bash someone’s head in, or the equivalent, the next time he met one of the well known skeptics.
So now we have the climate physics police force to tell us what science to believe. Believable science is self-convincing; it’s the observed facts that you can believe, not the terracomputer simulations, and certainly not factually incorrect depictions like Trenberth et al’s phony “global energy budget”.

George E. Smith;
February 12, 2012 2:00 pm

I should add to the above that I am NOT knocking Dr Kevin Trenberth. I don’t know him; never met him; his name just happens to be on that silly chart of global energy budgets. I have no idea what his formal academic credentials are, but I suppose I could google that.
As a Kiwi, it is rather embarrassing for me to think that sloppy physics is being purveyed by someone who presumably had the benefit of the same education system that was available to me; so my criticism is of the work, not the person, who I assume is a good Kiwi chap. Well, so are Vincent Gray and Chris de Freitas. Now I suppose Trenberth and Gavin Schmidt’s climate police will go after Professor Davies for noticing that THE CLOUDS ARE FALLING. Hey, it’s the clouds, NOT the sky.

February 20, 2012 10:03 am

R. Gates: “In short, claiming ‘natural variability’ as a cause is no explanation at all, as science is all about finding the reasons behind that variability.” Thanks. This seems to be a particularly hard point for some to get.