Excerpt from The Inconvenient Skeptic by John Kehr
The longer I am involved in the global warming debate the more frustrated I get with the CRU temperature data. This is one of the most commonly cited sources of global temperature data, but the numbers just don’t stay put. Each and every month the past monthly temperatures are revised. Since I enter the data into a spreadsheet each month I am constantly seeing the shift in the data. If it were the third significant digit it wouldn’t bother me (very much), but it is much more than that.
For example, I have gathered two very different values for the January 2010 anomaly since September 2010. Here are the values for January based on the date I gathered them.
Sep 10th, 2010: January 2010 anomaly was 0.707 °C
Jan 30th, 2011: January 2010 anomaly is now 0.675 °C
That is a 5% shift in the value for last January that has taken place in the past 4 months. All of the initial months of the year show a fairly significant shift in temperature.
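For anyone who wants to check the arithmetic, a bc one-liner using the two values quoted above:

echo "scale=4; (0.707 - 0.675) / 0.707 * 100" | bc
# prints 4.5200, i.e. the roughly 5% shift quoted above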
Monthly Temperature values for global temperature change on a regular basis.
Read the entire post here
=============================================================
Some of this may be related to late reporting of GHCN stations, a problem we’ve also seen with GISS. Both GISS and CRU use GHCN station data, which are received via CLIMAT reports. Some countries report better than others: some update quickly, some take weeks or months to report in to NOAA/NCDC, which manages GHCN.
The data trickle-in problem can affect the determination of the global temperature and any pronouncements made about it. What looks like a record month at the time of the announcement may no longer be one a few months later, when all of the surface data are in. It might be valuable to go back and revisit such claims, to see how much the monthly temperature values quoted in past news reports have since shifted.
More on CLIMAT reports here in the WMO technical manual.
UPDATE: For more on the continually shifting data issue, see this WUWT post from John Goetz on what he sees in the GISS surface temperature record:
http://wattsupwiththat.com/2008/04/08/rewriting-history-time-and-time-again/

BBC4: The BBC examines climate sceptics.
I suspect we won’t be feeling the love.
http://www.bbc.co.uk/programmes/b00y5j3v
Murray Grainger at 5:14 am:
“three decimal places? Does any one reporting these temperatures read the instrument to that level of precision?”
That is the first, most obvious, and ongoing proof of the overall preposterous CAGW fraud.
A quick look at Anthony’s surfacestations.org is proof enough that three decimals is just ludicrous, and extrapolating that evidence to the third-world hellholes and formerly communist backwaters shows that even zero-decimal accuracy is a ridiculous assertion when it comes to the historical surface temperature “record.”
One decimal of accuracy is a joke. Three decimals is a knee-slapping laugh riot.
Sometimes up and sometimes down, as new data comes in late and as the underlying baseline that the anomaly is computed against is adjusted. Clearly the overall effect is minimal. It is explained here http://www.cru.uea.ac.uk/cru/data/temperature/#faq and this is again making a mountain out of a molehill to sow seeds of doubt on a non-issue.
The point is to get the data out as soon as possible, but make sure adjustments are made to keep it accurate as new information comes in. There’s nothing sneaky or hidden going on.
In cases like this, what do they need data for except to give the impression of honesty and the cover of credibility? In other words, what good is a claim without full disclosure of how the claim is made?
The 6th month on your graph seems to be lower for the September measurement than for the January 2011 measurement, while it is the other way round for the 3rd month. This would suggest that it swings both ways and is not just for the propaganda effect that a number of comments seem to claim.
MattN says:
January 31, 2011 at 6:24 am
Why can’t the reporting be automated? This is 2011, for the love of God….
———-
A couple of years back I looked at the report forms for a station in Berkeley, CA. The most recent form had several days missing; the original ones, none. Thus the quality of the data at that station has degraded compared to its beginning, even with all the technology available.
Not at all unlike what the gubmint is constantly doing with economic numbers. Initially put out great numbers, which grab the headlines, then later revise downward.
“Since I enter the data into a spreadsheet each month I am constantly seeing the shift in the data. If it were the third significant digit it wouldn’t bother me (very much), but it is much more than that.”
I use Linux bash. This is just an example of a script I use to pipe out the first two columns of the CRU monthly data.
#!/bin/bash
# Paths to the tools (adjust for your system)
SED='/bin/sed'
AWK='/usr/bin/awk'
CAT='/bin/cat'
# Join the first two fields of each line with commas, append a 0,
# then strip stray tabs and spaces before writing the CSV.
$CAT monthly |
$AWK '{ print $1","$2",0" }' |
$SED 's/\t//' | $SED 's/ //' > hadlong.csv
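To illustrate what the pipeline produces, assuming (my assumption about the file layout, not a verbatim sample) that each line of the CRU ‘monthly’ file starts with a YYYY/MM date field followed by the anomaly:

# hypothetical line in 'monthly':   2010/01   0.675   ...
# matching line in hadlong.csv:     2010/01,0.675,0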
If I want to pull out a single month across all the years (November, say):
#!/bin/bash
SED='/bin/sed'
# Delete every line whose date field does not contain "/11" (November)
$SED '/\/11/!d' hadlong.csv > Nov.csv
It takes time to learn, and each data set has its own foibles, but once written the scripts save me from firing up a spreadsheet on my old clunker.
Ric Werme says:
January 31, 2011 at 4:54 am
“I had to visit John’s blog to figure out the graph (coffee hadn’t kicked in yet). The caption there is helpful: ‘Monthly Temperature values for global temperature change on a regular basis.’
Each line in the graph links the monthly anomaly as of some reporting date. Hence the September 2010 line has temps from January to July, the late January 2011 line is the only one to include December.
All in all, it’s another reason to rely on the satellite temps.”
Getting the global temperature right is not a trivial matter, and satellite temperatures are not a solution to the problem. A few years back, the UAH record was shown to have some major errors, and Spencer and Christy were forced to acknowledge and correct them.
Tamino has pointed out that the three station-based and two satellite temperature records all look alike if you look at the big picture. The satellite records appear to be more affected by El Niño.
http://tamino.files.wordpress.com/2010/12/5t12.jpg
These differences do not appear to be a real issue in the discussion of Global Warming.
Ric Werme
Unfortunately, it doesn’t make statistical sense to express changes in interval-scaled variables as percentages. If a different baseline is chosen for the anomalies, the value of the percentage will change too.
Shift the baseline up by 0.6 (to anomalies of 0.075 and 0.107) and you now get a whopping 43% or 30% (depending on which of the two you choose as the base of the percentage comparison).
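A quick bc sketch of the point, using the figures just given:

# Change measured against the original anomalies:
echo "scale=4; (0.707 - 0.675) / 0.707 * 100" | bc   # prints 4.5200
# Same data with the baseline shifted up by 0.6 C:
echo "scale=4; (0.107 - 0.075) / 0.107 * 100" | bc   # prints 29.9000
echo "scale=4; (0.107 - 0.075) / 0.075 * 100" | bc   # prints 42.6600
# The underlying change is 0.032 C in all three cases;
# only the percentage moves with the choice of baseline.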
MattN: I don’t really know what difference it makes. Nobody thought of using weather data to track the global mean temperature until a few decades ago; the system was set up to measure weather. We could have automatic measurement now, but it wouldn’t fix the problem that temperatures from the earlier years weren’t gathered in the same way. If we started collecting accurate data now, we would still need at least 30 years to see what the climate is truly doing. By that time it will be clear to everyone whether or not global warming is happening and what the cause is.
Interesting comment in the original article:
“Oddly enough the yearly average seems to stay the same, but the monthly values that create the average are in constant flux.”
That would seem to be extremely unlikely: late data changes individual months significantly (~5%), yet the yearly average stays the same.
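Arithmetically it is possible if the revisions cancel, though. A toy demonstration (made-up numbers, not CRU data) shows an annual mean surviving two offsetting monthly revisions:

#!/bin/bash
# Two hypothetical versions of one year of monthly anomalies.
# In the "after" set, month 1 is revised down by 0.032 and
# month 3 is revised up by 0.032.
before="0.707 0.55 0.600 0.50 0.45 0.40 0.35 0.30 0.35 0.40 0.45 0.50"
after="0.675 0.55 0.632 0.50 0.45 0.40 0.35 0.30 0.35 0.40 0.45 0.50"
for vals in "$before" "$after"; do
  echo "$vals" | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.4f\n", s / NF }'
done
# Prints 0.4631 twice: individual months moved ~5%, the mean did not.

Whether that is what is actually happening in the CRU file is exactly the question being raised here.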
It isn’t just the constant diddling and fiddling with the numbers – though, God wot, that is objectionable enough in itself, particularly when retroactively applied to figures which have sat there undisturbed for decades. If, as one or two people have already suggested, it’s a matter of not all the results being in, then to put it bluntly there is no figure which can be honestly reported until all the results are in. If that takes several weeks or months despite all our cool technology, then so be it. Stop issuing scary figures, and wait until you can issue real ones. (I’m addressing that comment mainly to the guilty parties, of course, not WUWT posters who have to work with the ‘official pronouncements’ as they come in.)
As someone old enough to have been taught real science though, I find much “climate talk” a bit weird on purely linguistic grounds; it seems to perform a sort of neuro-linguistic programming in itself, even on this excellent blog. When I was at school, one might have mentioned, say, the human influence on climate. Now, all we hear is to do with forcings: forcing, according to my Shorter OED, being the action of force, its only near-scientific use being in phrases like “forcing frame” (a frame where the growth of a plant is artificially hastened). No. Most, if not all, of the influences on climate are just that, influences, and rarely artificial, without all the connotations of force. ‘Anomaly’ is another such: in normal language, something is anomalous only if it is abnormal, exceptional, irregular; in climate discussions the word seems to be used merely to ‘scarify’ the variation from some value which itself is pretty much arbitrary. What is ‘anomalous’ about some minor variability in parameters which we all know vary all the time? Nothing. It’s just a variation.
I can’t help feeling that the use of words with such pejorative connotations is all part of the con. Any speaker of normal English, hearing all this talk of ‘forcings’ and ‘anomalies’ etc., will at some subconscious level associate the words with their normal meanings and infer that there is a problem. And as George Gershwin memorably put it, “It Ain’t Necessarily So”. Indeed, as far as I can see, it ain’t so at all. Worth a thought.
Oh, and if my modest attempts at formatting have worked out OK … thanks for the advice! If not, do us a favour and either correct them or scratch them.
Off the current subject:
Anyone have a way to do an estimate of the CO2 required to enable this meeting?
http://www.politico.com/news/stories/0111/48471.html
All 260 Ambassadors from all embassies, consulates, and other posts in 180 countries, all called to D.C.
Note: GoToMeeting, is that still online?
Has to be one big number.
If they are reporting the data and trends, then they should not release anything until all the data is in and let the information speak for itself.
However, as many of us know, they are not only reporting the data and trends, they are also reporting propaganda.
Unfortunately, a lot of influential people believe the propaganda.
MattN says:
January 31, 2011 at 6:24 am
“Why can’t the reporting be automated?”
Because then NOAA/NASA/UEA/MET/… won’t be able to selectively apply their “quality control” methods to the data.
As a long term observer of the CRU dataset (until post-climategate when I couldn’t see the point) I too have noticed this constant changing of the data.
Gendeau says: January 31, 2011 at 4:36 am
Am I being overly cynical in noting that:
Yes, probably. If there were something obvious to be cynical about in the data, I would have spotted it. In all the time I observed, I never spotted a consistent trend … except that results would be reported later when it was cooling, and rushed out early when it was warming!
However the significant changes that regularly occurred highlighted the ease with which an observer bias could be introduced into the results. Even a group without an overtly spoken bias may easily have adjusted the figures by viewing some (cooling) results as bad, and not viewing other (warming) results as similarly bad.
However, when you had a group actively trying to “hide the decline” their data was almost totally useless for drawing any conclusions.
Today is 1984. History has been revised to fit the liberal agenda and so this is really no surprise.
The problem stems from the fact that what is being measured is essentially noise. 0.6 C, 0.7 C, 0.4 C… it does not matter. 2 C, 4 C, 10 C… that is significant.
When you have a signal, as opposed to noise, these issues do not exist.
This quote makes the whole process seem fishy to me.
“Oddly enough the yearly average seems to stay the same, but the monthly values that create the average are in constant flux.”
Perhaps they intentionally hold the annual reading fixed so as not to draw attention to the changes; on the other hand, the month-to-month data changes make them look like they have no idea what they’re doing. How can you say there is new evidence that previous monthly temperatures should be adjusted, yet the annual temperature remains the same?
I’ve read this blog for many years now, and for most of that time I believed these numbers held some significance. Over time I’ve come to the belief that all of these ground-based temperature records are entirely meaningless. There is way too much room for error and even manipulation to come into the picture, and even that looks good compared to the problems with the historical, proxy-based ‘temperature record’.
The satellite record is the best tool for this; unfortunately, the record is quite short.
I’ve been a skeptic for some time, but I, like most here, believe the CO2 we’re adding to the atmosphere will have some effect on temperature. But how can we know? We’re mostly traveling blind.
Still, the evidence seems to indicate that the impact can’t be large.
MikeEE
Sorry, but if 5% of the data trickles in late it should not move the global monthly temperature by 5%. With 95% of the data in, it should hardly perturb the global value at all!
And remember, this is a September value for January – 7 months after the period in question. How late is this trickle?
I agree with the author: this indicates some fundamental math errors.
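A back-of-the-envelope check of that argument (assuming an equal-weight global average, which is a simplification): data carrying 5% of the weight moves the mean by 0.05 times its deviation from the interim value.

# How far from the interim global figure would the late 5% of
# stations have to be, on average, to shift it by the 0.032 C seen above?
echo "scale=3; 0.032 / 0.05" | bc   # prints .640
# An average discrepancy of 0.64 C in the late data; late reporting
# alone is a strained explanation for a shift of that size.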
Tony Hansen says:
January 31, 2011 at 5:02 am
John,
I too worry about the ‘adjustments’, but I also worry about your ’5%’.
5% of what?
I presume he means 5% of the 0.675 C anomaly, but this is meaningless in relation to the uncertainty of the figure itself, which is probably nearer ±100%!
There seems to be very little attention paid to the basic scientific principle that any result is meaningless unless it is accompanied by its experimental uncertainty.
There have been some attempts to estimate it, but because of the quality and mixture of the data sources this seems to be a pretty speculative venture. Thus the temperature record is little more than speculation; that is a simple fact of the data quality.
Anyone worrying about 5% of 0.675C has not understood what they are looking at.
It’s also helpful not to get too obsessive about the figures for each month as they come hot off the press. Whatever they say is irrelevant to the climate record over 30 years.
What is worth archiving and keeping a close eye on is how the likes of GISS and CRU are constantly adjusting the long term record and how each time it just happens to get a little closer to their AGW theory than last time they added up the same temperature data.
How will you ever be able to trust data from organisations (CRU, GISS, etc.) run by unscrupulous climate politicians attempting to achieve an economic policy outcome on behalf of the UN – regulation of human CO2 output – regardless of its impact on climate?
Are they noting these changes as they occur? The lack of transparency, of reasons for the changes, etc., is the problem.
>> I too worry about the ‘adjustments’, but I also worry about your ’5%’.
>> 5% of what?
> Of the anomaly.
I don’t think you can measure it that way. Suppose the anomaly is +0.001 C and that this value is then revised to +0.002 C. This would be a very slight change, yet it would be a change of 100%…!
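A one-liner shows the blow-up:

echo "scale=1; (0.002 - 0.001) / 0.001 * 100" | bc   # prints 100.0
# A revision of one thousandth of a degree, far below instrument
# precision, registers as a 100% change in the anomaly.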