Met Office Hadley Centre and Climatic Research Unit HadCRUT4 and CRUTEM4 Temperature Data Sets Adjusted/Corrected/Updated… Can You Guess The Impact?

Met Office – Hadley Centre – Click the pic to view at source

Image Credit: Met Office Hadley Centre

By Just The Facts, Werner Brozek and Walter Dnes

The Met Office Hadley Centre and the Climatic Research Unit at the University of East Anglia have adjusted/corrected/updated their HadCRUT4 and CRUTEM4 data sets; the update arrived with the recently released April data. There does not appear to have been a press release, and there was no mention of the forthcoming change in the April 15, 2013 Met Office press release that covers the data sets. However, version update tags were placed in bold at the top of the HadCRUT4 page:

“Update: An updated version of HadCRUT4 is now available. HadCRUT4 is now at version HadCRUT.4.2.0.0. Details of this update can be found here.”

and the CRUTEM4 page:

“Update: CRUTEM4 has been updated to version CRUTEM.4.2.0.0. Details of this update can be found here.”

The HadCRUT4 page notes that the HadCRUT4 “land ensemble has been updated to use station data for CRUTEM4 version CRUTEM.4.2.0.0” and the Release Notes for version CRUTEM.4.2.0.0 state that:

“Additions to the CRUTEM4 archive in version CRUTEM.4.2.0.0

The changes listed below refer mainly to additions of mostly national collections of digitized and/or homogenized monthly station series. Several national meteorological agencies now produce/maintain significant subsets of climate series that are homogenized for the purposes of climate studies. In addition, data-rescue types of activities continue and this frequently involves the digitization of paper records which then become publicly available.

The principal subsets of station series processed and merged with CRUTEM (chronological order) are:

Norwegian – homogenized series

Australian (ACORN) – homogenized subset

Brazilian – non-homogenized

Australian remote islands – homogenized

Antarctic (greater) – some QC and infilling

St. Helena – some homogenization adjustment

Bolivian subset – non-homogenized

Southeast Asian Climate Assessment (SACA) – infilling /some new additions

German/Polish – a number of German and a few Polish series – non-homogenized

Ugandan – non-homogenized

USA (USHCNv2.5) – homogenized

Canada – homogenized

In addition, there have been some corrections of errors. These are mostly of a random nature and the corrections have generally been done by manual edits. For a listing of new source codes in use, see below (end).”

While the magnitude of the changes is not that large, many of the adjustments/corrections/updates to HadCRUT4;

Met Office – Hadley Centre – Click the pic to view at source

and CRUTEM4;

Met Office – Hadley Centre – Click the pic to view at source

are concentrated in the last 16 years, a period for which the Met Office is under scrutiny over the lack of warming in its data.

Also, some of the regional changes appear quite contrived; for example, it looks like they found five hundredths of a degree of extra warming in the Northern Hemisphere in the last couple of years:

Met Office – Hadley Centre – Click the pic to view at source

In South America they found almost a tenth of a degree of warming over the last decade;

Met Office – Hadley Centre – Click the pic to view at source

Africa had five hundredths of a degree of extra warming in the last few years;

Met Office – Hadley Centre – Click the pic to view at source

and Australia/New Zealand a tenth of a degree of additional warming over the last few years:

Met Office – Hadley Centre – Click the pic to view at source

Now, updating data sets to include additional measurement locations and improve precision is usually a good thing, and if this were the only occurrence of an adjustment/correction/update resulting in additional warming, one might give the Met Office Hadley Centre and the Climatic Research Unit at the University of East Anglia the benefit of the doubt. However, over the last several years they have progressed through:

CRUTEM.4.2.0.0 (current version)

CRUTEM.4.1.1.0 (previous version)

CRUTEM.4.0.0.0 (initial release)

CRUTEM.3 (Corrections)

and

HadCRUT.4.2.0.0 (current version)

HadCRUT.4.1.1.0 (previous version)

HadCRUT.4.0.0.0 (initial release)

HadCRUT.3 (Corrections)

and it seems that the impact of each adjustment/correction/update is to increase the appearance of recent warming. The move from HadCRUT3 to HadCRUT4 was particularly troubling:

“HadCRUT4 was introduced in March 2012.[2] It “includes the addition of newly digitised measurement data, both over land and sea, new sea-surface temperature bias adjustments and a more comprehensive error model for describing uncertainties in sea-surface temperature measurements”.[3] Overall, the net effect of HadCRUT4 versus HadCRUT3 is an increase in the average temperature anomaly, especially around 1950 and 1855, and less significantly around 1925 and 2005.[4] Also, the warmest year on record, which was 1998 with HadCRUT3, is now 2010 with HadCRUT4.[5]” – Wikipedia

The difference between HadCRUT3 and HadCRUT4 is clearly visible in this WoodForTrees graph;

WoodForTrees.org – Paul Clark – Click the pic to view at source

however, note that WoodForTrees.org is still on CRUTEM.4.1.0.0, so that difference will grow if/when WFT is updated to CRUTEM.4.2.0.0.

Also, a note should be made regarding the graph that showed HadCRUT4 was about to drop below the 95% mark. With the old version, 2011 and 2012 had anomalies of 0.399 and 0.433 respectively; the new HadCRUT4 raises these to 0.406 and 0.448. It appears as if they bought themselves some time, but as you can see in the table, the average for the first three months of 2013 is 0.420. While this is below even the original 0.433, it is only three months of data, so it does not mean too much yet.

Source         HADCRUT4.1   HADCRUT4.2
1st year       2010         2010
1st anomaly    0.540        0.547
2nd year       2005         2005
2nd anomaly    0.534        0.539
3rd year       1998         1998
3rd anomaly    0.523        0.531
Jan 2013                    0.378
Feb 2013                    0.476
Mar 2013                    0.410
2013 average                0.420
2013 rank                   12th

Alright, so how many of you guessed wrong and thought that the impact of the adjustments/corrections/updates to HadCRUT4 and CRUTEM4 was to decrease recent warming?…

May 13, 2013 7:01 am

@Paul Homewood
Thank you for the link to the original Icelandic data. I have made a copy.
Comparing it to the temperatures currently being displayed by GISS, I find only a tiny discrepancy for that station (Teigarhorn) and that year (1900). It pains me to admit it, but I may have been crying wolf; wrongly claiming that GISS have been depressing historical temperatures by 0.9C. If that is the case I shall delete the section of my blog making such claims. That’s a lot of number crunching for me (all stations, all years) but it’s a necessary task.
One must have the integrity to stick to the facts even when (especially when!) they don’t support one’s pet hypothesis.

Ian W
May 13, 2013 7:49 am

pat williams says:
May 12, 2013 at 8:06 pm
Convince anyone in Minnesota that it is warming….
http://www.cbsnews.com/video/watch/?id=50146674n

Actually might not be too difficult – today is due to be 21C at Minneapolis and tomorrow 33C. The blocking high has moved and all the cold temperatures are now over the NE of the USA with frosts for the next couple of days. The frosts will not raise many headlines but expect to be told about 90F temperatures in the Dakotas and Minnesota.

MikeR
May 13, 2013 8:18 am

Steve Mosher’s comment is very interesting. No nefarious motives needed, just a natural consequence of adding more data.
But why is CRUTEM biased low? Do we understand what’s happening there?

dwr54
May 13, 2013 8:27 am

Werner Brozek says:
May 13, 2013 at 6:52 am
“So there was a slight increase in slope as stated, just not enough to see at the single significant digit level.”
Yes, there was an increase of 4/1000ths of a degree C per decade since March 1997 between the present and previous version of HadCRUT4. I was surprised that people were making allegations of “corruption” against the HadCRUT4 producers, because this update brings the HadCRUT4 trend slightly closer to that seen in UAH over the recent term. It makes one wonder what folk here must think about the validity of the UAH trend?
I lack the skills to calculate the statistical significance in the new HadCRUT4 data set between July 1994 and March 2013. I’ll have to await the SkS update. I would imagine it’s something in the same order though.
By the way, the SkS calculator gives UAH since 1997 as 0.094 +/- 0.222 at the two sigma interval. Am I right in thinking that this means there is a 1/20 chance that the true UAH trend since 1997 may be as low as -0.128C per decade, and an equal 1/20 chance that it may be as high as +0.316C/dec?

May 13, 2013 8:41 am

To those actively involved in trying to suppress the temperature results as they are available on-line from official sources, I say: Let fools stay fools if they want to be. Fiddling with the data they can, to save their jobs, but people still having to shove snow in late spring, will soon begin to doubt the data…
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/

May 13, 2013 9:01 am

Brian after Climategate I started what you are looking for. I could not bear that all that observational data had been corrupted
The raw data from over 2500 NOAA weather stations – 55,123,399 temperature readings so far taken hourly. The same ones, even with their bad siting. It’s a comparison of apples with apples
I will not lose it and will continue to make it available.
It’s the only way I and everyone else can know the truth. Click on my user name
Dave

May 13, 2013 10:52 am

MikeR says:
May 13, 2013 at 8:18 am
Steve Mosher’s comment is very interesting. No nefarious motives needed, just a natural consequence of adding more data.
But why is CRUTEM biased low? Do we understand what’s happening there?
#########
Their method of carving up the world into 5 degree bins probably contributes some of the bias. Do folks here forget how much we slammed CRU methods over at Climate Audit and the Air Vent?
You can visualize what that looks like by comparing how various methods work with different data:
http://berkeleyearth.org/pdf/visualizing-the-average-robert-rohde.pdf
To see how bad the CRU method is, we can do standard tests using perfect simulated data.
That is, create simulated data for the entire earth where we know the true average, then sample that simulated world using a sub-sample of stations.
Then compare methods:
http://berkeleyearth.org/pdf/robert-rohde-memo.pdf
Interestingly, we also found that sometimes adding data to the CRU method can give you a WORSE answer.
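Mosher’s point about uneven station coverage and gridding can be illustrated with a toy calculation. This is only a sketch of the general idea, not the actual CRU or Berkeley algorithm, and every number in it is invented: a made-up “true” anomaly field that warms more at high latitudes, sampled by a station network clustered in the northern mid-latitudes.

```python
import math
import random

random.seed(0)

# Hypothetical "true" anomaly field: stronger warming at high latitudes
# (a crude stand-in for polar amplification).
def true_anomaly(lat_deg):
    return 0.2 + 0.8 * abs(math.sin(math.radians(lat_deg)))

# Area-weighted true global mean: weight each 1-degree latitude band by cos(lat).
lats = [l + 0.5 for l in range(-90, 90)]
weights = [math.cos(math.radians(l)) for l in lats]
true_mean = sum(true_anomaly(l) * w for l, w in zip(lats, weights)) / sum(weights)

# An uneven station network: 500 stations clustered around 45N, none in
# most of the Southern Hemisphere.
stations = [max(-89.9, min(89.9, random.gauss(45, 15))) for _ in range(500)]

# Naive estimate: simple mean over stations, ignoring spatial coverage.
naive = sum(true_anomaly(s) for s in stations) / len(stations)

# Binned estimate: average stations within 5-degree latitude bands, then
# area-weight only the occupied bands. Empty bands are simply missing,
# which is roughly how a gridded average behaves with sparse coverage.
bins = {}
for s in stations:
    bins.setdefault(int(s // 5), []).append(true_anomaly(s))
num = den = 0.0
for b, vals in bins.items():
    wt = math.cos(math.radians(b * 5 + 2.5))  # weight by band centre
    num += wt * sum(vals) / len(vals)
    den += wt
binned = num / den

print(f"true global mean : {true_mean:.3f}")
print(f"naive station avg: {naive:.3f}")
print(f"5-deg binned avg : {binned:.3f}")
```

With this fake network both estimates overshoot the true mean, because the unsampled regions (here, most of the Southern Hemisphere) never enter the average. That is the kind of structural bias the Berkeley memos linked above test for with simulated data.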

May 13, 2013 11:02 am

They did more than adjust the last few years. Note they “cooled” the early part of the record. The Temperature Record “Thumbtack” (TRT) was inserted at ~1950 and the trace rotated counterclockwise. This is the Hansen Trick that was used to lower the 1936 record to be below the 1998 super El Nino year.

May 13, 2013 11:11 am

tonyb says:
May 13, 2013 at 1:22 am
Hi Mosh
In Berkeley and the other global data sets you have thousands of locations and thousands of individual data points relating to each location that purport to show the maximum and minimum for a specific day. We both know the sometimes haphazard way in which reading were sometimes taken, that stations move and of interpolation.
If you had to bet $10000 on each of your data points being an accurate representation of the real maximum or minimum in that location, for that day, to a tenth of a degree, how many of those data points would you put forward for scrutiny?
#############
1. the datafile has a column called uncertainty for each record.
2. I don’t know what you mean by “real maximum”; a mythical beast.
3. Since we use CRN in the US, I suppose I could find 100 that I would bet you 10K on.
That’s a bet then, right?
To an accuracy of two tenths of a degree?
Three tenths of a degree?
I would guess none.
#############
Then you would be wrong. Looks like you forgot that we use CRN.
“I bet you haven’t even checked for the political affiliations of the keeper of the records at that station, which you believe in other circumstances to be so important.”
In the case of Russia I always check. The funniest examples are Russian metadata records. How do you think I figured out it was important to ask you this question? It was because I had to ask this question myself.
“Yet we apparently have a global temperature accurate to fractions of a degree. We all know about the power of averaging Mosh, but we need to take a reality pill sometimes when we believe there is a global temperature and we know its value to such fantastic accuracy.”
You are mistaken. Nobody claims this accuracy. Let me explain. Suppose I say the average
temperature of the earth is 14.87654789126378.
What does that mean? It doesn’t mean that I know it to this level of accuracy. Let me repeat that: IT DOES NOT MEAN WE KNOW IT TO THAT LEVEL OF ACCURACY.
Here is what it means. I have 2000 new stations. You don’t know the temperature.
Now, estimate the temperature of any one I will choose randomly. Go ahead.
When I guess, I will use 14.87654789126378 and that estimate will minimize error.
The “average” temperature is the estimate which minimizes the error. It’s not about the accuracy of the thermometer; it’s about minimizing the error.
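The claim that the mean is the single guess minimizing squared error is easy to verify numerically. The station values below are made up purely for illustration:

```python
# Hypothetical station temperatures (invented numbers).
temps = [14.2, 15.1, 13.8, 16.0, 14.9, 15.5, 13.2, 14.7]
mean = sum(temps) / len(temps)

def mse(guess):
    """Mean squared error of a single guess against all values."""
    return sum((t - guess) ** 2 for t in temps) / len(temps)

# Nudging the guess away from the mean in either direction, by any
# amount, never reduces the error.
offsets = (-1.0, -0.1, -0.01, 0.01, 0.1, 1.0)
assert all(mse(mean) <= mse(mean + d) for d in offsets)
print(f"mean = {mean:.3f}, MSE at mean = {mse(mean):.4f}")
```

This is the sense in which a global average “minimizes error” regardless of how many decimal places are printed.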
“It is high time the data you and others produce is broken down into the climate zones that exist, rather than the ‘global’ one size fits all which doesn’t. That wouldn’t make the data points any the more accurate, but we would get a better idea of those parts of the globe that are cooling, warming or static. How is that paper on cooling stations going?”
Climate zones? Climate zones are derived from temperature and precipitation. These zones don’t exist independently; they are created from data.
Cooling stations: we are coming out with a major update that should make that work easier. For the most part, cooling stations are limited to the US and Australia, and there they are not statistically significant. In addition, there is another update in the works that looks at preserving more local scale. Not sure how long that will take.

Tonyb
Editor
May 13, 2013 11:32 am

Mosh
Thanks for your unusually long response.
You have tens of thousands of data points. The 100 you could find would be out of how many?
Genuinely look forward to your cooling stations update although the local scale one sounds potentially more rewarding.
Tonyb

Bart
May 13, 2013 11:48 am

Brian H says:
May 12, 2013 at 5:09 pm
“You’d think they would be smart enough to throw a contrary adjustment in here and there to give the appearance of impartiality.”
You’d think they would be smart enough to claim maybe 80% of scientists agree with CAGW – it’s still overwhelming, and wouldn’t be dismissed out of hand. Instead, they had to claim 97%. Even tinpot dictators know better than to claim those kinds of margins in their sham elections.

DirkH
May 13, 2013 11:57 am

Mosh, is it possible to tell the current average global temperature from the records? Climatists only ever mention anomalies.
So we could compute backwards using their adjusted data to find, say the average global temperature in 1934. We would expect this hypothetical back-computed temperature of 1934 to change with each adjustment.
So we could track that. I expect 1934 to get ever colder as time passes.
I think it was the year of the US dust bowl, right?

DirkH
May 13, 2013 12:00 pm

If I were the warmists, I would make every adjustment so that it optimizes the correlation coefficient of avg glob temps with CO2 concentrations.
Anybody capable of checking this?
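DirkH’s check is straightforward to sketch: compute the Pearson correlation of each dataset version against CO2 and see whether revisions push it upward. Everything below is invented for illustration; real annual CO2 and anomaly series would be substituted in:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical annual CO2 concentrations (ppm) and two versions of an
# anomaly series; all numbers are made up.
co2 = [370, 372, 375, 377, 380, 382, 385, 388, 390, 393]
old = [0.40, 0.43, 0.46, 0.42, 0.47, 0.45, 0.44, 0.49, 0.45, 0.43]
new = [0.40, 0.43, 0.46, 0.43, 0.47, 0.46, 0.45, 0.50, 0.47, 0.46]

print(f"r(old, CO2) = {pearson(old, co2):.3f}")
print(f"r(new, CO2) = {pearson(new, co2):.3f}")
```

Tracking that pair of numbers across dataset versions would show whether successive adjustments systematically tighten the fit to CO2, which is the test DirkH is proposing.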

climatereason
Editor
May 13, 2013 12:15 pm

Mosh
Wee bit bemused about your comment re climate zones. As Marcel Leroux remarked, ‘the world has many different climates’. The UK is in a temperate zone, as you know. It may well be that all places in temperate zones are cooling as we are; it may be that just some (or none) of them are. Looking at zones seems worthwhile and meaningful as they are readily identified.
Bearing in mind the historic correlation of CET to ‘global’ temperatures, I would guess we are not the only ones suffering from a temperature downturn, but can that be neatly attributed to zones, or are other factors at play here? (Wind direction would be key for us here in Western Europe.) How long would that change need to go on to be worthy of interest? 30 years, or 10, or 5?
Also, there are far more places than Russia where political affiliations might affect data, or where other factors such as inexperience might come into play, as Camuffo diligently ascertained as part of the ‘Improv’ project.
tonyb

Werner Brozek
May 13, 2013 12:28 pm

dwr54 says:
May 13, 2013 at 8:27 am
It makes one wonder what folk here must think about the validity of the UAH trend
I am only giving my personal opinion here. Please take a look at:
http://www.climate4you.com/GlobalTemperatures.htm#Global temperature trends
Note that of the 5 data sets listed, only UAH has a positive trend over 10 years. The other 4: RSS, GISS, NCDC and Hadcrut4 are all negative over 10 years. So UAH seems to be a bit of an outlier.
By the way, the SkS calculator gives UAH since 1997 as 0.094 +/- 0.222 at the two sigma interval. Am I right in thinking that this means there is a 1/20 chance that the true UAH trend since 1997 may be as low as -0.128C per decade, and an equal 1/20 chance that it may be as high as +0.316C/dec?
My interpretation is that there is a 95% chance the real number is between -0.128 and +0.316. So there would be a 2.5% chance it is above 0.316 and a 2.5% chance it is below -0.128 which would mean a 1/20 chance to be outside the 95% line, but a 1/40 chance it is higher than +0.316 and a 1/40 chance it is below -0.128. (If I am wrong here, please correct me.)
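Werner’s reading can be checked directly, taking the SkS figures at face value and assuming the trend uncertainty is Gaussian (which is what a two-sigma quote implies):

```python
from statistics import NormalDist

# Figures quoted from the SkS trend calculator: UAH since 1997,
# 0.094 +/- 0.222 C/decade, where the +/- is a two-sigma half-width.
trend, half_width = 0.094, 0.222
sigma = half_width / 2                    # two-sigma half-width -> one sigma
dist = NormalDist(mu=trend, sigma=sigma)  # assumed Gaussian uncertainty

p_below = dist.cdf(trend - half_width)      # true trend below -0.128
p_above = 1 - dist.cdf(trend + half_width)  # true trend above +0.316
print(f"P(trend < {trend - half_width:+.3f} C/dec) = {p_below:.4f}")
print(f"P(trend > {trend + half_width:+.3f} C/dec) = {p_above:.4f}")
```

Each tail comes out near 2.3%, i.e. roughly 1 in 44 per side, which supports Werner’s point that the chance of being outside the interval on a *particular* side is about half the total 5%.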

Berényi Péter
May 13, 2013 2:38 pm

By continuously adjusting the end of the record upward, they are simply digging themselves into a hole. If the trend in their adjusted data set is higher than the actual one, it will increasingly diverge from the instrumental record, making it ever more difficult to justify adjustments that are becoming larger than any conceivable source of error. At the end they are left with a catastrophic slowdown, with no statistically significant increase for decades. Sound familiar?

Werner Brozek
May 13, 2013 3:56 pm

dwr54 says:
May 13, 2013 at 8:27 am
I was surprised that people were making allegations of “corruption” against the HadCRUT4 producers
From 1997 to 2012 is 16 years. Here are the changes in thousandths of a degree, with the new version of HadCRUT4 being higher than the old version in every case. Starting with 1997, the numbers are 2, 8, 3, 3, 4, 7, 7, 7, 5, 4, 5, 5, 5, 7, 8, and 15. The 0.015 was for 2012. What are the chances that the average anomaly goes up for 16 straight years by pure chance alone if a number of new sites are discovered? Assuming a 50% chance that the anomaly could go either way, the chance of 16 straight rises is 1 in 2^16, or 1 in 65,536. Of course this does not prove fraud, but considering that “HadCRUT4 was introduced in March 2012”, it raises the question of why it needed a major overhaul only a year later.
I believe people should not wonder why suspicions are aroused as to whether or not everything is kosher.
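Werner’s back-of-envelope figure checks out, with the caveat that treating each annual revision as an independent coin flip is a simplifying assumption (revisions driven by the same new station data are not independent):

```python
# Year-by-year revision deltas quoted above, in thousandths of a degree,
# for 1997 through 2012 (new HadCRUT4 minus old).
deltas = [2, 8, 3, 3, 4, 7, 7, 7, 5, 4, 5, 5, 5, 7, 8, 15]
n = len(deltas)
assert all(d > 0 for d in deltas)  # every revision is upward

# Under a fair-coin model, the chance of n upward moves in a row:
p = 0.5 ** n
print(f"{n} consecutive upward revisions; P = 1 in {int(1 / p):,}")  # 1 in 65,536
```

The independence assumption is the weak point: if one new homogenized national collection lifts a whole span of years at once, the sixteen rises are not sixteen separate coin flips.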

Bill Illis
May 13, 2013 5:13 pm

This has been going on for 20 years now.
Every month, or just periodically in earlier times, the records have been changed – cooling the past and warming the current records.
This occurred mainly at the NCDC and not so much with HadCRUT (but we are now on HadSST3 and CRUTEM4 version 4.2.0.0).
Sorry, they have screwed around with your public records and intend to continue doing it.
We need to bring in real statisticians and real IT people to sort out this mess. We need to elect people willing to make this change.
Or we can just rely on the lower troposphere satellite temps.

p@ Dolan
May 13, 2013 7:54 pm

@son of mulder says:
May 13, 2013 at 2:18 am
Is anyone reading this stuff competent to apply Benford’s law to the data to see if adjustments look artificial, or is this not appropriate for increasing temperature anomalies?
I’m no expert, but I don’t think Benford’s law is reliable unless the numbers involved cross several orders of magnitude. I don’t believe, with the very small “adjustments” they’ve made, that an analysis using Benford’s law would produce any reliable indication.
Can someone correct me on that if I’m wrong?
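That intuition can be demonstrated with synthetic data: Benford’s law fits values spanning many orders of magnitude but fails badly on values confined to a narrow band, like adjustments of a few thousandths to a few hundredths of a degree. All numbers below are simulated:

```python
import math
import random

def first_digit(x):
    # Scientific notation makes the leading digit the first character.
    return int(f"{abs(x):.15e}"[0])

def benford_freqs(values):
    """Observed first-digit frequencies (digits 1..9) of nonzero values."""
    counts = [0] * 9
    for v in values:
        if v != 0:
            counts[first_digit(v) - 1] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

# Benford's expected frequency for each leading digit d: log10(1 + 1/d).
expected = [math.log10(1 + 1 / d) for d in range(1, 10)]

random.seed(1)
# Data spanning six orders of magnitude (log-uniform): Benford holds.
wide = [10 ** random.uniform(0, 6) for _ in range(10_000)]
# Data confined to a narrow band, like small adjustments: Benford fails.
narrow = [random.uniform(0.002, 0.015) for _ in range(10_000)]

for name, data in (("wide-range", wide), ("narrow-range", narrow)):
    dev = max(abs(f - e) for f, e in zip(benford_freqs(data), expected))
    print(f"{name}: max deviation from Benford = {dev:.3f}")
```

The narrow-band sample deviates far more from the Benford distribution than the wide-range sample, so a Benford test on these small adjustments would flag “artificial” digits even for perfectly honest data, which is p@ Dolan’s point.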

NucEngineer
May 13, 2013 8:58 pm

And Winston looked at the sheet handed him:
“Adjustments prior to 1972 shall be -0.2 degrees and after 1998 shall be +0.3 degrees.”
Winston wondered at the adjustment to the data. At this point, no one even knows if the data, prior to his adjustments, was raw data or already adjusted one or more times previously.
It didn’t matter. All Winston was sure of was that one of the lead climatologists needed more slope to match his computer model outputs. He punched out the new Fortran cards and then dropped the old cards into the Memory Hole, where they were burned.
“There!” Winston exclaimed to himself. “Now the temperature data record is correct again; all is double-plus good.”

May 14, 2013 12:28 am

@NucEngineer
We’re on the same page – just wrote this on a long running thread of mine
https://secure.worldcommunitygrid.org/forums/wcg/viewthread_thread,27654_offset,3850#421477
We should be buddies, if not brothers
Dave

Ratty
May 26, 2013 2:22 am

Funny – years working in a lab to GLP, GMP, USFDA and all sorts of regulations got it drilled into me that each and every single alteration of raw data records had to be individually justified and explained as to why it was being changed, with a signature and date. It seems that for all those years we were doing it wrong, and you could just attack the data with a broad brush when it no longer suited your desired outcome – who’d a thunk it?

May 27, 2013 11:32 am

Very late to this party but just to note that as of today WFT is using 4.2.0.0 – live graphs will update.
Paul