Met Office responds to HadCRUT global temperature audit by McLean

WUWT readers surely recall this story: BOMBSHELL: audit of global warming data finds it riddled with errors

While the Met Office has not issued a press release, its scientists have responded to press inquiries.


Britain’s Met Office welcomes audit by Australian researcher about HadCRUT errors

Graham Lloyd, The Australian

Britain’s Met Office has welcomed an audit from Australian researcher John McLean that claims to have identified serious errors in its HadCRUT global temperature record.

“Any actual errors identified will be dealt with in the next major update.”

The Met Office said automated quality checks were performed on the ocean data, and monthly updates to the land data were subjected to a computer-assisted manual quality control process.

“The HadCRUT dataset includes comprehensive uncertainty estimates in its estimates of global temperature,” the Met Office spokesman said.

“We previously acknowledged receipt of Dr John McLean’s 2016 report to us which dealt with the format of some ocean data files.

“We corrected the errors he then identified to us,” the Met Office spokesman said.


I’m sure that crap data apologists Mosher and Stokes will be along to tell us why this isn’t significant, why HadCRUT is just fine, and why we shouldn’t give any attention to these errors. /sarc

Jo Nova adds:

Without specifically admitting he has found serious errors, they acknowledge his previous notifications were useful in 2016, and promise that “errors will be fixed in the next update.” That’s nice to know, but it raises the question of why a PhD student working from home can find mistakes that the £226 million institute with 2,100 employees could not. Significantly, they do not disagree with any of his claims.

Most significantly, they don’t even mention the killer issue of the adjustments for site moves: the cumulative cooling of the oldest records to compensate for buildings that probably weren’t built there ’til decades later.

More on her take here.

Jo makes a good point. Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?

Perhaps it is because they simply don’t care, and curiosity takes a back seat to money. Like politicians looking to the next election, Climate Inc. has become so dependent on the money train that its main concern is the next grant application.

Eisenhower had it right. We’ve all heard about Dwight D. Eisenhower’s farewell address, warning us about the “military-industrial complex”. It’s practically iconic. But what you probably didn’t know is that the same farewell speech contained a second warning, one that hints at our current situation with science. He said to the nation then:

Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades.

In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.

Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.

The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.

 

 

October 15, 2018 11:53 am

People in scientific (and other) disciplines these days tend to avoid repetitive tasks requiring constant concentration.

I’m no hero; I’m as lazy as the next person. We have computers to do all the heavy lifting of calculations, but getting historical data into digital databases is still manual work. It’s boring and repetitive, and anyone who’s done it can surely attest to how easy it is to get distracted; you lose your train of thought, and that leads to errors.

Once the data is in a digital database, we can sit back and play with it all we want. Sometimes you can spot egregious errors in the data, but smaller errors usually don’t jump out at you. So who’s going to take the trouble to look for data-entry errors? Or, worse, errors in writing down numbers in a century-old, hand-written record? Or confusion between hand-written 1’s and 7’s written by continental Europeans and read by anglophones (or vice versa)? Sceptics, of course, because they’re hoping to find errors.

It would probably cost the CRU a couple of days’ operating budget to hire a few AI people to come in and design auditing software: train the machines with known errors and let them rip through the databases. They could crow about how they’re improving the quality of the data. They do anyway, but this could give them some substance to crow about.
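For the curious, here’s a minimal sketch of what one pass of such screening software might look like, in Python. The record layout, thresholds and station values are all invented; a production auditor would be tuned against a library of known transcription errors.

```python
# A toy screening pass over digitized monthly station values. Using the
# median and MAD as the baseline keeps a single bad keystroke from
# inflating the spread and thereby hiding itself.
import statistics

def flag_suspect(values, k=3.0):
    """Return (index, value) pairs lying more than k robust-SDs from the median."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    scale = 1.4826 * mad or 1.0  # MAD -> SD equivalent for normal data; avoid /0
    return [(i, v) for i, v in enumerate(values) if abs(v - med) > k * scale]

# A transposed decimal (27.8 keyed as 78.2) is flagged immediately:
april = [27.5, 27.9, 28.1, 78.2, 27.6, 28.0]
print(flag_suspect(april))  # -> [(3, 78.2)]
```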

Ian W
October 15, 2018 12:32 pm

So the Southern Hemisphere was covered like this:

Temperatures for the entire Southern Hemisphere in 1850 and for the next three years are calculated from just one site in Indonesia and some random ships.

I would say it would be more honest to say that there was no information for a global dataset in the 1850s, and that remained the case until there were sufficient sensors.

Is that difficult?

Well yes it is.

How many sensors, and in what distribution, are needed to give an accuracy for global temperature of ±5 °C? Mathematically, you can create an average with a precision of several decimal places (which is in fact what is done), but the accuracy is still unlikely to be as good as ±5 °C at best, even assisted by some applied guesswork.
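Ian W’s precision-versus-accuracy point is easy to demonstrate with a toy calculation in Python (all numbers invented). Averaging a handful of noisy, unevenly sited readings yields a figure quotable to three decimal places, but the decimals say nothing about how far it sits from the true hemispheric mean.

```python
import random

random.seed(1)
TRUE_MEAN = 26.0  # pretend hemispheric truth, degC

# One fixed site plus a few "random ships": each sees only its own locale,
# which can sit several degrees off the hemispheric mean (sampling bias),
# on top of ordinary instrument noise.
readings = [TRUE_MEAN + random.uniform(-4, 4) + random.gauss(0, 0.5)
            for _ in range(5)]

estimate = sum(readings) / len(readings)
print(f"estimate   = {estimate:.3f} degC")               # quotable to 0.001...
print(f"true error = {estimate - TRUE_MEAN:+.2f} degC")  # ...but off by degrees
```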

Paramenter
October 15, 2018 1:12 pm

Jo makes a good point. Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?

Perhaps it is because they simply don’t care, and curiosity takes a back seat to money.

Perhaps there is another reason too. I reckon mainstream climate science sees itself as the defender of ‘scientific orthodoxy’ against invading moronic masses, barbarians who want to desecrate the sacred ‘truth’ it firmly believes in. Thus for them it is not a purely scientific dispute; it is a quasi-religious one. Therefore, turning a blind eye to some errors (if they’re in favour of warming trends) or accepting some questionable adjustments are all acceptable practices if they serve this purpose. In this mentality, the end justifies the means.

knr
Reply to  Paramenter
October 15, 2018 1:36 pm

Or perhaps one approach sees a paycheck and grants, and the other does not?
The Met has done very well out of AGW without actually getting better at its day job of forecasting weather. They certainly have not seen the cutbacks others have.

Paramenter
Reply to  knr
October 15, 2018 2:10 pm

Financial gratification is a big factor in the game, unfortunately. Still, financial greed as a common denominator may be too simplistic. Some of them really believe that they’re some kind of missionaries bringing scientific enlightenment to the moronic masses who suffer under religious superstition and are reluctant to accept the teachings of academia. The myth of Prometheus is still alive in many scientific circles.

October 15, 2018 1:21 pm

I love finding really well-written articles and this is one of them. Thanks for sharing.

RockyRoad
October 15, 2018 1:45 pm

Post-normal data inventions are particularly annoying!

October 15, 2018 2:59 pm

The Hadley Centre’s response to John McLean’s corrections previously included the following comment and caveat:

“Correction issued 30 March 2016. The HadSST3 NH and SH files have been replaced. The temperature anomalies were correct but the values for the percent coverage of the hemispheres were previously incorrect.”

Now the Hadley Centre responds with:

“Graham Lloyd, The Australian
Britain’s Met Office has welcomed an audit from Australian researcher John McLean{sic} that claims to have identified serious errors in its HadCRUT global temperature record.

“Any actual errors identified will be dealt with in the next major update.”

One notes that there is no caveat regarding temperature anomalies, yet.

Not that there is any easy, accurate method to compare a dataset full of errors with a revised dataset with the errors corrected; one gets suspicious that the anomalies will, once again, not change, even for datasets riddled with errors.

JoNova’s comments are on target!

Do the listed errors include calculating anomalies out to more decimal places than the original temperatures were measured?

I note in the comment thread that Randy Stubbings points out:

Randy Stubbings October 15, 2018 at 7:15 am
I was working with HadCRUT4 data downloaded in September of 2018 and I compared it with the version I downloaded in July of 2014. Subtracting the 2014 values from the 2018 values shows a set of adjustments to the monthly median temperature anomalies ranging from about -0.06 degrees to about +0.06 degrees. The decade with the largest cooling adjustment was the 1860s (the average adjustment was -0.0198) and the two decades with the largest warming adjustments were the 2000s (+0.0059) and the partial decade from 2010 to June 2014 (+0.0203).

The average temperature anomaly from 1850 to 1899 was -0.313 as reported in June 2014 and -0.315 as reported in September 2018. The difference between the upper and lower confidence intervals on the reported temperature anomaly averages 0.6 degrees for the period, so reporting to three decimal places seems a bit silly to me. Isn’t 1850 to 1900 supposed to establish the benchmark temperature against which we measure the 1.5 degrees that was formerly known as 2.0 degrees?

I am reminded of a discussion on a train to Washington, DC, where I was talking to a lady employed by the Department of Energy. Her job was tracking the Exxon payments for the Valdez spill: literally, small fractions of a cent per payment over many millions of transactions. Cumulatively, they were substantial.

False precision masquerading as accuracy is neither.
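Stubbings’s check is straightforward to reproduce. Here is a sketch in Python; the file names are placeholders for whichever two vintages of the HadCRUT4 monthly series were downloaded, and the date/anomaly column layout is that of the published monthly time-series files.

```python
from collections import defaultdict

def load_monthly(path):
    """Map 'YYYY/MM' -> median anomaly from a HadCRUT4-style monthly file."""
    series = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                series[parts[0]] = float(parts[1])
    return series

old = load_monthly("hadcrut4_monthly_2014-07.txt")  # hypothetical file names
new = load_monthly("hadcrut4_monthly_2018-09.txt")

# Average the version-to-version adjustment by decade.
by_decade = defaultdict(list)
for date, v in new.items():
    if date in old:
        by_decade[date[:3] + "0s"].append(v - old[date])  # '1864/05' -> '1860s'

for decade in sorted(by_decade):
    diffs = by_decade[decade]
    print(decade, f"{sum(diffs) / len(diffs):+.4f}")
```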

October 15, 2018 3:18 pm

“Any actual errors identified will be dealt with in the next major update.”

It’s gonna get HOTTER !!!

October 15, 2018 3:39 pm

The question I would ask of the UK Met Office is this: having admitted that there are errors and that they will be corrected, are you then going to tell the IPCC that your calculations were wrong and that things are not as bad as you originally said?

MJE

Anthony Banton
Reply to  Michael
October 15, 2018 4:47 pm

Michael:
The questions I would ask you are:
Why should they need to do that when any errors would make no discernible (if that) difference to the trend?
How many temp data points do you think comprise the Hadcrut dataset?
How many millions?
How many did Mr McLean find, exactly, as a percentage of the total?

Venter
Reply to  Anthony Banton
October 15, 2018 6:47 pm

Oh yeah, we’ve seen that: every error and every adjustment by the high priests will have no discernible effect. Yet they continue making errors and adjustments all the time which drive the trend only one way: “It’s worse than before.” Give us all a break from this crap, Banton. McLean did work on his own to show the kind of crap you and your lot produce. And you squander millions of public money to produce this crap. In the private sector you’d have been shown the door on the spot, and probably indicted for false reporting if you’d been doing financial reporting this way. At the public-sector trough, you all dig your noses in with no fear of any such repercussions.

Michael Jankowski
Reply to  Anthony Banton
October 15, 2018 8:27 pm

“…Why should they need to do that when any errors would make no discernible (if that) difference to the trend?…”

Priceless.

Venter
Reply to  Michael Jankowski
October 15, 2018 10:51 pm

Priceless and shameless, isn’t it, Michael? These kinds of blokes are what populate the Met Office and call themselves “scientists”, a shame to genuine scientists. Errors like this, if found in clinical trial data or in financial data, would render the entire study and report invalid and suspect.

But these high priests of warming can do any kind of careless and shoddy work without any fear of repercussions, and have the gall to say “Move on, nothing to see here.” And they’ll have their water-carrying apologists like Mosher, Stokes and barry come and defend any crappy work they do.

Reply to  Michael Jankowski
October 16, 2018 5:11 am

You beat me to it Michael.

“Why should we worry that all our cars only had 3 wheels fitted when they came out of the factory last month? It’s not a problem, as they always had 4 fitted for the previous 100 years. It’s all about the trend, don’t you see? Anyway, we adjusted the data and found that from 1905 until 1920 the cars actually only had 2 wheels fitted, so the trend is increasing. Before you know it, we’ll have 5 wheels on each car!”

The data is junk and Stokes, Mosher, et al know it.

Scott Bennett
Reply to  Anthony Banton
October 16, 2018 12:53 am

==> Anthony Banton

Jeez you’re obtuse!

There were 75 findings, including systematic errors larger than the total trend! You’re like a dog with a bone; you can’t let go of that one issue (station file errors). Read the paper and you will see that you have no idea what you are talking about.

There were 25 major findings, none of which rely on this straw man you keep burning.

The findings of the report had major implications:

The HadCRUT4 global annual average temperature anomaly supposedly increased from -0.176°C in 1950 to 0.677°C in 2017, an increase of 0.753°C. When all of the errors discussed in this report are corrected and appropriate error margins somehow determined for other factors, that 0.753°C is likely to decrease and the error margins increase. The new error margins might remove the statistical certainty that any global warming whatsoever has occurred since 1950. Even if warming was found to have occurred but only at 50% of the trend indicated by the HadCRUT4’s current figures it might radically change public and political attitudes. – McLean, John D. 2017

Anthony Banton
Reply to  Scott Bennett
October 17, 2018 6:52 am

Scott Bennett:
What is obtuse, and what is “like a dog with a bone”, are the hounds on here answering faithfully to the dog-whistle, such that Middleton has already started a new article with the “now discredited HadCRUT” meme. It’s entered naysayer mythology. No, it is not discredited just because rabid confirmation bias makes it so in the minds of the denizens here.

Nick Stokes knows more about global temperature data and its usage than anyone here, and I defer to his comments on the two threads regarding the “Bombshell” that this is not: a few errors (75? heck) that were found in original national met files containing millions of data points, that have NOT been shown to have been input into HadCRUT … and even if they were, the effect would be negligible:

“OK. This is no BOMBSHELL. These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them. You can find such a file of data as used here. I can’t find a more recent one, but this will do. It shows, for example
1. Data from Apto Uto was not used after 1970. So the 1978 errors don’t appear.
2. Paltinis, Romania, isn’t on that list, but seems to have been a more recently added station.
3. I can’t find Golden Rock, either in older or current station listings.”

“‘Also, “we do QA later” doesn’t explain why obvious errors are still in the source data.’ Because it is source data. People here would be yelling at them if they changed it before posting. You take the data as found, and then figure out what it means.”

“WTF? ‘Left to others’? How can you get a PhD saying ‘I did the proofreading, but the calculations were too hard’? And if a PhD project can’t do it, who are those others?”

“It isn’t an enormous task at all. HADCRUT isn’t rocket science. You just write a program that emulates it, and then see what happens when you correct what you think is wrong with the data. I wrote a program years ago which I have run every month, before the major results come in (details and code here). I have done that for seven years. They are in good agreement. In particular the simplest of my methods, TempLS grid, gives results which are very close to HADCRUT. If I used 5° cells and hadsst3 instead of ERSST, I could make the agreement exact. I wouldn’t expect to get a PhD from doing that, let alone saying it was too hard.”

“In fact, HADCRUT do post the effect of every version change. Here is the page, dated 15 Sept 2016, quantifying the changes going from version 4.3 to v4.4. They are quite invisible on the graph, but a difference plot shows them generally less than 0.01°C.”

“That comes back to the issue “Why is it that skeptics always seem to be the ones…?”. Why is it that naysayers are always the ones complaining about how temperatures are calculated by scientists, but never doing a calculation themselves? It really isn’t hard. You don’t even need a PhD.”
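For what it’s worth, the core of the emulation Stokes describes really is short. A minimal sketch (station list invented, sea-surface component and uncertainty machinery ignored): bin station anomalies into 5° cells, average within each cell, then take a cos(latitude)-weighted mean over the occupied cells.

```python
import math
from collections import defaultdict

def global_mean(stations):
    """stations: iterable of (lat, lon, anomaly) tuples, anomalies in degC."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        cells[(int(lat // 5), int(lon // 5))].append(anom)
    wsum = wtot = 0.0
    for (ilat, _), anoms in cells.items():
        cell_lat = ilat * 5 + 2.5             # cell-centre latitude
        w = math.cos(math.radians(cell_lat))  # area weight shrinks poleward
        wsum += w * sum(anoms) / len(anoms)   # weighted cell mean
        wtot += w
    return wsum / wtot

print(global_mean([(51.5, -0.1, 0.42), (52.2, 0.1, 0.38), (-33.9, 151.2, 0.25)]))
```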

scross
Reply to  Anthony Banton
October 17, 2018 12:08 pm

Not so much 75 individual errors within the data itself, but rather (up to) 75 different kinds of error, as outlined in “Appendix 6 – Consolidated Findings” of his report. This is up from (up to) 26 different kinds of error outlined in “Chapter 8: Summary of Part 1” of his PhD thesis. I say “(up to)” because I’m leaving room open for the possibility that some of his claims may be incorrect or at least inconsequential, and that some of them may in fact refer directly to individual errors within the data.

As to “who are those others”, given that he came up with so many more findings after publishing his thesis, I’d say it looks like he decided to take on that burden all by himself.

Scott Bennett
Reply to  Anthony Banton
October 18, 2018 10:42 pm

==>Anthony Banton

Again, a lot of words to say precisely nothing! A tale told by an NPC bot, full of sound and fury, signifying nothing! You can’t read, or perhaps comprehension is your problem. You never address anything I’ve written; you just go straight into your program loop.

The dataset is good enough for the IPCC, but it’s not good enough for your mate Nick! Apparently you didn’t get his memo:

CRUTEM4 (and HADCRUT) are shown with uncertainties. By the time you get back to 1950, they are large (about 0.5°C). SH uncertainty is over 1°C. I personally don’t use HADCRUT back to 1850, and I’m sure many don’t. – Nick Stokes

Anyway, good luck with that puppet gig you’ve got going!

Clyde Spencer
Reply to  Anthony Banton
October 19, 2018 4:26 pm

Anthony Banton,

I’ll give the Devil his due and acknowledge that Stokes is quite familiar with global temperature records. However, your statement that “Nick Stokes knows more about global temp data and its usage than anyone here” is not supportable. With regulars like Roy Spencer, and today John McLean himself, and even Mosher and Anthony Watts, you are engaging in hyperbole.

Steven Mosher
October 16, 2018 3:19 am

“We previously acknowledged receipt of Dr John McLean’s 2016 report to us which dealt with the format of some ocean data files.

“We corrected the errors he then identified to us,” the Met Office spokesman said.

Read carefully. OCEAN FORMAT

The data errors, as I explained, are NOT in the final product.

The site he quoted doesn’t have enough data, and it has never been in any of the copies I have

EVER

Reply to  Steven Mosher
October 16, 2018 5:02 am

Your typing in CAPITALS.

I think Dr McLean’s put the wind up Mosher. Excellent!

Reply to  Andrew Wilkins
October 17, 2018 2:35 am

*you’re

John McLean
Reply to  Steven Mosher
October 19, 2018 4:33 am

I’ve replied to this above, where I describe the errors that I reported to the Hadley Centre and CRU in 2016.
I forgot to mention that a Bishop Hill blog posting in March 2016 has the details of the problems.

Steven Mosher
Reply to  John McLean
October 21, 2018 11:12 am

Did you remember that they apply a filter screening outliers beyond 5 sigma?

“Each grid-box value is the mean of all available station anomaly values, except that station outliers in excess of five standard deviations are omitted.”

Ron Broberg and I covered this back in 2010

As Ron wrote

“Out of that set, there are 425 records that are more than 5-SD off of the norm on the high-side and 346 records that are more than 5-SD off of the norm on the low-side. Only about 0.02% of the records. “
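The rule Mosher quotes is simple to state in code. A sketch (the variable names are mine; the one-sentence rule above is all the quote specifies): drop station anomalies beyond five standard deviations before averaging the grid box.

```python
def gridbox_mean(anomalies, station_sd, n_sd=5.0):
    """Mean of station anomalies, omitting values beyond n_sd standard deviations."""
    kept = [a for a in anomalies if abs(a) <= n_sd * station_sd]
    return sum(kept) / len(kept) if kept else None

# With a sane station SD (~0.6 degC) a freak value is screened out:
print(gridbox_mean([0.2, -0.1, 55.0, 0.4], station_sd=0.6))  # ~0.17
```

The catch, as the reply below notes, is that the screen is only as good as the standard deviation fed into it.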

Scott Bennett
Reply to  Steven Mosher
October 21, 2018 11:48 pm

==>Steven Mosher

Dr McLean has addressed this several times:

The problem in a nutshell is that the Hadley Centre and/or CRU fail to remove outliers from the data before they calculate the standard deviations. This can lead to ridiculous values, which, when multiplied by five to set the limits above and below the mean, become positively bizarre.

The metadata for Apto Uto contains the following line:

Standard deviations = 0.6 0.6 0.5 11.9 0.5 11.8 12.0 0.6 0.5 0.6 0.6 0.7

Five standard deviations for most of those months means no more than 3.5°C but in three of those months they are 59, 59.5 and 60 degrees. The long-term averages in those months are around 28°C and together that means the temperatures of 81.5, 83.4 and 83.4 are all less than five standard deviations from that mean.

Before calculating the standard deviations from a subset of the data any outliers should have been removed and the process repeated until all the data fell within limits, which probably should have only been the more common three standard deviations anyway. – John McLean 2018*

Here is the header from the 2018 station data file that I downloaded this month:

“Number= 800890
Name= APTO_OTU
Country= COLOMBIA
Lat= 7.0
Long= 74.7
Height= 630
Start year= 1947
End year= 1988
First Good year= 1947
Source ID= 79
Source file= Jones
Jones data to= 1988
Normals source= Data
Normals source start year= 1961
Normals source end year= 1988
Normals= 24.1 24.4 24.6 27.8 24.6 27.9 28.0 24.6 24.4 24.1 24.1 24.0
Standard deviations source= Data
Standard deviations source start year= 1947
Standard deviations source end year= 1988
Standard deviations= 0.6 0.6 0.5 11.9 0.5 11.8 12.0 0.6 0.5 0.6 0.6 0.7
Obs:…”

*http://joannenova.com.au/2018/10/hadley-excuse-implies-their-quality-control-might-filter-out-the-freak-outliers-not-so/#comment-2060139
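Plugging the Apto Uto numbers from that header into the 5-sigma rule shows the failure mode directly; the trimming loop at the end is one reading of the remedy McLean describes, not Hadley code.

```python
import statistics

def passes_5sd(value, mean, sd, n_sd=5.0):
    return abs(value - mean) <= n_sd * sd

# April: normal 27.8 degC, published SD 11.9 (computed without first
# removing outliers). The gate is 27.8 +/- 59.5, so the freak 83.4 passes:
print(passes_5sd(83.4, mean=27.8, sd=11.9))  # True -> not filtered
# With an uncontaminated SD (~0.6, like the other months) it is rejected:
print(passes_5sd(83.4, mean=27.8, sd=0.6))   # False

def trimmed_sd(values, n_sd=3.0):
    """Iteratively drop outliers and recompute the SD, as suggested above."""
    vals = list(values)
    while True:
        m, s = statistics.mean(vals), statistics.pstdev(vals)
        kept = [v for v in vals if abs(v - m) <= n_sd * s]
        if not kept or len(kept) == len(vals):
            return s
        vals = kept
```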

John McLean
Reply to  Scott Bennett
October 22, 2018 12:32 am

Mosher has strange faith that the Hadley Centre and CRU documentation he refers to is correct.
He needs to catch up with the 2012 documentation for both CRUTEM4 and HadCRUT4, and even then he needs to examine the data rather than just believe the documentation.

Anthony Banton
Reply to  Scott Bennett
October 22, 2018 5:03 am

From that link:

“I asked John to expand on what Hadley means. He replies that the quality control they do is very minimal, obviously inadequate, and these errors definitely survive the process and get into the HadCRUT4 dataset”

Handwaving denigration. Let’s have proof of this “obviously inadequate” QC, please.

Anthony Banton
Reply to  Scott Bennett
October 22, 2018 5:06 am

Which isn’t what has been produced so far.
Do what McLean should have done in order to gain a PhD worth its name … and crunch the numbers.

Caligula Jones
October 16, 2018 9:20 am

John Endicott, October 16, 2018 at 6:30 am, replying to “The question of the moment is what will happen if we burn a whole lot of carbon”:

“and the answer is pretty much the same as any other time there was a whole lot of carbon in the atmosphere. It’s not that difficult to understand that there is nothing unprecedented about our current PPM of CO2 in the atmosphere,”

True…but….

My problem when I point this out to alarmists is that they DO have it right about humans causing issues with nature, but it’s not necessarily about climate change.

It’s about land use.

If species A “remembers” (somehow) that it’s supposed to head north when it gets hot, and there are now highways and quite a few large honking cities in the way… might be an issue.

If, when they get there, they try to eat something that was wiped out 10,000 years ago… might be an issue.

Or not, depending on whether you get the sniffles over evolution.

John Endicott
Reply to  Caligula Jones
October 16, 2018 11:43 am

I agree that land use is a much bigger environmental issue than our small contribution to a trace gas in the atmosphere (that, frankly, is a net benefit to life on Earth).

October 18, 2018 12:44 pm

The easiest person to fool is yourself.


stock
October 24, 2018 3:57 am

https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html

Why is the Sept 2018 HadCRUT not yet released?!

stock
Reply to  stock
October 24, 2018 4:00 am

I sent an email to them to ask:
https://www.metoffice.gov.uk/about-us/contact