WUWT readers surely recall this story: BOMBSHELL: audit of global warming data finds it riddled with errors
While not issuing a press release, the scientists have responded to press inquiries.
Britain’s Met Office welcomes audit by Australian researcher about HadCRUT errors
Graham Lloyd, The Australian
Britain’s Met Office has welcomed an audit from Australian researcher John McLean that claims to have identified serious errors in its HadCRUT global temperature record.
“Any actual errors identified will be dealt with in the next major update.”
The Met Office said automated quality checks were performed on the ocean data, and monthly updates to the land data were subjected to a computer-assisted manual quality control process.
“The HadCRUT dataset includes comprehensive uncertainty estimates in its estimates of global temperature,” the Met Office spokesman said.
“We previously acknowledged receipt of Dr John McLean’s 2016 report to us which dealt with the format of some ocean data files.
“We corrected the errors he then identified to us,” the Met Office spokesman said.
I’m sure that crap data apologists Mosher and Stokes will be along to tell us why this isn’t significant, and why HadCRUT is just fine, and we shouldn’t give any attention to these errors. /sarc
Jo Nova adds:
Without specifically admitting he has found serious errors, they acknowledge his previous notifications were useful in 2016, and promise “errors will be fixed in the next update.” That’s nice to know, but begs the question of why a PhD student working from home can find mistakes that the £226 million institute with 2,100 employees could not. Significantly, they do not disagree with any of his claims.
Most significantly, they don’t even mention the killer issue of the adjustments for site moves — the cumulative cooling of the oldest records to compensate for buildings that probably weren’t built there ’til decades later.
More on her take here.
Jo makes a good point. Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?
Perhaps it is because they simply don’t care, and curiosity takes a back seat to money. Like politicians looking to the next election, Climate Inc. has become so dependent on the money train that its main concern is the next grant application.
Eisenhower had it right. We’ve all heard about Dwight D. Eisenhower’s farewell address, warning us about the “military-industrial complex”. It’s practically iconic. But what you probably didn’t know is that the same farewell speech contained a second warning, one that hints at our current situation with science. He said to the Nation then:
Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades.
In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.
Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.
The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.
Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.
People in scientific (and other) disciplines these days tend to avoid repetitive tasks requiring constant concentration.
I’m no hero; I’m as lazy as the next person. We have computers to do all the heavy lifting of calculations, but getting historical data into digital databases is still manual work. It’s boring and repetitive, and anyone who’s done it can surely attest to how easy it is to get distracted, lose your train of thought, and make errors.
Once the data is in a digital database, we can sit back and play with it all we want. Sometimes you can spot egregious errors in the data, but smaller errors usually don’t jump out at you. So who’s going to take the trouble to look for data-entry errors? Or worse, errors in writing down numbers in a century-old, hand-written record? Or confusion between hand-written 1s and 7s written by continental Europeans and read by anglophones (or vice versa)? Sceptics, of course, because they’re hoping to find errors.
It would probably cost the CRU a couple of days’ operating costs to hire a few AI people to come in and design auditing software. Train the machines with known errors and let them rip through the databases. They could crow about how they’re improving the quality of the data. They do anyway, but this could give them some substance to crow about.
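To make that concrete, here is a minimal sketch in Python of the kind of automated audit meant above. Everything in it is an assumption for illustration (the file name, the column layout, the thresholds); it is not the Hadley Centre’s or CRU’s actual format or QC procedure.

import csv
from collections import defaultdict

PLAUSIBLE = (-90.0, 60.0)   # broad physical limits for surface air temp, deg C
RUN_LIMIT = 6               # flag six or more identical consecutive values

def audit(path):
    # Group records by station so runs of repeated values can be detected.
    by_station = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_station[row["station_id"]].append(
                (int(row["year"]), int(row["month"]), float(row["temp_c"])))
    flags = []
    for sid, recs in by_station.items():
        recs.sort()
        run, prev = 1, None
        for year, month, temp in recs:
            if not PLAUSIBLE[0] <= temp <= PLAUSIBLE[1]:
                flags.append((sid, year, month, temp, "outside physical limits"))
            run = run + 1 if temp == prev else 1
            if run == RUN_LIMIT:
                flags.append((sid, year, month, temp, "suspicious repeated value"))
            prev = temp
    return flags

for flag in audit("station_monthly.csv"):
    print(*flag)

Even checks this crude would flag physically impossible values and long runs of repeated data, two of the error classes any such audit would start with.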
So the Southern Hemisphere was, in effect, barely sampled at all.
I would say it would be more honest to say that there was no information for a global dataset in the 1850s and that remains the case until there are sufficient sensors.
Is that difficult?
Well yes it is.
How many sensors, and what distribution, are needed to have an accuracy for global temperatures of +/- 5 °C? Mathematically, you can probably create an average with a precision of several decimal places (which is in fact what is done), but the accuracy is still unlikely to be even as good as +/- 5 °C, however much it is assisted by some applied guesswork.
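A toy calculation (in Python, with invented numbers, not HadCRUT’s actual error budget) illustrates the distinction between precision and accuracy:

import random

random.seed(1)
true_value = 15.0
bias = 0.8          # a shared systematic error, e.g. a badly sited thermometer

# 10,000 readings recorded to one decimal place, as old records often were
readings = [round(true_value + bias + random.gauss(0, 0.5), 1)
            for _ in range(10_000)]

mean = sum(readings) / len(readings)
print(f"mean  = {mean:.6f}")                 # many digits: high precision
print(f"error = {mean - true_value:+.3f}")   # still about +0.8 deg C: inaccurate

The mean prints with six decimal places of precision yet remains about 0.8 °C wrong: averaging beats down random noise but passes any shared bias straight through.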
“Jo makes a good point. Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?
“Perhaps it is because they simply don’t care, and curiosity takes a back seat to money.”
Perhaps there is another reason too. I reckon mainstream climate science sees itself as the defender of ‘scientific orthodoxy’ against invading moronic masses, barbarians who want to desecrate the sacred ‘truth’ they firmly believe in. Thus for them it is not a purely scientific dispute; it is a quasi-religious one. Therefore, turning a blind eye to some errors (if they’re in favour of warming trends) or accepting some questionable adjustments are all acceptable practices if they serve this purpose. In this mentality the end justifies the means.
Or perhaps one approach sees a paycheck and grants and another does not?
The Met Office has done very well out of AGW, without actually getting better at its day job of forecasting weather. It certainly has not seen the cutbacks others have.
Financial gratification is a big factor in the game. Unfortunately. Still, financial greed as a common denominator may be too simplistic. Some of them really believe that they’re some kind of missionaries bringing scientific enlightenment to the moronic masses who suffer under religious superstition and are reluctant to accept the teachings of academia. The myth of Prometheus is still alive in many scientific circles.
I love finding really well-written articles and this is one of them. Thanks for sharing.
Post normal data inventions are particularly annoying!
The Hadley Centre’s response to John McLean’s corrections previously included the following comment and caveat:
Now the Hadley Centre responds with:
One notes that there is no caveat regarding temperature anomalies, yet.
Not that there is any easy, accurate method to compare a dataset full of errors with a revised dataset with corrected errors; one gets suspicious that the anomalies will, once again, not change. Even for datasets riddled with errors.
JoNova’s comments are on target!
Do the listed errors include calculating anomalies to more decimal places than the original temperatures were measured?
I note in the comment thread that Randy Stubbings points out:
I am reminded of a discussion on a train to Washington DC where I was talking to a lady employed by the Department of Energy.
Her job was tracking the Exxon payments for the Valdez spill. Literally, small fractions of a cent per payment over many millions of transactions. Cumulatively, they were substantial.
False precision masquerading as accuracy is neither.
“Any actual errors identified will be dealt with in the next major update.”
It’s gonna get HOTTER !!!
The question I would ask of the UK Met Office is this. Having admitted that there are errors and that they will be corrected, are you then going to tell the IPCC that your calculations were wrong and that things are not as bad as you originally said?
MJE
Michael:
The question(s) I would ask you is:
Why should they need to do that when any errors would make no discernible (if that) difference to the trend?
How many temp data points do you think comprise the Hadcrut dataset?
How many millions?
How many did Mr McLean find, exactly, as a percentage of the total?
Oh yeah, we’ve seen that: every error and every adjustment by the high priests will have no discernible effect. Yet they continue making errors and adjustments all the time, which drive the trend only one way: “It’s worse than before.” Give us all a break from this crap, Banton. McLean did the work on his own to show the kind of crap you and your lot produce. And you squander millions of public money to produce this crap. In the private sector you’d have been shown the door on the spot, and probably indicted for false reporting if you’d been doing financial reporting this way. At the public sector trough, you all dig your noses in with no fear of any such repercussions.
“…Why should they need to do that when any errors would make no discernible (if that) difference to the trend?…”
Priceless.
Priceless and shameless, isn’t it, Michael? These kinds of blokes are what populate the Met Office and call themselves “scientists”, a shame to genuine scientists. Errors like this, if found in clinical trial data or in financial data, would render the entire study and report invalid and suspect.
But these high priests of warming can do any kind of careless and shoddy work without any fear of repercussions, and have the gall to say “Move on, nothing to see here.” And they’ll have their water-carrying apologists like Mosher, Stokes and barry come along to defend any crappy work they do.
You beat me to it Michael.
“Why should we worry that all our cars only had 3 wheels fitted when they came out of the factory last month? It’s not a problem, as they always had 4 fitted for the previous 100 years. It’s all about the trend, don’t you see? Anyway, we adjusted the data and found that from 1905 until 1920 the cars actually only had 2 wheels fitted, so the trend is increasing. Before you know it, we’ll have 5 wheels on each car!”
The data is junk and Stokes, Mosher, et al know it.
==> Anthony Banton
Jeez you’re obtuse!
There were 75 findings, including systematic errors larger than the total trend! You’re like a dog on a bone; you can’t let go of that one issue (station file errors). Read the paper and you will see that you have no idea what you are talking about.
There were 25 major findings, none of which rely on this straw man you keep burning.
The findings of the report had major implications:
Scott Bennet:
What is obtuse, and who are “like a dog with a bone”, are the hounds on here answering faithfully to the dog-whistle.
Such that Middleton has already started a new article with the “now discredited Hadcrut” meme.
It’s entered Naysayer mythology.
No, it is not discredited just because rabid confirmation bias makes it so in the minds of the denizens here.
Nick Stokes knows more about global temp data and its usage than anyone here, and I defer to his comments on the two threads regarding the “Bombshell” that this is not: a few errors (75? heck) that were found in original national met files containing millions of data points, which have NOT been shown to have been input into Hadcrut … and even if they were, the effect would be negligible.
“OK. This is no BOMBSHELL. These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them. You can find such a file of data as used here. I can’t find a more recent one, but this will do. It shows, for example
1. Data from Apto Uto was not used after 1970. So the 1978 errors don’t appear.
2. Paltinis, Romania, isn’t on that list, but seems to have been a more recently added station.
3. I can’t find Golden Rock, either in older or current station listings.”
“Also, ‘we do QA later’ doesn’t explain why obvious errors are still in the source data.”
“Because it is source data. People here would be yelling at them if they changed it before posting. You take the data as found, and then figure out what it means.”
“WTF? ‘Left to others’? How can you get a PhD saying that I did the proofreading, but the calculations were too hard? And if a PhD project can’t do it, who are those others?”
“It isn’t an enormous task at all. HADCRUT isn’t rocket science. You just write a program that emulates it, and then see what happens when you correct what you think is wrong with the data. I wrote a program years ago which I have run every month, before the major results come in (details and code here). I have done that for seven years. They are in good agreement. In particular the simplest of my methods, TempLS grid, gives results which are very close to HADCRUT. If I used 5° cells and hadsst3 instead of ERSST, I could make the agreement exact. I wouldn’t expect to get a PhD from doing that, let alone saying it was too hard.”
“In fact, HADCRUT do post the effect of every version change. Here is the page, dated 15 Sept 2016, quantifying the changes going from version 4.3 to version 4.4. They are quite invisible on the graph, but a difference plot shows them generally less than 0.01°C.”
“That comes back to the issue “Why is it that skeptics always seem to be the ones…?”. Why is it that naysayers are always the ones complaining about how temperatures are calculated by scientists, but never doing a calculation themselves? It really isn’t hard. You don’t even need a PhD.”
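For readers wondering what “writing a program that emulates it” involves, the core of such a calculation is genuinely short. The sketch below is a toy in Python, not TempLS or the Met Office’s code: it bins one month of station anomalies into 5° cells, averages within each cell, and takes a cosine-latitude weighted mean over the occupied cells.

import math
from collections import defaultdict

def global_mean_anomaly(stations, cell=5.0):
    """stations: (lat, lon, anomaly) tuples for one month."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        cells[(math.floor(lat / cell), math.floor(lon / cell))].append(anom)
    num = den = 0.0
    for (ilat, _), anoms in cells.items():
        lat_mid = (ilat + 0.5) * cell               # cell-centre latitude
        weight = math.cos(math.radians(lat_mid))    # area weight
        num += weight * (sum(anoms) / len(anoms))   # weighted cell mean
        den += weight
    return num / den

# three invented stations for one month:
print(global_mean_anomaly([(51.5, -0.1, 0.42),
                           (-33.9, 151.2, 0.10),
                           (64.1, -21.9, -0.05)]))

A real reconstruction adds reading the station files, computing anomalies against each station’s normals, and blending in sea-surface data, but none of that changes the basic shape of the program.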
Not so much 75 individual errors within the data itself, but rather (up to) 75 different kinds of error, as outlined in “Appendix 6 – Consolidated Findings” of his report. This is up from (up to) 26 different kinds of error outlined in his PhD thesis, in “Chapter 8: Summary of Part 1” of that thesis. I say “(up to)” because I’m leaving room open for the possibility that some of his claims may be incorrect or at least inconsequential, and that some of them may in fact refer directly to individual errors within the data.
As to “who are those others”, given that he came up with so many more findings after publishing his thesis, I’d say it looks like he decided to take on that burden all by himself.
==>Anthony Banton
Again, a lot of words to say precisely nothing! A tale told by an NPC bot, full of sound and fury, signifying nothing! You can’t read, or perhaps comprehension is your problem. You never address anything I’ve written; you just go straight into your program loop.
The dataset is good enough for the IPCC but it’s not good enough for your mate Nick! Apparently you didn’t get his memo:
Anyway, good luck with that puppet gig you’ve got going!
Anthony Banton,
I’ll give the Devil his due and acknowledge that Stokes is quite familiar with global temperature records. However, your statement, “Nick Stokes knows more about global temp data and its usage than anyone here…,” is not supportable. With regulars like Roy Spencer, and today John McLean himself, and even Mosher and Anthony Watts, you are engaging in hyperbole.
“We previously acknowledged receipt of Dr John McLean’s 2016 report to us which dealt with the format of some ocean data files.
“We corrected the errors he then identified to us,” the Met Office spokesman said.
Read carefully. OCEAN FORMAT
The data errors, as I explained, are NOT in the final product.
The site he quoted doesn’t have enough data and it has never been in any of the copies I have
EVER
Your typing in CAPITALS.
I think Dr McLean’s put the wind up Mosher. Excellent!
*you’re
I’ve replied to this above where I describe the errors that I reported to the Hadley Centre and CRU in 2016.
I forgot to mention that a Bishop Hill blog posting in March 2016 has the details of the problems.
Did you remember that they apply a filter screening outliers at 5 sigma?
“Each grid-box value is the mean of all available station anomaly values, except that station outliers in excess of five standard deviations are omitted.”
Ron Broberg and I covered this back in 2010.
As Ron wrote:
“Out of that set, there are 425 records that are more than 5-SD off of the norm on the high side and 346 records that are more than 5-SD off of the norm on the low side. Only about 0.02% of the records.”
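Mechanically, the 5-sigma screen described above is trivial; a toy Python version (illustrative only, not the actual CRUTEM code) looks like this:

def screen(anomalies, sd, n_sigma=5.0):
    """Keep station anomalies within n_sigma standard deviations of zero."""
    return [a for a in anomalies if abs(a) <= n_sigma * sd]

# with a sane SD of 0.6 deg C, a 7.5 deg C anomaly is rejected (limit is 3.0):
print(screen([0.3, -1.2, 7.5, 0.9], sd=0.6))   # -> [0.3, -1.2, 0.9]

The catch, as the reply below shows, is that such a screen is only as good as the standard deviations it is fed.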
==>Steven Mosher
Dr Mclean has addressed this several times:
Here is the header from the 2018 station data file that I downloaded this month:
“Number= 800890
Name= APTO_OTU
Country= COLOMBIA
Lat= 7.0
Long= 74.7
Height= 630
Start year= 1947
End year= 1988
First Good year= 1947
Source ID= 79
Source file= Jones
Jones data to= 1988
Normals source= Data
Normals source start year= 1961
Normals source end year= 1988
Normals= 24.1 24.4 24.6 27.8 24.6 27.9 28.0 24.6 24.4 24.1 24.1 24.0
Standard deviations source= Data
Standard deviations source start year= 1947
Standard deviations source end year= 1988
Standard deviations= 0.6 0.6 0.5 11.9 0.5 11.8 12.0 0.6 0.5 0.6 0.6 0.7
Obs:…”
http://joannenova.com.au/2018/10/hadley-excuse-implies-their-quality-control-might-filter-out-the-freak-outliers-not-so/#comment-2060139
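A back-of-envelope check shows why those header values matter, assuming the screen applies these per-month standard deviations. With the apparently corrupted July figures above (normal 28.0, SD 12.0), a 5-sigma window accepts almost anything:

normal, sd = 28.0, 12.0                  # APTO_OTU July values from the header
lo, hi = normal - 5 * sd, normal + 5 * sd
print(f"accepted July range: {lo:.1f} to {hi:.1f} deg C")   # -32.0 to 88.0
sane_sd = 0.6                            # typical of the station's other months
print(f"with SD {sane_sd}: {normal - 5 * sane_sd:.1f} to {normal + 5 * sane_sd:.1f}")

In other words, inflated standard deviations widen the acceptance window so far that the very errors which inflated them can never be filtered out.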
Mosher has strange faith that the Hadley Centre and CRU documentation he refers to is correct.
He needs to catch up with the 2012 documentation for both CRUTEM4 and HadCRUT4, and even then he needs to examine the data rather than just believe the documentation.
From that link…
“I asked John to expand on what Hadley means. He replies that the quality control they do is very minimal, obviously inadequate, and these errors definitely survive the process and get into the HadCRUT4 dataset”
Handwaving denigration.
Let’s have proof of this “obviously inadequate” QC, please.
Which isn’t what has been produced so far.
Do what McLean should have done in order to gain a PhD worth its name … and crunch the numbers.
John Endicott October 16, 2018 at 6:30 am
“The question of the moment is what will happen if we burn a whole lot of carbon”
“and the answer is pretty much the same as any other time there was a whole lot of carbon in the atmosphere. It’s not that difficult to understand that there is nothing unprecedented about our current PPM of CO2 in the atmosphere,”
True…but….
My problem when I point this out to alarmists is that they DO have it right about humans causing issues with nature, but it’s not necessarily about climate change.
It’s about land use.
If species A “remembers” (somehow) that it’s supposed to head north when it gets hot, and there are now highways and quite a few large honking cities in the way… might be an issue.
If when they get there they try to eat something that was wiped out 10,000 years ago…might be an issue.
Or not, depending on if you get the sniffles over evolution.
I agree that land use is a much bigger environmental issue than our small contribution to a trace gas in the atmosphere (that, frankly, is a net benefit to life on Earth).
The easiest person to fool is yourself.
https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html
Why is the Sept 2018 HADCRUT not yet released!!!!!!!!!!!!
I sent an email to them to ask:
https://www.metoffice.gov.uk/about-us/contact