Met Office responds to HadCRUT global temperature audit by McLean

WUWT readers surely recall this story: “BOMBSHELL: audit of global warming data finds it riddled with errors”

While not issuing a press release, the Met Office has responded to press inquiries.


Britain’s Met Office welcomes audit by Australian researcher about HadCRUT errors

Graham Lloyd, The Australian

Britain’s Met Office has welcomed an audit from Australian researcher John McLean that claims to have identified serious errors in its HadCRUT global temperature record.

“Any actual errors identified will be dealt with in the next major update,” a Met Office spokesman said.

The Met Office said automated quality checks were performed on the ocean data, and monthly updates to the land data were subjected to a computer-assisted manual quality control process.

“The HadCRUT dataset includes comprehensive uncertainty estimates in its estimates of global temperature,” the Met Office spokesman said.

“We previously acknowledged receipt of Dr John McLean’s 2016 report to us which dealt with the format of some ocean data files.

“We corrected the errors he then identified to us,” the Met Office spokesman said.


I’m sure that crap data apologists Mosher and Stokes will be along to tell us why this isn’t significant, and why HadCRUT is just fine, and we shouldn’t give any attention to these errors. /sarc

Jo Nova adds:

Without specifically admitting he has found serious errors, they acknowledge his previous notifications were useful in 2016, and promise “errors will be fixed in the next update.” That’s nice to know, but begs the question of why a PhD student working from home can find mistakes that the £226 million institute with 2,100 employees could not. Significantly, they do not disagree with any of his claims.

Most significantly, they don’t even mention the killer issue of the adjustments for site moves — the cumulative cooling of the oldest records to compensate for buildings that probably weren’t built there ’til decades later.

More on her take here.

Jo makes a good point. Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?

Perhaps it is because they simply don’t care, and curiosity takes a back seat to money. Like politicians looking to the next election, Climate Inc. has become so dependent on the money train, their main concern is the next grant application.

Eisenhower had it right. We’ve all heard about Dwight D. Eisenhower’s farewell address, warning us about the “military-industrial complex”. It’s practically iconic. But what you probably didn’t know is that the same farewell speech contained a second warning, one that hints at our current situation with science. He said to the Nation then:

Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades.

In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.

Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.

The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.

159 Comments
Ian W
October 15, 2018 6:01 am

I will be interested to see how the Met Office ‘corrects’ the total lack of observations in the southern hemisphere in the early record and their sparsity in the northern hemisphere. Perhaps an admission that their early record is insufficient to use as a ‘global’ temperature set. I would have thought that there is a minimum coverage requirement before any surface observation set can be called ‘global’ and with the reduction in observation stations we are perilously close to that now.

Bill Powers
Reply to  Ian W
October 15, 2018 6:13 am

In science data rules but in politics the data is what they say it is. We have been immersed in global politics with this meme since the UN established the IPCC explicitly to find an adverse effect between man and his environment (or should I say woman and her environment; after all, we wouldn’t want to be POLITICALLY INCORRECT).

Roger Knights
Reply to  Bill Powers
October 15, 2018 6:46 am

“In science data rules but in politics the data is what they say it is.”

If A is greater than B, and B is greater than C, then A is greater than C, except where prohibited by law.
—Robert Anton Wilson

RockyRoad
Reply to  Roger Knights
October 15, 2018 1:36 pm

…except where prohibited by politics, which is even worse!

ShanghaiDan
Reply to  Roger Knights
October 15, 2018 2:57 pm

Shaddup and be happy your chocolate ration has been increased to 20 grams per week, instead of the old 30 grams per week…

Streetcred
Reply to  ShanghaiDan
October 16, 2018 12:13 am

LOL … that’s a keeper !

Ill Tempered Klavier
Reply to  ShanghaiDan
October 16, 2018 6:14 pm

What?!?!?!? Ration chocolate!?!?!?

Up against the wall (beep).

Chocoholics Unanimous meeting: Godiva’s at 5:00

knr
Reply to  Ian W
October 15, 2018 6:41 am

Models, that is always the answer when you’ve spent as much as they have on computing power.

David Cage
Reply to  knr
October 15, 2018 10:21 am

The real answer is probably less sinister: academic snobbery. Any computer model is only as good as the data it is founded on and tested against. The cost of computing means that corners are cut elsewhere, and one of the greatest cuts is in data collection, which as far as I can see is scarcely if at all mentioned in peer reviews.
I do know from personal experience that when a lecturer I knew criticised the methods and attention to detail in the data collection, he was told by someone who was a senior scientist at the time to go back to his grease monkeying and leave science to scientists. Now of course they would not express this sentiment, but questioning reveals nothing has changed.

Randy Stubbings
Reply to  Ian W
October 15, 2018 7:15 am

I was working with HadCRUT4 data downloaded in September of 2018 and I compared it with the version I downloaded in July of 2014. Subtracting the 2014 values from the 2018 values shows a set of adjustments to the monthly median temperature anomalies ranging from about -0.06 degrees to about +0.06 degrees. The decade with the largest cooling adjustment was the 1860s (the average adjustment was -0.0198) and the two decades with the largest warming adjustments were the 2000s (+0.0059) and the partial decade from 2010 to June 2014 (+0.0203).

The average temperature anomaly from 1850 to 1899 was -0.313 as reported in June 2014 and -0.315 as reported in September 2018. The difference between the upper and lower confidence intervals on the reported temperature anomaly averages 0.6 degrees for the period, so reporting to three decimal places seems a bit silly to me. Isn’t 1850 to 1900 supposed to establish the benchmark temperature against which we measure the 1.5 degrees that was formerly known as 2.0 degrees?

John Tillman
Reply to  Randy Stubbings
October 15, 2018 8:42 pm

Baselines are typically 30 years. We’re presently working with 1981-2010, but will move to 1991-2020 soon.

dennisambler
Reply to  John Tillman
October 17, 2018 3:55 am

In 2010, D’Aleo and Smith reported that in the period of the 1960s to the 1980s the number of stations used for calculating global surface temperatures was about 6,000, but it dropped rapidly to about 1,500 by 1990. Further, large gaps began appearing in some of the reported data.

In the Climategate e-mails, Phil Jones wanted to stay with 1961-90, partly because it was one of the coldest periods in the 20th century and partly because of the major loss of stations after that time.

Nick Stokes
Reply to  Ian W
October 15, 2018 8:24 am

” Perhaps an admission that their early record is insufficient to use as a ‘global’ temperature set. “
If you go back in time, there will always be a point where you decide that the record is insufficient. For GISS and NOAA it is 1880. HadCRUT decided to post numbers back to 1850. The quality is always fading towards the end chosen, and you can always say it should have been a few years earlier or later.

But HADCRUT post the error estimates, increasing as you go back, and you can make your own decision about whether to use the calculation.

Latitude
Reply to  Nick Stokes
October 15, 2018 9:15 am

You do realize those “error estimates” should increase in time, right?

Latitude
Reply to  Latitude
October 15, 2018 9:18 am

………exponentially

Pat Frank
Reply to  Nick Stokes
October 15, 2018 9:33 am

“But HADCRUT post the error estimates…” Fantasy land.

Ian W
Reply to  Nick Stokes
October 15, 2018 12:31 pm

So the Southern Hemisphere was:

Temperatures for the entire Southern Hemisphere in 1850 and for the next three years are calculated from just one site in Indonesia and some random ships.

I would say it would be more honest to say that there was no information for a global dataset in the 1850s and that remains the case until there are sufficient sensors.

Is that difficult?

Well yes it is.

How many sensors and what distribution is needed to have an accuracy for global temperatures of +/- 5 degC? Mathematically, you can probably create an average with a precision of several decimal places (which is in fact what is done), but the accuracy is still unlikely to be as good as +/- 5 degC at best, even assisted by some applied guesswork.

Mickey Reno
Reply to  Nick Stokes
October 15, 2018 5:18 pm

Ideally a climate record would go back through 2 or 3 full cycles of glaciation and their associated interglacial periods. Anything less than this is insufficient to make meaningful determinations of what is and is not within the range of possible “natural variability.” Basing anything on a record made during the instrumental era is totally insufficient, and any sample of that short a duration would be non-representative for any reasoned statistical use to decide long term climate.

The long ice core records show CO2 not to be the driving influence of end-of-glaciation warming, but rather show CO2 increases to be an effect of interglacial warming (probably due to ocean warming leading to CO2 outgassing). Which suggests that whatever melted the ice was natural, and that CO2 played no vital role. Go ahead Nick, convince me that a buggy, tendentious, infinitesimally short HadCRUT-based record can teach us anything meaningful about natural climate variability on these time scales. This will be a very difficult sales job for you. But if you don’t at least try, forever resign yourself to the role of poseur within the set of people who claim to understand the Earth’s climate.

Nick Stokes
Reply to  Mickey Reno
October 15, 2018 8:29 pm

“The long ice core records show CO2 not to be the driving influence of end of glaciation warming”
Who said it was? Of course it wasn’t. No-one was emitting CO2 then.

Latimer Alder (@latimeralder)
Reply to  Nick Stokes
October 15, 2018 11:14 pm

Then clearly the climate changed without CO2 changing. There are plenty of ‘climate drivers’ other than CO2.

Hence claims that CO2 is THE climate control knob are obviously false.

This is not difficult stuff.

Nick Stokes
Reply to  Nick Stokes
October 16, 2018 12:01 am

“There are plenty of ‘climate drivers’ other than CO2.”
Yes, there are. There are plenty of things that will kill you other than sharks. That is no help if a shark is heading your way.

The question of the moment is what will happen if we burn a whole lot of carbon. We’re doing that. It hasn’t happened on such a scale before.

bit chilly
Reply to  Nick Stokes
October 16, 2018 3:23 am

good to see nick acknowledge what will happen when we burn a whole load of carbon is still a question and not settled science after all.

John Endicott
Reply to  Nick Stokes
October 16, 2018 6:30 am

The question of the moment is what will happen if we burn a whole lot of carbon

and the answer is pretty much the same as any other time there was a whole lot of carbon in the atmosphere. It’s not that difficult to understand that there is nothing unprecedented about our current ppm of CO2 in the atmosphere; it’s been higher at previous points in the past, and it’s been lower as well. And the times it’s been lower have not been too good for life on the planet, whereas times when it has been higher have been prosperous times for life on the planet. Rising CO2 is not the problem you imagine it to be.

Anthony Banton
Reply to  Nick Stokes
October 16, 2018 9:23 am

“Hence claims that CO2 is THE climate control knob are obviously false.

This is not difficult stuff.”

What’s “not difficult” (unless because of something other than science comprehension)
is that CO2 is a GHG and whether it comes before or after another warming driver it will reduce the Earth’s ability to cool.

Why is it that that is so difficult for some denizens to understand?

The ice-cores record the way the Earth has responded to changes in absorbed solar energy.
Primarily via our eccentric orbit around the Sun and axial tilt (Milankovitch).
In the past (accepting massive volcanic out-gassing in the PETM FI) CO2 has come along after. That is the way the carbon cycle behaves.
Now it’s coming first ….. because humans have increased atmospheric CO2 content by greater than 40% from the in-balance carbon cycle level of ~280ppm pre-industrial.
IE: Not the natural carbon cycle.

AGW is not Science
Reply to  Nick Stokes
October 16, 2018 11:08 am

@Anthony Banton

“What’s “not difficult” (unless because of something other than science comprehension)
is that CO2 is a GHG and whether it comes before or after another warming driver it will reduce the Earth’s ability to cool.”

If that were of any consequence, then the ice core reconstructions would show the alleged CO2 “contribution” to warming. They do not. They show CO2 FOLLOWING temperature, up AND down, with the same ~800 year time lag. And whatever claim you want to make about CO2 “contributing” to warming once the time lag has been made up and BOTH temperature and CO2 are rising, it isn’t supported by the data. Because if it was, then once what was (excuse me) REALLY causing the temperature rise stopped, what we SHOULD see, but DO NOT SEE, as long as CO2 levels continue to rise (i.e., for another 800 years) is temperatures CONTINUING TO RISE, at a lower rate, that lower rate being CO2’s “contribution” to the warming. Instead, what we see is temperatures FALLING, WHILE CO2 LEVELS CONTINUE TO RISE, which tells us that CO2’s “contribution” is essentially ZERO.

Anthony Banton
Reply to  Nick Stokes
October 16, 2018 1:46 pm

“If that were of any consequence, then the ice core reconstructions would show the alleged CO2 “contribution” to warming. They do not. They show CO2 FOLLOWING temperature, up AND down,”

Of course they do!!!!

Still not comprehending, sigh!

The ice-core record is of the natural carbon cycle’s response to temperature change.
OK?
CO2 content within the CC is by definition in balance with the planet’s sinks/sources (at any small finite time).

(say) The temp rises due to an increase of insolation over higher northern latitudes (greatest land-mass area and high albedo/melt change potential)

Consequently that temp rise reduces the ocean’s ability to absorb CO2 – leading to an imbalance in the CC that leads to a slow – I mean very, very slow, increase in atmospheric CO2 content ….

“The fastest large natural increase measured in older ice cores is around 20 ppmv (parts per million by volume) in 1,000 years (a rate seen during Earth’s emergence from the last ice age around 12,000 years ago). CO2 concentration increased by the same amount, 20 ppmv, in the last 10 years! ”

https://www.bas.ac.uk/data/our-data/publication/ice-cores-and-climate-change/

That creates an increased GHE via there being more GHGs in the atmosphere (CO2,CH4,H2O).
Leading to a +ve feedback and a further push on rising temps.
The same happens in reverse as CO2 follows the falling temp as northern insolation reduces…..
Global temps fall.
The oceans are able to absorb more CO2.
The atmos can hold less WV.
Reduced atmos GHG’s.
Reduced GHE.
+feedback of CO2 pushing falling GMSTs.
CO2 LAGS temp as seen via the natural CC in the ice-core record.

That is as it should be.

What’s different now?
No solar forcing (as experienced over the millennia of orbital change).
However – increasing CO2 (100x faster than the natural ice-core recorded increase).
AGAIN – it matters not when CO2 enters the atmosphere.
It’s a GHG and MUST reduce the Earth’s ability to cool if it increases.
CO2 now leads and does not follow because it’s NOT a natural CC response as (again) we see in the ice-cores.
Instead of a feedback to temperature change it is now the driver of temperature on its own account.
Sorry but I can’t make it simpler than that.

Ill Tempered Klavier
Reply to  Nick Stokes
October 16, 2018 6:35 pm

AB:
What a pile of silliness and horse feathers. You really don’t have a clue what’s actually going on, do you?

Mickey Reno
Reply to  Nick Stokes
October 17, 2018 6:46 pm

I know he’s not a scientist, but propagandist and politician Al Gore (ManBearPig) said it in his ridiculous film. The UK government endorsed this view when it forced all the poor bastards attending public school to watch his drivel in their classrooms. Real Climate is on record as saying Al is not wrong. Who do you need to hear from before you admit that this is your team’s official position? Gavin and Jeff Severinghaus do some clever prestidigitation and finessing on how causes can happen after effects, but they come across as desperate and stupid.

Nick, are you agreeing that the pointer of falsification points away from CO2 as a cause of glacial to interglacial warming? Do you disagree with the RealClimate excuses?

Eamon Butler
Reply to  Nick Stokes
October 18, 2018 1:52 am

At what point do error estimates become so large, they render the whole exercise meaningless?

MarkW
Reply to  Ian W
October 15, 2018 8:51 am

The modern data still suffers from grossly insufficient sampling.

Curious George
Reply to  Ian W
October 15, 2018 10:39 am

You just adjust missing data into existence. Next time, ask a really difficult question.

bit chilly
Reply to  Ian W
October 16, 2018 3:16 am

what is the betting once the “corrections” have taken place it will be worse than we thought ?

HotScot
October 15, 2018 6:04 am

“Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?”

It’s like everything else, Anthony. The ‘consensus alarmists’ see one side of the story: theirs. And whilst sceptics are subject to exactly the same propaganda as them, and we understand the arguments supporting climate alarmism, we have also taken the time to educate ourselves on the other side of the debate.

But of course we’re the idiots for taking an objective view of the subject, understanding both sides of the argument and reaching a reasonable conclusion.

HotScot
Reply to  HotScot
October 15, 2018 6:21 am

PS

Nor do we run round with our hair on fire screaming “the world’s going to be perfect in 50 years’ time”; all we’re saying is that, judging by the last 40 years (or couple of hundred years of shoddy weather records), there appears no reason for alarm, as the planet is in better nick now than it has been for many years. It’s more prosperous, there’s less poverty, more cures for disease, longer lives and, of course, it is far more peaceful than it has ever been in mankind’s history.

And I have to laugh when climate is blamed for the Syria crisis. Did the climate cause the American Revolution, the Napoleonic wars, WW1 or WW2, Vietnam or the Korean war? If so then it must be the force that has brought about a more peaceful world since then.

But of course while the claim about Syria is a reasonable conclusion to draw, the rest of my statement is just ridiculous………..

Newminster
Reply to  HotScot
October 15, 2018 7:52 am

Point of interest in there, HotScot — those “shoddy” records never needed to be anything better. Nor, by and large, do they now.

My “official” temperature at the moment (the thermometer on the north-facing wall that gives my weekly max/min readings) says 21.4. The one out in the open says 24.8. A 3.5° difference in less than 50 feet. But so what? And not to mention the 12° difference since 6 o’clock this morning! Interesting information, but what exactly am I supposed to do with it?

Agreed that we need standards if we are to get reliable figures, but what are we going to do with those figures when we have them? This seems to me to be where we see the emperor in his new clothes. Experts collect these figures and then massage them, homogenise them, “nurse them, rehearse them, and give out the news” that they can’t predict what they will be more than four days ahead — today’s forecast maximum four days ago was for 17°, this morning it was for 21 — but that they know that Schellnhuber’s totally-out-of-thin-air 2° more than some unspecified “average” for 200 years ago (when the figures really, really were shoddy by today’s standards) is going to mean doom unless we cough up large amounts of cash, payable to heaven knows who, from now on.

And we fall for it!

D Cage
Reply to  Newminster
October 15, 2018 10:33 am

My thermometer is around 10 metres from my drive. A few degrees’ shift in the wind makes the reading change by nearly five degrees on a sunny but warm rather than hot day, depending on whether the air comes over the drive at 45 degrees C or the grass at 20 degrees C, according to the hand-held infrared thermometer. I really don’t understand how, unless they live under different rules, readings to sub-degree levels can make any sense at all.

AGW is not Science
Reply to  D Cage
October 16, 2018 11:14 am

Exactly. Temperature readings are all about where the thermometers are located, and the characteristics of those locations. Since those have changed from largely rural to largely urbanized over time, there is always going to be a spurious upward trend in the temperature readings. And the “adjustments” are being done in a manner exactly OPPOSITE what they should be to reflect this.

Alan Tomalty
Reply to  HotScot
October 15, 2018 9:43 am

What amazes me is that alarmists like Nick Stokes have no comment when other alarmists claim that the Arctic ice will disappear within 5 years, and then 5 years comes and it is still there. There are dozens of other predictions that never come true. The alarmists have cried wolf so often; the global warming farce is getting tedious. We skeptics realize that it is a religion to alarmists like Nick, but when you plead to your God a thousand times and he/she doesn’t answer, a reasonable person would begin to doubt.

Richard Greene
Reply to  HotScot
October 15, 2018 6:59 am

HotShot wrote:
“Why is it that skeptics
always seem to be the ones
that find the errors
in climate data, hockey sticks,
and other data machinations
produced by the well-funded
climate complex?”

My comment:
I’ve been writing
an economics newsletter as a hobby
since 1977. Every issue contains
typoes and gramma errers
that I find later,
because after writing two drafts,
I have no desire
to proofread what I wrote
when I’m charging only
one dollar an issue !

If the confuser
didn’t spot my errors
while I was typing,
I just assume they
couldn’t be too bad.

You can’t find errors
unless you look for them,
with a desire to find them,
and then correct them.

Concerning the errors coming in from
the national meteorological offices
listed in the summary of McLean’s report
(I’m too cheap to pay $8 for the actual report),
… I don’t know how many of these “local errors”,
if any, are caught and fixed before compilation
of the HadCRUT4 average global temperature.

I assumed John McLean didn’t know that either
— government bureaucrats with
science degrees tend to guard their
“final” adjusted, readjusted,
and re-re-adjusted “data”
with a pack of junkyard dogs
(led by bureaucrats such as
Michael “Fido” Mann).

The fact that such obvious
“garbage data”
was submitted
in the first place
concerns me.

The fact that
weather satellite data
and weather balloon data
are similar to each other,
but both show less warming than
surface data, concerns me too.

The fact that most of the planet
is not populated, and has no weather
stations, makes the surface data into a
an infilled / wild guessed laughingstock
— infilled numbers can never be verified
before 1979, or falsified.

After 1979, surface data can be compared
to the two other measurement methodologies,
satellites and balloons — and they are different
= not verified.

Sort of shooting down my own arguments:
(1)
The average temperature is not “the climate”,
and is meaningless to people — no one lives
in the “average climate”, and

(2)
Slight warming of the average temperature,
mainly at night,
mainly in the colder months,
and mainly in the colder latitudes,
is GOOD NEWS
— If we worry, then we ought to worry
about the large temperature declines
after the current interglacial ends,
perhaps delayed somewhat
by adding CO2 to the air?

A lot of people on this planet would suffer
from a significantly colder climate.

No one would care about a +2 degree rise
in the average temperature in the next 600 years
(extrapolating the actual
warming rate since 1950,
and assuming
2 ppm / year CO2 growth
for the next 600 years).

My climate change blog,
with over 25,000 page views so far:
http://www.elOnionBloggle.Blogspot.com

Another Paul
Reply to  Richard Greene
October 15, 2018 10:47 am

“If the confuser didn’t spot my errors…” I’m confused, does that make me the confuser?

Smart Rock
Reply to  Another Paul
October 15, 2018 11:11 am

I think he means “computer” and confuser is his way of taking a jab at the role they play in our lives

Reply to  Richard Greene
October 15, 2018 3:22 pm

Is this meant to be in prose form??
Otherwise, please be sure to correct formatting when copy and pasting.

Sorry to be pedantic, but it’s just a little annoying.

Mike Haseler
October 15, 2018 6:09 am

I think those doing these metrics need to get acquainted with even the basics of a quality system like ISO9000

HotScot
Reply to  Mike Haseler
October 15, 2018 6:23 am

Mike Haseler

Just a quick audit here, isn’t it 9001 now? 🙂

Mike Haseler
Reply to  HotScot
October 15, 2018 6:31 am

(As I remember)
9000 was the overall concept.
9001 was design, manufacture and servicing
9002 was just manufacture and servicing

I think there was a 9003 which presumably was just servicing.

However,

Kevin Whalen
Reply to  Mike Haseler
October 15, 2018 7:29 am

You are pretty much correct, though they’ve roped in some industry-specific standards as well that are covered by letters instead of numbers. My mother is an ISO consultant and she throws around enough acronyms to make my eyes glaze over – and I’m a US Army retiree.

tweak
Reply to  Mike Haseler
October 15, 2018 7:48 am

9003 for ‘services rendered’?

So there is a Q/A program for brothels now? Amazing.

jono1066
Reply to  Mike Haseler
October 15, 2018 8:52 am

Aaaah,
the good old days of BS5750 as a new toy to play with, based on the super AQAP 1-4,
I never did like the number 9000, and yes it was the series.
Even here the numbers keep on inflating, next they will simply make it 5 digits long,
darn, it already is
make way for 6 digits !

Auto
Reply to  jono1066
October 15, 2018 2:57 pm

It’s now ISO 9001.
If you wish to NOT be assessed for ‘Design’, you need to include a statement (itself auditable) that that is the case.
There has, over the last thirty or more years, been a burgeoning industry: ISO –
9001
14001
18001
23000
25000
44000 and doubtless more I have missed – [22000, 50000??].
I retired – a former ISO 9002, 9001 and [Shipping’s ISM Code] Lead Auditor – in 2017.
And happily so.
And not battling with Southern Railways, who would normally take me into London, often on time. The return journey was less certain! Sometimes by buses!!

Auto

Patrick MJD
Reply to  jono1066
October 15, 2018 6:09 pm

I recall being part of a BS5750 audit in the 80’s and being young and new to industry I thought it was something important. Before the audit however, I was pulled to one side and told by my line manager “The answer to give to any question you are asked is “Please refer this question to my BS5750 coordinator”. From then on I knew what the BS in BS5750 meant, and it wasn’t British Standard.

Ve2
Reply to  Mike Haseler
October 16, 2018 2:50 am

9004 is just going to be PC.

DayHay
Reply to  Mike Haseler
October 16, 2018 9:00 am

ISO is only a scheme to document what you are doing.
Even if you are doing it wrong.

Richard Greene
Reply to  Mike Haseler
October 15, 2018 8:06 am

I did some ISO 9001
quality management work
in product development
before I retired at the end
of 2004.

Most important
was to document
the PD process in detail,
train the engineers,
and follow the process
consistently.

There is no way
government bureaucrats
would ever document
the ACTUAL process
of how NOAA / NASA
come up with
an average global
surface temperature !
( assuming there is a “process”
— it might be an ad hoc process,
managed “from the top”,
that starts with the politically
correct climate conclusion,
and then works backwards
to support the conclusion ! )

Does anyone think
goobermint bureaucrats
document the ACTUAL
infilling,
adjustments,
re-adjustments,
re-re adjustments,
and how they can claim a
+/- 0.1 degree C.
margin of error ?

( Remember when one year was a
few hundredths of a degree warmer
than the prior year ?
Do you expect them
to document how they could have
measured the global temperature
in hundredths of a degree C. ? )

ISO quality control will never happen.

Remember their motto:
“It’s good enough for government work !”

Mike Haseler
Reply to  Richard Greene
October 15, 2018 8:12 am

The reason private sector care about quality is because the private sector have real customers who want to get real quality and don’t want to pay for rubbish.

And there’s a reason why the public sector don’t care about quality, don’t care about those that have to deal with them and spend all their time on politicised rubbish.

Clyde Spencer
Reply to  Mike Haseler
October 15, 2018 8:53 am

Mike Haseler
Thus the old quip, “Good enough for government work!”

Another Ian
Reply to  Clyde Spencer
October 15, 2018 1:41 pm

Australia had a spate of “quality control” at one stage a few years ago, which led to an addition to “good enough for government work” –

“Quality control that doesn’t control quality”

Karlos51
Reply to  Clyde Spencer
October 15, 2018 5:11 pm

process rich, outcome poor..

Steven Fraser
Reply to  Richard Greene
October 15, 2018 9:47 am

Richard,

You mean, something like ‘Choose the desired temperature, then retrofit the process?’ /s

John in Oz
Reply to  Richard Greene
October 15, 2018 2:36 pm

See http://joannenova.com.au/2015/06/if-it-cant-be-replicated-it-isnt-science-bom-admits-temperature-adjustments-are-secret/ for the reasons for the Australian BOM not being able to ISO 9000 their processes.

E.g.:

several choices within the adjustment process remain a matter of expert judgment and appropriate disciplinary knowledge

AGW is not Science
Reply to  John in Oz
October 16, 2018 11:24 am

“several choices within the adjustment process remain a matter of expert torture of the data using the appropriate ideological slant and circular logic to support the pre-conceived conclusions”

There, fixed it for ’em.

D Cage
Reply to  Mike Haseler
October 15, 2018 10:35 am

Let’s set our sights a bit lower and get them to pass Poundland’s standards, a cheap commercial chain here in the UK.

John in Oz
Reply to  Mike Haseler
October 15, 2018 2:39 pm

ISO9000 led to many bad processes continuing – but they were extensively analysed and documented to ensure that everybody consistently made the same mistakes.

mikewaite
October 15, 2018 6:12 am

It could be argued that in the case of the climate change question/problem/dispute Ike’s warnings should be inverted. It is the public policy and the politicians and their financial confederates that have captured the scientific and technological elite not the other way about.

mikewaite
Reply to  mikewaite
October 15, 2018 6:16 am

BTW I hope no one over there is offended by my calling Eisenhower Ike. It was how he was always known to my father’s generation (the one that lived through the war) and to the press of that period.

Bob Smith
Reply to  mikewaite
October 15, 2018 6:37 am

No problem. For those of us growing up when he was president, the expression “I like Ike” was the way to show support.

Editor
Reply to  mikewaite
October 15, 2018 2:10 pm

I believe the only person you would annoy would be the President’s mother. She didn’t like nicknames, so I heard some 60 years ago, so she named her son Dwight.

Seems not to have helped.

Gary Pearse
Reply to  mikewaite
October 15, 2018 7:41 am

Scientists aren’t captives of government. Funds were the magnet and unresisting scientists are the iron filings. They even line up perfectly with the demands of the magnetic field.

Trillions spent and unpaid sceptics find all the errors – makes one wonder how 99% of real, fundamental scientific discovery was self-funded prior to 100 years ago, while these science-lite guys need billions a year to crank out the same chaff with no advancement beyond a little linear formula discovered by Tyndall in the 19th century.

Einstein had to spend his days as a Swiss patent clerk to put bread on his table, and after dinner revolutionized our understanding of the Universe and corrected the work of Sir Isaac Newton, no less. Heck, he only needed a pencil, paper and some pipe tobacco, which he paid for out of pocket.

Our foremost paleoclimatologist, Steve McIntyre, a mining engineer in his day job, similarly at his own cost struck down the hockey stick and corrected Mann’s faulty stats (Mann’s invalid method that converted red noise into hockey sticks and basically breathed new phlogiston into alchemy), caused other authors to have papers withdrawn, etc.,etc. He also had doubts about use of the popular bristlecone pine as a temperature proxy and inquired of Mann if the series had been updated (from the 1980s IIRC). Mann said it was too expensive and time consuming, so McIntyre set himself a challenge: to take off in the morning from Starbucks in a Colorado mountain town, climb up to update the series and be back at the Starbucks by afternoon. He did just that and his findings retired bristlecone pines from paleoclimate duty.

Anthony Watts’s US weather stations exposé was another that got NOAA into action. I think volunteer crowd sourcing to check work done by alarmists is the way to put these shameful (shamful?) gougers out of business.

HotScot
Reply to  Gary Pearse
October 15, 2018 8:49 am

Gary Pearse

No computer ever came up with a scientific theory. It’s all from the brain, and the proof of the pudding is in the blood, sweat and tears, not a spreadsheet, which is no better than an abacus.

Alasdair
Reply to  Gary Pearse
October 15, 2018 8:50 am

Trouble is, Gary, that these (shamful) gougers are not in business. They get paid for just keeping their noses clean.

ResourceGuy
October 15, 2018 6:26 am

Of course this in no way undermines a well established religion and its adherents. Nor does it undermine the power structure and money flows within that religion. The faithful will endure all critique from the unwashed. You can’t slow the ship now even with evidence of icebergs ahead.

Mike Haseler
Reply to  ResourceGuy
October 15, 2018 6:35 am

There are far more inane discussions going on.

For example today people are congratulating two Royals for finally working out how to have sex together.

Even the alarmists who comment here are intellectually above the sexgratulatory comments on other fora.

knr
October 15, 2018 6:39 am

‘Any actual errors identified will be dealt with in the next major update.’
by models that give us the result we ‘need’

‘The HadCRUT dataset includes comprehensive uncertainty estimates in its estimates of global temperature,” the Met Office spokesman said.’
Of course we never say that to the media nor to politicians, and by estimates we mean something either below or above that; do ask us to define that ‘something’.

“We previously acknowledged receipt of Dr John McLean’s 2016 report to us which dealt with the format of some ocean data files.
And filed it, ‘ironically’, under inconvenient truth.

“We corrected the errors he then identified to us,”
But as we never make any, because ‘models’, this is a non-problem.

Al Miller
October 15, 2018 6:45 am

As has been noted before, I would compare these scientists to prostitutes, but prostitutes provide a needed service unlike “climate scientists” on the government dole.

Mark Pawelek
October 15, 2018 7:05 am

1. Mosher told me he personally identified an error in GHCN with one station reading 15000ºC, which after he corrected it had no noticeable effect on the global average! Within 1 hour he corrected himself and said the error was “only” 5000ºC.

2. Why do skeptics find errors, never the ‘climate consensus’? Over here, Philip B. Stark and Andrea Saltelli wrote that there had been a fundamental shift in how science is done since the 1960s. Prior to the 1960s people entered science as a vocation; they cared about being right and doing right. With the vast expansion in higher education in the West since then, people now enter science as a career; they care about grants, papers published, career advancement, … There’s no fix for this. Much of science is broken and will continue to be broken while so many treat scientific knowledge as a tool to lever their career advancement. Science is not like other pursuits. It is easily gamed because even scientists can be quite naïve in accepting truth claims made in niche disciplines outside their expertise.

Anders Valland
October 15, 2018 7:07 am

Climate God who govern’t our Planet,
Hallowed be thy existence.
Thy taxes come.
Thy influence be done
on earth as it is in the atmosphere.
Receive today our indulgence,
and abolish our environmental sins,
but for those with diesel cars,
Tempt us not with science and common sense,
but deliver us from those of no faith.
For thine is the Internet,
and the Grants, and the Smugness,
for ever and ever
Amen

Jacob Frank
Reply to  Anders Valland
October 15, 2018 8:28 am

Beautiful

commieBob
October 15, 2018 7:14 am

I don’t see why it’s a problem that ‘they’ adjust the 1800s temperatures downward. Nobody says that anthropogenic CO2 caused any warming before 1950. Cooling the 1800s temperatures just means that more of the modern warming is unambiguously due to natural causes. That means any calculations of equilibrium climate sensitivity (ECS) should be reduced.

Any pre-1950 warming should weaken the alarmist position. The more ‘they’ cool the 1800s temperatures, the greater the pre-1950 warming, the less the ECS. I think they are hoist with their own petard.

Gary Pearse
Reply to  commieBob
October 15, 2018 8:09 am

Commie, you missed the memo. They pushed the start period from 1950 back to 1850 and pushed the 1930s-40s peak down ~1C. Before that, over 90% of the warming had occurred by the late 30s, and questions were raised on what caused that and why the temperature declined for 40 yrs after the peak. They “fixed” both!

Did you not catch that in doing so, they had 1C of the 1.5C already in the bank? The big worry was over a 3C rise by 2100 measured from 1950. Now it’s the 0.5 higher than today that’s going to kill us off! They realized that once observations caught up with their 1990 forecast, the 300% too-hot expectation meant we couldn’t attain even the 2C threshold by 2100 even if we pulled out all stops on burning of fossil fuels. 1.5C is double what happened in the past 100 yrs. They hedged their bets even on that, making 2C a complete disaster (even with 1C already banked).

Ted
Reply to  Gary Pearse
October 16, 2018 9:06 pm

To give full credit for giving themselves multiple ways out, they don’t say that anthropogenic CO2 caused no warming before 1950; just that it has caused most warming since 1950.

rishrac
October 15, 2018 7:20 am

Want to know what’s wrong with an average?

20 F + 80 F = the crops in the field are dead or 100/2 = 50 F ( Sat,Oct 13 & 14, 2018)

40 F + 60 F = I can pick tomatoes until or 100/2 = 50 F (Mon, Oct 8 & 9, 2018)

Caligula Jones
Reply to  rishrac
October 15, 2018 7:42 am

Indeed.

Here in Toronto, we had a hot summer. “Average” temp was 22.5C. Last summer was 20.1, so yeah, panic time! Apocalypse!! Death is here on heated wings…

But…if you count extremely hot days (i.e., 7 degrees over the average max):

2018: 8
2017: 1

Ah ha! getting hotter, but:

2016: 8
2015: 5
2014: 1
2013: 3
2012: 8
2011: 3
2010: 5
2009: 0

etc. No pattern at all. (BTW, the “hottest summer in Toronto history”, i.e., since 1938, measured like this was 1955, with 17 extremely hot days. Ah, the “good times”, right?)

I think if anything is significant with such small time frames, it’s nighttime temps, and we did set a “record” with 64 nights above average max.

Oh, and one more nerdy number: people will remember it as a dry summer, but we had 226.8 mm of rain. Average summer? 226.7.

I think in the general “Hype it UP” sense, that .1 mm means we were actually FLOODED!!!!

Caligula Jones
October 15, 2018 7:27 am

“Jo makes a good point. Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?”

Possibly because skeptics are the only ones looking for errors? You know, that last step one should do before yelling “fire” in a crowded theater, just in case its not really on fire, just seems like it because there is a smoke machine on stage and an actor yelling “fire”.

Seriously, though, we know why: if it bleeds, it leads. Can’t get media attention and grants with “maybe”, or “might”, or “slight chance of”.

To paraphrase the line about legal proceedings: “if you have the measurements, pound the measurements. If you don’t have the measurements, pound the table”.

Basically, there would more skepticism and less panic if more of the general public (let alone the media) had better math skills.

Hell, ANY math skills.

Double hell, any interest in math at all.

Adrian
Reply to  Caligula Jones
October 15, 2018 7:40 am

“Possibly because skeptics are the only ones looking for errors?”

That is correct. Many were seeing the ‘Earth going Venus’ propaganda that was circulated on facebook and were blindly believing it, without checking.

It was annoying enough to drive me to look over the sources (confirming to me that it’s pure idiocy, even if the code were correct, which it was not). In 5 minutes I found a fatal bug: https://github.com/ddbkoll/PyRADS/issues/2 It amazes me how people think that such garbage is the absolute truth and nothing but the absolute truth.

Alasdair
Reply to  Adrian
October 15, 2018 9:26 am

Too right, Adrian. To me the basic bug in the IPCC logic lies in the definition of Radiative Forcing (RF) found in the WG1 sections, which quite frankly fails to comply with thermodynamic law.
If I were to plug this purported Forcing energy flux (approx. 1.6 Watts/sq.m) into my kettle, then by definition it would never boil.
It is tantamount to a student howler.

Raising this issue during an (un-named) university climate course, I was politely informed that the experts knew better and that the definition was a convenience to simplify calculation, by which the matter was sorted out. It was suggested that I go on another re-education course.

Unfortunately I remain with the belief that if the definition is wrong then the calculation is in error.

Salvatore Del Prete
October 15, 2018 7:30 am

SUBJECT- THE FLAWED MODELS.

The models are all flawed because they do not incorporate solar/geomagnetic effects. The only reason this is not more apparent yet is that thus far the solar/geomagnetic fields have yet to reach threshold values of weakness which would result in a major rather than a minor climatic impact.

So if you think the climate models are off now just wait a few more years. This is going to become more and more apparent as the climate models forecast a continued warming trend while reality is likely a continued cooling trend.

Latitude
Reply to  Salvatore Del Prete
October 15, 2018 7:36 am

SDP, the models can never be right
..and the models are the best proof that their past adjustments are lies

Latitude
October 15, 2018 7:35 am

Why is it that skeptics always seem to be the ones that find the errors in climate data….

They know the errors are there… they can’t work with the data, know what to adjust, or even how to use it…
…without knowing.

These aren’t “errors” and don’t let them get away with that…..

steve case
Reply to  Latitude
October 15, 2018 7:58 am

Latitude – at 7:35 am
“…These aren’t “errors” and don’t let them get away with that…”

“Science is the Belief in the Ignorance of Experts” – Richard Feynman

It was generous of Dr. Feynman to allude to the wrongness of the experts as merely an artifact of ignorance.

steve case
October 15, 2018 7:49 am

Jo Nova: “… why a PhD student working from home can find mistakes that the £226 million institute with 2,100 employees could not …”

IKE: “Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. … The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.”

Just maybe there is still a place for solitary individuals making significant contributions to science and industry. (-:

Nick Schroeder
October 15, 2018 7:49 am

The mantra of any large human enterprise:

You do what the boss wants done and say what the boss wants said – or you no longer work there.

Nick Stokes
October 15, 2018 8:06 am

“Why is it that skeptics always seem to be the ones that find the errors in climate data”
Well, is it? There are indeed errors in “climate data”. Many millions of numbers are collected from around the world, and mistakes are inevitable. A few years ago, I went on an investigation of similar errors in GHCN. Again there was the odd place and month (in Bolivia this time) where decimal points had slipped and numbers like 80 °C were appearing. But I didn’t just crow about finding an error. I looked into where they had come into the system (mostly on the Climat forms submitted by the local MO’s, but at least one was a GHCN error), and what GHCN did about them. And in each case, GHCN had flagged the errors, but left them in place. This is a proper attitude to raw data. You don’t change what was reported; you adapt what you are going to do about it.

But more importantly, I checked the consequences there. I did the calculation with and without the suspect data. And it did make a difference, as I noted. This contrasts with the PhD thesis of John McLean, where he says, on p 4:

“This thesis makes little attempt to quantify the uncertainties exposed by this investigation, save for some brief mention of the impact certain issues might have on error margins, because numerous issues are discussed, and it would be an enormous task to quantify the uncertainties associated with the many instances of each. It has been left to others to quantify the impact of incomplete data, inconsistencies, questionable assumptions, very likely data errors and questionable adjustments of the recorded data.”

I wasn’t hoping for a PhD. I was just doing blog articles. But I did do the calculations to see how much it mattered. And I did look into what could be done about it. The obvious answer (which took me a while) was to make sensible use of the flags provided by GHCN. The problem went away.

That comes back to the issue “Why is it that skeptics always seem to be the ones…?”. Why is it that naysayers are always the ones complaining about how temperatures are calculated by scientists, but never doing a calculation themselves? It really isn’t hard. You don’t even need a PhD.

MarkW
Reply to  Nick Stokes
October 15, 2018 8:58 am

This from the guy who last week was claiming that all the errors had already been found and corrected.

Nick Stokes
Reply to  MarkW
October 15, 2018 9:23 am

You’re not reading properly. As I said here, the errors in GHCN were found and flagged previously. That is the right response (I believe GHCN tries to get the source to review, not always successfully). The errors had been corrected in the GHCN adjusted file, which is what GISS and NOAA use. You just have to be more careful with observing the flags with raw data, as I learned.

MarkW
Reply to  Nick Stokes
October 15, 2018 1:17 pm

That’s not what the memo says.

Nick Stokes
Reply to  MarkW
October 15, 2018 8:25 pm

Which memo? I’m talking about GHCN.

Michael Jankowski
Reply to  Nick Stokes
October 15, 2018 8:22 pm

Awww, what a heartwarming and completely irrelevant story, Nick.

Mr.
Reply to  Nick Stokes
October 15, 2018 10:32 am

Is Excel your main tool Nick?

Nick Stokes
Reply to  Mr.
October 15, 2018 10:35 am

No, I program mainly in R.

Michael S. Kelly LS, BSA Ret.
Reply to  Nick Stokes
October 16, 2018 12:21 am

I program mainly in a language that is part of a pirated IDE: Rrrrrrrrrrrr.

Patrick MJD
Reply to  Nick Stokes
October 15, 2018 6:12 pm

Temperatures are calculated? There’s the problem in climate seance.

Frank
Reply to  Nick Stokes
October 16, 2018 1:38 am

“Why is it that skeptics always seem to be the ones that find the errors …”? Answer: Because one needs to be skeptical to find mistakes in work that confirms one’s preconceptions.

A group of skeptics (BEST) explored a new methodology (kriging) for constructing a global temperature record over land. To their surprise, they found that warming over land was more rapid than HadCRUT was reporting! The diurnal temperature range wasn’t shrinking as expected! An index constructed only from stations outside urban areas showed just as much warming as the index from all stations. Real skeptics – i.e., scientists – don’t always discover what they expect to find.

AGW is not Science
Reply to  Frank
October 16, 2018 11:56 am

“A group of alarmists posing as skeptics (BEST)”

There, fixed it for ya.

Frank
Reply to  Nick Stokes
October 16, 2018 1:45 am

Nick Stokes is correct to criticize the absence of a calculation showing the impact of these errors. However, he fails to note that HadCRU failed to do the same thing after correcting errors back in 2016 in response to McLean’s criticisms.

Nick Stokes
Reply to  Frank
October 16, 2018 2:10 am

” However, he fails to note that HadCRU failed to do the same thing after correcting errors back in 2016 in response to McLean’s criticisms.”
How do you know they failed to do that?
In fact, HADCRUT do post the effect of every version change. Here is the page, dated 15 Sept 2016, quantifying the changes going from version 4.3 to V 4.4. They are quite invisible on the graph, but a difference plot shows them generally less than 0.01°C.

John McLean
Reply to  Frank
October 19, 2018 4:30 am

Frank, the errors I reported in 2016 were
– hemispheric summary files (avg temp and coverage per month) had the correct average temperatures but the coverage was for the other hemisphere.
– the file of sea surface observation counts for HadSST3 was from South to North but the main data file is from North to South. (This gave problems like SST observations around Mount Everest.)
– the same SST observations file had instances where the counts exceeded 9999 and overflowed the space allocated. (It looked like the program that wrote that file was in Fortran because the overflowed fields were filled with ******.)
None of these problems directly impacted either HadSST3 or HadCRUT4 datasets.

Kristi Silber
Reply to  Nick Stokes
October 16, 2018 10:32 pm

“Why is it that skeptics always seem to be the ones that find the errors in climate data”

I was going to comment on that, too. What an odd statement! It’s especially weird that so many people here think whenever data are adjusted to fix errors/biases, it must be fudging the data.

Then the hockey stick issue was raised AGAIN, as if M&M really accomplished anything by their statistically flawed search for statistical flaws.

I imagine it’s because Anthony et al. mostly read press releases rather than the less-publicized literature that comes without them, so they aren’t aware that there is debate in science, and that there are corrigenda and errata that are due to discoveries by AGW-friendly scientists. Just a guess.

TallDave
Reply to  Nick Stokes
October 18, 2018 12:55 pm

This isn’t the 1970s, and this is the kind of problem you really ought to have a better handle on before you branch into multi-trillion-dollar global policy consulting. Every step from raw data to published numbers, and the result of each step, should be posted for public scrutiny. The surface station issues should have been examined long before Watts. Etc.

It’s clear QC is not a priority among the billions spent, and what QC is done often makes the data worse, as with the “warm the past of moved stations” bug McLean detailed.

The shrugging off of data issues with “calculate it yourselves!” explains why half the country believes none of the claims of AGW advocates, even the ones that are reasonable. At this point most media claims made about AGW have roughly the scientific backing of phrenology.

John McLean
Reply to  Nick Stokes
October 19, 2018 4:24 am

Nick,

So you did the calculations with a few outliers removed. Big deal. There are far more problems with the data than just a few outliers; it’s just that these are very obvious errors that show quality control is poor.

Also, we wouldn’t be talking about outliers in the station data if the national meteorological services (NMSs) had paid proper attention to detail. Do you really think that we should believe the rest of the data from the same NMSs? How about the data from the NMS that says a site is in the northern hemisphere when it seems to be in the southern hemisphere?

Peter
October 15, 2018 8:06 am

Random question / idea. Has anyone ever proposed something along the lines of PE licensing for climate research? Like engineering disciplines, I propose we create something like PE licensing for climatologists, which imposes strict evidence-based standards. Moreover, that these people be held liable for damages they cause. Just like an engineer who designs an unsafe system that results in someone’s death, or which results in monetary damages.

If I design a control system that brings down someone’s plant, oops, my ass is in big trouble. Just like a doctor, I have to maintain liability insurance. If we spend however many X billions of dollars on “carbon credits” or whatever, because this person predicted that the earth’s temperature would rise by Y degrees, but then it doesn’t happen – who is held responsible?

People who make these predictions should be subject to the same penalties that a PE would be. Which is CRIMINAL LIABILITY. I feel like the nature of this whole debate needs to change. These people are free to make whatever wild predictions they like, based on erratic datasets and weak assumptions, but bear no responsibility for the consequences of being wrong.

Any research or recommendations should be distinguished as “academic theory” or “professional.” Government policy actions should be strictly limited to only professional work product.

David Chappell
Reply to  Peter
October 15, 2018 11:05 am

Nice idea in theory but so many of these predictions mature well after the predictor is no more. Of course, there is always the Old Testament remedy of laying responsibility on their children’s children.

fah
Reply to  Peter
October 15, 2018 5:28 pm

I don’t think the analogy works well. Engineering is fundamentally different from climate science, as well as other fields one can think of that study things, but don’t make things. We could list a bunch of examples but that is too much trouble here. Here is the American Engineers’ Council for Professional Development definition of Engineering:

“The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property.”

The focus is clearly on making things and predicting behavior of things you make. All the PE tests given currently are smack in the middle of the “making things” world. An engineer, perhaps more than physicists, chemists, or biologists, has to deal inevitably with experimental confirmation – either the thing you make works or it doesn’t. Since somebody usually buys the thing you make, there is a need for liability management. It is unclear how this paradigm could fit climate science.

There are other kinds of licensure for things like toxicologists or forensic chemists and the like, but they tend to be driven by real world needs for their services.

Greg Cavanagh
Reply to  fah
October 15, 2018 7:57 pm

You make it sound as though we don’t need climate scientists, therefore they don’t need to be licensed or held accountable for their predictions.

It is a sorry state of affairs when a broad-spectrum profession like “scientists” is not held accountable for their studies or the results of their papers. They just pump them out and get them published. End of story; they go home feeling proud that their publication count is now +1.

And it’s a double insult when the cost of the research is in the millions, and they get to push their beliefs on the TV or convince a politician, which then costs billions. All this with zero accountability?

John Endicott
Reply to  Greg Cavanagh
October 16, 2018 6:35 am

The problem with “holding them accountable for their predictions” is that their predictions can be, and often are, set for a point well beyond when they’d be around to be held accountable. In all the other professions you mention, the problems usually manifest within the practitioner’s lifetime. Not so much with predictions about what the climate will be like 50 years from now made by a “climate scientist” in their 50s.

Kristi Silber
Reply to  Greg Cavanagh
October 16, 2018 10:48 pm

Well then, why don’t we just get rid of scientists altogether? No more geologists, paleontologists, geneticists, food chemists, pharmacologists, atmospheric physicists, oceanographers… no more exploration if there is the possibility that errors might be made. That would save a lot of money. Or we could have a system in which, if a scientist made an error, they would have to pay back their grant. Sounds like a plan to me!

The rest of the world can do science, and we will just make money, because that’s the only thing that’s important.

Scrap the space exploration program, too.

John Endicott
Reply to  Kristi Silber
October 17, 2018 5:56 am

Well then, why don’t we just get rid of scientists altogether?

No need to throw the baby out with the bathwater there, Kristi. Getting rid of activist scientists, on the other hand, would be an excellent first step. Activism and science don’t mix: activists are by nature biased, while science is supposed to be neutral and unbiased. You can be an activist or you can be a scientist; you can’t be both.

Dr. Strangelove
Reply to  Peter
October 16, 2018 6:58 am

Because climatology is unlike engineering, climate forecasts should not be taken seriously. It’s like employing statistics to predict the outcome of poker games. Don’t bet your money unless you’re willing to lose. The IPCC should be regarded as an astrology club that borrows the language of astronomy and then engages in fortune-telling. Only weather forecasting is legit.

Caligula Jones
Reply to  Dr. Strangelove
October 16, 2018 9:15 am

” Don’t bet your money unless you’re willing to lose.”

I.e., Taleb’s “Skin in the Game”: if it ain’t your money, bet the house…

dodgy geezer
October 15, 2018 8:26 am

…“Any actual errors identified will be dealt with in the next major update.’’…

1 – Obvious errors like temperatures in excess of 100°C will be removed.

2 – HadCRUT will then claim that their data is the most widely accepted dataset on the planet, since it has been audited by believers and skeptics alike…

climanrecon
October 15, 2018 8:46 am

The other (recent) end of the climate data is in serious trouble, but this seldom gets a mention. For example, here is the entirety of the 2017 rainfall data for Australia in GHCN-M version 2 (the version that now deals with rainfall data). Missing data is marked in red; besides that problem, the number of stations is woefully inadequate for analysis work on recent droughts:

[Chart: 2017 GHCN-M v2 rainfall stations for Australia, with missing data marked in red]

observa
October 15, 2018 8:46 am

Bit of a hiccup, but have the 2,100 staff, with their computer-assisted manual checks and major updates, worked out what the average temperature of the globe is supposed to be yet?

Ben Vorlich
October 15, 2018 8:47 am

Having had some errors pointed out, they are only going to correct those errors. So now we have two people, Stokes and McLean, highlighting issues. To me this suggests that there are more as-yet-unidentified errors. With all their resources, perhaps the Met Office could spend some of their budget on a proper audit rather than on a couple of dozen of them jetting off to conferences in exotic locations.

Political Junkie
October 15, 2018 9:00 am

Above, some folks were looking at Canadian extreme temperature trends. Ottawa has continuous records dating back to the 1880s.

Annual extreme temperature has been declining at 0.52°C per century over the period. The number of days over 30°C has declined by 5.6 days per year.

Try telling that to the CBC or TVO. The data is readily available on an ECCC website.

Mike Smith
October 15, 2018 10:18 am

I’ll be interested to see HOW the errors are “dealt with”. Erroneous data needs to be excluded but I suspect they’ll figure a way to “adjust” it and then announce “It’s worse than we thought”.

Jim Gorman
October 15, 2018 11:46 am

Determining whether the available data is fit for purpose is something I have argued for before. Each and every study that claims to be ‘climate science’ should have a section detailing how and why the data set being used is applicable. It should cover adjustments to the data and why those are applicable.

Any study that blames global climate change for creating the problem being studied while referencing ‘global temperature’ should be turned back. Whether it is polar bears, caterpillars in Honduras, or the Great Barrier Reef, the ‘global temperature’ tells you nothing about localized temperatures, evaporation, etc. If scientists continue this, then they open themselves to the argument that only one or two thermometers are needed to determine the ‘global temperature’.

This means they would need to develop temperature data sets from local recording stations rather than relying on one globalized, theoretical ‘global temperature’ as a fallback for their findings.

Smart Rock
October 15, 2018 11:53 am

People in scientific (and other) disciplines these days tend to avoid repetitive tasks requiring constant concentration.

I’m no hero, I’m as lazy as the next person. We have computers to do all the heavy lifting of calculations, but getting historical data into digital databases is still manual work. It’s boring and repetitive, and anyone who’s done it can surely attest to how easy it is to get distracted; you lose your train of thought, and that leads to errors.

Once the data is in a digital database, we can sit back and play with it all we want. Sometimes you can spot egregious errors in the data, but smaller errors usually don’t jump out at you. So who’s going to take the trouble to look for data-entry errors? Or, worse, errors in writing down numbers in a century-old, hand-written record? Or confusion between hand-written 1’s and 7’s written by continental Europeans and read by anglophones (or vice versa)? Sceptics, of course, because they’re hoping to find errors.

It would probably cost the CRU a couple of days’ operating costs to hire a few AI people to come in and design auditing software. Train the machines with known errors and let them rip through the databases. They could crow about how they’re improving the quality of the data. They do anyway, but this could give them some substance to crow about.
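
A first pass at the kind of auditing software Smart Rock describes needs very little machinery. Here is a minimal sketch in Python; the file name, column names and plausibility bounds are illustrative assumptions, not anything from the CRU’s actual pipeline:

import csv

# Hypothetical plausibility bounds in °C; real QC would use
# station- and month-specific limits, not one global range.
PLAUSIBLE_RANGE = (-95.0, 60.0)

def audit_station_file(path):
    """Flag records whose temperature falls outside plausible bounds."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: station_id, date, temp_c
            t = float(row["temp_c"])
            if not (PLAUSIBLE_RANGE[0] <= t <= PLAUSIBLE_RANGE[1]):
                flagged.append((row["station_id"], row["date"], t))
    return flagged

for station, date, temp in audit_station_file("station_monthly.csv"):
    print(f"{station} {date}: {temp} °C is outside plausible bounds")

A check this crude would already have caught the 80°C-plus readings discussed elsewhere in this thread; anything smarter (spike detection, neighbour comparison) builds on the same loop.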

Ian W
October 15, 2018 12:32 pm

So the Southern Hemisphere was:

Temperatures for the entire Southern Hemisphere in 1850 and for the next three years are calculated from just one site in Indonesia and some random ships.

I would say it would be more honest to say that there was no information for a global dataset in the 1850s, and that this remains the case until there are sufficient sensors.

Is that difficult?

Well yes it is.

How many sensors, and what distribution, are needed to achieve an accuracy for global temperatures of ±5°C? Mathematically, you can probably create an average with a precision of several decimal places (which is in fact what is done), but the accuracy is still unlikely to be as good as ±5°C at best, even assisted by some applied guesswork.
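
Ian W’s precision-versus-accuracy point is easy to demonstrate numerically. The Monte Carlo sketch below is purely illustrative: the “true” mean, the five sites and the size of the siting errors are assumptions chosen to show that a printed mean can carry four decimal places while the real uncertainty is more than a degree.

import random

random.seed(1)
TRUE_GLOBAL_MEAN = 14.0   # assumed "true" value, °C
N_SITES = 5               # e.g. one land station plus a few ships
SITE_ERROR_SD = 3.0       # °C; assumed siting/representativeness error

estimates = []
for trial in range(10_000):
    readings = [TRUE_GLOBAL_MEAN + random.gauss(0, SITE_ERROR_SD)
                for _ in range(N_SITES)]
    estimates.append(sum(readings) / N_SITES)

mean_est = sum(estimates) / len(estimates)
spread = (sum((e - mean_est) ** 2 for e in estimates) / len(estimates)) ** 0.5
print(f"one sample's estimate: {estimates[0]:.4f} °C (four decimals of precision)")
print(f"spread across trials:  {spread:.2f} °C (the actual accuracy limit)")

With five sites and a 3°C per-site error, the spread works out to roughly 3/√5 ≈ 1.3°C, however many decimals any single estimate is printed to.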

Paramenter
October 15, 2018 1:12 pm

Jo makes a good point. Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?

Perhaps it is because they simply don’t care, and curiosity takes a back seat to money.

Perhaps there is another reason too. I reckon mainstream climate science sees itself as the defender of ‘scientific orthodoxy’ against invading moronic masses, barbarians who want to desolate the sacred ‘truth’ it firmly believes in. Thus, for them it is not a purely scientific dispute; it is a quasi-religious one. Therefore, turning a blind eye to some errors (if they’re in favour of warming trends) or accepting some questionable adjustments are all acceptable practices if they serve this purpose. In this mentality, the end justifies the means.

knr
Reply to  Paramenter
October 15, 2018 1:36 pm

Or perhaps one approach sees a paycheck and grants and another does not?
The Met has done very well out of AGW without actually getting better at its day job of forecasting weather. They certainly have not seen the cutbacks others have.

Paramenter
Reply to  knr
October 15, 2018 2:10 pm

Financial gratification is a big actor in the game, unfortunately. Still, financial greed as a common denominator may be too simplistic. Some of them really believe that they’re some kind of missionaries bringing scientific enlightenment to the moronic masses who suffer under religious superstition and are reluctant to accept the teachings of academia. The myth of Prometheus is still alive in many scientific circles.

October 15, 2018 1:21 pm

I love finding really well-written articles and this is one of them. Thanks for sharing.

RockyRoad
October 15, 2018 1:45 pm

Post normal data inventions are particularly annoying!

October 15, 2018 2:59 pm

The Hadley Centre’s response to John McLean’s corrections previously included the following comment and caveat:

“Correction issued 30 March 2016. The HadSST3 NH and SH files have been replaced. The temperature anomalies were correct but the values for the percent coverage of the hemispheres were previously incorrect.”

Now the Hadley Centre responds with:

“Graham Lloyd, The Australian
Britain’s Met Office has welcomed an audit from Australian researcher John McLean{sic} that claims to have identified serious errors in its HadCRUT global temperature record.

“Any actual errors identified will be dealt with in the next major update.’’

One notes, that there is no caveat regarding temperature anomalies, yet.

Not that there is any easy, accurate method to compare a dataset full of errors with a revised dataset in which those errors have been corrected; one gets suspicious that the anomalies will once again not change, even for datasets riddled with errors.

JoNova’s comments are on target!

Do listed errors include calculating anomalies out to more decimal places than original temperatures were measured?

I note in the comment thread that Randy Stubbings points out:

Randy Stubbings October 15, 2018 at 7:15 am
“I was working with HadCRUT4 data downloaded in September of 2018 and I compared it with the version I downloaded in July of 2014. Subtracting the 2014 values from the 2018 values shows a set of adjustments to the monthly median temperature anomalies ranging from about -0.06 degrees to about +0.06 degrees. The decade with the largest cooling adjustment was the 1860s (the average adjustment was -0.0198) and the two decades with the largest warming adjustments were the 2000s (+0.0059) and the partial decade from 2010 to June 2014 (+0.0203).

The average temperature anomaly from 1850 to 1899 was -0.313 as reported in June 2014 and -0.315 as reported in September 2018. The difference between the upper and lower confidence intervals on the reported temperature anomaly averages 0.6 degrees for the period, so reporting to three decimal places seems a bit silly to me. Isn’t 1850 to 1900 supposed to establish the benchmark temperature against which we measure the 1.5 degrees that was formerly known as 2.0 degrees?”

I am reminded of a discussion on a train to Washington DC where I was talking to a lady employed by the Department of Energy.
Her job was tracking the Exxon payments for the Valdez spill: literally, small fractions of a cent per payment over many millions of transactions. Cumulatively, they were substantial.

False precision masquerading as accuracy is neither.
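
The vintage-to-vintage comparison quoted above is straightforward to reproduce for anyone holding two downloads of the monthly global series. A hedged sketch follows; the file names and the simple date,anomaly column layout are assumptions about the download format, not a documented spec.

import csv
from collections import defaultdict

def load_series(path):
    """Read a two-column (date, anomaly) CSV into a dict."""
    series = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            series[row[0]] = float(row[1])  # e.g. "1850/01", -0.700
    return series

old = load_series("hadcrut4_2014-07.csv")  # hypothetical file names
new = load_series("hadcrut4_2018-09.csv")

by_decade = defaultdict(list)
for date in old.keys() & new.keys():       # months present in both vintages
    decade = date[:3] + "0s"               # "1850/01" -> "1850s"
    by_decade[decade].append(new[date] - old[date])

for decade in sorted(by_decade):
    diffs = by_decade[decade]
    print(f"{decade}: mean adjustment {sum(diffs)/len(diffs):+.4f} °C")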

October 15, 2018 3:18 pm

“Any actual errors identified will be dealt with in the next major update.’’

It’s gonna get HOTTER !!!

October 15, 2018 3:39 pm

The question I would ask of the UK Met Office is this: having admitted that there are errors and that they will be corrected, are you then going to tell the IPCC that your calculations were wrong and that things are not as bad as you originally said?

MJE

Anthony Banton
Reply to  Michael
October 15, 2018 4:47 pm

Michael:
The questions I would ask you are:
Why should they need to do that, when any errors would make no discernible difference to the trend (if that)?
How many temperature data points do you think comprise the HadCRUT dataset?
How many millions?
How many did Mr McLean find, exactly, as a percentage of the total?

Venter
Reply to  Anthony Banton
October 15, 2018 6:47 pm

Oh yeah, we’ve seen that: every error and every adjustment by the high priests will have no discernible effect. Yet they continue making errors and adjustments all the time which drive the trend only one way – “It’s worse than before.” Give us all a break from this crap, Banton. McLean worked on his own to show the kind of crap you and your lot produce. And you squander millions of public money to produce this crap. In the private sector you’d have been shown the door on the spot, and probably indicted for false reporting if you’d been doing financial reporting this way. At the public-sector trough, you all dig your noses in with no fear of any such repercussions.

Michael Jankowski
Reply to  Anthony Banton
October 15, 2018 8:27 pm

“…Why should they need to do that when any errors would make no discernible (if that) difference to the trend?…”

Priceless.

Venter
Reply to  Michael Jankowski
October 15, 2018 10:51 pm

Priceless and shameless, isn’t it, Michael? These kinds of blokes are what populate the Met Office and call themselves “scientists”, a shame to genuine scientists. Errors like this, if found in clinical trial data or in financial data, would render the entire study and report invalid and suspect.

But these high priests of warming can do any kind of careless and shoddy work without any fear of repercussions, and have the gall to say “Move on, nothing to see here.” And they’ll have their water-carrying apologists like Mosher, Stokes and barry come along to defend any crappy work they do.

Andrew Wilkins
Reply to  Michael Jankowski
October 16, 2018 5:11 am

You beat me to it Michael.

“Why should we worry that all our cars only had 3 wheels fitted when they came out of the factory last month? It’s not a problem, as they always had 4 fitted for the previous 100 years. It’s all about the trend, don’t you see? Anyway, we adjusted the data and found that from 1905 until 1920 the cars actually only had 2 wheels fitted, so the trend is increasing. Before you know it, we’ll have 5 wheels on each car!”

The data is junk and Stokes, Mosher, et al know it.

Scott Bennett
Reply to  Anthony Banton
October 16, 2018 12:53 am

==> Anthony Banton

Jeez you’re obtuse!

There were 75 findings, including systematic errors larger than the total trend! You’re like a dog with a bone; you can’t let go of that one issue (station file errors). Read the paper and you will see that you have no idea what you are talking about.

There were 25 major findings, none of which relies on this straw man you keep burning.

The findings of the report had major implications:

The HadCRUT4 global annual average temperature anomaly supposedly increased from -0.176°C in 1950 to 0.677°C in 2017, an increase of 0.753°C. When all of the errors discussed in this report are corrected and appropriate error margins somehow determined for other factors, that 0.753°C is likely to decrease and the error margins increase. The new error margins might remove the statistical certainty that any global warming whatsoever has occurred since 1950. Even if warming was found to have occurred, but only at 50% of the trend indicated by HadCRUT4’s current figures, it might radically change public and political attitudes. – McLean, John D. 2017

Anthony Banton
Reply to  Scott Bennett
October 17, 2018 6:52 am

Scott Bennett:
What is obtuse, and what is “like a dog with a bone”, are the hounds on here answering faithfully to the dog-whistle.
Such that Middleton has already started a new article with the “now discredited HadCRUT” meme.
It’s entered Naysayer mythology.
No, it is not discredited just because rabid confirmation bias makes it so in the minds of the denizens here.

Nick Stokes knows more about global temperature data and its usage than anyone here, and I defer to his comments on the two threads regarding the “Bombshell” that this is not: a few errors (75? heck) found in original national met files containing millions of data points, errors that have NOT been shown to have been input into HadCRUT … and even if they were, the effect would be negligible:

“OK. This is no BOMBSHELL. These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them. You can find such a file of data as used here. I can’t find a more recent one, but this will do. It shows, for example
1. Data from Apto Uto was not used after 1970. So the 1978 errors don’t appear.
2. Paltinis, Romania, isn’t on that list, but seems to have been a more recently added station.
3. I can’t find Golden Rock, either in older or current station listings.”

“Also, ‘we do QA later’ doesn’t explain why obvious errors are still in the source data.”

“Because it is source data. People here would be yelling at them if they changed it before posting. You take the data as found, and then figure out what it means.”

“WTF? ‘Left to others?’ How can you get a PhD saying ‘I did the proofreading, but the calculations were too hard’? And if a PhD project can’t do it, who are those others?”

“It isn’t an enormous task at all. HADCRUT isn’t rocket science. You just write a program that emulates it, and then see what happens when you correct what you think is wrong with the data. I wrote a program years ago which I have run every month, before the major results come in (details and code here). I have done that for seven years. They are in good agreement. In particular, the simplest of my methods, TempLS grid, gives results which are very close to HADCRUT. If I used 5° cells and HadSST3 instead of ERSST, I could make the agreement exact. I wouldn’t expect to get a PhD from doing that, let alone from saying it was too hard.”

“In fact, HADCRUT do post the effect of every version change. Here is the page, dated 15 Sept 2016, quantifying the changes going from version 4.3 to version 4.4. They are quite invisible on the graph, but a difference plot shows them generally less than 0.01°C.”

“That comes back to the issue of ‘Why is it that skeptics always seem to be the ones…?’ Why is it that naysayers are always the ones complaining about how temperatures are calculated by scientists, but never doing a calculation themselves? It really isn’t hard. You don’t even need a PhD.”
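
For what it’s worth, the gridding approach Stokes describes can be sketched generically in a few lines: bin station anomalies into 5° cells, average within each cell, then take a cosine-of-latitude area-weighted mean over the cells. The station list below is invented for illustration; this is the textbook method, not the Met Office’s or Stokes’s actual code.

import math
from collections import defaultdict

stations = [  # (lat, lon, anomaly °C); illustrative values only
    (51.5, -0.1, 0.42), (40.7, -74.0, 0.38), (-33.9, 151.2, 0.29),
    (64.1, -21.9, 0.55), (1.3, 103.8, 0.21),
]

cells = defaultdict(list)
for lat, lon, anom in stations:
    cell = (math.floor(lat / 5), math.floor(lon / 5))  # 5° x 5° cell index
    cells[cell].append(anom)

num = den = 0.0
for cell, anoms in cells.items():
    cell_anom = sum(anoms) / len(anoms)   # average within the cell
    cell_lat = (cell[0] + 0.5) * 5        # cell-centre latitude
    w = math.cos(math.radians(cell_lat))  # area weight shrinks toward the poles
    num += w * cell_anom
    den += w

print(f"area-weighted global anomaly: {num / den:+.3f} °C")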

scross
Reply to  Anthony Banton
October 17, 2018 12:08 pm

Not so much 75 individual errors within the data itself, but rather (up to) 75 different kinds of error, as outlined in “Appendix 6 – Consolidated Findings” of his report. This is up from (up to) 26 different kinds of error outlined in “Chapter 8: Summary of Part 1” of his PhD thesis. I say “(up to)” because I’m leaving room open for the possibility that some of his claims may be incorrect, or at least inconsequential, and that some of them may in fact refer directly to individual errors within the data.

As to “who are those others”, given that he came up with so many more findings after publishing his thesis, I’d say it looks like he decided to take on that burden all by himself.

Scott Bennett
Reply to  Anthony Banton
October 18, 2018 10:42 pm

==>Anthony Banton

Again, a lot of words to say precisely nothing! A tale told by an NPC bot, full of sound and fury, signifying nothing! You can’t read, or perhaps comprehension is your problem. You never address anything I’ve written; you just go straight into your program loop.

The dataset is good enough for the IPCC, but it’s not good enough for your mate Nick! Apparently you didn’t get his memo:

CRUTEM4 (and HADCRUT) are shown with uncertainties. By the time you get back to 1950, they are large (about 0.5°C). SH uncertainty is over 1°C. I personally don’t use HADCRUT back to 1850, and I’m sure many don’t. – Nick Stokes

Anyway, good luck with that puppet gig you’ve got going!

Clyde Spencer
Reply to  Anthony Banton
October 19, 2018 4:26 pm

Anthony Banton,

I’ll give the Devil his due and acknowledge that Stokes is quite familiar with global temperature records. However, your statement, “Nick Stokes knows more about global temp data and it’s usage than the anyone here…,” is not supportable. With regulars like Roy Spencer and, today, John McLean himself, and even Mosher and Anthony Watts, you are engaging in hyperbole.

Steven Mosher
October 16, 2018 3:19 am

“We previously acknowledged receipt of Dr John McLean’s 2016 report to us which dealt with the format of some ocean data files.

“We corrected the errors he then identified to us,” the Met Office spokesman said.

Read carefully: OCEAN FORMAT.

The data errors, as I explained, are NOT in the final product.

The site he quoted doesn’t have enough data, and it has never been in any of the copies I have

EVER

Andrew Wilkins
Reply to  Steven Mosher
October 16, 2018 5:02 am

Your typing in CAPITALS.

I think Dr McLean’s put the wind up Mosher. Excellent!

Andrew Wilkins
Reply to  Andrew Wilkins
October 17, 2018 2:35 am

*you’re

John McLean
Reply to  Steven Mosher
October 19, 2018 4:33 am

I’ve replied to this above where I describe the errors that I reported to the Hadley Centre and CRU in 2016.
I forgot to mention that a Bishop Hill blog post in March 2016 has the details of the problems.

Steven Mosher
Reply to  John McLean
October 21, 2018 11:12 am

Did you remember that they apply a filter screening out outliers beyond 5 sigma?

“Each grid-box value is the mean of all available station anomaly values, except that station outliers in excess of five standard deviations are omitted.”

Ron Broberg and I covered this back in 2010

As Ron wrote

“Out of that set, there are 425 records that are more than 5-SD off of the norm on the high-side and 346 records that are more than 5-SD off of the norm on the low-side. Only about 0.02% of the records. “
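
The rule Mosher quotes is simple to state in code, and so is the catch McLean raises further down this thread: the filter is only as good as the standard deviation fed into it. A sketch, not CRU code; the numbers are the Apto Uto figures discussed below.

def five_sigma_filter(values, mean, sd):
    """Keep values within mean ± 5*sd; return (kept, rejected)."""
    kept, rejected = [], []
    for v in values:
        (kept if abs(v - mean) <= 5 * sd else rejected).append(v)
    return kept, rejected

# With a sane SD the rule works: 83.4 °C against a 28 °C mean and
# an SD of 0.6 is rejected immediately.
print(five_sigma_filter([27.5, 28.1, 83.4], mean=28.0, sd=0.6))
# But with an SD inflated to 11.8 by the very outliers it is meant to
# catch, the same absurd value passes: 83.4 lies within 28 ± 59.
print(five_sigma_filter([27.5, 28.1, 83.4], mean=28.0, sd=11.8))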

Scott Bennett
Reply to  Steven Mosher
October 21, 2018 11:48 pm

==>Steven Mosher

Dr McLean has addressed this several times:

The problem, in a nutshell, is that the Hadley Centre and/or CRU fail to remove outliers from the data before they calculate the standard deviations. This can lead to ridiculous values, which, when multiplied by five to set the limits above and below the mean, become positively bizarre.

The metadata for Apto Uto contains the following line:

Standard deviations = 0.6 0.6 0.5 11.9 0.5 11.8 12.0 0.6 0.5 0.6 0.6 0.7

Five standard deviations for most of those months means no more than 3.5°C, but in three of those months it is 59, 59.5 and 60 degrees. The long-term averages in those months are around 28°C, and together that means the temperatures of 81.5, 83.4 and 83.4 are all less than five standard deviations from that mean.

Before calculating the standard deviations from a subset of the data, any outliers should have been removed and the process repeated until all the data fell within limits, which probably should have been the more common three standard deviations anyway. – John McLean 2018*

Here is the header from the 2018 station data file that I downloaded this month:

“Number= 800890
Name= APTO_OTU
Country= COLOMBIA
Lat= 7.0
Long= 74.7
Height= 630
Start year= 1947
End year= 1988
First Good year= 1947
Source ID= 79
Source file= Jones
Jones data to= 1988
Normals source= Data
Normals source start year= 1961
Normals source end year= 1988
Normals= 24.1 24.4 24.6 27.8 24.6 27.9 28.0 24.6 24.4 24.1 24.1 24.0
Standard deviations source= Data
Standard deviations source start year= 1947
Standard deviations source end year= 1988
Standard deviations= 0.6 0.6 0.5 11.9 0.5 11.8 12.0 0.6 0.5 0.6 0.6 0.7
Obs:…”

*http://joannenova.com.au/2018/10/hadley-excuse-implies-their-quality-control-might-filter-out-the-freak-outliers-not-so/#comment-2060139
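
The remedy McLean describes (trim the outliers, recompute the standard deviation, repeat until everything falls within the limit) looks like this in outline. Only the shape of the procedure follows his description; the April-like values are invented for illustration.

import statistics

def robust_stats(values, n_sigma=3.0):
    """Iteratively drop values beyond n_sigma SDs, then return (mean, sd)."""
    vals = list(values)
    while True:
        m, sd = statistics.mean(vals), statistics.pstdev(vals)
        kept = [v for v in vals if abs(v - m) <= n_sigma * sd]
        if len(kept) == len(vals):
            return m, sd
        vals = kept

# Twelve mostly ~28 °C monthly values plus one 83.4 °C keying error:
april = [27.8, 28.0, 27.9, 28.1, 27.7, 28.2, 83.4, 27.9, 28.0, 28.1, 27.8, 28.0]
print("naive SD :", round(statistics.pstdev(april), 1))               # ~15.3, inflated
print("trimmed  :", tuple(round(x, 2) for x in robust_stats(april)))  # ~(27.95, 0.14)

With the naive SD of about 15.3, a five-sigma gate is roughly ±77°C and the 83.4°C entry sails through; after one trimming pass the SD collapses to about 0.14 and the limits become sensible.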

John McLean
Reply to  Scott Bennett
October 22, 2018 12:32 am

Mosher has a strange faith that the Hadley Centre and CRU documentation he refers to is correct.
He needs to catch up with the 2012 documentation for both CRUTEM4 and HadCRUT4, and even then he needs to examine the data rather than just believe the documentation.

Anthony Banton
Reply to  Scott Bennett
October 22, 2018 5:03 am

From that link:

“I asked John to expand on what Hadley means. He replies that the quality control they do is very minimal, obviously inadequate, and these errors definitely survive the process and get into the HadCRUT4 dataset”

Handwaving denigration.
Let’s have proof of this “obviously inadequate” QC, please.

Anthony Banton
Reply to  Scott Bennett
October 22, 2018 5:06 am

Which isn’t what has been produced so far.
Do what McLean should have done in order to gain a PhD worth its name… and crunch the numbers.

Caligula Jones
October 16, 2018 9:20 am

John Endicott, October 16, 2018 at 6:30 am:

“The question of the moment is what will happen if we burn a whole lot of carbon … and the answer is pretty much the same as any other time there was a whole lot of carbon in the atmosphere. It’s not that difficult to understand that there is nothing unprecedented about our current PPM of CO2 in the atmosphere.”

True…but….

My problem when I point this out to alarmists is that they DO have it right about humans causing issues with nature, but it’s not necessarily about climate change.

Its about land use.

If species A “remembers” (somehow) that it’s supposed to head north when it gets hot, and there are now highways and quite a few large honking cities in the way… might be an issue.

If, when they get there, they try to eat something that was wiped out 10,000 years ago… might be an issue.

Or not, depending on whether you get the sniffles over evolution.

John Endicott
Reply to  Caligula Jones
October 16, 2018 11:43 am

I agree that land use is a much bigger environmental issue than our small contribution to a trace gas in the atmosphere (that, frankly, is a net benefit to life on Earth).

TallDave
October 18, 2018 12:44 pm

The easiest person to fool is yourself.

stock
October 24, 2018 3:57 am

https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html

Why is the Sept 2018 HADCRUT not yet released?!

stock
Reply to  stock
October 24, 2018 4:00 am

I sent an email to them to ask:
https://www.metoffice.gov.uk/about-us/contact
