WUWT readers surely recall this story: BOMBSHELL: audit of global warming data finds it riddled with errors
While not issuing a press release, the scientists have responded to press inquiries.
Britain’s Met Office welcomes audit by Australian researcher about HadCRUT errors
Graham Lloyd, The Australian
Britain’s Met Office has welcomed an audit from Australian researcher John McLean that claims to have identified serious errors in its HadCRUT global temperature record.
“Any actual errors identified will be dealt with in the next major update.”
The Met Office said automated quality checks were performed on the ocean data and monthly updates to the land data were subjected to a computer-assisted manual quality control process.
“The HadCRUT dataset includes comprehensive uncertainty estimates in its estimates of global temperature,” the Met Office spokesman said.
“We previously acknowledged receipt of Dr John McLean’s 2016 report to us which dealt with the format of some ocean data files.
“We corrected the errors he then identified to us,” the Met Office spokesman said.
I’m sure that crap data apologists Mosher and Stokes will be along to tell us why this isn’t significant, and why HadCRUT is just fine, and we shouldn’t give any attention to these errors. /sarc
Jo Nova adds:
Without specifically admitting he has found serious errors, they acknowledge his previous notifications were useful in 2016, and promise “errors will be fixed in the next update.” That’s nice to know, but begs the question of why a PhD student working from home can find mistakes that the £226 million institute with 2,100 employees could not. Significantly, they do not disagree with any of his claims.
Most significantly, they don’t even mention the killer issue of the adjustments for site moves — the cumulative cooling of the oldest records to compensate for buildings that probably weren’t built there ’til decades later.
More on her take here.
Jo makes a good point. Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?
Perhaps it is because they simply don’t care, and curiosity takes a back seat to money. Like politicians looking to the next election, Climate Inc. has become so dependent on the money train that their main concern is the next grant application.
Eisenhower had it right. We’ve all heard about Dwight D. Eisenhower’s farewell address, warning us about the “military industrial complex”. It’s practically iconic. But what you probably didn’t know is that the same farewell speech contained a second warning, one that hints at our current situation with science. He said to the Nation then:
Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades.
In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.
Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.
The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.
Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.
I will be interested to see how the Met Office ‘corrects’ the total lack of observations in the southern hemisphere in the early record and their sparsity in the northern hemisphere. Perhaps an admission that their early record is insufficient to use as a ‘global’ temperature set. I would have thought that there is a minimum coverage requirement before any surface observation set can be called ‘global’ and with the reduction in observation stations we are perilously close to that now.
In science data rules, but in politics the data is what they say it is. We have been immersed in global politics with this meme since the UN established the IPCC explicitly to find an adverse effect between man and his environment (or should I say woman and her environment; after all, we wouldn’t want to be POLITICALLY INCORRECT).
“In science data rules, but in politics the data is what they say it is.”
If A is greater than B, and B is greater than C, then A is greater than C, except where prohibited by law.
—Robert Anton Wilson
…except where prohibited by politics, which is even worse!
Shaddup and be happy your chocolate ration has been increased to 20 grams per week, instead of the old 30 grams per week…
LOL … that’s a keeper !
What?!?!?!? Ration chocolate!?!?!?
Up against the wall (beep).
Chocoholics Unanimous meeting: Godiva’s at 5:00
Models, that is always the answer when you’ve spent as much as they have on computing power.
The real answer is probably less sinister, and that is academic snobbery. Any computer model is only as good as the data it is founded on and tested against. The cost of computing means that corners are cut elsewhere, and one of the biggest cuts is in data collection, which, as far as I can see, is scarcely if at all mentioned in peer reviews.
I do know from personal experience that when a lecturer I knew criticised the methods and attention to detail in the data collection, he was told by someone who was a senior scientist at the time to go back to his grease-monkeying and leave science to scientists. Now of course they would not express this sentiment, but questioning reveals nothing has changed.
I was working with HadCRUT4 data downloaded in September of 2018 and I compared it with the version I downloaded in July of 2014. Subtracting the 2014 values from the 2018 values shows a set of adjustments to the monthly median temperature anomalies ranging from about -0.06 degrees to about +0.06 degrees. The decade with the largest cooling adjustment was the 1860s (the average adjustment was -0.0198) and the two decades with the largest warming adjustments were the 2000s (+0.0059) and the partial decade from 2010 to June 2014 (+0.0203).
The average temperature anomaly from 1850 to 1899 was -0.313 as reported in June 2014 and -0.315 as reported in September 2018. The difference between the upper and lower confidence intervals on the reported temperature anomaly averages 0.6 degrees for the period, so reporting to three decimal places seems a bit silly to me. Isn’t 1850 to 1900 supposed to establish the benchmark temperature against which we measure the 1.5 degrees that was formerly known as 2.0 degrees?
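For anyone who wants to reproduce that comparison, here is a minimal Python sketch. The file names are hypothetical, and it assumes the published HadCRUT4 time-series layout (whitespace-separated columns beginning “YYYY/MM median …”); check your own downloads before trusting the column positions.

```python
from collections import defaultdict

def load_series(path):
    """Return {(year, month): median anomaly} from a HadCRUT4 time-series file.

    Assumed layout: whitespace-separated columns, first column "YYYY/MM",
    second column the median anomaly in degC.
    """
    series = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 2 or "/" not in fields[0]:
                continue  # skip blank or header lines
            year, month = fields[0].split("/")
            series[(int(year), int(month))] = float(fields[1])
    return series

old = load_series("hadcrut4_monthly_2014-07.txt")  # hypothetical file names
new = load_series("hadcrut4_monthly_2018-09.txt")

# Mean adjustment (new minus old) per decade, over months present in both.
sums, counts = defaultdict(float), defaultdict(int)
for key in old.keys() & new.keys():
    decade = (key[0] // 10) * 10
    sums[decade] += new[key] - old[key]
    counts[decade] += 1

for decade in sorted(sums):
    print(f"{decade}s: mean adjustment {sums[decade] / counts[decade]:+.4f} degC")
```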
Baselines are typically 30 years. We’re presently working with 1981-2010, but will move to 1991-2020 soon.
In 2010, D’Aleo and Smith reported that in the period from the 1960s to the 1980s the number of stations used for calculating global surface temperatures was about 6,000, but it dropped rapidly to about 1,500 by 1990. Further, large gaps began appearing in some of the reported data.
In the Climategate e-mails, Phil Jones wanted to stay with 1961–90, partly because it was one of the coldest periods in the 20th century and partly because of the major loss of stations after that time.
“Perhaps an admission that their early record is insufficient to use as a ‘global’ temperature set.”
If you go back in time, there will always be a point where you decide that the record is insufficient. For GISS and NOAA it is 1880. HadCRUT decided to post numbers back to 1850. The quality always fades towards the chosen end, and you can always say it should have been a few years earlier or later.
But HadCRUT post the error estimates, increasing as you go back, and you can make your own decision about whether to use the calculation.
You do realize those “error estimates” should increase as you go back in time, right?
………exponentially
“But HadCRUT post the error estimates…” Fantasy land.
So the Southern Hemisphere was:
I would say it would be more honest to say that there was no information for a global dataset in the 1850s and that remains the case until there are sufficient sensors.
Is that difficult?
Well yes it is.
How many sensors, and what distribution, are needed to give an accuracy for global temperatures of +/- 5 degC? Mathematically, you can probably create an average with a precision of several decimal places (which is in fact what is done), but the accuracy is still unlikely to be as good as +/- 5 degC at best, even assisted by some applied guesswork.
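That precision-versus-accuracy point is easy to demonstrate with a toy simulation (a sketch only; every number in it is invented, not a claim about any real network). Averaging thousands of readings that share a common bias yields a mean with a tiny standard error that is still wrong by the full size of the bias.

```python
import random

random.seed(1)
TRUE_TEMP = 15.0     # hypothetical true global mean, degC
SHARED_BIAS = 2.0    # hypothetical bias common to every station (e.g. siting)
N_STATIONS = 5000

# Each reading = truth + common bias + independent noise.
readings = [TRUE_TEMP + SHARED_BIAS + random.gauss(0, 1.0) for _ in range(N_STATIONS)]

mean = sum(readings) / N_STATIONS
# The standard error of the mean shrinks like 1/sqrt(N) ...
var = sum((r - mean) ** 2 for r in readings) / (N_STATIONS - 1)
sem = var ** 0.5 / N_STATIONS ** 0.5
print(f"mean = {mean:.3f} +/- {sem:.3f}  (precision)")
print(f"true = {TRUE_TEMP:.3f}; error = {mean - TRUE_TEMP:+.3f}  (accuracy)")
# ... but no amount of averaging removes the shared 2 degC bias.
```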
Ideally a climate record would go back through 2 or 3 full cycles of glaciation and their associated interglacial periods. Anything less than this is insufficient to make meaningful determinations of what is and is not within the range of possible “natural variability.” Basing anything on a record made during the instrumental era is totally insufficient, and any sample of that short a duration would be non-representative for any reasoned statistical use to decide long term climate.
The long ice core records show CO2 not to be the driving influence of end-of-glaciation warming, but rather show CO2 increases to be an effect of interglacial warming (probably due to ocean warming leading to CO2 outgassing). Which suggests that whatever melted the ice was natural, and that CO2 played no vital role. Go ahead Nick, convince me that a buggy, tendentious, infinitesimally short HadCRUT-based record can teach us anything meaningful about natural climate variability on these time scales. This will be a very difficult sales job for you. But if you don’t at least try, forever resign yourself to the role of poseur within the set of people who claim to understand the Earth’s climate.
“The long ice core records show CO2 not to be the driving influence of end-of-glaciation warming”
Who said it was? Of course it wasn’t. No-one was emitting CO2 then.
Then clearly the climate changed without CO2 changing. There are plenty of ‘climate drivers’ other than CO2.
Hence claims that CO2 is THE climate control knob are obviously false.
This is not difficult stuff.
“There are plenty of ‘climate drivers’ other than CO2.”
Yes, there are. There are plenty of things that will kill you other than sharks. That is no help if a shark is heading your way.
The question of the moment is what will happen if we burn a whole lot of carbon. We’re doing that. It hasn’t happened on such a scale before.
Good to see Nick acknowledge that what will happen when we burn a whole load of carbon is still a question, and not settled science after all.
The question of the moment is what will happen if we burn a whole lot of carbon
and the answer is pretty much the same as any other time there was a whole lot of carbon in the atmosphere. It’s not that difficult to understand that there is nothing unprecedented about our current ppm of CO2 in the atmosphere; it’s been higher at previous points in the past, and it’s been lower as well. And the times it’s been lower have not been too good for life on the planet, whereas times when it has been higher have been prosperous times for life on the planet. Rising CO2 is not the problem you imagine it to be.
“Hence claims that CO2 is THE climate control knob are obviously false.
This is not difficult stuff.”
What’s “not difficult” (unless because of something other than science comprehension) is that CO2 is a GHG and, whether it comes before or after another warming driver, it will reduce the Earth’s ability to cool.
Why is it that that is so difficult for some denizens to understand?
The ice-cores record the way the Earth has responded to changes in absorbed solar energy.
Primarily via our eccentric orbit around the Sun and axial tilt (Milankovitch).
In the past (excepting massive volcanic out-gassing in the PETM, for instance) CO2 has come along after. That is the way the carbon cycle behaves.
Now it’s coming first ….. because humans have increased atmospheric CO2 content by greater than 40% from the in-balance carbon cycle level of ~280ppm pre-industrial.
IE: Not the natural carbon cycle.
@Anthony Banton
“What’s ‘not difficult’ (unless because of something other than science comprehension) is that CO2 is a GHG and, whether it comes before or after another warming driver, it will reduce the Earth’s ability to cool.”
If that were of any consequence, then the ice core reconstructions would show the alleged CO2 “contribution” to warming. They do not. They show CO2 FOLLOWING temperature, up AND down, with the same ~800 year time lag. And whatever claim you want to make about CO2 “contributing” to warming once the time lag has been made up and BOTH temperature and CO2 are rising, it isn’t supported by the data. Because if it was, then once what was (excuse me) REALLY causing the temperature rise stopped, what we SHOULD see, but DO NOT SEE, as long as CO2 levels continue to rise (i.e., for another 800 years) is temperatures CONTINUING TO RISE, at a lower rate, that lower rate being CO2’s “contribution” to the warming. Instead, what we see is temperatures FALLING, WHILE CO2 LEVELS CONTINUE TO RISE, which tells us that CO2’s “contribution” is essentially ZERO.
“If that were of any consequence, then the ice core reconstructions would show the alleged CO2 “contribution” to warming. They do not. They show CO2 FOLLOWING temperature, up AND down,”
Of course they do!!!!
Still not comprehending, sigh!
The ice-core record is of the natural carbon cycle’s response to temperature change.
OK?
CO2 content within the CC is by definition in balance with the planet’s sinks/sources (at any small finite time).
(say) The temp rises due to an increase of insolation over higher northern latitudes (greatest land-mass area and high albedo/melt change potential).
Consequently that temp rise reduces the ocean’s ability to absorb CO2 – leading to an imbalance in the CC that leads to a slow – I mean very, very slow, increase in atmospheric CO2 content ….
“The fastest large natural increase measured in older ice cores is around 20 ppmv (parts per million by volume) in 1,000 years (a rate seen during Earth’s emergence from the last ice age around 12,000 years ago). CO2 concentration increased by the same amount, 20 ppmv, in the last 10 years! ”
https://www.bas.ac.uk/data/our-data/publication/ice-cores-and-climate-change/
That creates an increased GHE via there being more GHGs in the atmosphere (CO2,CH4,H2O).
Leading to a +ve feedback and a further push on rising temps.
The same happens in reverse as CO2 follows the falling temp as northern insolation reduces…..
Global temps fall.
The oceans are able to absorb more CO2.
The atmos can hold less WV.
Reduced atmos GHG’s.
Reduced GHE.
+feedback of CO2 pushing falling GMSTs.
CO2 LAGS temp as seen via the natural CC in the ice-core record.
That is as it should be.
What’s different now?
No solar forcing (as experienced over the millennia of orbital change).
However – increasing CO2 (100x faster than the natural ice-core recorded increase).
AGAIN – it matters not when CO2 enters the atmosphere.
It’s a GHG and MUST reduce the Earth’s ability to cool if it increases.
CO2 now leads and not follows because it’s NOT a natural CC response as (again) we see in the ice-cores.
Instead of a feedback to temperature change, it is now the driver of temperature on its own account.
Sorry, but I can’t make it simpler than that.
AB:
What a pile of silliness and horse feathers. You really don’t have a clue what’s actually going on, do you?
I know he’s not a scientist, but propagandist and politician Al Gore (ManBearPig) said it in his ridiculous film. The UK government endorsed this view when they forced all the poor bastards attending public school to watch his drivel in their classrooms. Real Climate is on record as saying Al is not wrong. Who do you need to hear from before you admit that this is your team’s official position? Gavin and Jeff Severinghaus do some clever prestidigitation and finessing on how causes can happen after effects, but they come across as desperate and stupid.
Nick, are you agreeing that the pointer of falsification points away from CO2 as a cause of glacial to interglacial warming? Do you disagree with the RealClimate excuses?
At what point do error estimates become so large, they render the whole exercise meaningless?
The modern data still suffers from grossly insufficient sampling.
You just adjust missing data into existence. Next time, ask a really difficult question.
What is the betting that once the “corrections” have taken place it will be worse than we thought?
“Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?”
It’s like everything else, Anthony. The “consensus alarmists” see one side of the story: theirs. And whilst sceptics are subject to exactly the same propaganda as them, and we understand the arguments supporting climate alarmism, we have also taken the time to educate ourselves on the other side of the debate.
But of course we’re the idiots for taking an objective view of the subject, understanding both sides of the argument and reaching a reasonable conclusion.
PS
Nor do we run round with our hair on fire screaming “the world’s going to be perfect in 50 years’ time”; all we’re saying is that, judging by the last 40 years (or a couple of hundred years of shoddy weather records), there appears no reason for alarm, as the planet is in better nick now than it has been for many years. It’s more prosperous, there’s less poverty, more cures for disease, longer lives and, of course, it is far more peaceful than it has ever been in mankind’s history.
And I have to laugh when climate is blamed for the Syria crisis. Did the climate cause the American Revolution, the Napoleonic wars, WW1 or WW2, Vietnam or the Korean war? If so then it must be the force that has brought about a more peaceful world since then.
But of course while the claim about Syria is a reasonable conclusion to draw, the rest of my statement is just ridiculous………..
Point of interest in there, HotScot — those “shoddy” records never needed to be anything better. Nor, by and large, do they now.
My “official” temperature at the moment (the thermometer on the north-facing wall that gives my weekly max/min readings) says 21.4. The one out in the open says 24.8. A 3.4° difference in less than 50 feet. But so what? And not to mention the 12° difference since 6 o’clock this morning! Interesting information, but what exactly am I supposed to do with it?
Agreed that we need standards if we are to get reliable figures, but what are we going to do with those figures when we have them? This seems to me to be where we see the emperor in his new clothes. Experts collect these figures and then massage them, homogenise them, “nurse them, rehearse them, and give out the news” that they can’t predict what they will be more than four days ahead — today’s forecast maximum four days ago was for 17°; this morning it was for 21 — but that they know that Schellnhuber’s totally-out-of-thin-air 2° more than some unspecified “average” for 200 years ago (when the figures really, really were shoddy by today’s standards) is going to mean doom unless we cough up large amounts of cash, payable to heaven knows who, from now on.
And we fall for it!
My thermometer is around 10 metres from my drive. A few degrees’ shift in the wind makes the reading change by nearly five degrees on a sunny but warm (rather than hot) day, depending on whether the air comes over the drive at 45 degrees C or the grass at 20 degrees C, according to the hand-held infrared thermometer. I really don’t understand how, unless they live under different rules, readings to sub-degree levels can make any sense at all.
Exactly. Temperature readings are all about where the thermometers are located, and the characteristics of those locations. Since those have changed from largely rural to largely urbanized over time, there is always going to be a spurious upward trend in the temperature readings. And the “adjustments” are being done in a manner exactly OPPOSITE what they should be to reflect this.
What amazes me is that alarmists like Nick Stokes have no comment when other alarmists claim that the Arctic ice will disappear within 5 years, and then 5 years comes and it is still there. There are dozens of other predictions that never come true. The alarmists have cried wolf so often; the global warming farce is getting tedious. We skeptics realize that it is a religion to alarmists like Nick, but when you plead to your God a thousand times and he/she doesn’t answer, a reasonable person would begin to doubt.
HotScot wrote:
“Why is it that skeptics
always seem to be the ones
that find the errors
in climate data, hockey sticks,
and other data machinations
produced by the well-funded
climate complex?”
My comment:
I’ve been writing
an economics newsletter as a hobby
since 1977. Every issue contains
typoes and gramma errers
that I find later,
because after writing two drafts,
I have no desire
to proofread what I wrote
when I’m charging only
one dollar an issue !
If the confuser
didn’t spot my errors
while I was typing,
I just assume they
couldn’t be too bad.
You can’t find errors
unless you look for them,
with a desire to find them,
and then correct them.
Concerning the errors coming in from
the national meteorological offices
listed in the summary of McLean’s report
(I’m too cheap to pay $8 for the actual report),
… I don’t know how many of these “local errors”,
if any, are caught and fixed before compilation
of the HadCRUT4 average global temperature.
I assumed John McLean didn’t know that either
— government bureaucrats with
science degrees tend to guard their
“final” adjusted, readjusted,
and re-re-adjusted “data”
with a pack of junkyard dogs
(led by bureaucrats such as
Michael “Fido” Mann).
The fact that such obvious
“garbage data”
was submitted
in the first place
concerns me.
The fact that
weather satellite data
and weather balloon data
are similar to each other,
but both show less warming than
surface data, concerns me too.
The fact that most of the planet
is not populated, and has no weather
stations, makes the surface data into
an infilled / wild guessed laughingstock
— infilled numbers can never be verified
before 1979, or falsified.
After 1979, surface data can be compared
to the two other measurement methodologies,
satellites and balloons — and they are different
= not verified.
Sort of shooting down my own arguments:
(1)
The average temperature is not “the climate”,
and is meaningless to people — no one lives
in the “average climate”, and
(2)
Slight warming of the average temperature,
mainly at night,
mainly in the colder months,
and mainly in the colder latitudes,
is GOOD NEWS
— If we worry, then we ought to worry
about the large temperature declines
after the current interglacial ends,
perhaps delayed somewhat
by adding CO2 to the air?
A lot of people on this planet would suffer
from a significantly colder climate.
No one would care about a +2 degree rise
in the average temperature in the next 600 years
(extrapolating the actual
warming rate since 1950,
and assuming
2 ppm / year CO2 growth
for the next 600 years).
My climate change blog,
with over 25,000 page views so far:
http://www.elOnionBloggle.Blogspot.com
“If the confuser didn’t spot my errors…” I’m confused, does that make me the confuser?
I think he means “computer”, and “confuser” is his way of taking a jab at the role they play in our lives.
Is this meant to be in prose form??
Otherwise, please be sure to correct formatting when copy and pasting.
Sorry to be pedantic, but it’s just a little annoying.
I think those doing these metrics need to get acquainted with even the basics of a quality system like ISO 9000.
Mike Haseler
Just a quick audit here, isn’t it 9001 now? 🙂
(As I remember)
9000 was the overall concept.
9001 was design, manufacture and servicing
9002 was just manufacture and servicing
I think there was a 9003 which presumably was just servicing.
However,
You are pretty much correct, though they’ve roped in some industry-specific standards as well that are covered by letters instead of numbers. My mother is an ISO consultant and she throws around enough acronyms to make my eyes glaze over, and I’m a US Army retiree.
9003 for ‘services rendered’?
So there is a Q/A program for brothels now? Amazing.
Aaaah,
the good old days of BS5750 as a new toy to play with, based on the super AQAP 1-4.
I never did like the number 9000, and yes it was the series.
Even here the numbers keep on inflating, next they will simply make it 5 digits long,
darn, it already is
make way for 6 digits !
It’s now ISO 9001.
If you wish to NOT be assessed for ‘Design’, you need to include a statement (itself auditable) that that is the case.
There has, over the last thirty or more years, been a burgeoning industry: ISO –
9001
14001
18001
23000
25000
44000 and doubtless more I have missed – [22000, 50000??].
I retired – a former ISO 9002, 9001 and [Shipping’s ISM Code] Lead Auditor – in 2017.
And happily so.
And not battling with Southern Railways, who would normally take me into London, often on time. The return journey was less certain! Sometimes by buses!!
Auto
I recall being part of a BS5750 audit in the ’80s and, being young and new to industry, I thought it was something important. Before the audit, however, I was pulled to one side and told by my line manager: “The answer to give to any question you are asked is ‘Please refer this question to my BS5750 coordinator’.” From then on I knew what the BS in BS5750 meant, and it wasn’t British Standard.
9004 is just going to be PC.
ISO is only a scheme to document what you are doing.
Even if you are doing it wrong.
I did some ISO 9001
quality management work
in product development
before I retired at the end
of 2004.
Most important
was to document
the PD process in detail,
train the engineers,
and follow the process
consistently.
There is no way
government bureaucrats
would ever document
the ACTUAL process
of how NOAA / NASA
come up with
an average global
surface temperature !
( assuming there is a “process”
— it might be an ad hoc process,
managed “from the top”,
that starts with the politically
correct climate conclusion,
and then works backwards
to support the conclusion ! )
Does anyone think
goobermint bureaucrats
document the ACTUAL
infilling,
adjustments,
re-adjustments,
re-re adjustments,
and how they can claim a
+/- 0.1 degree C.
margin of error ?
( Remember when one year was a
few hundredths of a degree warmer
than the prior year ?
Do you expect them
to document how they could have
measured the global temperature
in hundredths of a degree C. ? )
ISO quality control will never happen.
Remember their motto:
“It’s good enough for government work !”
The reason the private sector cares about quality is that it has real customers who want real quality and don’t want to pay for rubbish.
And there’s a reason why the public sector doesn’t care about quality, doesn’t care about those who have to deal with it, and spends all its time on politicised rubbish.
Mike Haseler
Thus the old quip, “Good enough for government work!”
Australia had a spate of “quality control” at one stage a few years ago, which led to an addition to “good enough for government work” –
“Quality control that doesn’t control quality”
Process rich, outcome poor.
Richard,
You mean, something like ‘Choose the desired temperature, then retrofit the process?’ /s
See http://joannenova.com.au/2015/06/if-it-cant-be-replicated-it-isnt-science-bom-admits-temperature-adjustments-are-secret/ for the reasons for the Australian BOM not being able to ISO 9000 their processes.
E.g.:
“several choices within the adjustment process remain a matter of expert torture of the data using the appropriate ideological slant and circular logic to support the pre-conceived conclusions”
There, fixed it for ’em.
Let’s set our sights a bit lower and get them to pass the standards of Poundland, a cheap commercial chain here in the UK.
ISO9000 led to many bad processes continuing – but they were extensively analysed and documented to ensure that everybody consistently made the same mistakes.
It could be argued that in the case of the climate change question/problem/dispute, Ike’s warnings should be inverted: it is public policy and the politicians and their financial confederates that have captured the scientific and technological elite, not the other way about.
BTW I hope no one over there is offended by my calling Eisenhower “Ike”. It was how he was always known to my father’s generation (the one that lived through the war) and to the press of that period.
No problem. For those of us growing up when he was president, the expression “I like Ike” was the way to show support.
I believe the only person you would annoy would be the President’s mother. She didn’t like nicknames, or so I heard some 60 years ago, so she named her son Dwight.
Seems not to have helped.
Scientists aren’t captives of government. Funds were the magnet and unresisting scientists are the iron filings. They even line up perfectly with the demands of the magnetic field.
Trillions spent, and unpaid sceptics find all the errors. Makes one wonder how 99% of real, fundamental scientific discovery was self-funded prior to 100 years ago, while these science-lite guys need billions a year to crank out the same chaff with no advancement beyond a little linear formula discovered by Tyndall in the 19th century.
Einstein had to spend his days as a Swiss patent clerk to put bread on his table, and after dinner he revolutionized our understanding of the Universe and corrected the work of Sir Isaac Newton, no less. Heck, he only needed a pencil, paper and some pipe tobacco, which he paid for out of pocket.
Our foremost paleoclimatologist, Steve McIntyre, a mining engineer in his day job, similarly at his own cost struck down the hockey stick and corrected Mann’s faulty stats (Mann’s invalid method converted red noise into hockey sticks and basically breathed new phlogiston into alchemy), caused other authors to have papers withdrawn, etc., etc. He also had doubts about use of the popular bristlecone pine as a temperature proxy, and inquired of Mann if the series had been updated (from the 1980s, IIRC). Mann said it was too expensive and time consuming, so McIntyre set himself a challenge: to take off in the morning from Starbucks in a Colorado mountain town, climb up to update the series, and be back at the Starbucks by afternoon. He did just that, and his findings retired bristlecone pines from paleoclimate duty.
Anthony Watts’s US weather-stations exposé was another that got NOAA into action. I think volunteer crowd-sourcing to check work done by alarmists is the way to put these shameful (shamful?) gougers out of business.
Gary Pearse
No computer ever came up with a scientific theory. It’s all from the brain, and the proof of the pudding is in the blood, sweat and tears, not a spreadsheet, which is no better than an abacus.
Trouble is, Gary, that these (shamful) gougers are not in business. They get paid for just keeping their noses clean.
Of course this in no way undermines a well established religion and its adherents. Nor does it undermine the power structure and money flows within that religion. The faithful will endure all critique from the unwashed. You can’t slow the ship now even with evidence of icebergs ahead.
There are far more inane discussions going on.
For example today people are congratulating two Royals for finally working out how to have sex together.
Even the alarmists who comment here are intellectually above the sexgratulatory comments on other fora.
‘Any actual errors identified will be dealt with in the next major update.’
by models that give us the result we ‘need’
“The HadCRUT dataset includes comprehensive uncertainty estimates in its estimates of global temperature,” the Met Office spokesman said.
Of course we never say that to the media nor to politicians, and by “estimates” we mean something either below or above; just don’t ask us to define that “something”.
“We previously acknowledged receipt of Dr John McLean’s 2016 report to us which dealt with the format of some ocean data files.
And filed it, “ironically”, under inconvenient truth.
“We corrected the errors he then identified to us,”
But as we never make any, because “models”, this is a non-problem.
As has been noted before, I would compare these scientists to prostitutes, but prostitutes provide a needed service unlike “climate scientists” on the government dole.
1. Mosher told me he personally identified an error in GHCN with one station reading 15000ºC, which after he corrected it had no noticeable effect on the global average! Within 1 hour he corrected himself and said the error was “only” 5000ºC.
2. Why do skeptics find errors, never the ‘climate consensus’? Over here, Philip B. Stark and Andrea Saltelli wrote that there has been a fundamental shift in how science is done since the 1960s. Prior to the 1960s people entered science as a vocation; they cared about being right and doing right. With the vast expansion in higher education in the West since then, people now enter science as a career; they care about grants, papers published, career advancement, … There’s no fix for this. Much of science is broken and will continue to be broken while so many treat scientific knowledge as a tool to lever in their career advancement. Science is not like other pursuits. It is easily gamed, because even scientists can be quite naïve in accepting truth claims made in niche disciplines outside their expertise.
Climate God who govern’t our Planet,
Hallowed be thy existence.
Thy taxes come.
Thy influence be done
on earth as it is in the atmosphere.
Receive today our indulgence,
and abolish our environmental sins,
but for those with diesel cars,
Tempt us not with science and common sense,
but deliver us from those of no faith.
For thine is the Internet,
and the Grants, and the Smugness,
for ever and ever
Amen
Beautiful
I don’t see why it’s a problem that ‘they’ adjust the 1800s temperatures downward. Nobody says that anthropogenic CO2 caused any warming before 1950. Cooling the 1800s temperatures just means that more of the modern warming is unambiguously due to natural causes. That means any calculations of equilibrium climate sensitivity (ECS) should be reduced.
Any pre-1950 warming should weaken the alarmist position. The more ‘they’ cool the 1800s temperatures, the greater the pre-1950 warming, the less the ECS. I think they are hoist with their own petard.
Commie, you missed the memo. They pushed the start period from 1950 back to 1850 and pushed the 1930s-40s peak down ~1C. Before that, over 90% of the warming had occurred by the late 30s, and questions were raised about what caused that and why the temperature declined for 40 years after the peak. They “fixed” both!
Did you not catch that, in doing so, they had 1C of the 1.5C already in the bank? The big worry was over a 3C rise by 2100 measured from 1950. Now it’s 0.5 higher than today that’s going to kill us off! They realized, once observations caught up with their 1990 forecast, that the 300%-too-hot expectation meant we couldn’t attain even the 2C threshold by 2100 even if we pulled out all stops on burning of fossil fuels. 1.5C is double what happened in the past 100 years. They hedged their bets even on that, making 2C a complete disaster (even with 1C already banked).
To give full credit for giving themselves multiple ways out, they don’t say that anthropogenic CO2 caused no warming before 1950; just that it has caused most warming since 1950.
Want to know what’s wrong with an average?
20 F + 80 F = the crops in the field are dead, yet (20 + 80)/2 = 50 F (Sat/Sun, Oct 13 & 14, 2018)
40 F + 60 F = I can keep picking tomatoes, yet (40 + 60)/2 = 50 F (Mon/Tue, Oct 8 & 9, 2018)
Indeed.
Here in Toronto, we had a hot summer. “Average” temp was 22.5C. Last summer was 20.1, so yeah, panic time! Apocalypse!! Death is here on heated wings…
But…if you count extremely hot days (i.e., 7 degrees over the average max):
2018: 8
2017: 1
Ah ha! getting hotter, but:
2016: 8
2015: 5
2014: 1
2013: 3
2012: 8
2011: 3
2010: 5
2009: 0
etc. No pattern at all. (BTW, the “hottest summer in Toronto history”, i.e., since 1938, measured like this, was 1955, with 17 extremely hot days. Ah, the “good times”, right?)
I think if anything is significant with such small time frames, it’s nighttime temps, and we did set a “record” with 64 nights above average max.
Oh, and one more nerdy number: people will remember it as a dry summer, but we had 226.8 mm of rain. Average summer? 226.7.
I think in the general “Hype it UP” sense, that .1 mm means we were actually FLOODED!!!!
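For anyone wanting to redo that kind of count from raw daily data, a minimal sketch. The file name, CSV layout and the 26 degC long-term average maximum are my assumptions; the “7 degrees over the average max” rule is taken from the comment above.

```python
import csv
from collections import Counter

LONG_TERM_AVG_MAX = 26.0             # assumed long-term average daily max, degC
THRESHOLD = LONG_TERM_AVG_MAX + 7.0  # "extremely hot" = 7 degC over the average max

hot_days = Counter()
# Assumed CSV layout: date (YYYY-MM-DD), daily maximum temperature in degC.
with open("toronto_daily_tmax.csv") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    for date, tmax in reader:
        if date[5:7] in ("06", "07", "08") and float(tmax) > THRESHOLD:
            hot_days[date[:4]] += 1  # tally by year, summer months only

for year in sorted(hot_days):
    print(year, hot_days[year])
```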
“Jo makes a good point. Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?”
Possibly because skeptics are the only ones looking for errors? You know, that last step one should do before yelling “fire” in a crowded theater, just in case it’s not really on fire and just seems like it because there is a smoke machine on stage and an actor yelling “fire”.
Seriously, though, we know why: if it bleeds, it leads. Can’t get media attention and grants with “maybe”, or “might”, or “slight chance of”.
To paraphrase the line about legal proceedings: “if you have the measurements, pound the measurements. If you don’t have the measurements, pound the table”.
Basically, there would be more skepticism and less panic if more of the general public (let alone the media) had better math skills.
Hell, ANY math skills.
Double hell, any interest in math at all.
“Possibly because skeptics are the only ones looking for errors?”
That is correct. Many were seeing the “Earth going Venus” propaganda that was circulated on Facebook and blindly believing it, without checking.
It annoyed me enough to look over the sources (confirming to me that it’s pure idiocy, even if the code were correct, which it was not). In 5 minutes I found a fatal bug: https://github.com/ddbkoll/PyRADS/issues/2 It amazes me how people think that such garbage is the absolute truth and nothing but the absolute truth.
Too right, Adrian. To me the basic bug in the IPCC logic lies in the definition of Radiative Forcing (RF) found in the WG1 sections, which quite frankly fails to comply with thermodynamic law.
If I were to plug this purported Forcing energy flux (approx. 1.6 Watts/sq.m) into my kettle, then by definition it would never boil.
It is tantamount to a student howler.
Raising this issue during an (un-named) university climate course, I was politely informed that the experts knew better and that the matter was, for convenience, sorted out by simplifying the calculation. It was suggested that I go on another re-education course.
Unfortunately I remain with the belief that if the definition is wrong then the calculation is in error.
SUBJECT: THE FLAWED MODELS.
The models are all flawed because they do not incorporate solar/geomagnetic effects. The only reason why this is not more apparent yet is that thus far the solar/geomagnetic fields have yet to reach threshold values of weakness which would result in a major rather than a minor climatic impact.
So if you think the climate models are off now just wait a few more years. This is going to become more and more apparent as the climate models forecast a continued warming trend while reality is likely a continued cooling trend.
SDP, the models can never be right
..and the models are the best proof that their past adjustments are lies
Why is it that skeptics always seem to be the ones that find the errors in climate data….
They know it’s there… they can’t work with it, don’t know how to adjust it, or even how to use it…
…without knowing
These aren’t “errors” and don’t let them get away with that…..
Latitude – at 7:35 am
“…These aren’t “errors” and don’t let them get away with that…”
It was generous of Dr. Feynman to allude to the wrongness of the experts as merely an artifact of ignorance.
Just maybe there is still a place for solitary individuals making significant contributions to science and industry. (-:
The mantra of any large human enterprise:
You do what the boss wants done and say what the boss wants said – or you no longer work there.
“Why is it that skeptics always seem to be the ones that find the errors in climate data”
Well, is it? There are indeed errors in “climate data”. Many millions of numbers are collected from around the world, and mistakes are inevitable. A few years ago, I went on an investigation of similar errors in GHCN. Again there was the odd place and month (in Bolivia this time) where decimal points had slipped and numbers like 80 °C were appearing. But I didn’t just crow about finding an error. I looked into where they had come into the system (mostly on the Climat forms submitted by the local MO’s, but at least one was a GHCN error), and what GHCN did about them. And in each case, GHCN had flagged the errors, but left them in place. This is a proper attitude to raw data. You don’t change what was reported; you adapt what you are going to do about it.
But more importantly, there, I checked the consequences. I did the calculation with and without the suspect data. And it did make a difference, as I noted. This contrasts with the PhD thesis of John McLean, where he says, on p 4:
“This thesis makes little attempt to quantify the uncertainties exposed by this investigation, save for some brief mention of the impact certain issues might have on error margins, because numerous issues are discussed, and it would be an enormous task to quantify the uncertainties associated with the many instances of each. It has been left to others to quantify the impact of incomplete data, inconsistencies, questionable assumptions, very likely data errors and questionable adjustments of the recorded data.”
I wasn’t hoping for a PhD. I was just doing blog articles. But I did do the calculations to see how much it mattered. And I did look into what could be done about it. The obvious answer (which took me a while) was to make sensible use of the flags provided by GHCN. The problem went away.
That comes back to the issue “Why is it that skeptics always seem to be the ones…?”. Why is it that naysayers are always the ones complaining about how temperatures are calculated by scientists, but never doing a calculation themselves? It really isn’t hard. You don’t even need a PhD.
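For what it’s worth, the “sensible use of the flags” mentioned above can be as simple as masking any monthly value whose quality-control flag is set. A sketch, assuming the fixed-width GHCN-M v3 .dat layout (11-character station ID, year, element, then twelve value/DMFLAG/QCFLAG/DSFLAG groups); verify the column positions against the GHCN README before relying on them.

```python
def read_ghcn_line(line):
    """Parse one GHCN-M v3 .dat record, masking missing and QC-flagged values.

    Assumed layout: cols 1-11 station ID, 12-15 year, 16-19 element,
    then 12 groups of a 5-character value plus DMFLAG, QCFLAG, DSFLAG.
    """
    station, year = line[:11], int(line[11:15])
    values = []
    for m in range(12):
        base = 19 + 8 * m
        raw = int(line[base:base + 5])
        qcflag = line[base + 6]
        if raw == -9999 or qcflag != " ":
            values.append(None)         # missing, or failed a quality check
        else:
            values.append(raw / 100.0)  # TAVG is stored in hundredths of degC
    return station, year, values
```

Dropping the flagged values before averaging leaves the raw file untouched, which is the point made above: you don’t change what was reported, you adapt what you do with it.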
This from the guy who last week was claiming that all the errors had already been found and corrected.
You’re not reading properly. As I said here, the errors in GHCN were found and flagged previously. That is the right response (I believe GHCN tries to get the source to review, not always successfully). The errors had been corrected in the GHCN adjusted file, which is what GISS and NOAA use. You just have to be more careful with observing the flags with raw data, as I learned.
That’s not what the memo says.
Which memo? I’m talking about GHCN.
Awww, what a heartwarming and completely irrelevant story, Nick.
Is Excel your main tool Nick?
No, I program mainly in R.
I program mainly in a language that is part of a pirated IDE: Rrrrrrrrrrrr.
Temperatures are calculated? There’s the problem in climate seance.
“Why is it that skeptics always seem to be the ones that find the errors …”? Answer: Because one needs to be skeptical to find mistakes in work that confirms one’s preconceptions.
A group of skeptics (BEST) explored a new methodology (kriging) for constructing a global temperature record over land. To their surprise, they found that warming over land was more rapid than HadCRUT was reporting! That the diurnal temperature range wasn’t shrinking as expected! That an index constructed only from stations outside urban areas showed just as much warming as the index from all stations. Real skeptics – i.e. scientists – don’t always discover what they expect to find.
“A group of alarmists posing as skeptics (BEST)”
There, fixed it for ya.
Nick Stokes is correct to criticize the absence of a calculation showing the impact of these errors. However, he fails to note that HadCRU failed to do the same thing after correcting errors back in 2016 in response to McLean’s criticisms.
“However, he fails to note that HadCRU failed to do the same thing after correcting errors back in 2016 in response to McLean’s criticisms.”
How do you know they failed to do that?
In fact, HadCRUT do post the effect of every version change. Here is the page, dated 15 Sept 2016, quantifying the changes going from version 4.3 to v4.4. They are quite invisible on the graph, but a difference plot shows them generally less than 0.01°C.
Frank, the errors I reported in 2016 were
– hemispheric summary files (avg temp and coverage per month) had the correct average temperatures but the coverage was for the other hemisphere.
– the file of sea surface observation counts for HadSST3 ran from South to North but the main data file runs from North to South. (This gave problems like SST observations around Mount Everest.)
– the same SST observations file had instances where the counts exceeded 9999 and overflowed the space allocated. (It looked like the program that wrote that file was written in Fortran, because the overflowed fields were filled with ******.)
None of these problems directly impacted either HadSST3 or HadCRUT4 datasets.
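A side note on those asterisks: that is standard Fortran behaviour when a value needs more digits than its output field allows (a count over 9999 written with an I4 edit descriptor prints as ****), so the true count is unrecoverable from the file. A hedged sketch of the defensive parsing any reader of such files needs:

```python
def parse_count(field):
    """Return an integer observation count, or None when the fixed-width
    field is blank or overflowed (Fortran fills the field with '*' when
    a value needs more digits than its edit descriptor allows)."""
    field = field.strip()
    if not field or "*" in field:
        return None
    return int(field)

print(parse_count("  123"))  # -> 123
print(parse_count("*****"))  # -> None: the true count is unrecoverable
```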
“Why is it that skeptics always seem to be the ones that find the errors in climate data”
I was going to comment on that, too. What an odd statement! It’s especially weird that so many people here think whenever data are adjusted to fix errors/biases, it must be fudging the data.
Then the hockey stick issue was raised AGAIN, as if M&M really accomplished anything by their statistically flawed search for statistical flaws.
I imagine it’s because Anthony et al. mostly read press releases rather than the less-publicized literature that comes without them, so they aren’t aware that there is debate in science, and that there are corrigenda and errata that are due to discoveries by AGW-friendly scientists. Just a guess.
This isn’t the 1970s, and this is the kind of problem you really ought to have a better handle on before you branch into multi-trillion-dollar global policy consulting. Every step from raw data to published numbers, and the result of each step, should be posted for public scrutiny. The surface station issues should have been examined long before Watts. Etc.
It’s clear QC is not a priority among the billions spent, and what QC is done often makes the data worse, as with the “warm the past of moved stations” bug McLean detailed.
The shrugging off of data issues with “calculate it yourselves!” explains why half the country believes none of the claims of AGW advocates, even the ones that are reasonable. At this point most media claims made about AGW have roughly the scientific backing of phrenology.
Nick,
So you did the calculations with a few outliers removed. Big deal. There are far more problems with the data than just a few outliers; it’s just that they are very obvious errors that show quality control is poor.
Also, we wouldn’t be talking about outliers in the station data if the national meteorological services (NMSs) had paid proper attention to detail. Do you really think that we should believe the rest of the data from the same NMSs? How about the data from the NMS that says a site is in the northern hemisphere when it seems to be in the southern hemisphere?
Random question / idea. Has anyone ever proposed something along the lines of PE licensing for climate research? Like engineering disciplines, I propose we create something like PE licensing for climatologists, which imposes strict evidence-based standards. Moreover, that these people be held liable for damages they cause. Just like an engineer who designs an unsafe system that results in someone’s death, or which results in monetary damages.
If I design a control system that brings down someone’s plant, oops, my ass is in big trouble. Just like a doctor, I have to maintain liability insurance. If we spend however many X billions of dollars on “carbon credits” or whatever, because this person predicted that the earth’s temperature would rise by Y degrees, but then it doesn’t happen – who is held responsible?
People who make these predictions should be subject to the same penalties that a PE would be, which is CRIMINAL LIABILITY. I feel like the nature of this whole debate needs to change. These people are free to make whatever wild predictions they like, based on erratic datasets and weak assumptions, but bear no responsibility for the consequence of being wrong.
Any research or recommendations should be distinguished as “academic theory” or “professional.” Government policy actions should be strictly limited to only professional work product.
Nice idea in theory but so many of these predictions mature well after the predictor is no more. Of course, there is always the Old Testament remedy of laying responsibility on their children’s children.
I don’t think the analogy works well. Engineering is fundamentally different from climate science, as well as other fields one can think of that study things, but don’t make things. We could list a bunch of examples but that is too much trouble here. Here is the American Engineers’ Council for Professional Development definition of Engineering:
“The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property.”
The focus is clearly on making things and predicting behavior of things you make. All the PE tests given currently are smack in the middle of the “making things” world. An engineer, perhaps more than physicists, chemists, or biologists, has to deal inevitably with experimental confirmation – either the thing you make works or it doesn’t. Since somebody usually buys the thing you make, there is a need for liability management. It is unclear how this paradigm could fit climate science.
There are other kinds of licensure for things like toxicologists or forensic chemists and the like, but they tend to be driven by real world needs for their services.
You make it sound as though we don’t need climate scientists, therefore they don’t need to be licenced or held accountable for their predictions.
It is a sorry state of affairs when a broad-spectrum profession like “scientists” is not held accountable for their studies or the results of their papers. They just pump them out and get them published. End of story; they go home and feel proud of themselves that their publishing count is now +1.
And it’s a double insult when the cost of the research is in the millions, and they get to push their beliefs on the TV or convince a politician, which then costs billions. All this with zero accountability?
The problem with “holding them accountable for their predictions” is that their predictions can be, and often are, set for a point well beyond when they’d be around to be held accountable. In all the other professions you mention, the problems usually manifest within the practitioner’s lifetime. Not so much with predictions about what the climate will be like 50 years from now made by a “climate scientist” in their 50s.
Well then, why don’t we just get rid of scientists altogether? No more geologists, paleontologists, geneticists, food chemists, pharmacologists, atmospheric physicists, oceanographers… no more exploration if there is the possibility that errors might be made. That would save a lot of money. Or we could have a system in which, if a scientist made an error, they would have to pay back their grant. Sounds like a plan to me!
The rest of the world can do science, and we will just make money, because that’s the only thing that’s important.
Scrap the space exploration program, too.
Well then, why don’t we just get rid of scientists altogether?
No need to throw the baby out with the bath water there, Kristi. Getting rid of activist scientists, on the other hand, would be an excellent first step. Activism and science don’t mix. Activists are by nature biased; science, on the other hand, is supposed to be neutral/non-biased. You can be an activist or you can be a scientist; you can’t be both.
Because climatology is unlike engineering, climate forecasts should not be taken seriously. It is like employing statistics to predict the outcome of poker games: don’t bet your money unless you’re willing to lose. The IPCC should be regarded as an astrology club borrowing the language of astronomy and then engaging in fortune telling. Only weather forecasting is legit.
“Don’t bet your money unless you’re willing to lose.”
I.e., Taleb’s “Skin in the Game”: if it ain’t your money, bet the house…
…“Any actual errors identified will be dealt with in the next major update.”…
1 – Obvious errors like temperatures in excess of 100degC will be removed.
2 – HadCRUT will then claim that their data is the most accepted set for the planet, since it has been audited by both believers and skeptics alike….
The other (recent) end of climate data is in serious trouble, but this seldom gets a mention. For example, here is the entirety of the 2017 rainfall data for Australia in GHCNM version 2 (the version that now deals with rainfall data). Missing data is marked in red; besides that problem, the number of stations is woefully inadequate for analysis work on recent droughts:
Bit of a hiccup, but have the 2,100 with their computer-assisted manual checks and major updates worked out what the average temperature of the globe is supposed to be yet?
Having had some errors pointed out, they are only going to correct those errors. So now we have two people, Stokes and McLean, highlighting issues. To me this suggests that there are more as-yet-unidentified errors. With all their resources, perhaps the Met Office could spend some of their budget on doing a proper audit rather than on a couple of dozen of them jetting off to conferences in exotic locations.
Above, some folks were looking at Canadian extreme temperature trends. Ottawa has continuous records dating back to the 1880s.
Annual extreme temperature has been declining at 0.52C per century over the period. The number of days over 30C has declined by 5.6 days per century.
Try telling that to the CBC or TVO. The data is readily available on an ECCC website.
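The trend arithmetic itself is just an ordinary least-squares slope. Here is a minimal sketch with a synthetic stand-in series; the real numbers would come from the ECCC daily records mentioned above, which are not reproduced here.

```python
import random

def linear_trend(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic stand-in for an annual-extreme-maximum series (degC).
random.seed(0)
years = list(range(1880, 2018))
extremes = [34.0 - 0.005 * (y - 1880) + random.gauss(0, 1.5) for y in years]

slope = linear_trend(years, extremes)
print(f"trend: {slope * 100:+.2f} degC per century")
```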
I’ll be interested to see HOW the errors are “dealt with”. Erroneous data needs to be excluded but I suspect they’ll figure a way to “adjust” it and then announce “It’s worse than we thought”.
Determining if the data available is fit for purpose is something I have argued before. Each and every study that claims to be ‘climate science’ should have a section detailing how and why the data set being used is applicable. It should cover adjustments to the data and why those are applicable.
Any study that blames global climate change for creating the problem being studied while referencing ‘global temperature’ should be turned back. Whether it is polar bears, caterpillars in Honduras, or the Great Barrier Reef, the ‘global temperature’ tells you nothing about localized temperatures, evaporation, etc. If scientists continue this, then they open themselves to the argument that only one or two thermometers are needed to determine the ‘global temperature’.
This means they would need to develop temperature data sets from local recording stations rather than relying on one globalized, theoretical, ‘global temperature’ as a fallback to their findings.
People in scientific (and other) disciplines these days tend to avoid repetitive tasks requiring constant concentration.
I’m no hero; I’m as lazy as the next person. We have computers to do all the heavy lifting of calculations, but getting historical data into digital databases is still manual work. It’s boring and repetitive, and anyone who’s done it can surely attest to how easy it is to get distracted; you lose your train of thought, and that leads to errors.
Once the data is in a digital database, we can sit back and play with it all we want. Sometimes you can spot egregious errors in the data, but smaller errors usually don’t jump out at you. So who’s going to take the trouble to look for data-entry errors? Or worse, errors in writing down numbers in a century-old, hand-written record? Or confusion between hand-written 1’s and 7’s written by continental Europeans and read by anglophones (or vice versa)? Sceptics, of course, because they’re hoping to find errors.
It would probably cost the CRU a couple of days’ operating costs to hire a few AI people to come in and design auditing software. Train the machines on known errors and let them rip through the databases. They could then crow about how they’re improving the quality of the data. They do anyway, but this would give them some substance to crow about.
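Something along those lines is not hard to sketch. Here is a minimal illustration in Python; the checks and thresholds are invented for illustration and are not the CRU’s (or anyone’s) actual QC pipeline:

# Minimal sketch of rule-based QC screening for monthly station values;
# the checks and thresholds are illustrative assumptions, not the CRU's pipeline.
def qc_flags(monthly_temps_c):
    """Return (index, reason) flags for suspect monthly mean temperatures."""
    flags = []
    for i, t in enumerate(monthly_temps_c):
        if t is None:
            continue  # missing value: nothing to check here
        if t < -95.0 or t > 60.0:
            # Beyond any plausible monthly mean anywhere on Earth.
            flags.append((i, "outside physical range"))
        elif (i > 0 and monthly_temps_c[i - 1] is not None
              and abs(t - monthly_temps_c[i - 1]) > 30.0):
            # Month-to-month jump suggests a keying or decimal-point error.
            flags.append((i, "implausible jump from previous month"))
    return flags

# Example: 27.1 mistyped as 71.2 during manual entry; the spike and the
# jump back down are both flagged for human review.
print(qc_flags([26.8, 27.0, 71.2, 26.5]))

“Training the machines with known errors” would then amount to tuning those thresholds, and adding rules, against a hand-verified sample.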
So the Southern Hemisphere was: [coverage chart not reproduced]
I would say it would be more honest to say that there was no information for a global dataset in the 1850s and that remains the case until there are sufficient sensors.
Is that difficult?
Well yes it is.
How many sensors, and with what distribution, are needed to achieve an accuracy for global temperature of +/- 5 degC? Mathematically, you can probably create an average with a precision of several decimal places (which is in fact what is done), but the accuracy is still unlikely to be as good as +/- 5 degC at best, even assisted by some applied guesswork.
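The precision/accuracy distinction being made here is easy to demonstrate. A minimal simulation (all numbers invented for illustration): give every reading a shared systematic bias, and the network average comes out impressively precise while remaining exactly as inaccurate as the bias:

# Illustrative simulation: averaging improves precision, not accuracy.
import random

random.seed(1)
TRUE_VALUE = 14.0      # what we are trying to measure
SHARED_BIAS = 1.5      # systematic error common to the whole network
NOISE_SD = 0.5         # independent random error per reading
N = 5000

readings = [TRUE_VALUE + SHARED_BIAS + random.gauss(0, NOISE_SD) for _ in range(N)]
mean = sum(readings) / N
std_err = NOISE_SD / N ** 0.5   # standard error of the mean shrinks as 1/sqrt(N)

print(f"mean = {mean:.4f}, precision ~ +/-{std_err:.4f}")
print(f"actual error = {mean - TRUE_VALUE:+.4f}")   # stays near the 1.5 bias

More sensors shrink the random scatter, but no number of sensors removes an error they all share.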
Jo makes a good point. Why is it that skeptics always seem to be the ones that find the errors in climate data, hockey sticks, and other data machinations produced by the well-funded climate complex?
Perhaps it is because they simply don’t care, and curiosity takes a back seat to money.
Perhaps there is another reason too. I reckon mainstream climate science sees itself as the defender of ‘scientific orthodoxy’ against invading moronic masses, barbarians who want to desecrate the sacred ‘truth’ it firmly believes in. For them it is not a purely scientific dispute but a quasi-religious one. Turning a blind eye to some errors (if they favour warming trends) or accepting some questionable adjustments are acceptable practices if they serve this purpose. In this mentality, the end justifies the means.
Or perhaps one approach sees a paycheck and grants, and another does not?
The Met has done very well out of AGW without actually getting better at its day job of forecasting weather. They certainly have not seen the cutbacks others have.
Financial gratification is a big actor in the game, unfortunately. Still, financial greed as a common denominator may be too simplistic. Some of them really believe they are missionaries bringing scientific enlightenment to the moronic masses who suffer under religious superstition and are reluctant to accept the teachings of academia. The myth of Prometheus is still alive in many scientific circles.
I love finding really well-written articles and this is one of them. Thanks for sharing.
Post normal data inventions are particularly annoying!
Hadley Centre’s response to John McLean’s corrections previously included the following comment and caveat: [quote not reproduced]
Now the Hadley Centre responds with: [quote not reproduced]
One notes that there is no caveat regarding temperature anomalies, yet.
Not that there is any easy, accurate method to compare a dataset full of errors with a revised dataset with corrected errors; one gets suspicious that the anomalies will once again not change, even for datasets riddled with errors.
JoNova’s comments are on target!
Do the listed errors include calculating anomalies out to more decimal places than the original temperatures were measured?
I note in the comment thread that Randy Stubbings points out: [quote not reproduced]
I am reminded of a discussion on a train to Washington DC where I was talking to a lady employed by the Department of Energy.
Her job was tracking the Exxon payments for the Valdez spill: literally, small fractions of a cent per payment over many millions of transactions. Cumulatively, they were substantial.
False precision masquerading as accuracy is neither.
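On the decimal-places question raised just above, a small illustration (invented readings): instruments recorded to the nearest 0.5 degC still yield an anomaly printed to fifteen digits unless someone deliberately rounds back toward the measurement resolution:

# Illustrative: where the extra decimal places come from.
readings = [12.5, 13.0, 12.0, 13.5, 12.5, 13.0, 12.0]  # nearest 0.5 degC
normal = 12.8
anomaly = sum(readings) / len(readings) - normal
print(anomaly)            # -0.157142857... digits the thermometers never saw
print(round(anomaly, 1))  # -0.2, closer to what the data can support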
“Any actual errors identified will be dealt with in the next major update.’’
It’s gonna get HOTTER !!!
The question I would ask of the UK Met Office is this: having admitted that there are errors and that they will be corrected, are you then going to tell the IPCC that your calculations were wrong and that things are not as bad as you originally said?
MJE
Michael:
The question(s) I would ask you are:
Why should they need to do that when any errors would make no discernible (if that) difference to the trend?
How many temp data points do you think comprise the Hadcrut dataset?
How many millions?
How many did Mr Mclean find exactly as a percentage of the total?
Oh yeah, we’ve seen that: every error and every adjustment by the high priests will have no discernible effect. Yet they continue making errors and adjustments all the time, and those drive the trend only one way: “It’s worse than before.” Give us all a break from this crap, Banton. McLean worked on his own to show the kind of crap you and your lot produce, and you squander millions of public money producing it. In the private sector you’d have been shown the door on the spot, and probably indicted for false reporting if you’d done financial reporting this way. At the public-sector trough, you all dig your noses in with no fear of any such repercussions.
“…Why should they need to do that when any errors would make no discernible (if that) difference to the trend?…”
Priceless.
Priceless and shameless, isn’t it, Michael? Blokes like these are what populate the Met Office and call themselves “scientists”, a shame to genuine scientists. Errors like this, if found in clinical trial data or in financial data, would render the entire study and report invalid and suspect.
But these high priests of warming can do any kind of careless and shoddy work without any fear of repercussions, and have the gall to say “Move on, nothing to see here.” And they’ll have their water-carrying apologists like Mosher, Stokes and barry come along to defend any crappy work they do.
You beat me to it Michael.
“Why should we worry that all our cars only had 3 wheels fitted when they came out of the factory last month? It’s not a problem, as they always had 4 fitted for the previous 100 years. It’s all about the trend, don’t you see? Anyway, we adjusted the data and found that from 1905 until 1920 the cars actually only had 2 wheels fitted, so the trend is increasing. Before you know it, we’ll have 5 wheels on each car!”
The data is junk and Stokes, Mosher, et al know it.
==> Anthony Banton
Jeez you’re obtuse!
There were 75 findings, including systematic errors larger than the total trend! You’re like a dog with a bone; you can’t let go of that one issue (station file errors). Read the paper and you will see that you have no idea what you are talking about.
There were 25 major findings, none of them rely on this straw man you keep burning.
The findings of the report had major implications.
Scott Bennet:
What is obtuse, and who are “like a dog with a bone”, are the hounds on here answering faithfully to the dog-whistle.
Such that Middleton has already started a new article with the “now discredited Hadcrut” meme.
It’s entered Naysayer mythology.
No, it is not discredited just because rabid confirmation bias makes it so in the minds of the denizens here.
Nick Stokes knows more about global temperature data and its usage than anyone here, and I defer to his comments on the two threads regarding the “Bombshell” that this is not: a few errors (75? heck) found in original national met files containing millions of data points, errors that have NOT been shown to have been input into HadCRUT, and even if they were, the effect would be negligible.
“OK. This is no BOMBSHELL. These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them. You can find such a file of data as used here. I can’t find a more recent one, but this will do. It shows, for example
1. Data from Apto Uto was not used after 1970. So the 1978 errors don’t appear.
2. Paltinis, Romania, isn’t on that list, but seems to have been a more recently added station.
3. I can’t find Golden Rock, either in older or current station listings.”
“Also, “we do QA later” doesn’t explain why obvious errors are still in the source data.”
“Because it is source data. People here would be yelling at them if they changed it before posting. You take the data as found, and then figure out what it means.”
“WTF? “left to others?”. How can you get a PhD saying that I did the proofreading, but calculations were too hard. And if a PhD project can’t do it, who are those others?”
“It isn’t an enormous task at all. HADCRUT isn’t rocket science. You just write a program that emulates it, and then see what happens when you correct what you think is wrong with the data. I wrote a program years ago which I have run every month, before the major results come in (details and code here). I have done that for seven years. They are in good agreement. In particular the simplest of my methods, TempLS grid, gives results which are very close to HADCRUT. If I used 5° cells and hadsst3 instead of ERSST, I could make the agreement exact. I wouldn’t expect to get a PhD from doing that, let alone saying it was too hard.”
“In fact, HADCRUT do post the effect of every version change. Here is the page, dated 15 Sept 2016, quantifying the changes going from version 4.3 to v4.4. They are quite invisible on the graph, but a difference plot shows them generally less than 0.01°C.”
“That comes back to the issue “Why is it that skeptics always seem to be the ones…?”. Why is it that naysayers are always the ones complaining about how temperatures are calculated by scientists, but never doing a calculation themselves? It really isn’t hard. You don’t even need a PhD.”
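For anyone wondering what “a program that emulates it” involves, the core of a gridded average is genuinely short. A minimal sketch follows; it is illustrative only, not TempLS and not the Met Office’s code. Bin station anomalies into 5° cells, average within each cell, then weight the cells by the cosine of latitude so equal areas count equally:

# Minimal sketch of a gridded global-mean anomaly; illustrative only,
# not TempLS or the HadCRUT production code.
import math
from collections import defaultdict

def global_anomaly(stations):
    """stations: iterable of (lat, lon, anomaly) tuples for one month."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        cells[(int(lat // 5), int(lon // 5))].append(anom)  # 5-degree cells
    num = den = 0.0
    for (ilat, _ilon), anoms in cells.items():
        cell_mean = sum(anoms) / len(anoms)
        weight = math.cos(math.radians(ilat * 5 + 2.5))  # cell area ~ cos(latitude)
        num += weight * cell_mean
        den += weight
    return num / den

# Toy month: two stations share a mid-latitude cell, one sits far north.
print(global_anomaly([(51.2, 2.1, 0.4), (52.9, 1.3, 0.6), (82.0, 10.0, 1.5)]))

The cos(latitude) weighting is what keeps a dense cluster of, say, European stations from dominating the global figure.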
Not so much 75 individual errors within the data itself, but rather (up to) 75 different kinds of error, as outlined in “Appendix 6 – Consolidated Findings” of his report. This is up from (up to) 26 different kinds of error outlined in his PhD thesis, in “Chapter 8: Summary of Part 1”. I say “(up to)” because I’m leaving room open for the possibility that some of his claims may be incorrect or at least inconsequential, and that some of them may in fact refer directly to individual errors within the data.
As to “who are those others”, given that he came up with so many more findings after publishing his thesis, I’d say it looks like he decided to take on that burden all by himself.
==>Anthony Banton
Again, a lot of words to say precisely nothing! A tale told by an NPC bot, full of sound and fury, signifying nothing! You can’t read, or perhaps comprehension is your problem. You never address anything I’ve written; you just go straight into your program loop.
The dataset is good enough for the IPCC but it’s not good enough for your mate Nick! Apparently you didn’t get his memo.
Anyway, good luck with that puppet gig you’ve got going!
Anthony Banton,
I’ll give the Devil his due and acknowledge that Stokes is quite familiar with global temperature records. However, your statement that “Nick Stokes knows more about global temperature data and its usage than anyone here” is not supportable. With regulars like Roy Spencer, and today John McLean himself, and even Mosher and Anthony Watts, you are engaging in hyperbole.
“We previously acknowledged receipt of Dr John McLean’s 2016 report to us which dealt with the format of some ocean data files.
“We corrected the errors he then identified to us,” the Met Office spokesman said.
Read carefully. OCEAN FORMAT
The data errors, as I explained, are NOT in the final product.
The site he quoted doesn’t have enough data, and it has never been in any of the copies I have
EVER
Your typing in CAPITALS.
I think Dr McLean’s put the wind up Mosher. Excellent!
*you’re
I’ve replied to this above where I describe the errors that I reported to the Hadley Centre and CRU in 2016.
I forgot to mention that a Bishop Hill blog posting in March 2016 has the details of the problems.
Did you remember that they apply a filter screening outliers beyond 5 sigma?
“Each grid-box value is the mean of all available station anomaly values, except that station outliers in excess of five standard deviations are omitted.”
Ron Broberg and I covered this back in 2010.
As Ron wrote:
“Out of that set, there are 425 records that are more than 5-SD off of the norm on the high side and 346 records that are more than 5-SD off of the norm on the low side. Only about 0.02% of the records.”
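The screen described there is simple to state in code. A minimal sketch, for concreteness (the production CRUTEM/HadCRUT gridding code surely differs in detail):

# Minimal sketch of the five-standard-deviation screen quoted above;
# illustrative, not the actual CRUTEM/HadCRUT implementation.
def gridbox_mean(anomalies, station_sds):
    """Average station anomalies, omitting any beyond 5 SDs."""
    kept = [a for a, sd in zip(anomalies, station_sds) if abs(a) <= 5.0 * sd]
    return sum(kept) / len(kept) if kept else None

# With a typical station SD of ~0.6 degC, an 8.2 degC anomaly is rejected.
print(gridbox_mean([0.3, -0.1, 8.2], [0.6, 0.5, 0.6]))   # 0.1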
==>Steven Mosher
Dr McLean has addressed this several times:
Here is the header from the 2018 station data file that I downloaded this month:
“Number= 800890
Name= APTO_OTU
Country= COLOMBIA
Lat= 7.0
Long= 74.7
Height= 630
Start year= 1947
End year= 1988
First Good year= 1947
Source ID= 79
Source file= Jones
Jones data to= 1988
Normals source= Data
Normals source start year= 1961
Normals source end year= 1988
Normals= 24.1 24.4 24.6 27.8 24.6 27.9 28.0 24.6 24.4 24.1 24.1 24.0
Standard deviations source= Data
Standard deviations source start year= 1947
Standard deviations source end year= 1988
Standard deviations= 0.6 0.6 0.5 11.9 0.5 11.8 12.0 0.6 0.5 0.6 0.6 0.7
Obs:…”
*http://joannenova.com.au/2018/10/hadley-excuse-implies-their-quality-control-might-filter-out-the-freak-outliers-not-so/#comment-2060139
Mosher has strange faith that the Hadley Centre and CRU documentation he refers to is correct.
He needs to catch up with the 2012 documentation for both CRUTEM4 and HadCRUT4, and even then he needs to examine the data rather than just believe the documentation.
From that link…..
“I asked John to expand on what Hadley means. He replies that the quality control they do is very minimal, obviously inadequate, and these errors definitely survive the process and get into the HadCRUT4 dataset”
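The Apto Uto header quoted above shows why such a screen can fail: the filter works in units of each station’s own standard deviation, and where the bad values themselves have inflated that standard deviation to roughly 12 degC, a five-SD cut passes almost anything. A worked example using the quoted April figures (an illustration, not a claim about the production code):

# Worked example with the April values from the quoted station header:
# normal = 27.8 degC (itself suspect), standard deviation = 11.9 degC.
sd_april = 11.9
cutoff = 5.0 * sd_april
print(cutoff)   # 59.5 degC: anomalies up to ~60 degC survive the screen

# A monthly mean of 80 degC, the kind of value flagged in the audit for
# this station's 1978 records, yields an anomaly well inside the cut.
anomaly = 80.0 - 27.8
print(anomaly, anomaly <= cutoff)   # 52.2 True

In other words, the very errors the screen is meant to catch can widen the net until they slip through.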
Handwaving denigration.
Let’s have proof of this “obviously inadequate” QC please.
Which isn’t what has been produced so far.
Do what McLean should have done in order to gain a PhD worth its name… and crunch the numbers.
John Endicott October 16, 2018 at 6:30 am
“The question of the moment is what will happen if we burn a whole lot of carbon”
“and the answer is pretty much the same as any other time there was a whole lot of carbon in the atmosphere. It’s not that difficult to understand that there is nothing unprecedented about our current PPM of CO2 in the atmosphere,”
True…but….
My problem when I point this out to alarmists is that they DO have it right about humans causing issues with nature, but it’s not necessarily about climate change.
It’s about land use.
If species A “remembers” (somehow) that its supposed to head north when it gets hot, and there are now highways and quite a few large honking cities in the way…might be an issue.
If when they get there they try to eat something that was wiped out 10,000 years ago…might be an issue.
Or not, depending on whether you get the sniffles over evolution.
I agree that land use is a much bigger environmental issue than our small contribution to a trace gas in the atmosphere (that, frankly, is a net benefit to life on Earth).
The easiest person to fool is yourself.
https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html
Why is the Sept 2018 HADCRUT not yet released?!
I sent an email to them to ask:
https://www.metoffice.gov.uk/about-us/contact