John McLean writes relative to our previous story on this issue: Friday Funny: more upside down data
A few hours ago I received an email from John Kennedy at the Hadley Centre to say that HadSST3 data had been corrected.
A message now appears at the top of http://www.metoffice.gov.uk/hadobs/hadsst3/ , saying:
“08/04/2016: An error in the format of some of the ascii files was brought to our attention by John McLean. Maps of numbers of observations and measurement and sampling uncertainties provided in ascii format ran from south to north rather than north to south as described in the data format. This has now been fixed. In some cases, the number of observations in a grid cell exceeded 9999 and were replaced by a series of asterisks in the ascii files. This too has been fixed with numbers of observations now represented as integers between 0, indicating no data, and 9,999,999, indicating lots of data.”
“9,999,999 indicating lots of data”? Why not just say an 8-digit integer field? On the bright side, the data is now in integer form and the overflows have been corrected.
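The asterisks described above are the classic symptom of Fortran-style fixed-width formatting: when an integer will not fit in its field, the whole field is filled with `*`. A minimal sketch of that behaviour, and of how widening the field fixes it (the function name and values here are illustrative, not the Met Office's actual code):

```python
# Sketch of a Fortran-style fixed-width integer field (an "Iw" edit
# descriptor): the value is right-justified in `width` characters, and
# on overflow the field fills with asterisks instead of the number.

def fortran_int_field(value, width):
    """Format an integer right-justified in `width` chars, or all '*' on overflow."""
    s = str(value)
    if len(s) > width:
        return "*" * width      # value does not fit: field becomes asterisks
    return s.rjust(width)

# A 4-character field overflows once a grid cell's count exceeds 9999 ...
print(fortran_int_field(10234, 4))   # → '****'
# ... while an 8-character field holds counts up to 9,999,999 with room to spare.
print(fortran_int_field(10234, 8))   # → '   10234'
```

This is why an observation count over 9999 turned into `****` in the original files, and why switching to a wider integer field resolves it.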
This correction joins the early corrections of HadSST3-nh.dat and HadSST3-sh.dat made by the CRU, with the note on https://crudata.uea.ac.uk/cru/data/temperature/ saying
“Correction issued 30 March 2016. The HadSST3 NH and SH files have been replaced. The temperature anomalies were correct but the values for the percent coverage of the hemispheres were previously incorrect. The global-mean file was correct, as were all the HadCRUT4 and CRUTEM4 files. If you downloaded the HadSST3 NH or SH files before 30 March 2016, please download them again.”
Note that neither makes any reference to how long the problems have been there!
I wish everything would be plotted with 95% confidence intervals. Then I can see how often those error bars are exceeded, which would remind us that this stuff doesn’t follow “normal” statistics.
The bottom line is that John noticed the issue and never claimed it had any significance for scientific findings, etc. He openly said he thought he had seen a problem, and asked others to look.
This was entirely normal in the world of science, yet all of a sudden Nick and others became skeptics; if only they would don that cloak 24×7 😀
Instead, Nick fueled the delusions of the “denier” screamers, as did others like ATTP, whom I can't take seriously because he uses the word “denier”. I can't take anyone seriously when they utter that term; it's 2015 for god's sake.
Nice to see Nick Stokes on form https://wattsupwiththat.com/2014/03/03/monday-mirthiness-the-stokes-defense/
I love it, Josh. I must admit that I wasn’t familiar with Stokes until he stepped into this issue about flaws in the data.
“inconsistent latitude order convention”
How are the adjustments made to the data affected by this?
No adjustments were required to observation counts, which was the file with two problems. No adjustment was required to the coverage data in HadSST3-nh.dat and .sh.dat because coverage is determined according to whether the temperature anomaly for that grid cell is present (i.e. not ‘missing data’). It’s only temperature recordings that are adjusted.
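The coverage rule described above is simple to state: a hemisphere's percent coverage counts grid cells whose temperature anomaly is present rather than flagged as missing. A minimal sketch of that computation (the sentinel value and the tiny grid are illustrative, not HadSST3's actual format):

```python
# Sketch of percent coverage as described above: count the fraction of
# grid cells holding a real anomaly rather than the missing-data sentinel.

MISSING = -99.99  # illustrative missing-data flag, not the real file's value

def percent_coverage(grid):
    """Percent of grid cells containing a real anomaly, not the sentinel."""
    cells = [v for row in grid for v in row]
    present = sum(1 for v in cells if v != MISSING)
    return 100.0 * present / len(cells)

grid = [
    [0.12, MISSING, -0.05],
    [MISSING, 0.30, 0.08],
]
print(percent_coverage(grid))  # 4 of 6 cells present, so roughly 66.7
```

Since coverage is derived purely from which anomalies are present, it needs no adjustment of its own; it was the file formatting, not the underlying numbers, that was wrong.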
I’m continually amazed at the number of reports/studies/etc that have been shown to use the initial data incorrectly, yet the result isn’t changed when the data is used correctly.
How is that possible?
Just see what happens when they make the data available? Recall: Why should I make the data available to you, when your aim is to try and find something wrong with it.
lol, anomaly difference (°C) from 1961–90. So the chart's average is based on a 30-year baseline cherry-pick? That is in reality too short a period to decide what the average means, let alone to make policy from.
DWR54@April 12, 2016 at 9:11 am
“Mark,
Where a known bias, whether warming or cooling, in a temperature data series is identified, how would you suggest it be dealt with?”
How is the bias known?
When did it begin?
Was it a step change or progressive?
What is the nature of the bias: instrument choice; instrument calibration; enclosure; siting, etc?
Simple answer: eliminate the bias going forward. Remove the biased data from the analysis over the period of the bias, if that period can be determined. Otherwise, eliminate that data from the analysis completely.
“How is the bias known?
When did it begin?”
A classic case is the ship-buoy bias, which has got Lamar Smith so worked up. Surface measurements of SST have long come from ships. But over the last thirty years, drifting buoys have been used to an increasing extent, with gains in coverage and consistency of instrumentation. But do they give the same results? You can tell when a ship and a buoy take readings at nearly the same place and time. With the first few years' data, there was no statistically significant difference; there were not enough matches. But by the time of Kennedy's paper in 2011, there were many millions of data points, and a small but undeniable discrepancy of about 0.1°C (with some spatial variation) between ship and buoy. By the time of Karl 2015, there was even more data. So there is a mixture of ship and buoy data, with this slight difference. What to do? You have to adjust. Otherwise the estimate of average temperature varies with the percentage of buoys in the mix.
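The mixing problem in that last sentence can be made concrete with a toy calculation. Assume a constant true SST and a fixed ship-buoy offset of roughly 0.1 °C (the figure mentioned above); the raw blended average then drifts purely because the buoy share grows, while applying the offset to one platform removes the drift. All numbers here are made up for illustration:

```python
# Toy illustration of why a ship-buoy offset forces an adjustment:
# with a constant true SST, the raw blend of ship and buoy readings
# changes as the buoy fraction changes, even though nothing physical did.

TRUE_SST = 15.0    # illustrative "true" sea surface temperature, °C
SHIP_BIAS = 0.1    # ships read ~0.1 °C warmer than buoys (per the text)

def blended_mean(buoy_fraction, adjust=False):
    """Average of a ship/buoy mix; optionally align ships to the buoys."""
    ship = TRUE_SST + SHIP_BIAS
    buoy = TRUE_SST
    if adjust:
        ship -= SHIP_BIAS   # remove the known offset before blending
    return buoy_fraction * buoy + (1 - buoy_fraction) * ship

for frac in (0.0, 0.5, 0.9):
    print(f"buoys {frac:.0%}: raw {blended_mean(frac):.3f}, "
          f"adjusted {blended_mean(frac, adjust=True):.3f}")
# Raw mean falls from 15.100 toward 15.000 as buoys take over;
# the adjusted mean stays at 15.000 throughout.
```

The same arithmetic works whichever platform you choose as the reference; the point is only that an unadjusted blend confounds the changing instrument mix with a real temperature change.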
Like the adjustments to the Argo data?
SST data has been collected via water sampling in wooden, canvas and rubber buckets, hull-mounted sensors, thermometers on engine water intakes, and via drifting buoys, fixed buoys and, recently, Argo buoys. For the first five methods no record was kept of which method was used, so a bunch of assumptions and estimates get applied. Further, there are adjustments for the height of the ship's side to be taken into account (and cooling is usually assumed, despite the possibility that the air might be warmer than the ocean). Then there's the unanswerable question of whether the bucket was placed in the shade on deck or in the sun. There's even the question of whether the water was sampled at night, and whether the light or candle used to read the thermometer caused warming directly, or whether the delay in getting to the light altered the temperature indicated on the thermometer.
Adjusting for all this is a black art and I doubt very much that it’s all accurate.
A significance of 0.1. 😉
Perhaps, but since the buoys are purpose-built and, presumably, calibrated before deployment, one would think that the adjustment should have been toward the buoy data.
Can't wait to see NOAA's fudge mix, or is it too sensitive to provide to ELECTED officials?
I can't see why this is such a big deal. This is science in action.
Issue identified, issue resolved, all in the public domain with reasons, etc. To me, as an engineer, this is just good science/QA in action.
Arguably, “QA in action” would have caught the error before publication. Relying on an independent third party to catch the error and bring it to your attention is hardly QA on your part.