An oddity in the Karl et al. 2015 buoy data adjustments

Frank Lansner writes via email:

I just want to make sure that people are aware of this little possible oddity in the argumentation for the ERSSTv4 data changes attributed to the shift from ship measurements to buoys.

We are all focusing on the data changes from ERSSTv3b to ERSSTv4 after 2003, as these affect the pause.

And so the argument goes that it is the shift to buoys that makes warm adjustments necessary in ERSSTv4.

But if so, what about the period 1986 to 2003? In this period we have an increase from around 8% to 50% use of buoys.

So why is it not necessary to warm-adjust 1986-2003, just like 2003-2016?

I’m aware that this may have been addressed many times, but I just have not noticed.

The buoy fractions (in %) are estimates from this article:

https://judithcurry.com/2015/11/22/a-buoy-only-sea-surface-temperature-record/

Until 2006, percentage values are read from the graphic in the article. In addition, the article says that the fraction of buoys after 2006 is above 70%. Thus the 2007-2016 numbers simply reflect that we are over 70% through 2015.

[Figure: lansner-buoys-ersstv3b]

But the point remains: the buoy fraction went up to 50% in 2003 but was accompanied only by still colder temperature adjustments.

So there "should" be a very important reason to cold-adjust around 1986-2000, one that dominates over the "necessary" warm adjustment due to buoys.

Jared
February 14, 2017 10:05 am

“So, there “should” be a very important reason to cold adjust around 1986-2000 – that dominates over the “necessary” warm adjustment due to buoys.”
There is a good reason. There was an El Nino in 1997/1998, and its height on the graph needed to be lowered. In another 20 years, when another El Nino hits, the 2016 El Nino will need to be lowered to make 2036 look warmer than 2016.

indefatigablefrog
Reply to  Jared
February 14, 2017 10:53 am

Yes, this is a perfectly standard practice in climate science, called “back-borrowing”.
The general public can be induced to panic, only if you show them a graph which terminates in a catastrophic surge in temperatures.
It is often necessary to go back and borrow some of the warming from early times so that the current non-event looks surprising by comparison.
Of course, there is a minor warming trend, as was observed way back in the 1950s, long before anyone had conceived of using that trend as a means by which to demolish the economies of the world.
So, all that we have to do, is continually depress the past slightly and thereby make it look like the temperature series is just on the verge of raging upwards.
It’s shockingly easy to pull the wool over everyone’s eyes.
At least, it seems to have worked so far.

Paul
Reply to  indefatigablefrog
February 14, 2017 1:21 pm

Like in, the Future Value of Money Warming?

TA
Reply to  indefatigablefrog
February 15, 2017 5:45 am

Good post, indefatigablefrog. I laughed out loud at that first line.

Tenn
Reply to  Jared
February 14, 2017 1:19 pm

Can I ask a dumb question – why are they combining data from different sources at all?
Satellites, buoys, and ship measurements all measure the temperature differently. While some may be over or under representing temperature, theoretically they are all doing it to a consistent degree. So…just graph the data separately! I can handle a graph that has three lines on it rather than one. Why combine the data at all? Why not just graph the raw data and be done with it?
The only reason to combine the different data streams is that it provides ample opportunity to "adjust" data.

Latitude
Reply to  Tenn
February 14, 2017 1:53 pm

exactly….they tuned the buoy data to match the ship data
…that’s the way they got warming

David Long
Reply to  Tenn
February 14, 2017 2:11 pm

Not dumb at all! It’s become my pet peeve from years of reading climate science. In all my years in geology (now retired) we NEVER did this. For a simple reason: it is not scientifically valid. You can graph them side-by-side, you can do statistical analysis to compare the result, but you CANNOT combine them.
The only reasons I can think of for doing this: you don't know any better, you think your audience doesn't know any better, and/or you are trying to hide something.

Robinson
Reply to  Tenn
February 14, 2017 3:50 pm

An excellent question. It’s completely ridiculous but they do it anyway. It’s not science.

Thom
Reply to  Tenn
February 14, 2017 6:32 pm

Remember, when the tree rings don't work, you use the "little trick." No different here.

Steven Mosher
Reply to  Tenn
February 15, 2017 1:21 am

“Can I ask a dumb question – why are they combining data from different sources at all?”
Pretty simple. At the limit, every instrument is different from every other instrument; all sources are unique.
If you want a spatially and temporally complete record, then you combine as much information as you can from various sources, assigning uncertainties as you go and testing by holding out data.
That's just basic observational science.
Take UAH for example. In a 35-year period it has used over 10 different instruments and two fundamentally different types of sensors.
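For what it's worth, the generic recipe described here (weight each source by its uncertainty, then check the merge) can be sketched in a few lines. This is a toy illustration with invented noise levels, not NOAA's actual ERSST procedure; in a simulation we can score against the known truth, whereas a real pipeline has to score against held-out data:

```python
# Toy merge of two overlapping series by inverse-variance weighting.
# All numbers invented; not NOAA's actual ERSST procedure.
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 4 * np.pi, 200))   # the unknown "true" signal
ship = truth + rng.normal(0, 0.5, truth.size)    # noisy source, sigma = 0.5
buoy = truth + rng.normal(0, 0.1, truth.size)    # precise source, sigma = 0.1

w_ship, w_buoy = 1 / 0.5**2, 1 / 0.1**2          # weight = 1 / variance
combined = (w_ship * ship + w_buoy * buoy) / (w_ship + w_buoy)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print("ship alone:", rmse(ship, truth))          # ~0.5
print("buoy alone:", rmse(buoy, truth))          # ~0.1
print("combined  :", rmse(combined, truth))      # ~0.098, slightly beats buoys
```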

catweazle666
Reply to  Steven Mosher
February 15, 2017 2:55 pm

“That’s just basic observational science.”
Don't you mean just basic observational "CLIMATE" science, AKA "Mike's 'Nature' Trick", Mosher?

Alan McIntire
Reply to  Tenn
February 15, 2017 10:10 am

Averaging data from different sources is like describing the average Adult American as having one boob, one ball, and 2 1/2 children.

catweazle666
Reply to  Tenn
February 15, 2017 2:47 pm

The splicing of datasets from disparate sources (AKA “Mike’s ‘Nature’ Trick”) with widely variable levels of accuracy and precision is always a highly dubious practice, never more so than in the case of ships’ engine inlet temperature and buoy temperatures.
Ship’s engine cooling water inlet temperature data is acquired from the engine room cooling inlet temperature gauges by the engineers at their convenience.
There is no standard for either the location of the inlets with regard especially to depth below the surface, the position in the pipework of the measuring instruments or the time of day the reading is taken.
The instruments themselves are of industrial quality; their limit of error per DIN EN 13190 is ±2 deg C, or sometimes even ±4 deg C for a class 2 instrument, as can be seen in the tables here: DS_IN0007_GB_1334.pdf. After installation it is exceptionally unlikely that they are ever checked for calibration.
It is not clear how such readings can be compared with readings from buoy instruments specified to a limit of error of tenths or even hundredths of a degree C, or why they are considered to have any value whatsoever for the purposes to which they are put, which is to produce historic trends apparently precise to 0.001 deg C upon which the spending of literally trillions of £/$/whatever is decided.
But hey, this is climate “science” we’re discussing so why would a little thing like that matter?

Louis
Reply to  Tenn
February 15, 2017 11:32 pm

Has anyone tuned (cooled) the ship data to match the buoy data? I wonder how much that would change the slope of the graph.

Tenn
Reply to  Tenn
February 21, 2017 10:18 am

As a scientist I have dealt with this issue of different data streams. And I must say, the way NOAA has handled things is the worst possible solution I can envision.
Put a big FAT error bar on the ship data, and smaller error bars on the buoy and satellite data. Graph them all. Simple.
The only reason to combine the data is to dumb the data down for decision makers. But given this is science, and actually pretty important, why can’t we try and smarten up the decision makers instead? Is it too much to ask, when making trillion $ decisions, that everyone try and educate themselves a tiny bit regarding data reliability?
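Tenn's alternative (separate curves, each with its own error bar) is equally easy to sketch. Everything below is invented, including the series and the uncertainty magnitudes; it only illustrates the presentation being asked for:

```python
# Sketch: plot each source separately with its own uncertainty,
# rather than merging them. Invented data and error bars.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1980, 2017)
trend = 0.01 * (years - 1980)                       # invented common trend
ship = trend + np.random.default_rng(1).normal(0, 0.10, years.size)
buoy = trend + np.random.default_rng(2).normal(0, 0.03, years.size)

plt.errorbar(years, ship, yerr=0.5, fmt="o", label="ships (large uncertainty)")
plt.errorbar(years, buoy, yerr=0.05, fmt="s", label="buoys (small uncertainty)")
plt.ylabel("SST anomaly (deg C)")
plt.legend()
plt.show()
```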

rishrac
February 14, 2017 10:17 am

I have no idea what they are trying to do with this data. From my understanding, the buoys sink down and take readings at different depths. Ships draw the water in at near surface level. Who knows the accuracy of the thermometers? I don't think there is a fair comparison of data. Nor has either method run long enough to establish any kind of reasonable estimate. Then there is the idea that perhaps ship readings are better: as the water is pulled in, there is turbulence which mixes the water. A buoy could be stationed in a place where there is a warm or cold spot.
This is a lot like painting stripes on a lion and calling it a tiger.

JCH
Reply to  rishrac
February 14, 2017 10:23 am

You’re describing ARGO floats. Buoy: http://www.ndbc.noaa.gov/images/stations/3m.jpg

Ron Clutz
Reply to  JCH
February 14, 2017 10:38 am

Thanks for clarifying. It seems those buoys have thermometers at a depth of 1 meter below the surface. Meanwhile the MODIS satellites are reading the top of the water in their reporting of sea surface temperatures.

Joel O'Bryan
Reply to  rishrac
February 14, 2017 10:33 am

Argo data is not used. The buoys under discussion are surface-moored buoys (anchored to the sea bed).
http://ndbc-load.nws.noaa.gov/images/buoys/10m.jpg

rishrac
Reply to  Joel O’Bryan
February 14, 2017 11:53 am

If the data is compatible, why adjust it?

Latitude
Reply to  Joel O’Bryan
February 14, 2017 1:55 pm

The ship data is no good…
…let’s use buoys
The buoys are showing cooling…
…tune them to the ship data

richard verney
Reply to  rishrac
February 14, 2017 2:06 pm

Ships measure sea water at depth and do not measure SST. The depth at which the sea water is measured varies according to the design and configuration of the ship, whether it is in ballast, part laden or fully laden, and how it is trimmed, etc. In addition it could be riding rough seas, which can again affect the depth at which water is being drawn as the vessel pitches/heaves and rolls/sways.
It can vary from 3 to more than 15 metres, with perhaps 4 to 8 being typical.
The problem is that as trade changes, the general design and configuration of the ocean-going fleet changes. Further, some voyages that were traditionally ballast voyages have become laden passages as trade patterns and the need for commodities/finished goods change over time. Thus the sampling does not remain constant over time.
More worrying is that ship data is very suspect (I have spent some 30 years examining ships' data). It is not uncommon to find the noon-day reports in the deck log, the engine log, the noon report to owners, the noon report to charterers and the noon report to weather-routing agencies (which track and guide the vessel) all containing different details for weather, sea state, currents, sea temperature, cargo heating, etc. If one gets to see the engineer's diary/scrap book, one will often see yet further different figures. Thus one might see six different figures for sea temperature, all supposedly intended to record the same factual data at the same point in time. Which, if any, is the correct figure, no one knows.
There are many commercial reasons why a vessel may not report or keep accurate data. Further, the equipment was never intended to give an accuracy measured in tenths of a degree, and an engine room can at times be a less than ideal working environment in which to make accurate observations or even to keep proper records. A lot of data may simply be filled in by guesswork based upon the experience of the crew.
No scientist would ever prefer ships' data over that of buoys.


rishrac
Reply to  richard verney
February 14, 2017 2:21 pm

I read your response. Thanks. I still think they are trying to homogenize different types of data into a final product that fits a particular agenda. And there doesn’t seem to be enough consistency in measurements to say anything definitive.

Reply to  richard verney
February 14, 2017 2:21 pm

Richard Verney: on ship data.
I have read only a couple of papers on experiments with ship data, and a factor that seems to have been disregarded is the effect of sea state (mixing of sea temperature layers, as well as pitch, roll and heave) and wind state (cooling rate of water in buckets). It also occurs to me that human error in reading is assumed to be random but could be biased by any difficulty in reading due to the design and installation of the thermometers, or by their state of maintenance. Then consider the chances of ever getting two readings in the same place in the same conditions!
I am an ancient mariner and I have also used ships' data for analytical work. I would not trust it at all!

M Simon
Reply to  richard verney
February 15, 2017 4:49 am

” or even to keep proper records”
There is even a name for it. Gun decking the logs.

DHR
Reply to  richard verney
February 15, 2017 6:10 am

Good points. Also, ships measure cooling water temperature to help run and monitor the engines. They use resistance, bimetallic or other common types of thermometer. The Navy specifies an accuracy of +/- 1% of full scale, which is about 1F for typical installations. This is good enough for ships. I expect the thermometers used in the yellow buoys are far more accurate.

Reply to  richard verney
February 15, 2017 5:46 pm

Thank you for that very interesting info, Richard.
M Simon, I wasn’t sure my guess was correct, so I looked up the term “gun decking.”
https://www.merriam-webster.com/dictionary/gundeck
(My guess was correct.)

Clyde Spencer
Reply to  richard verney
February 16, 2017 6:21 pm

richard v,
Ship data has some minimal value, but for all the reasons you give it should be assigned a very large margin of error. If it is combined with buoy data, then the composite data set essentially has the precision of the ship data, and it is pure fantasy to claim a precision of even 0.1 degrees!

Henning Nielsen
Reply to  rishrac
February 14, 2017 2:48 pm

Or painting a leopard yellow all over and calling it a lion.

Skippy
February 14, 2017 10:23 am

Does this mean we can stop funding all buoy research programs since they are useless?

Henning Nielsen
Reply to  Skippy
February 14, 2017 2:51 pm

Skippy: No, keep funding them, only make sure they show warming.

Duster
Reply to  Skippy
February 14, 2017 9:10 pm

No, just revert to buoy data and stop trying to adjust it with ship’s data. Apples and oranges.

Paul Penrose
Reply to  Skippy
February 15, 2017 6:51 am

Skippy,
Of course not, it is still useful. But understand the limitations of the data that is collected. Misuse and misrepresentation of data is still the biggest problem in science, especially climate science.

Oldseadog
February 14, 2017 10:29 am

During the 1960s, when I was serving on some Weather Reporting Merchant Vessels, the water temperature, taken 4 times a day at the engine room intake valve about 6″ inside the hull, was the temperature of the water anywhere from 15′ to 30′ below the sea surface, depending on whether the vessel was loaded or in ballast. (It was "accurate" to about one centigrade degree; certainly the recording book showed only whole degrees, and that is what the R/O sent off to Portishead.)
Not, in fact, the SST.
Did they take this into account?

MarkW
Reply to  Oldseadog
February 14, 2017 10:37 am

Since there was no way to determine the exact circumstances of each reading, they made no attempt to adjust for such variances.
Like always with the warmists, they assume that all such problems will average out if they just take enough readings.

Paul
Reply to  MarkW
February 14, 2017 1:27 pm

“they assume that all such problems will average out if they just take enough readings.”
We often had that discussion here at work about NOAA's +/- 0.1C error bars. Then I suggested a team exercise: report the thickness of a sheet of paper using a yardstick. Go ahead, average all you want.
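Paul's exercise is easy to simulate. Assuming a yardstick read to the nearest 1/16 inch (an assumed graduation) and a 0.004-inch sheet, every reading quantizes to zero, and a million averaged zeros is still zero; averaging only beats instrument resolution when random noise is large enough to dither readings across the graduations:

```python
# Paul's yardstick exercise as a simulation (assumed numbers).
import numpy as np

true_thickness = 0.004        # inches, one sheet of paper
resolution = 1 / 16           # assumed yardstick graduation, inches

# Every reading of the sheet rounds to the nearest graduation: zero.
readings = np.round(np.full(1_000_000, true_thickness) / resolution) * resolution
print(readings.mean())        # 0.0 -- averaging a million zeros is still zero

# Averaging helps only if random noise "dithers" readings across graduations.
noisy = true_thickness + np.random.default_rng(0).normal(0, resolution, 1_000_000)
dithered = np.round(noisy / resolution) * resolution
print(dithered.mean())        # now close to 0.004
```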

NW sage
Reply to  MarkW
February 14, 2017 5:53 pm

In view of the unknowables about the precision of the data the ONLY data treatment which makes sense is to assign a larger than ‘normal’ plus-minus uncertainty value to the data. That of course is usually counterproductive when they are trying to ‘prove’ that changes on the order of 0.1 deg C have occurred when the data accuracy is only known to plus-minus 4 deg C! Thus they try to ignore the issue.

jmac
Reply to  Oldseadog
February 14, 2017 12:15 pm

When I was a Radio Officer at sea, '72 to '92, the Observing Officer always used a bucket to take the surface sea water temperature.
The engine room intake was considered not to be an accurate source, and indeed its use was positively discouraged by the Met Office.
Regards, John.


M Seward
Reply to  jmac
February 14, 2017 12:59 pm

The shift to the engine cooling intakes fits with the trend to constant monitoring of intake temperatures by the engine control software and/or the engine manufacturers, on the basis of monitoring the duty load on the engines. Essentially this is about minimising warranty claims on engines that have simply been overworked (say, too much power extracted when the cooling water temperature is high, perhaps to make a port berth slot rather than sit in the roads after some heavy weather). Means, motive, opportunity. The old Met Office crew were awake, it seems.
The upshot of all this is that, for a set of reasons, the historical data was NEVER fit for the purpose of determining a global temperature value and contained various known but not accurately quantifiable biases. The great kiddy fiddle of this data is at the very centre of the CAGW scare scam.

richard verney
Reply to  Oldseadog
February 14, 2017 2:13 pm

I have made this point many times, and have posted a comment above, before seeing your comment.
The problem is that the fleet has changed over time, so too have the nature of some voyages.
The warmists like to adjust data for station moves or for TOB, but simply ignore any adjustment to account for the fact that ships are recording temperature at depth, not SST, and that the sampling has varied as the ocean fleet's characteristics have changed over time.

Auto
Reply to  Oldseadog
February 14, 2017 2:16 pm

An older seadog than I [and I have 45 years in shipping].
I always used a bucket, 1974 to 1987.
I appreciate that there are, and were, corrections that could be made.
At least we reported to 0.1 C, subject to the errors and corrections, and human error.
I continue to encourage my ships to become VOSs.
Auto

Reply to  Auto
February 15, 2017 5:02 am

They should be BGOHWs.

Dodgy Geezer
February 14, 2017 10:29 am

….. Ships draw the water in at near surface level.
Ships at the turn of the 1900s drew their water from buckets thrown overboard, which sampled the first few feet of the water column.
By midcentury, ships were typically drawing perhaps 30-40 ft, and sampled the water from inlet cooling pipes perhaps 10-15 feet below sea level.
By the 1980s vast ships drawing 100ft or more were in service, and they probably took in cooling water 40-50ft below the surface……
Has anyone corrected for this progressively deeper sampling of the water? And can I have a grant…?

rbabcock
Reply to  Dodgy Geezer
February 14, 2017 11:38 am

Large containership drafts are more in the 18m range, not 100′. Aircraft carriers are also about that. https://people.hofstra.edu/geotrans/eng/ch3en/conc3en/containership_draft_size.html
100′ wouldn’t get you into a lot of harbors.

Moa
Reply to  rbabcock
February 14, 2017 12:54 pm

Nimitz-class and Ford-class aircraft carriers have masses around 100 kilotons and draught of 37-41 feet (11.3 – 12.5 m).
https://en.wikipedia.org/wiki/Nimitz-class_aircraft_carrier
However, oil supertankers have mass up to 261 kilotons and drafts of 81 feet
http://maritime-connector.com/worlds-largest-ships/
Just the facts, ma’am.

Greg
Reply to  Dodgy Geezer
February 14, 2017 12:21 pm

The Met Office (Hadley) did, to their credit, take quite a detailed look at this sort of thing, and theirs is probably the most thorough work on the question. However, a lot of their 'corrections' are speculative at best. I also strongly object to their practice of contradicting the written record of whether readings were engine room or bucket whenever a region did not have the right 'statistical average' according to their back-of-envelope guesses about the sliding change from one to the other.
Most of the details are in the three papers Kennedy 2011 a, b, c, one of which is linked above.
For a fully referenced analysis see here:
https://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/

tty
Reply to  Greg
February 14, 2017 2:00 pm

Another interesting thing to consider is the big mystery of why SSTs from 1939-1945 run noticeably warmer than before or after.
Strangely enough nobody seems to have realized that during those years most ships operated in something called “convoys”, which normally consisted of columns of five ships. This means that the water, whether from a bucket or an engine intake, had on average been churned up by 2.5 other ships, and that part of it had passed, as cooling water, through, on average, 2.5 condensers.

Auto
Reply to  Greg
February 14, 2017 2:26 pm

Dodgy G
No grant.
Sorry /Sarc.!!
I would add that modern container ships (VLContCs) typically have a draft of 15-17 metres.
No oil tanker had a normal draft of 100 feet [say ~31 metres]. I do not know of any above about 28.5 metres.
In the 1970s the Shell ships Batillus and Bellamya, and two Total sisters, drew that sort of draft: about 93-94 feet, with laden mass approaching 600,000 tonnes.

Auto
Reply to  Greg
February 14, 2017 2:27 pm

tty

Sara Hall
Reply to  Dodgy Geezer
February 15, 2017 1:39 am

I was on one of the largest supertankers in the world at the time (the Esso Scotia in 1976) and the draft difference between laden and ballast was around 60′, if my memory serves me right.

JaneHM
February 14, 2017 10:35 am

Yes I wonder too if perhaps there was an El Nino in that window?

tty
Reply to  JaneHM
February 14, 2017 1:54 pm

There sure was. The biggest one in more than a century, exactly at the low point of the "adjustment".

M Courtney
February 14, 2017 10:41 am

Surely it’s clear by now that Karl et al were going for a poetic truth rather than any approximation to a literal reality.
Ships, buoys, engine inlets or buckets… they all exist in the Now.
But a Pause only exists in the Past. It takes time for something to not happen.
That was the beautiful depth of Karl et al. To realise that the scientific method can be superseded by the will of the true Scientist.
Just as the rules of grammar can be superseded by the will of the true Poet.

Reply to  M Courtney
February 14, 2017 12:02 pm

That's pretty good, Mr. Courtney, and it got the requisite grins from me.

JohnKnight
Reply to  M Courtney
February 14, 2017 12:25 pm

A pause by any other name … ; )

Another Doug
Reply to  JohnKnight
February 14, 2017 2:15 pm

I think that I shall never see
a trend as lovely as +2C

NW sage
Reply to  M Courtney
February 14, 2017 5:59 pm

I like that: Poetic Truth. Perhaps you should copyright it! It will become synonymous with Alternate Facts and Directed Information.

RWturner
February 14, 2017 10:41 am

You're confusing this paper with scientific research intended to actually make the temperature reconstruction more accurate; it's actually a political science paper simply intended to "bust" the pause.

Auto
Reply to  RWturner
February 14, 2017 2:28 pm

RWTurner
PolSci.
Thanks.
Agreed.
Noted.
Auto

Hats off...
February 14, 2017 10:55 am

As time progresses, the deliberate obfuscation behind this game of Follow the Pea/House of Cards/Emperor's New Clothes/Boy Who Cried Wolf becomes more and more apparent. Ten years ago, I would have given the leading "scientists" the benefit of the doubt that maybe they just hadn't considered all possible scenarios in a highly complicated atmosphere. Not any more! It appears to be nothing more than the equivalent of insider trading on the stock exchange and should be prosecuted as such.

Greg
Reply to  Hats off...
February 14, 2017 12:23 pm

More like Libor but good idea.

February 14, 2017 11:03 am

My info is several decades old, but some oceanography grad students and profs of my acquaintance were trying to take ocean- and off-shore spring-water samples using buckets at various depths in the Gulf of Mexico, with rare expeditions down toward Fiji and New Zealand. They used micro-trace radioactives profiles to match up the samples with probable land sources in some cases. But they did have concerns about their small research boats affecting the samples.

cirby
February 14, 2017 11:07 am

Just a couple of years ago, I was arguing with warmists because of the “necessary” downward adjustment to ship measurements in the late 40s through early 60s, because they were too warm (and made the past look cooler, which made the present seem warmer). We had to adjust the ship numbers downward because the buoy measurements were more accurate. Right?
So in the past, we had to adjust SSTs down because of ships vs buoys, and in the present we have to adjust them up because of ships vs buoys…

Greg
Reply to  cirby
February 14, 2017 12:25 pm

Right. As I point out in the HadSST article linked above, they removed the majority of the variability from the majority of the record: the early two-thirds, i.e. the late 19th and early 20th centuries.
The only period where change is acceptable is post-1960; the rest is apparently due to 'bias' in the data.

Phil.
Reply to  cirby
February 14, 2017 12:59 pm

cirby February 14, 2017 at 11:07 am
Just a couple of years ago, I was arguing with warmists because of the “necessary” downward adjustment to ship measurements in the late 40s through early 60s, because they were too warm (and made the past look cooler, which made the present seem warmer). We had to adjust the ship numbers downward because the buoy measurements were more accurate. Right?
So in the past, we had to adjust SSTs down because of ships vs buoys, and in the present we have to adjust them up because of ships vs buoys…

Really? Fig 2 from Karl et al shows the past (pre 1940) was adjusted to be warmer.
http://www.realclimate.org/images/noaa_update.jpg

LansnerFrank
Reply to  Phil.
February 14, 2017 1:42 pm

Are the figures you show not the underlying GHCN adjustments, which are actually very old?
Because on the final global temperature datasets like GISS there is never a new version that actually changes like the above in the real world. It seems some like to show internal GHCN versions to pretend the final global temperature trends are not adjusted as they are, always cooling the past.
It's like Putin: he wants to invade Ukraine, but pretends he never did. ("See GHCN…", as if GHCN were the final global temperatures.)


Ray in SC
Reply to  Phil.
February 15, 2017 4:13 pm

Phil,
This was done to erase the warming between 1910 and 1945.

paulclim
February 14, 2017 11:27 am

Look at Kennedy et al. 2011 for the fractions of buoy and ship data.
http://onlinelibrary.wiley.com/store/10.1029/2010JD015220/asset/jgrd16941.pdf?v=1&t=iz5w1kkr&s=d6645b022bf05630e509afd02883f98b733cfd71
It is quite a good description of what the UK Met Office is doing with their HadSST3 dataset. It is far more transparent and logical than the NOAA paper.
In terms of fractions, I guess there should be no big difference between HadSST and NOAA, since both use ICOADS data.
As you can see, there is only sparse usage of buoys in the eighties (below 10%, maybe 5% on average). It doesn't really make sense to adjust the 95% to the 5%.
The elephant in the room is the adjustment to NMAT, which cools the nineties, especially 1997/1998.

LansnerFrank
Reply to  paulclim
February 14, 2017 11:41 am

Hi paulclim, thank you. I'm having a little trouble with your link?
K.R. Frank

paulclim
Reply to  LansnerFrank
February 15, 2017 3:57 am

Frank,
that was the link to the PDF in the online library. I don’t know why it doesn’t work.
Please try this link to the Kennedy paper in the library.
http://onlinelibrary.wiley.com/doi/10.1029/2010JD015220/full

Greg
Reply to  paulclim
February 14, 2017 12:01 pm

The whole idea of using NMAT to adjust daytime SST is a fallacy, since it assumes that any changes, be they natural or AGW, have exactly equal effects on the cooler nocturnal temperatures as on the daytime maxima.
They should be studying these differences in order to understand how climate works, not "correcting" the maxima by pretending they will be affected in the same way as the minima.

Greg
Reply to  Greg
February 14, 2017 12:11 pm

There is a temperature inversion in the surface waters during the night in many regions, where convection brings up heat from deeper water. This does not happen in the daytime.
The assumption that both should show the same changes, and thus that one requires "bias correction", is totally without foundation.

Reply to  Greg
February 14, 2017 2:43 pm

I agree. The reality is worse than that. While I admit that NMAT partly serves as a reference because daytime ship data can be affected by heated ship decks, other effects like upwelling act in the opposite direction.
The real flaw is applying the NMAT correction just for a specific period and not for the whole series. After the general adjustment of the buoy data to bring it in line with ship data, either both have to be brought in line with NMAT or neither.
The effect is that as soon as ship data are increasingly replaced by buoy data, which are elevated but not NMAT-reduced, temperatures show a sharp rise.
In 2015 (before the El Nino spike) Karl wrote that the NMAT adjustment resulted in a 0.03 deg. trend increase. Now, after the undampened El Nino spike, I am sure it is much more than that.

ferd berple
February 14, 2017 11:41 am

So why is it not necessary to warm adjust 1986-2003, just like 2003 – 2016 ?
===================
Really, this is basic science. Why do the folks at WUWT have trouble with this? When data is "bad" it needs to be corrected. When the data is "good", no correction is required.
How can you tell whether temperature data is good or bad? This is simple. Follow the scientific method. We all know that temperatures are rising due to rising CO2, so data that shows rising temperatures must be "good" data. Data that doesn't show rising temperatures must be "bad" data. It really is quite simple.
Before 2003, temperatures were rising, so everyone could tell the data was good. After 2003 the data showed temperatures were not rising, so the data must be bad and must be adjusted.
This is a most basic scientific truth that government scientists have known for years. Karl et al obviously grasped the problem, recognized that the post-2003 data showed temperatures were not rising, so the post-2003 data must be bad, and they went out and corrected it.
And not only that, Karl et al took extraordinary steps to prove their results were correct. By not archiving their data and methods, it is impossible to show their results cannot be replicated. And this is fundamental science: if it is impossible to show that a result cannot be replicated, then the result must be correct.
And, just in time for the Paris Climate conference, so developed countries around the world could commit to borrowing trillions of dollars against our children's future. Selling the next generation into debt slavery to control the weather.
It certainly is a relief to see we are not like the primitive peoples that came before us, demanding human sacrifices to control the weather to guarantee next year's harvest. How silly those primitive people must have been.

Jared
Reply to  ferd berple
February 15, 2017 6:49 am

So true. The other 'genius' move by the team is when they smooth the past with 200-year averages to get rid of the wiggles, then use 5-year running averages post-1900 to keep the wiggles and make the natural warming from the Little Ice Age look like it's not natural. Try this with Ohio State's yearly football wins: use a 20-year smoothing average from 1890-2000, then a 3-year smoothing average from 2002-2016. It makes a great hockey stick. Always smooth the past over long periods to get rid of any hint that past temps were actually higher, then don't smooth the current wiggle.
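Jared's complaint is demonstrable on synthetic data: run a very long smoothing window over the early part of a series and a short one over the recent part, and the recent segment retains far more wiggle even when the underlying process never changes. A toy sketch with invented white noise, not any real temperature (or football) series:

```python
# Toy demo: mixing smoothing windows erases old wiggles but keeps new ones,
# so the recent segment looks anomalous even though the process never changed.
# Invented white-noise data.
import numpy as np

rng = np.random.default_rng(3)
series = rng.normal(0.0, 1.0, 300)       # identical noise from start to finish

def running_mean(x, w):
    return np.convolve(x, np.ones(w) / w, mode="valid")

past = running_mean(series[:270], 100)   # heavily smoothed "past"
recent = running_mean(series[270:], 3)   # lightly smoothed "present"

print("wiggle amplitude kept in the past   :", past.std())    # ~0.1
print("wiggle amplitude kept in the present:", recent.std())  # ~0.6
```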

Chris Hanley.
February 14, 2017 12:21 pm

70% of the planet surface is ocean.
Trying to determine the surface temperature anomaly to a fraction of one degree C from a combination of measurements at mostly urban locations which have changed over time, from ship bucket measurements (which are mostly confined to NH commercial shipping lanes anyway), and then from a scattered array of buoys, is ludicrous.
Normally, when a better means of measuring something is found, in this case the GAT anomaly, it supersedes the old.

Scott
February 14, 2017 12:51 pm

Here on the Great Lakes we always lower an electronic temperature probe down on a wire to find the thermocline, usually around 50 feet, because fish like to hang out at the thermocline (temperature quickly drops into the 40s at the thermocline). Sometimes there is no thermocline (springtime typically) and the surface is way warmer than deeper; sometimes the temperature drops 5F from surface to thermocline; sometimes it drops only 1 or 2F. If it were always the same, I wouldn't have to buy an expensive probe; I would just refer to some model. The problem with taking surface bucket temperatures vs. engine intake temperatures, as I see it, is that sometimes they are more or less the same and sometimes they are quite different. Also, I would imagine that if a boat drafted deep enough it would drag some of that colder thermocline water up into the engine intakes.

February 14, 2017 1:36 pm

I hate to be a spoil sport, but the whole buoy-measuring venture seems farcical to me, since I am not grasping exactly what an ocean buoy is supposed to be measuring.
I would think that, even in a relatively small volume of ocean water, a person would need dozens of measurements to establish some sort of temperature profile for the given volume of water. Even with the large number of buoys in existence, Earth’s ocean volume is still HUGE by comparison, and I just cannot see those relatively few buoys measuring anything consistently real, especially with the accuracy I see claimed.
So, I guess that not only do I question the adjustments, but also I question the fundamental premise based on the given number of buoys.

Catcracking
February 14, 2017 1:40 pm

What a scientific mess to sort out the "correct" temperature. Now they claim that the temperature increased 0.0x degrees in 2016, whereas the accuracy is +/- 1 to 2 degrees C! Do they intentionally muddle up the data and adjustments so that no one can really figure out if the earth is warming or cooling?
I can only trust the satellite data.

TA
Reply to  Catcracking
February 15, 2017 6:33 am

“I can only trust the satellite data.”
Yeah, NASA says the satellite data is more accurate.
“In 1990, NASA determined that satellite temperatures were more accurate than surface temperatures, and should be adopted as their standard.”
https://realclimatescience.com/2017/02/100-predictable-fraud-from-government-climate-scientists/
One reason satellite data is more accurate is it is not affected by human thumbs on the scale, or some of the many problems associated with measuring the temperatures of the oceans.
Good article at the link above. Trump’s prosecutors should incorporate the webpage into their legal arguments against the temperature data manipulators. You know, the ones that have cost the world trillions of dollars in wasted spending, and devastated wildlife, trying to control a climate that is not out of control.

February 14, 2017 1:53 pm

Adjusting the 1986-2003 data would not eliminate the pause, so it is unjustified if the goal is to prove the conclusion.

J Mac
February 14, 2017 1:53 pm

Deliberate data obfuscation to secure selective ‘science’ remuneration….

Macha
February 14, 2017 2:09 pm

All adjustments to a time series are made to enable apples-to-apples trending. The issue, as I see it, is that the raw data is not maintained and displayed for others to draw conclusions about the validity of the adjustments or to make adjustments of their own. So annoying not seeing the raw data or the "anchor" point.

bit chilly
Reply to  Macha
February 14, 2017 3:31 pm

The issue for me is that the raw data is not fit for purpose. It really is that simple.

usurbrain
February 14, 2017 2:10 pm

Why the complete, utter disregard for water vapor in the atmosphere and its effect on average temperature? I was recently looking at the historical temperature in my area and noticed that the graphs of dewpoint and temperature were almost identical. I looked at other cities, and although theirs were not as close together, the two graphs had very similar slopes. Here is one example chosen at random:
https://www.wunderground.com/history/airport/KOMA/2014/12/25/MonthlyHistory.html?req_city=&req_state=&req_statename=&reqdb.zip=&reqdb.magic=&reqdb.wmo=
Others are available at https://www.wunderground.com/history/

4kx3
Reply to  usurbrain
February 15, 2017 5:33 am

@usurbrain
Disregarding water vapor adds significant error.
https://pielkeclimatesci.files.wordpress.com/2009/10/r-290.pdf
But your question is part of the broader issue of the scientific validity of adding temperatures
at all, much less those from different sources.
http://www.uoguelph.ca/%7Ermckitri/research/globaltemp/GlobTemp.JNET.pdf
I consider the practice to be a violation of thermodynamic principles.


knr
February 14, 2017 2:20 pm

This merely reflects that here is an area with significant and long-lasting problems in its ability to take the measurements basic to its function. And on this they built a mountain of "settled science". Now that is amazing.

willhaas
February 14, 2017 2:21 pm

They should just throw out all their data as being questionable. Good data should not have to be adjusted.

February 14, 2017 3:37 pm

“Homogenisation”: The art of getting the result you want/need out of a dataset.

co2islife
February 14, 2017 3:50 pm

Can anyone explain how CO2 is warming the oceans? Aren’t warming oceans evidence that more visible light is reaching earth? Not that there is more CO2 in the air? Don’t you first have to prove CO2 can warm water before you blame CO2 for the warming?
PDO/ADO and other Natural Cycles You’ve Never Heard of…and for good reason.
https://co2islife.wordpress.com/2017/01/22/climate-science-on-trial-the-forensic-files-exhibit-z/

Reply to  co2islife
February 14, 2017 4:02 pm

co2islife, CO2 is not warming the oceans. CO2 is slowing down the cooling of the oceans by bouncing back some of the IR that the oceans are emitting.

Reply to  Martin Clark
February 14, 2017 4:13 pm

Not according to MODTRAN. The air above the oceans has plenty of H2O, and it is the H2O that does the absorbing; CO2's signature isn't seen until you are 3 km up or more.

Reply to  Martin Clark
February 14, 2017 4:31 pm

co2islife, don't forget that any H2O molecule between sea level and 2.99 km up not only absorbs IR, it also EMITS it. So the poor old lonely molecule of CO2 at 3.01 km has to deal with the IR photon emitted by the H2O molecule a few metres below it.

Brett Keane
February 14, 2017 3:57 pm

Change this data, Karl, at your peril: https://www.whaleoil.co.nz/2017/02/guest-post-men-science-visited-shores/
h/t Slater@whaleoil

Steve Oregon
Reply to  Brett Keane
February 14, 2017 5:24 pm

That’s interesting.
In that link
“Towards the end of 2016 modern scientists from the New Zealand government released a publication. Somewhat pessimistic in nature it forewarned of the possibility of boiling oceans, caused by a thing called ‘global warming’. The scientists conferred upon us the knowledge that they had been assiduously monitoring ocean temperatures around New Zealand for over thirty years and that in that time the measured temperatures had changed; not one jot, by zero, there was no trend, no change; at all.”
Here's how the New Zealand study he cites explains the absence of any warming:
http://www.mfe.govt.nz/sites/default/files/media/Environmental%20reporting/our-marine-environment.pdf
“Data on sea-surface temperature and filling critical knowledge gaps
Data on sea-surface temperature provide a good example of a tension that can arise between
reporting using the highest-quality data and providing a comprehensive report. Our reporting
programme has very reliable satellite data on sea-surface temperature available from 1993,
available from our data service. These data showed no determinable trend over the past two
decades. This is not surprising, as on such short time scales – from year to year and decade to
decade – natural variability can mask long-term trends.
Long-term data on sea-surface temperature do show a statistically significant trend. These
data come from a range of measurement instruments and sites. While the data do not have
the consistent and broad spatial coverage of satellite-only data across the entire time series,
scientists still have high confidence in the findings.
Given the importance of long-term data to understanding how the state of our environment is
changing, we report on both the satellite-only data and the long-term data in our key findings
and the body of the report. However, we did not acquire the long-term data as part of our
reporting programme and these are not available on our data service. “

February 14, 2017 3:57 pm

Please excuse my ponderous and extensive inputs, given my severe professional frustration and dismay on this matter!
I speak just as a retired Chartered Professional Engineer and Project Manager and not as a scientist, but even so I cannot begin to accept current energy policies, with their unnecessary, massive subsidies and other additional costs, supposedly justified by the supposedly "settled" CAGW/climate-change science that the warmists have imposed! Over a period of 20-25 years we have wasted, and are still wasting, unaffordable billions worldwide of taxpayers' and consumers' money!
Yet even now, with ongoing temperatures inconveniently never matching past forecasts:
1. Warmists have continually needed to perform feats of intellectual and scientific acrobatics and contortion to reconcile their forecasts with the records of ongoing events. Ongoing "adjustments" of the recorded data only further discredit such a theory.
2. Warmists have had to massively reduce their forecast future temperature rises based on a reduced sensitivity, i.e. a fundamental parameter of the original theory that started this religious calling. This now involves a reduced temperature rise per doubling of CO2 levels, from roughly 4 degrees C to "only" 2 degrees C. Even the latter, they now say, will occur by 2100. This means very roughly only a 0.15 degree C per decade temperature rise over 85 years or so.
3. To properly, adequately and scientifically "substantiate" a theory forecasting such minute temperature rises requires almost superhuman efforts, even in a laboratory, let alone in the open sea and air around us, to obtain the reliability, accuracy, consistency and frequency of measurement methods, and the measuring-station environments, that such an exercise needs. Clearly such actions were not taken in the past and are not being taken now, and as such no adequate data is available to "substantiate" and prove the warmist theory!
4. Now even the CO2-level data is similarly being disputed, although the accuracy needed there is nowhere near that required for temperature readings.
5. Even now, no one appears to have adequately and accurately separated out the many naturally induced and varying causes of temperature rise, many of which are cyclical, or the human-induced temperature rises other than those from CO2 emissions. This is clearly required to identify the actual man-made, CO2-emission-induced temperature rise to compare with CO2 increases. As such there is no basis for any correlation, let alone a cause-and-effect linkage, to "substantiate" the warmist theory.
6. As repeatedly reported by many, the vague and disputed estimates of the future costs of not adhering to current warmist policies totally disregard the benefits CO2 provides, including the increased crop yields, greening and reduced plant watering that have been recorded and substantiated over the last few years. The latter is particularly beneficial for non-developed nations that suffer from famines, droughts and floods. As such the warmists' "investment analysis" is fundamentally flawed and unscientifically biased and needs to be ignored.
7. Even as a non-scientist, I know that the onus is on the warmists to substantiate their theory, and not on "deniers" or even "lukewarmers" to disprove it!
If this was my project, and someone came to me with such a proposal based on the above, and wanting the monies now being spent as an investment to achieve the above-mentioned project objective, then I would simply pick up the telephone and ring the local asylum and have the person immediately certified and incarcerated.
Or, in my humble ignorance, am I missing something?

DC
February 14, 2017 5:36 pm

David Long ‘Not dumb at all! It’s become my pet peeve from years of reading climate science.’
Look at the way proxy data is spliced to ice core data and then to instrument data, without a clear acknowledgement of where one stops and the other begins. Think time gaps.

AP
February 14, 2017 5:40 pm

Didn’t they throw out the data from about 50% of the buoys anyway, because those buoys showed a cooling trend?

AP
Reply to  AP
February 14, 2017 5:47 pm

Also here, because part of the NASA page that described what they did seems to have “disappeared”:
http://jennifermarohasy.com/2008/11/correcting-ocean-cooling-nasa-changes-data-to-fit-the-models/

DHR
Reply to  AP
February 15, 2017 6:38 am

AP, the document you referenced says just that. The researcher did not recover some of the low-reading buoys and test them to see whether the data were erroneous, nor did he recover some of the high-reading buoys and test them for error. He just threw out the low data. This "scientific" approach is described nicely by ferd berple, above. Perhaps we could call the method the Berple Burp Adjustment.

February 14, 2017 6:17 pm

This is rich. [image]

Poly
February 14, 2017 7:31 pm

Ho Hum, Paul Vaughan has been highlighting this problem for a long time.
His recent quote; “ERSSTv4 should have been retracted (and v3b2 reinstated) instantly upon release. WE HAVE KNOWN THIS FOR YEARS.”

Rob
February 14, 2017 7:39 pm

Scientific Fraud used to be a big deal?

Stephen Greene
February 14, 2017 7:43 pm

The FDA would never allow any of this data.

mairon62
February 14, 2017 10:12 pm

There is also a false implication of an embedded precision that never existed when data from a ship's intake is included in averages that are then reported to 1/100th of a degree. Sure, when you take an arithmetical average you end up with .xx, but that's an artifact of the math. I bet intake data was never logged or recorded in any fraction of a degree; who would have cared? Even the gauges used are calibrated to +/- 1 degree, at least pre-1990. And this number from a ship can be conflated and used in the determination to claim "hottest year ever"?

paulclim
Reply to  mairon62
February 15, 2017 4:16 am

Mairon, it may seem that this precision is not achievable given the tolerances of the system itself and the reading process. But that is exactly the reason why everybody gives anomalies instead of temperatures. The reasoning behind this is: once you have a really big number of readings with high tolerances, their average will be nearly the same as that from systems and reading processes with much smaller tolerances. Pluses and minuses cancel each other over time. Once you have the average temperature of, say, all Januaries at all places in the world over the last 100 years, the delta to the current January reading at all places in the world should have a much smaller tolerance and higher precision.
I don't know whether the precision is as high as claimed, but I believe that the precision of the anomalies is much higher than that of an absolute temperature. Look at the other side: satellite data, such as UAH, also give anomalies and claim an uncertainty of +/- 0.1 deg C, whereas the system itself measures with a tolerance of +/- 1.0 deg C.
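The statistical claim here (the mean of many readings is far tighter than any single reading) is correct under the assumption paulclim states and critics dispute: independent, purely random errors with no systematic bias. A toy check of the sigma/sqrt(N) rule:

```python
# Sketch: the standard error of a mean shrinks as sigma/sqrt(N),
# but ONLY for independent random errors; biases do not average away.
import numpy as np

rng = np.random.default_rng(42)
sigma, n_readings, n_trials = 1.0, 10_000, 1_000

means = rng.normal(0.0, sigma, (n_trials, n_readings)).mean(axis=1)
print("spread of a single reading:", sigma)
print("spread of the mean        :", means.std())           # ~0.01
print("theory, sigma/sqrt(N)     :", sigma / np.sqrt(n_readings))
```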

Jim
Reply to  paulclim
February 15, 2017 11:47 am

Anomalies are a propaganda tool to make graphs look worse than they are. If actual temperatures were plotted, who would pay attention to 0.01 C changes? Granted, anomalies have a purpose, but don't use them as a substitute for accuracy or precision.
You need to go read about accuracy and precision. If you can only measure to within a degree, your range will probably be close to that. Range is what determines precision. Averaging anomalies simply won't change that. Precision is also only useful when applied to one measuring device. Averaging anomalies to obtain a precise number just doesn't work. Here is an example: think of 10 people shooting at a target with 10 different pistols. Does an average precision number really tell you anything? If you were going to purchase 1 million pistols, would you want your range officer to provide you with a group-average precision number to make a decision with?
Accuracy is no different. The numerator of an accuracy figure is the difference between a given device and a known standard. This doesn't change by averaging in readings from other devices. A 1% device averaged with readings from a 10% device doesn't give you a <1% measurement. Think of the targets again: would you want to know the average accuracy of all 10 pistols when deciding which one to buy? Does it even have any meaning?
This doesn't even address significant digits.
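Jim's complementary point, that a systematic bias survives any amount of averaging, can be shown the same way (invented instruments and numbers):

```python
# Sketch: a systematic bias does not average away (invented numbers).
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
good = rng.normal(0.0, 0.1, n)   # unbiased instrument, precise
bad = rng.normal(0.5, 0.1, n)    # equally precise, but biased by +0.5

print(good.mean())                          # ~0.0
print(bad.mean())                           # ~0.5, the bias survives
print(np.concatenate([good, bad]).mean())   # ~0.25, pooling only dilutes it
```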

Reply to  paulclim
February 16, 2017 4:18 am

Jim,
I agree with you in terms of accuracy and precision. The advantage of anomalies is that the uncertainty, or the range of the results, changes. Under the condition that there are no systematic errors (which of course there are in reality), you will have only random results in a range given by the accuracy and precision of your measurement system and process. Let's call that range the tolerance. By switching to averages and anomalies you statistically reduce that tolerance to a small uncertainty.
Example: you measure something with sinusoidal behaviour whose theoretical values lie between the extrema 90 and 110, following an ideal sine curve around 100, which is also the average. Say your measurement system and process has a tolerance of plus/minus 5. The range of the measured values will then be between 85 and 115 at the extrema. Supposing that (because there are no systematic errors) there are as many and as large outliers in the plus direction as in the minus direction, the average of all values will still be 100. The uncertainty of that average is not the tolerance of the measured values but the probability that the average tends in one direction (+ or -). It is only the number of readings that determines this uncertainty. With a very high number of readings and a perfect statistical distribution of the values, the uncertainty of the average will be basically zero. But let's say the average is 100 with an uncertainty of +/- 0.2.
Now imagine the sine measurement above is a series over time and space, and every single point is an average of many spatial readings at that point in time. The original tolerance band of the spatial measurements is likewise reduced to a smaller uncertainty of their average, say 0.3, because the number of readings is smaller than that of the total series. If you now calculate the uncertainty of the difference between the two averages (the anomaly), you can do it by a simple tolerance-chain calculation, t = (a^2 + b^2)^0.5, which gives a total uncertainty of 0.36.
That means that from measured values of, say, 105 plus/minus 5 you get anomalies of 5 +/- 0.36. That does not mean that the single values of 105 suddenly have a tolerance of +/- 0.36. It just means that the global average at a certain point in time deviates from the total average over time with an uncertainty of +/- 0.36.
While I admit there is much propaganda in the whole AGW theater, the anomaly issue is not part of it, at least as far as I can tell. The more serious issue in the temperature data sets is the systematic errors in the measurements and how the adjustments deal with them. There are huge opportunities for interpretation.
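The tolerance-chain arithmetic above checks out. A quick verification of t = (a^2 + b^2)^0.5 for independent errors, using the 0.2 and 0.3 figures from the example:

```python
# Verify the quadrature combination t = sqrt(a^2 + b^2) for independent errors.
import numpy as np

a, b = 0.2, 0.3
print(np.hypot(a, b))    # 0.3606, the 0.36 quoted above

# Monte Carlo check: spread of the difference of two independent estimates.
rng = np.random.default_rng(1)
diff = rng.normal(0, a, 1_000_000) - rng.normal(0, b, 1_000_000)
print(diff.std())        # ~0.36
```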

February 14, 2017 11:25 pm

Yes, and a bit no.
GISS: yes, 2015.
HadCRUT: similar but still different adjustments, 2015.
NOAA: yes, 2015.

Nechit
February 15, 2017 9:10 am

I am not a scientist, although long ago I did get an MA in geology as my second major. It seems to me that the correct adjustment would be to lower ship-measured temperatures to match the more accurate buoys, as the group at Berkeley did. If you extend their method back to about 1980, when there were proportionately many more ship measurements, it would seem to necessarily eliminate much of the reported sea surface warming of the 1980s and 1990s. Wasn't the reported warming of the '80s and '90s the whole reason for this global warming panic to begin with? Think anyone could get funding for such a study?

daveburton
Reply to  Nechit
February 15, 2017 6:11 pm

Nechit, you have a masters degree in geology, but you say “I am not a scientist?”
You are too modest. You might not be doing scientific research, but you are certainly a scientist.
Bill Nye has only a bachelors degree in mechanical engineering, and works as a children’s television entertainer. But he nevertheless has a profile on famousscientists.org, and is known far and wide as “the science guy.” If he’s a scientist, then you certainly are, too.

Nechit
Reply to  daveburton
February 15, 2017 7:33 pm

Sorry, I have a BA — and evidently I can’t type very well.

Frank Lansner
February 16, 2017 7:55 am

One can also illustrate the point of this article in another way:
The adjustments to NOAA SST data are around the same in 1988 as in 2009.
But:
in 1988, SST readings were around 12% from buoys;
in 2009, SST readings were around 62% from buoys.
So the adjustments are not at all primarily driven by the fraction of buoys.
Kind regards, Frank
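Frank's comparison can be made concrete with a back-of-envelope model. If, as reported for ERSSTv4 in Karl et al. 2015, buoys read about 0.12 deg C cooler than ships, and the correction simply adds that offset to buoy readings, then the net warm adjustment to the blended average should scale with the buoy fraction, so on that logic 1988 and 2009 should not receive roughly the same adjustment. A sketch of the expected scaling (the buoy fractions are Frank's estimates, not official figures):

```python
# Back-of-envelope: expected warm adjustment if the only correction is adding
# the ~0.12 deg C ship-buoy offset (Karl et al. 2015) to buoy readings.
# Buoy fractions are Frank Lansner's estimates, not official figures.
offset = 0.12                          # deg C; buoys read cooler than ships

for year, buoy_fraction in [(1988, 0.12), (2003, 0.50), (2009, 0.62)]:
    expected = offset * buoy_fraction  # shift of the blended mean
    print(f"{year}: buoy fraction {buoy_fraction:.0%} -> "
          f"expected adjustment ~{expected:.3f} deg C")
```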

February 19, 2017 10:25 am

Zeke ran away once the questions got interesting and some explanations were requested.
This is the same guy who warms data as it moves south down the latitudes when he should be cooling it. They really need to get rid of these people.