Yet another study tries to erase "the pause" – but is missing a whole year of data

From UC Berkeley and Berkeley Earth comes this paper, which tries some new statistical techniques to get “the pause” to go away, following on from the infamous Karl et al. paper of 2015, which played tricks with SST measurements made in the 1940s and 1950s to increase the slope of the warming. This paper aims to do the same, though the methods look to be a bit more sophisticated than Karl’s ham-handed approach. The paper link is below, fully open access. I invite readers to have a look at it and judge for yourselves. Personally, I think ignoring the most current data available for 2016, which has been cooling compared to 2015, invalidates the claim right out of the gate.

If a climate skeptic did this sort of thing, using incomplete data, we’d be excoriated. Yet somehow this paper gets a pass from the journal, and publishes with 2015 data at the peak of warming, just as complete 2016 data becomes available.

The results section of the paper says:

From January 1997 through December 2015, ERSSTv3b has the lowest central trend estimate of the operational versions of the four composite SST series assessed, at 0.07°C per decade. HadSST3 is modestly higher at 0.09°C per decade, COBE-SST is at 0.08°C per decade, whereas ERSSTv4 shows a trend of 0.12°C per decade over the region of common coverage for all four series. We find that ERSSTv3b shows significantly less warming than the buoy-only record and satellite-based IHSSTs over the periods of overlap [P < 0.01, using an ARMA(1, 1) (autoregressive moving average) model to correct for autocorrelation], as shown in Fig. 1. ERSSTv3b is comparable to ERSSTv4 and the buoy and satellite records before 2003, but notable divergences are apparent thereafter.
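To make the statistics above a bit more concrete, here is a minimal sketch of a trend estimate with an autocorrelation correction. It uses synthetic monthly anomalies, not the paper’s data or code, and a simpler AR(1) effective-sample-size correction in the same spirit as the ARMA(1,1) model the authors describe:

```python
# Sketch only: fit a linear trend to synthetic monthly SST anomalies and
# inflate the slope's standard error for serial correlation (AR(1) style).
import numpy as np

rng = np.random.default_rng(0)
n = 228                                  # months, Jan 1997 - Dec 2015
t = np.arange(n, dtype=float)

# Synthetic anomalies: 0.10 C/decade trend plus AR(1) noise
true_slope = 0.10 / 120.0                # degrees C per month
eps = rng.normal(0.0, 0.05, n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + eps[i]
y = true_slope * t + noise

# Ordinary least-squares trend and residuals
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)

# Lag-1 autocorrelation of the residuals -> effective sample size
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)

# Standard error of the slope, widened for the reduced effective sample
se_slope = np.sqrt(np.sum(resid**2) / (n_eff - 2)) / np.sqrt(np.sum((t - t.mean()) ** 2))
print(f"trend: {slope * 120:.3f} +/- {2 * se_slope * 120:.3f} C/decade (2-sigma)")
```

The correction does not move the fitted slope; it widens the uncertainty on it, which is why the choice of noise model matters when declaring one series’ trend significantly different from another’s.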

zeke-allsets-fig1

What’s missing? Error bars showing uncertainty. Plus, the data only goes to December 2015. They’ve missed an ENTIRE YEAR’s worth of data, and while doing so claim “the pause” is busted. It would be interesting to see that same graph done with current data through December 2016, where global SST has plummeted. Looks like a clear case of cherry picking to me, by not using all the available data. Look for a follow up post using all the data.
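As a toy illustration of why endpoints matter (made-up numbers, not real SST data): whether adding a final year raises or lowers a fitted trend depends on where the new points fall relative to the existing trend line. In this synthetic case the series ends on a warm spike, and appending a year that drops back to the baseline pulls the slope down:

```python
# Toy endpoint-sensitivity check with synthetic monthly values.
import numpy as np

t = np.arange(240, dtype=float)                        # 20 years of months
base = 0.001 * t                                       # 0.12/decade baseline
spike = np.where((t >= 216) & (t < 228), 0.15, 0.0)    # warm spike in final year

# Series ending December of the spike year
y_short = (base + spike)[:228]
slope_short = np.polyfit(t[:228], y_short, 1)[0]

# Same series with one more year that returns to the baseline trend
y_long = base + spike
slope_long = np.polyfit(t, y_long, 1)[0]

print(f"ending at the spike:  {slope_short * 120:.4f} per decade")
print(f"adding the next year: {slope_long * 120:.4f} per decade")
```

The opposite can also happen: if the added year stays above the long-term trend line, the fitted slope steepens instead, so the only safe course is to show both calculations.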

Here’s what the world’s sea surface temperature looks like at the end of 2016 – rather cool.

global-sst-12-29-2016

Compare that to December 2015, the end of Hausfather’s data period – they ended on a hot note:

global-sst-12-31-2015

 

I asked Zeke Hausfather, the lead author, via email about the paper and its data, and to his credit, he responded within the hour, saying:

Hi Anthony,

We haven’t updated our buoy-only, satellite-only, and argo-only records to present yet (they still end January 1st, 2016), but we are planning on updating them in the near future.

By the way, the paper itself is open access, available here: http://advances.sciencemag.org/content/3/1/e1601207.full.pdf+html
We also have a background document we put together here: http://www-users.york.ac.uk/~kdc3/papers/ihsst2016/background.html
I’m attaching the data shown in that figure. All series have been masked to common coverage (though we do three different variations of tests for coverage effects, as we discuss in detail in the paper).
The data are:
acci97Mm.temp – Satellite radiometer record from 1997 (from ATSR and AVHRR)
buoy97Mm.temp – Buoy-only record from 1997
cobe97Mm.temp – COBE-SST (Japanese record)
had97Mm.temp – HadSST3
v3_97Mm.temp – ERSSTv3b
v4_97Mm.temp – ERSSTv4
We start in 1997 because prior to that there is insufficient data from buoys to get a global estimate, and satellite data is only available from mid-1996.
Hope that helps,
-Zeke

I have made the data available here in a ZIP file (17KB)

That’s how science should work – sharing the data – but I contend that the data should have been updated in the paper before publishing. A year-long gap, with significant cooling taking place, is bound to change the results. Perhaps this is an artifact of the slow peer-review process.

But Zeke should know better than to allow the word “disproved” in a headline. We’ll see how well his study’s “pause-busting” claims hold up in a year without a major El Niño to bolster his case.

UPDATE: Bob Tisdale points out via email that this paper appears to be an outgrowth of a guest post at Judith Curry’s blog a year ago:

A buoy-only sea surface temperature record

In that post, there are some serious concerns about the buoy data used, raised by climate scientist John Kennedy of the UK Met Office:

Dear Bob,

You raise some interesting points, which I’d like to expand on a little. I’ve used your numbering.

First, coastal “SST” from drifters can exhibit large variations because there can be large variations in coastal areas. Also, sometimes, buoys wash up on beaches and start measuring air temperature rather than SST. It’s also common to see drifting buoys reporting erratic measurements shortly before they go offline, wherever they happen to be. Occasionally, they get picked up by ships and, for a short period, record air temperatures on deck. This paper goes into some of the problems that ship and drifter data suffer from:

http://onlinelibrary.wiley.com/doi/10.1002/jgrc.20257/full

Second, drifter design was standardised in the early 1990s. Since then, the only major change I know of has been in the size of the buoys: modern mini drifters are smaller than their non-mini predecessors. Different manufacturers make buoys to the specifications laid down in the standard design. Metadata for buoys is not especially easy to get hold of (for ships there’s ICOADS and WMO publication 47), but work is ongoing to organise the metadata and to see if there are measurable differences between drifters from different manufacturers. Work has also been done to fit a small number of drifters with higher-quality thermometers alongside the standard thermistor. See e.g.

http://journals.ametsoc.org/doi/abs/10.1175/2010JTECHO741.1

The results suggest that individual buoys can exhibit a variety of problems. On average, though, they seem to be unbiased relative to the true SST. Individually, they are higher or lower, with calibrations that vary by a few tenths of a degree.

There can occasionally be large calibration errors (of a degree or more). Nowadays, there is constant monitoring of the drifter network by a number of different centres. Large calibration errors are usually identified quickly. Sometimes these can be fixed remotely, sometimes they can’t and the buoy goes onto a list (see, for example, http://www.meteo.shom.fr/qctools/ ). Monitoring of the early data was less thorough.

As a result of the above considerations, everyone who uses drifting buoy data applies some level of quality screening to it. What is generally accepted is that the average drifter makes a much better SST measurement than the average ship (though there are exceptions, of course, in both directions).

Third, I’d note that drifter coverage is not so great prior to 1995 (I think Kevin said the same), so the relative effect of calibration errors would be more pronounced as well as the difficulty of making a solid comparison with fewer data points. I think, more generally, it’s useful to know how consistent the trends are across a variety of periods. As your graphs show, looking at a variety of periods can reveal different aspects of the data.

Fourth, (I think you mistyped HadSST2 when you meant HadNMAT2, or did I misunderstand?). Question: are the coverages of HadNMAT2 and ERSSTv4 in your plot the same? Coverage of NMAT is confined to areas where ships go, and ship coverage has declined somewhat over this period, whereas ERSSTv4 is more or less global.

The closeness with which NMAT and ERSSTv4 should track each other is something to consider also. The ERSST ship adjustment is smoothed so that variations of shorter than a few years (approximately) are not resolved. My understanding of this is that it’s necessary to reduce the effect of random measurement errors on the estimated bias. By smoothing over several years, the effect of random measurement errors average out, so what’s left is largely due to systematic errors (which is good because that’s what they are trying to assess). On the other hand, it means that the method can’t resolve changes in bias that happen faster than that.

Fifth, the uptick in the number of ICOADS SST observations in 2005 coincides with a large increase in the number of drifting buoy data. Depending on the version of ICOADS used, there’s also often a change in the number and composition of observations at the switch from delayed mode to real time. I think for ICOADS 2.5, that’s the end of 2007.

Sixth, don’t forget that there are 100 different estimates of HadSST3 – which together span estimated uncertainty in the bias adjustment – and additional measurement, sampling and coverage uncertainties which can also affect the trends over shorter periods such as the ones being discussed here. In brief, the trend over this period as estimated by HadSST3 is uncertain. The same goes for ERSSTv4: there is an uncertainty analysis (Liu et al. 2015 published at the same time as Huang et al. 2015). One should be wary about drawing conclusions from a comparison based only on the medians.

http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00007.1

Best regards to one and all,

John Kennedy

One wonders if Hausfather and Cowtan saw these concerns, and if they did, whether they heeded them.


Global warming hiatus disproved — again

By Robert Sanders, Media relations

A controversial paper published two years ago that concluded there was no detectable slowdown in ocean warming over the previous 15 years — widely known as the “global warming hiatus” — has now been confirmed using independent data in research led by scientists from UC Berkeley and Berkeley Earth, a non-profit research institute focused on climate change.

A NEMO float, part of the global Argo array of ocean sensing stations, deployed in the Arctic from the German icebreaker Polarstern Bremerhaven. (Photo courtesy of Argo)

The 2015 analysis showed that the modern buoys now used to measure ocean temperatures tend to report slightly cooler temperatures than older ship-based systems, even when measuring the same part of the ocean at the same time. As buoy measurements have replaced ship measurements, this had hidden some of the real-world warming. After correcting for this “cold bias,” researchers with the National Oceanic and Atmospheric Administration concluded in the journal Science that the oceans have actually warmed 0.12 degrees Celsius (0.22 degrees Fahrenheit) per decade since 2000, nearly twice as fast as earlier estimates of 0.07 degrees Celsius per decade. This brought the rate of ocean temperature rise in line with estimates for the previous 30 years, between 1970 and 1999.

This eliminated much of the global warming hiatus, an apparent slowdown in rising surface temperatures between 1998 and 2012. Many scientists, as well as the Intergovernmental Panel on Climate Change, acknowledged the puzzling hiatus, while those dubious about global warming pointed to it as evidence that climate change is a hoax.

Climate change skeptics attacked the NOAA researchers and a House of Representatives committee subpoenaed the scientists’ emails. NOAA agreed to provide data and respond to any scientific questions but refused to comply with the subpoena, a decision supported by scientists who feared the “chilling effect” of political inquisitions.

The new study, which uses independent data from satellites and robotic floats as well as buoys, concludes that the NOAA results were correct. The paper will be published Jan. 4 in the online, open-access journal Science Advances.

“Our results mean that essentially NOAA got it right, that they were not cooking the books,” said lead author Zeke Hausfather, a graduate student in UC Berkeley’s Energy and Resources Group.

Long-term climate records

Hausfather said that years ago, mariners measured the ocean temperature by scooping up a bucket of water from the ocean and sticking a thermometer in it. In the 1950s, however, ships began to automatically measure water piped through the engine room, which typically is warm. Nowadays, buoys cover much of the ocean and that data is beginning to supplant ship data. But the buoys report slightly cooler temperatures because they measure water directly from the ocean instead of after a trip through a warm engine room.

sst-berkeleynoaa
A new UC Berkeley analysis of ocean buoy (green) and satellite data (orange) show that ocean temperatures have increased steadily since 1999, as NOAA concluded in 2015 (red) after adjusting for a cold bias in buoy temperature measurements. NOAA’s earlier assessment (blue) underestimated sea surface temperature changes, falsely suggesting a hiatus in global warming. The lines show the general upward trend in ocean temperatures. (Zeke Hausfather graphic)

NOAA is one of three organizations that keep historical records of ocean temperatures – some going back to the 1850s – widely used by climate modelers. The agency’s paper was an attempt to accurately combine the old ship measurements and the newer buoy data. Hausfather and colleague Kevin Cowtan of the University of York in the UK extended that study to include the newer satellite and Argo float data in addition to the buoy data.

“Only a small fraction of the ocean measurement data is being used by climate monitoring groups, and they are trying to smush together data from different instruments, which leads to a lot of judgment calls about how you weight one versus the other, and how you adjust for the transition from one to another,” Hausfather said. “So we said, ‘What if we create a temperature record just from the buoys, or just from the satellites, or just from the Argo floats, so there is no mixing and matching of instruments?’”

In each case, using data from only one instrument type – either satellites, buoys or Argo floats – the results matched those of the NOAA group, supporting the case that the oceans warmed 0.12 degrees Celsius per decade over the past two decades, nearly twice the previous estimate. In other words, the upward trend seen in the last half of the 20th century continued through the first 15 years of the 21st: there was no hiatus.

“In the grand scheme of things, the main implication of our study is on the hiatus, which many people have focused on, claiming that global warming has slowed greatly or even stopped,” Hausfather said. “Based on our analysis, a good portion of that apparent slowdown in warming was due to biases in the ship records.”

Correcting other biases in ship records

In the same publication last year, NOAA scientists also accounted for changing shipping routes and measurement techniques. Their correction – giving greater weight to buoy measurements than to ship measurements in warming calculations – is also valid, Hausfather said, and a good way to correct for this second bias, short of throwing out the ship data altogether and relying only on buoys.

Berkeley’s analysis of ocean buoy (green) and satellite data (orange) and NOAA’s 2015 adjustment (red) are compared to the Hadley data (purple), which have not been adjusted to account for some sources of cold bias. The Hadley data still underestimate sea surface temperature changes. (Zeke Hausfather graphic)

“In the last seven years or so, you have buoys warming faster than ships are, independently of the ship offset, which produces a significant cool bias in the Hadley record,” Hausfather said. The new study, he said, argues that the Hadley center should introduce another correction to its data.

“People don’t get much credit for doing studies that replicate or independently validate other people’s work. But, particularly when things become so political, we feel it is really important to show that, if you look at all these other records, it seems these researchers did a good job with their corrections,” Hausfather said.

Co-author Mark Richardson of NASA’s Jet Propulsion Laboratory and the California Institute of Technology in Pasadena added, “Satellites and automated floats are completely independent witnesses of recent ocean warming, and their testimony matches the NOAA results. It looks like the NOAA researchers were right all along.”

Other co-authors of the paper are David C. Clarke, an independent researcher from Montreal, Canada, Peter Jacobs of George Mason University in Fairfax, Virginia, and Robert Rohde of Berkeley Earth. The research was funded by Berkeley Earth.

The paper: Assessing Recent Warming Using Instrumentally-Homogeneous Sea Surface Temperature Records (Science Advances)


297 Comments
David S
January 4, 2017 11:02 pm

I find it fascinating that there is this big argument about historic data. There are big arguments about what adjustments are appropriate, if any. There are big arguments about what the actual data were prior to satellites. How the hell can any normal person think that, with this level of uncertainty about the past, there can be any certainty about future prediction? My suspicion is that the real error bars would exceed the amount of change. With that level of uncertainty, why would any normal human being allow his government to adopt measures that significantly impact current living standards without any likelihood they would have a meaningful impact? Who decided that 2 degrees warmer would be bad, anyway? No one seems to be too traumatised if their work circumstances mean they have to move from Canada to, say, Hong Kong. The lack of logic behind the whole AGW scare indicates to me that, globally, a large number of people have lost their collective minds.

TA
Reply to  David S
January 5, 2017 12:54 pm

“The lack of logic behind the whole AGW scare indicates to me that globally a large number of people have lost their collective minds.”
It looks that way, doesn’t it? A lot of people seem to be living in a different reality. CAGW is giving us a lesson in human psychology.

January 5, 2017 12:13 am

The MSM is all over this like a rash. The average punter doesn’t want to know about the problems in measuring the data, just the headlines. The average innumerate journalist, too. However, there is something inherently implausible about taking three measurement systems (all of which have defects) across maybe 70 years of data and claiming to be able to measure average ocean surface temperatures to 0.1°C accuracy and identify a trend. And I have to say, taking the period 1998 to 2015 just looks odd. The oceans seem to involve cycles decades long, so why not use the whole period for which data – however flaky – are available? But top marks for publishing what they have.

mothcatcher
January 5, 2017 1:40 am

Hi, Zeke, if you are still listening!
Please let me record, as someone from the sceptical side of the argument, my embarrassment at the way your paper has been treated by the commenters here, and even by AW (for whom I have a high regard). Let me also record my thanks for the clear and polite way you have responded to some pretty middle-earth stuff.
There may be good reasons why your results can be criticised, but until I see them laid out in a coherent fashion, I’m okay with your exposition. It would seem confirmation bias is far from confined to the warmist side.

Simon
Reply to  mothcatcher
January 5, 2017 10:30 am

Well said.

JTK551
Reply to  mothcatcher
January 5, 2017 11:11 am

Gotta agree with mothcatcher. This is how science is supposed to work. Transparency. Not perfection, or a search for the ultimate truth, but published studies that folks can study and challenge. Kudos to Zeke for engaging.

basicstats
January 5, 2017 2:07 am

The basic problem here is the data. A mish-mash of different instruments whose relative proportions (and coverage) in the combined sample change drastically over time is difficult, if not impossible, to assemble into a single metric. Reading the authors twisting and turning over obvious issues of spatial coverage (and baseline), it becomes a matter of trust that there has not been wholesale datamining. Kriging, with its typically much too smooth interpolation functions, does not inspire confidence.
One basic point about Karl-type offset corrections using the mean difference: this might be OK if there were a systematic 0.12°C ship-buoy difference. But common sense (which could be wrong, of course) suggests the discrepancy is actually made up of a lot of small errors (say 0.01-0.02°C) together with a small proportion of very large outliers where something has gone badly wrong. The mean difference is a very poor summary of this type of discrepancy.
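A quick simulation of that point (purely illustrative numbers): when a discrepancy is a mix of many tiny errors and a few gross failures, the mean offset is driven largely by the outliers, while a robust summary like the median is not:

```python
# Sketch: mean vs. median of a simulated ship-minus-buoy difference made of
# many small errors plus a 1% fraction of large failures.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
diffs = rng.normal(0.015, 0.01, n)          # many small ~0.01-0.02 C errors
bad = rng.choice(n, size=100, replace=False)
diffs[bad] += rng.normal(1.5, 0.5, 100)     # 1% gross failures of ~1.5 C

print(f"mean difference:   {diffs.mean():.3f} C")
print(f"median difference: {np.median(diffs):.3f} C")
```

Here the mean roughly doubles because of the 1% of bad values, while the median barely moves – which is why a single mean offset can be a poor summary.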

richard verney
Reply to  basicstats
January 5, 2017 3:00 am

I see this as a significant issue.
What ships are actually being used for ship data?
Are these trading vessels?
If so they are measuring ocean data over narrow trading routes/lines.
Further, trade has changed over the years, so the routes have also changed?
Also, whilst ships may still be sailing similar routes, the way in which they are sailing these routes may have changed. Eg., some legs may now predominantly be ballast voyages, or some routes which in the past were ballast voyages, are now laden voyages. All of this would have a material impact upon the depth at which a ship draws its inlet water (via the inlet manifold which is situated low down towards the keel of the vessel) and thus the depth at which the ship is measuring sea temperature.
At the end of the day, ships are not measuring SST.
Surface Station have design regulations with respect to Stevenson screens etc. But is there any similar quality control with ships, or is it nothing more than a general mish mash of data?
We live on a water world. The vast majority of energy is stored in the oceans, and yet we have the least reliable data on the oceans when the measurement of the oceans is the most important metric.
In my opinion, pre-ARGO, the data are worthless. ARGO data are too short, and there are spatial coverage issues. Further, shortly after ARGO was set up, the buoys that were showing ocean cooling were simply discarded and not returned to the laboratory to check whether there was some problem/error with the equipment/calibration. This alone raises serious issues as to the merits of ARGO data.

MarkW
Reply to  richard verney
January 5, 2017 9:39 am

Over years, ships themselves have changed dramatically.
That’s another factor that has to be adjusted for.

January 5, 2017 2:08 am

Zeke, along with the rest of the authors, is doing a Reddit AMA (Ask Me Anything) on January 9th around noon Pacific Time. He comments to the minions of /r/science about the paper here.
https://www.reddit.com/r/science/comments/5m11tu/new_study_confirms_that_global_warming_never/dc0qbgq/
The AMA Subreddit is here, for those that want to chime in or ask a question. https://www.reddit.com/r/IAmA/
If you are unfamiliar with Reddit, it’s the 7th ‘most popular site’ in the US. 2 spots above Twitter.

richard verney
January 5, 2017 2:40 am

Zeke Hausfather
Very good to see you commenting on and clarifying some of the points raised by your paper. I would appreciate your clarification on the following:
1. At what draft do buoys measure SST?, ie., how many cm below the surface is the temperature measured?
2. At what draft do ships measure sea temperature via the engine intake? ie., how many metres below the surface is the intake situated?
3. What adjustments are made to take account of the fact that the draft of a vessel, and thus the depth below the surface at which the vessel’s engine intake is situated, varies between vessels, and even with the same vessel its draft varies on a daily basis as consumables are used, as trim is adjusted, etc.?
4. What adjustments are made, and how are these assessed, to take account of differences in ship design over the course of the last say 40 years which has an impact upon the depth at which vessel’s engine inlet is situated below the surface? If the fleet and composition has changed over the years, then it follows that the depth at which sea temperature is being drawn for measurement has also changed over these years.
5. Is it not the case that buoys, buckets, and ship engine inlet all measuring different things? Is it not more correct to view them as giving an insight into the temperature profile of the ocean, rather than to consider that they are interchangeable and can be compared on a like for like basis?
6. Whilst this is only a guess, I would have thought that most ships were intaking water at least 5 metres below the surface, and many very considerably deeper than that. Isn’t one of the major issues here that ships are not measuring SST?
Your comments/clarification would be appreciated.

jaffa68
January 5, 2017 2:47 am

Alarmists must love the amount of time they get sceptics to waste arguing over peripheral issues like this.
Even if sceptics were to accept that every year is hotter than the last all it tells us is that the planet, which has always warmed and cooled, is warming – so what?
The only relevant issues are
1. how much warming is unnatural
2. are the implications (of any unnatural warming) negative
The alarmist answers seem to be…….
1. models & proxy analysis prove it is unnatural
2. models prove it is going to be a catastrophe
When questioned about their analysis the alarmist scientist response is, my pals agree with me and I agree with them, you can’t see what we did because you’re not part of our club and if you don’t accept what we say you must be evil.
We need to stop being diverted by talking points that allow alarmists to hide their work. Activist scientists can’t be trusted unless they hand over all their models, programs and data for analysis by every interested party. Since they claim the entire planet is at risk there’s no reasonable excuse for hiding this information. Clearly it’s weak at best but probably garbage.

Reply to  jaffa68
January 14, 2017 8:48 am

JAFFA68 WROTE:
“Alarmists must love the amount of time they get sceptics to waste arguing over peripheral issues like this.
Even if sceptics were to accept that every year is hotter than the last all it tells us is that the planet, which has always warmed and cooled, is warming – so what?”
MY COMMENT:
How dare you bring such common sense to this comment section !
Where are your data?
Where are your numbers?
Where are numbers to two and three decimal places?
Do you have a PhD?
In my opinion, the only relevant questions are:
(1) Is the climate in 2017 causing problems for people and the crops they grow?
The obvious answer is no, but assuming some people will say yes — then
(2) What are the problems caused by a slight warming since 1850 ?
(considering that half of the “warming” is from “adjustments” to the raw data, and the warming seems to be mainly affecting the Arctic and the surrounding, barely inhabited land).

Editor
January 5, 2017 4:06 am

I note that the graph starts in 1999, i.e. in the middle of a La Niña.
Why did it not start in 1998?

Thomho
January 5, 2017 4:28 am

A concluding comment made by the lead author in the wake of this study is: “based on our analysis a good portion of the apparent slowdown in (global) warming was due to biases in the ship records.”
I.e., there was no real pause in global warming.
Well if so I cant help idly wondering how come all those climate scientists (eg Prof Malcolm England) at the time earnestly wrote learned academic papers purporting to explain the cause of the pause ?? Eg it was due to faster or weaker winds across the Pacific ocean etc etc

Toneb
Reply to  Thomho
January 5, 2017 5:46 am

“Well if so I cant help idly wondering how come all those climate scientists (eg Prof Malcolm England) at the time earnestly wrote learned academic papers purporting to explain the cause of the pause ?? Eg it was due to faster or weaker winds across the Pacific ocean etc etc”
Because, at the time, there seemed to be one – though on inspection of the impact of the PDO/ENSO, one can see why.
Yes, natural variation is there as well.
Thing is, the IPCC use an ensemble of forecasts that eliminates that natural variation – which is why there are increasingly wide 95% confidence limits around the median, within which the observations stayed.
That is what scientists do: investigate stuff when it doesn’t fit.
I remember a paper that pointed out that the GCM runs that had the correct ENSO cycle were on the money, for instance.
It should also be noted that the forcing turned out to be less than expected for the IPCC projections.
Now, with the PDO cycle gone +ve, the full warming trend has resumed.

bit chilly
Reply to  Toneb
January 6, 2017 4:43 am

I think I am beginning to understand what the problem is/was with UK weather forecasting.

JasG
January 5, 2017 4:36 am

The BBC are on it too. The only time they actually mention the consensus-established pause is when there is a pause-buster paper out. They ignore all papers that say the opposite. They now call the NOAA adjustments “controversial” only because they can now say the new paper validates them.
The plain fact is that anyone with any understanding of the issue – including the authors – can see it as trash. Obviously they are trying to influence policy while complaining about skeptics doing the same. The difference is that only the skeptical side is honest, while alarmists are deliberately lying to protect their own fallacious concept of the greater good – a concept that will lead to widespread poverty and misery if unchecked.

January 5, 2017 6:28 am

Anthony Watts mentioned in the article:
“What’s missing? Error bars showing uncertainty. Plus, the data only goes to December 2015. They’ve missed an ENTIRE YEAR’s worth of data, and while doing so claim “the pause” is busted. It would be interesting to see that same graph done with current data through December 2016, where global SST has plummeted. Looks like a clear case of cherry picking to me, by not using all the available data. Look for a follow up post using all the data.”
There is a common misconception that including the downward part of an up-spike in a temperature graph will produce a lower trend. The opposite is the case. As I had only the HadSST3 data, I made the exercise with it.
http://www.woodfortrees.org/plot/hadsst3gl/from:1997/plot/hadsst3gl/from:1997/trend/plot/hadsst3gl/from:1997/to:2016/trend
As you see, including the missing cooling phase after Jan 2016 gave a steeper trend than the graph ending in Dec 2015, at the top of the spike. Every value within a spike is above average and will thereby steepen the trend.
If one wants a realistic trend, one has to compute it without extreme events like ENSO. Just as Judith Curry said: to see if the pause is still there, we have to wait until the coming La Niña has levelled out. Say two years or so.
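The effect is easy to reproduce with a few lines of Python on synthetic numbers (illustrative only, not the HadSST3 data): appending cooling months that are still above the long-term trend line makes the fitted slope steeper, not flatter:

```python
# Sketch: a trend fitted through a warm spike steepens further when the
# subsequent cooling months remain above the long-term trend line.
import numpy as np

t = np.arange(228, dtype=float)               # monthly steps, 1997-2015
y = 0.0008 * t                                # modest background trend
y[-24:] += np.linspace(0.0, 0.3, 24)          # El Nino spike peaking Dec 2015

slope_to_2015 = np.polyfit(t, y, 1)[0]

# Append 12 cooling months that stay well above the long-term mean
t2 = np.arange(240, dtype=float)
y2 = np.concatenate([y, y[-1] - np.linspace(0.02, 0.2, 12)])
slope_to_2016 = np.polyfit(t2, y2, 1)[0]

print(f"trend to Dec 2015: {slope_to_2015 * 120:.4f} per decade")
print(f"trend to Dec 2016: {slope_to_2016 * 120:.4f} per decade")
```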

Resourceguy
Reply to  Johannes S. Herbst
January 5, 2017 6:34 am

Exactly, but the headlines are already out now and all over. The details are another matter. I seem to recall the use of strawman arguments in the early attempts to kill the pause notion by saying it was pinned to a strong El Nino on the starting point (wrong). Now we have an actual case of pinning the end of the data series to another strong El Nino year. You know it’s all a big con when they get their way going and coming.

January 5, 2017 6:46 am

For comparing apples with apples, we should stick to global temperatures and not use sea-surface-only or land-only graphs.
Hadsst3 never had a pause, even before the El Nino in 2014.
Hadsst2 had a pause from 1997 until July 2012.
http://www.woodfortrees.org/plot/hadsst3gl/from:1997/plot/hadsst3gl/from:1997/to:2014/trend/plot/hadsst2gl/from:1997/to:2012.5/trend

Reply to  Johannes S. Herbst
January 5, 2017 11:14 am

Hadsst3 never had a pause, even before El Nino in 2014.

Hadsst3 did have a significant pause a while ago.

5. For Hadsst3, the slope is flat since March 2001 or 13 years, 4 months. (goes to June)

That is from my article here:
https://wattsupwiththat.com/2014/08/23/midyear-prognosis-for-records-in-2014-now-includes-june-data/
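The kind of "flat since March 2001" calculation quoted above can be sketched as a scan over candidate start dates. Everything below is synthetic and hypothetical: the series, the 60-month minimum window, and the helper name are all made up for illustration.

```python
# Sketch (synthetic data, not HadSST3) of a "how long is the pause" search:
# find the earliest start month from which the OLS trend to the end of the
# record is non-positive.
import numpy as np

t = np.arange(200)                        # hypothetical monthly record
anom = 0.05 * np.sin(0.7 * t)             # flat, wiggly "pause" segment
anom[:40] += np.linspace(-0.3, 0.0, 40)   # early warming ramp

def earliest_flat_start(y, min_len=60):
    """First index from which the trend through the last month is <= 0."""
    n = len(y)
    for start in range(n - min_len):
        slope = np.polyfit(np.arange(n - start), y[start:], 1)[0]
        if slope <= 0:
            return start
    return None

start = earliest_flat_start(anom)         # the "pause" begins here
full_trend = np.polyfit(t, anom, 1)[0]    # whole-record trend stays positive
```

On real data one would also want a significance test on each slope (for example, the ARMA(1, 1) autocorrelation correction the paper itself uses) before calling a period "flat".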

Aaron Edwards
January 5, 2017 7:23 am

Here is a link to a paper describing the difficulty of measuring SST from ship data. Accounting for, and fully understanding, the vagaries involved in compensating for a number of highly variable factors is far from an exact science. As with extracting temperature from tree-ring data, any adjustments that cool the old data set and thereby amplify the warming trend should be regarded with a critical eye. The idea of an adjustment is sound, but I cannot understand why anyone would be so naïve as to proclaim that their particular adjustment scheme is the truth. One could just as easily think of reasons why the old data was too cold and should be adjusted upward. I wouldn’t put much stock in the idea that we know enough to claim that such adjustments have revealed a massive trend of catastrophic SST warming.
LINK: J.B.R Mathews Comparing Historical and modern Methods of SST Measurements PART 1
http://www.ocean-sci.net/9/683/2013/os-9-683-2013.pdf

Editor
January 5, 2017 7:45 am

On the contrary, the fault with the paper, if it is indeed faulty, is not that the data set ends in 2015, though that is convenient if one wants a big uptick at the end.
The fault in the paper is contained in a single sentence, the last sentence of the conclusions section:

Overall, these results suggest that the new ERSSTv4 record represents the most accurate composite estimate of global SST trends during the past two decades and thus support the finding (14) that previously reported rates of surface warming in recent years have been underestimated.

Their implication – and this is certainly confirmed in their press releases and interviews – is that this SST re-evaluation means that global surface temperature trends have been underestimated, blown up to be “pause busting” everywhere (but NOT in the paper itself).
They start the paper with this perfectly reasonable sentence:

“Accurate sea surface temperature (SST) data are necessary for a wide range of applications, from providing boundary conditions for numerical weather prediction, to assessing the performance of climate modeling, to understanding drivers of marine ecosystem changes. “

No mention of global surface (land and sea) temperatures in there … no mention of climate change in there.
I feel that Judith Curry’s take is probably correct — we really can’t know what is going on here until the data sets get sorted out.
It is improper to say, “here we show that, over the last 15 years, SSTs have an increasing trend; therefore there has been no global warming pause.”
If one looks at the “data” files linked in Zeke’s email, they look like this: [very tiny excerpt]

1997.79166667 0.0177318198296
1997.875 0.0537548223484
1997.95833333 0.105809554399
1998.04166667 0.169285042633
1998.125 0.132279643598
1998.20833333 0.122622513514

These single numbers, one for each date, are claimed to be a valid representation of a reality that looks like this: [image]
My very personal opinion is that this all is mathematics gone mad …
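An aside on the excerpt above: the first column is a decimal-year timestamp, placed at mid-month in twelfths of a year. A small sketch of decoding that convention, assuming twelve equal divisions of the year (the function name is mine, not from the data files):

```python
# Decode decimal-year timestamps like those in the excerpt above,
# assuming 12 equal divisions of the year with mid-month placement.
from math import floor

def decimal_year_to_month(x):
    """Decode a decimal-year timestamp into (year, month), 1 = January."""
    year = floor(x)
    month = floor((x - year) * 12) + 1
    return year, month

decimal_year_to_month(1997.79166667)  # (1997, 10): mid-October 1997
decimal_year_to_month(1998.04166667)  # (1998, 1):  mid-January 1998
```

So 1997.79166667 in the excerpt (0.79166667 = 9.5/12) is the October 1997 monthly value.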

January 5, 2017 7:49 am

… different instruments for different eras, … different methodologies for different measuring instruments, … differing skill levels, … different standards of strictness, … errant wash ups because actual humans are not present all the time monitoring, exerting the same exacting standards everywhere consistently, … what a nightmare !
… and how does one homogenize all this within the seemingly small range of concern that we are talking about, again? … with more models ? … with more room to apply best guesses, maybe in critical areas of uncertainty?
There just seems to be considerable … flexibility, shall I say, for creativity in all this.
How can we agree on a trend, if we cannot even agree on how it is measured, or that we CAN measure it?

MarkW
Reply to  Griff
January 5, 2017 9:43 am

I wonder why Griff never includes articles from sites that actually do science?

Joel Snider
Reply to  MarkW
January 5, 2017 12:17 pm

Because that would undercut his messaging, so he has to rely on self-described ‘evangelists’ like ol’ Phil from Slate, who makes a living selling alarmism and is currently begging for money (offerings?) to keep his crusade alive.

Caligula Jones
Reply to  MarkW
January 5, 2017 12:49 pm

Give him a break. That’s “Cut and Paste 201”. He’s still a freshman.

Chris
Reply to  MarkW
January 6, 2017 1:44 am

I almost never see you post links to sites of any kind, just comments.

Toneb
Reply to  MarkW
January 7, 2017 1:04 pm

If you had bothered to look, you would have seen that there were 3 links included in the article pertaining to the subject in question.
And really?
on WUWT you say … “I wonder why Griff never includes articles from sites that actually do science?”
Oh the irony and the word beginning with “h”.

RBom
January 5, 2017 8:23 am

Looks like the buoys tend to congregate toward equatorial waters, and the satellite clocks tend to slow down (on ascending and descending orbits the “clock” position trails the real position, so that over the years the daily reading times of the clock drift across the equatorial waters). This means all the “data sets” have a “warm” bias. But as written above, the real errors (lack of accuracy, lack of precision, and incalculable uncertainty) are so large that they make the analysis and conclusions irrelevant.
Boys will be boys when attempting to reconcile the phallus with the chalice using Lunar Laser Ranging and VLBA interferometry. The phallus always wins.

knr
January 5, 2017 8:43 am

It has achieved the most important thing any piece of climate ‘science research’ can: lots of free and unquestioning plugging through the national news media. The authors must be very happy, and for them the even better news is that there will be zero coverage when the paper gets demolished.

January 5, 2017 9:56 am

The REAL question: would you drive over a bridge, or occupy a multi-story building, designed and built by any of these guys doing supposed CLI-SCI and massaging previously recorded data?
Neither would I …

Steve Oregon
January 5, 2017 11:17 am

There is one absolute. This WUWT page demolishes the alarmists’ constant claim that the science is not discussed here.
This thread is likely the most well-attended by both sides, most thorough, germane, open, frank, and public discussion anywhere, including participation by the author of the study itself.
One would hope it would produce at least some uniformly accepted, incremental baby-step conclusions.
But the resistance by some to accept the most rudimentary realities regarding the limits to the reliability and meaning of global measurements perpetuates an unnecessary obstruction to healthy dialogue.
Such a rigid impediment cannot be good for any kind of science.
But what do I know?

January 5, 2017 12:35 pm

Again, if we cannot agree on what the measuring instruments are doing, what the measuring instruments are measuring, whether the measuring instruments ARE measuring what we are in disagreement over what they are measuring, and what methods we use to homogenize what we seem to disagree we are even measuring, then I think that there is a more basic problem here …
… maybe using the common words “global temperature” is the problem, in that these words themselves do NOT describe a real measure at all. … I know that dwelling on this possibility messes up a lot of people’s fun or professional dedications, but at some point, we might have to get down to mass deconstructing the very foundation.
There is already a dedicated group trying to slay the dragon — “heat-breathing” CO2 monster. Maybe we need a dedicated group to slay the “mathmyth” (another monster sort of like the Kraken, except more insidious, in that it attacks the processes of the mind).
Let’s review, for example:
https://www.corbettreport.com/?powerpress_embed=17190-podcast&powerpress_player=mediaelement-video

Reply to  Robert Kernodle
January 5, 2017 1:04 pm

There is already a dedicated group trying to slay the dragon — “heat-breathing” CO2 monster.

The greenhouse effect from CO2 has no effect on cooling or on minimum temperature. [image]

Reply to  micro6500
January 5, 2017 1:55 pm

I have no idea what I am looking at, micro. Go ahead, call me Ig Nor Ramus.

Reply to  micro6500
January 6, 2017 7:13 am

[still trying to clarify]
So, what you are trying to visualize is the rhythmic relationship between four variables on a two-dimensional graph where most people are used to seeing only two variables at a time visualized. This is what confused me. Plus all those radiating arrows pointing to peaks and valleys cluttered my first-impression view of the graph.
Just to make sure I understand … am I correct in thinking that there are two other scales implied on the left, one for relative humidity and one for temperature?
If so, is my crude (I emphasize “crude”) redrawing of your figure anywhere close to correct? If it is, then this version is visually clearer to me. [image]
Over the course of a day, radiation is low when relative humidity is high, and radiation is high when relative humidity is low. Yes?

Reply to  Robert Kernodle
January 6, 2017 7:31 am

The simple answer is yes. If you masked off the Sun, the surface would radiate 24×7, and as you say, when relative humidity is high there is measurably less energy being radiated away from the surface; when humidity is low, the rate is high. During a sunny day there is a large net incoming energy rate, but once the Sun starts to set, temperatures fall at the humidity-limited rate if relative humidity is over about 70%; when it is less than that, the cooling rate is about 3× higher, which is the CO2-limited rate. Since relative humidity rises as temperature falls, this regulates cooling: temperatures fall quickly to near the dew point, then the cooling slows down.

Reply to  Robert Kernodle
January 5, 2017 2:28 pm

It’s 3 clear-sky days in Australia; net rad is the measured net radiation leaving the surface, and then there’s temperature and relative humidity. Every night, after air temperatures drop near dew-point temperatures and relative humidity gets into its upper range, the outgoing radiation drops by about 2/3rds. It did not get foggy or cloudy. While this is happening you can still use a telescope.
But because the transition is temperature-based, any excess warming from CO2 would be radiated away before the cooling rate slows, until that excess had been removed from the surface.
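micro6500's two-regime description can be caricatured numerically. The sketch below is a toy Euler integration, not his data or a derivation from physics: the rates, the 2 °C switching margin (standing in for relative humidity crossing ~70%), and the dew-point clamp are all illustrative assumptions.

```python
# Toy two-regime nighttime cooling: fast ("CO2-limited") cooling until the
# air nears the dew point, then roughly a third of the rate. Illustrative only.
import numpy as np

def night_cooling(t_start=25.0, dewpoint=12.0, hours=12, fast=1.5, slow=0.5):
    """Euler march of surface air temperature (degC) through one night.

    fast/slow are cooling rates in degC per hour; the regime switches when
    temperature gets within 2 degC of the (fixed) dew point, and cooling
    stops once the dew point is reached.
    """
    temps = [t_start]
    for _ in range(hours * 60):          # one-minute steps
        t = temps[-1]
        if t > dewpoint + 2.0:
            rate = fast                  # dry regime: fast radiative cooling
        elif t > dewpoint:
            rate = slow                  # humid regime: ~1/3 the rate
        else:
            rate = 0.0                   # clamped at the dew point
        temps.append(t - rate / 60.0)
    return np.array(temps)

temps = night_cooling()   # cools fast to ~14 degC, then slowly toward 12
```

The factor-of-three rate difference and the switch point are taken from micro6500's description above, not derived from radiative physics.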

Reply to  micro6500
January 5, 2017 3:33 pm

Okay, thanks.
For further clarification, if you don’t mind, what is the scale on the left (and units), and what is the progression as we move from left to right?

Reply to  Robert Kernodle
January 5, 2017 5:19 pm

Left scale is W/m^2; there are just under 4 days. The horizontal sampling interval was, I think, every 10 minutes, and Excel couldn’t display a usable x scale. It is easiest just to use the min and max to mark the passing of time.

Reply to  micro6500
January 5, 2017 3:42 pm

I clicked on your handle and went to your website, micro, and I’ll study that too.

Reply to  Robert Kernodle
January 6, 2017 7:34 am

Here is a less annotated, uncropped version: [image]

Reply to  micro6500
January 6, 2017 7:36 am

The jagged edges in the afternoon are scattered clouds. The days are warm enough to get some clouds in the afternoon; then, as it cools, it clears again. I see that in Ohio as well on clear days.

Reply to  micro6500
January 6, 2017 10:44 am

To my novice mind, this seems like profound insight, and so forgive me if I need to dwell on it further.
Could you summarize what this means? That is, in those high cooling-rate “valleys”, where relative humidity is high and radiation is low, what’s going on with the heat transfer? Is water vapor heating while CO2 is sort of “hanging out”? Or is CO2 transferring heat to the water vapor, putting that heat somewhere it will rise via convection and be radiated away in the upper atmosphere? That would mean CO2 is feeding heat into a cooling mechanism, which is what I seem to understand you might be saying.

Reply to  Robert Kernodle
January 6, 2017 11:09 am

I’m not sure. My lead theory is that it is just an IR fog that is not visible (redundant). And many times I’ve thought fog felt “warm”, and I wonder if that’s not from a high IR flux, i.e. the fog is brightly lit and we feel that IR.
Consider (since relative humidity is near 100%) that it’s the equivalent of a laser cavity while all of the lasing material is discharging, only in the case of the atmosphere there are no mirrors.
“this seems like profound insight”
I think so, hence why I’m being annoying about it and keep bringing it up. The lights will come on…

Reply to  micro6500
January 6, 2017 12:03 pm

I just had my mind blown by the physics exchange between one of the latest article authors and Nick Stokes.
Now this adds yet another mind-blowing dimension to THAT !
All this backroom technical obscurity eventually comes to light in some simple-minded generalization in the future that a fifth-grader can understand, hopefully. It’s hell what you have to go through to reach a basic conclusion. Glad somebody is doing the work, since it’s out of my little league. (^_^)

michael hart
January 5, 2017 1:23 pm

“The research was funded by Berkeley Earth.”

…Who list several anonymous foundations as donors.
I guess that’s alright then /sarc. Maybe Willie Soon should be afforded the same luxury.
The saddest thing is actually the answer you come up with when you ask yourself “why would any genuine scientist actually want to spend time producing a paper like this?”. It is clearly done for ephemeral political/funding purposes.

Shub Niggurath
Reply to  michael hart
January 5, 2017 10:10 pm

BEST is funded by the Novim group.

Gonzo
January 5, 2017 1:36 pm

I’m calling BS on their graphs/data. How can the 2010 mini El Nino SSTs be warmer than the monster 1998 El Nino SSTs? This is the same kind of “climate science” that shows 1930s US temps cooler than 1980s/90s temps.

January 5, 2017 2:00 pm

… and I am reminded of THIS:
https://wattsupwiththat.com/2011/12/31/krige-the-argo-probe-data-mr-spock/

One problem here, as with much of climate science, is that the only uncertainty that is considered is the strict mathematical uncertainty associated with the numbers themselves, dissociated from the real world. There is an associated uncertainty that is sometimes not considered. This is the uncertainty of how much your measurement actually represents the entire volume or area being measured. …
… Now recall that instead of a bathtub with lots of thermometers, for the Argo data we have a chunk of ocean that’s 380 km (240 miles) on a side with a single Argo float taking its temperature. We’re measuring down a kilometre and a half (about a mile), and we get three vertical temperature profiles a month … how well do those three vertical temperature profiles characterize the actual temperature of sixty thousand square miles of ocean? (140,000 sq. km.)
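Willis's representativeness question lends itself to a quick Monte Carlo. The sketch below is purely illustrative: it grids the box into cells, assumes the within-box SSTs are independent draws with a 0.5 °C spread (real fields are spatially correlated, which changes the answer), and asks how far a 3-profile mean can sit from the true box mean.

```python
# Monte Carlo sketch of sampling error when 3 profiles must represent one
# 380 km box. All numbers are illustrative assumptions, not Argo statistics.
import numpy as np

rng = np.random.default_rng(42)
n_cells = 40 * 40      # the box gridded into cells (hypothetical resolution)
spread = 0.5           # assumed within-box SST standard deviation, degC

errors = []
for _ in range(10_000):
    field = 15.0 + rng.normal(0.0, spread, n_cells)      # one "true" field
    profiles = rng.choice(field, size=3, replace=False)  # 3 profile sites
    errors.append(profiles.mean() - field.mean())

sampling_std = float(np.std(errors))   # ~ spread/sqrt(3), i.e. ~0.29 degC here
```

Under these assumptions the one-sigma sampling error is about 0.29 °C, which is large next to the 0.07 to 0.12 °C-per-decade trends under dispute; spatial correlation in the real ocean would shrink it, but it is not obviously negligible.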

O R
Reply to  Robert Kernodle
January 5, 2017 4:21 pm

How well can we estimate UAH TLT v6 global temperatures if we discard 99.83% of the spatial information and only measure at 18 points worldwide? Quite well, seemingly…
http://postmyimage.com/img2/581_image.png

jpatrick
January 5, 2017 7:20 pm