The latest head in the sand excuse from climate science: the global warming pause 'never happened'

From the “fighting denial with denial” department comes this desperate ploy and press release written to snare headlines with gullible media. Meanwhile, just a couple of days ago the UK Met Office said the global warming pause may continue.


Global warming ‘hiatus’ never happened, Stanford scientists say

A new study reveals that the evidence for a recent pause in the rate of global warming lacks a sound statistical basis. The finding highlights the importance of using appropriate statistical techniques and should improve confidence in climate model projections.

From STANFORD’S SCHOOL OF EARTH, ENERGY & ENVIRONMENTAL SCIENCES via press release

An apparent lull in the recent rate of global warming that has been widely accepted as fact is actually an artifact arising from faulty statistical methods, Stanford scientists say.

The study, titled “Debunking the climate hiatus” and published online this week in the journal Climatic Change, is a comprehensive assessment of the purported slowdown, or hiatus, of global warming. “We translated the various scientific claims and assertions that have been made about the hiatus and tested to see whether they stand up to rigorous statistical scrutiny,” said study lead author Bala Rajaratnam, an assistant professor of statistics and of Earth system science.

The finding calls into question the idea that global warming “stalled” or “paused” during the period between 1998 and 2013. Reconciling the hiatus was a major focus of the 2013 climate change assessment by the Intergovernmental Panel on Climate Change (IPCC).

Using a novel statistical framework that was developed specifically for studying geophysical processes such as global temperature fluctuations, Rajaratnam and his team of Stanford collaborators have shown that the hiatus never happened.

“Our results clearly show that, in terms of the statistics of the long-term global temperature data, there never was a hiatus, a pause or a slowdown in global warming,” said Noah Diffenbaugh, a climate scientist in the School of Earth, Energy & Environmental Sciences, and a co-author of the study.

Faulty ocean buoys

The Stanford group’s findings are the latest in a growing series of papers to cast doubt on the existence of a hiatus. Another study, led by Thomas Karl, the director of the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration (NOAA) and published recently in the journal Science, found that many of the ocean buoys used to measure sea surface temperatures during the past couple of decades gave cooler readings than measurements gathered from ships. The NOAA group suggested that by correcting the buoy measurements, the hiatus signal disappears.

While the Stanford group also concluded that there has not been a hiatus, one important distinction of their work is that they did so using both the older, uncorrected temperature measurements as well as the newer, corrected measurements from the NOAA group.

“By using both datasets, nobody can claim that we made up a new statistical technique in order to get a certain result,” said Rajaratnam, who is also a fellow at the Stanford Woods Institute for the Environment. “We saw that there was a debate in the scientific community about the global warming hiatus, and we realized that the assumptions of the classical statistical tools being used were not appropriate and thus could not give reliable answers.”

More importantly, the Stanford group’s technique does not rely on strong assumptions to work. “If one makes strong assumptions and they are not correct, the validity of the conclusion is called into question,” Rajaratnam said.

A different approach

Rajaratnam worked with Stanford statistician Joseph Romano and Earth system science graduate student Michael Tsiang to take a fresh look at the hiatus claims. The team methodically examined not only the temperature data but also the statistical tools scientists were using to analyze the data. A look at the latter revealed that many of the statistical techniques climate scientists were employing were ones developed for other fields such as biology or medicine, and not ideal for studying geophysical processes. “The underlying assumptions of these analyses often weren’t justified,” Rajaratnam said.

For example, many of the classical statistical tools often assume a random distribution of data points, also known as a normal or Gaussian distribution. They also ignore spatial and temporal dependencies that are important when studying temperature, rainfall and other geophysical phenomena that can change daily or monthly, and which often depend on previous measurements. For example, if it is hot today, there’s a higher chance that it will be hot tomorrow because a heat wave is already in place.
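The heat-wave point can be put in numbers. Below is a toy sketch (not the study's method; the persistence value and series length are invented) of why treating an autocorrelated series as independent draws overstates how much information it contains:

```python
import numpy as np

# Toy illustration (invented numbers): an AR(1) "temperature-like" series
# in which each value depends on the previous one, as in the heat-wave
# example above.
rng = np.random.default_rng(0)
rho, n = 0.6, 500
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()

# Sample lag-1 autocorrelation of the simulated series.
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]

# Classical i.i.d. formulas treat all n points as independent; for an
# AR(1) process the effective number of independent points is smaller.
n_eff = n * (1 - r1) / (1 + r1)
print(r1, n_eff)
```

With a persistence near 0.6, the effective sample size is roughly a quarter of the nominal one, so error bars computed under an independence assumption come out far too tight.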

Global surface temperatures are similarly linked, and one of the clearest examples of this can be found in the oceans. “The ocean is very deep and can retain heat for a long time,” said Diffenbaugh, who is also a senior fellow at the Woods Institute. “The temperature that we measure on the surface of the ocean is a reflection not just of what’s happening on the surface at that moment, but also the amount of trapped heat beneath the surface, which has been accumulating for years.”

While designing a framework that would take temporal dependencies into account, the Stanford scientists quickly ran into a problem. Those who argue for a hiatus claim that during the 15-year period between 1998 and 2013, global surface temperatures either did not increase at all, or they rose at a much slower rate than in the years before 1998. Statistically, however, this is a hard claim to test because the number of data points for the purported hiatus period is relatively small, and most classical statistical tools require large numbers of data points.

There is a workaround, however. A technique that Romano invented in 1992, called “subsampling,” is useful for discerning whether a variable – be it surface temperature or stock prices – has changed in the short term based on a limited amount of data. “In order to study the hiatus, we took the basic idea of subsampling and then adapted it to cope with the small sample size of the alleged hiatus period,” Romano said. “When we compared the results from our technique with those calculated using classical methods, we found that the statistical confidence obtained using our framework is 100 times stronger than what was reported by the NOAA group.”
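A bare-bones sketch of the subsampling idea is below. This is illustrative only, not the authors' adapted method; the series, trend, and block length are all invented. The recent 15-point trend is judged against the distribution of trends over every overlapping 15-point block of the longer record:

```python
import numpy as np

# Simplified subsampling sketch (invented data): compare the trend of the
# final 15-point block against trends of all overlapping 15-point blocks.
rng = np.random.default_rng(1)
n, b = 120, 15                       # record length, block ("hiatus") length
series = 0.01 * np.arange(n) + rng.normal(0, 0.1, n)  # trend plus noise

def slope(y):
    """Least-squares trend of a series against its time index."""
    t = np.arange(len(y))
    return np.polyfit(t, y, 1)[0]

recent = slope(series[-b:])          # trend in the final 15-point block
# Trends of every overlapping block; dependence within each block is kept.
block_slopes = np.array([slope(series[i:i + b]) for i in range(n - b + 1)])

# Fraction of historical blocks with a trend at least as low as the recent one.
frac_lower = np.mean(block_slopes <= recent)
print(recent, frac_lower)
```

If that fraction is not unusually small, the recent block's trend is statistically unremarkable relative to the rest of the record, which is the flavor of the comparison described above.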

The Stanford group’s technique also handled temporal dependency in a more sophisticated way than in past studies. For example, the NOAA study accounted for temporal dependency when calculating sea surface temperature changes, but it did so in a relatively simple way, with one temperature point being affected only by the temperature point directly prior to it. “In reality, however, the temperature could be influenced by not just the previous data points, but six or 10 points before,” Rajaratnam said.

Pulling marbles out of a jar

To understand how the Stanford group’s subsampling technique differs from the classical techniques that had been used before, imagine placing 50 colored marbles, each one representing a particular year, into a jar. The marbles range from blue to red, signifying different average global surface temperatures.

“If you wanted to determine the likelihood of getting 15 marbles of a certain color pattern, you could repeatedly pull out 15 marbles at a time, plot their average color on a graph, and see where your original marble arrangement falls in that distribution,” Tsiang said. “This approach is analogous to how many climate scientists had previously approached the hiatus problem.”

In contrast, the new strategy that Rajaratnam, Romano and Tsiang invented is akin to stringing the marbles together before placing them into the jar. “Stringing the marbles together preserves their relationships to one another, and that’s what our subsampling technique does,” Tsiang said. “If you ignore these dependencies, you can alter the strength of your conclusions or even arrive at the opposite conclusion.”
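The marble analogy can be written out in code (all numbers here are invented for illustration): drawing 15 values at random destroys the year-to-year ordering, while taking a contiguous block, the "stringed marbles," preserves it.

```python
import numpy as np

# A drifting, dependent "temperature-like" series of 50 years.
rng = np.random.default_rng(3)
years = 50
temps = np.cumsum(rng.normal(0.1, 0.05, years))

def lag1(x):
    """Lag-1 autocorrelation, a simple measure of year-to-year dependence."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

shuffled_draw = rng.choice(temps, size=15, replace=False)  # loose marbles
start = rng.integers(0, years - 15)
block_draw = temps[start:start + 15]                       # stringed marbles

# The contiguous block retains the series' strong dependence; the random
# draw generally does not.
print(lag1(block_draw), lag1(shuffled_draw))
```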

When the team applied their subsampling technique to the temperature data, they found that the rate of increase of global surface temperature did not stall or slow down from 1998 to 2013 in a statistically significant manner. In fact, the rate of change in global surface temperature was not statistically distinguishable between the recent period and other periods earlier in the historical data.

The Stanford scientists say their findings should go a long way toward restoring confidence in the basic science and climate computer models that form the foundation for climate change predictions.

“Global warming is like other noisy systems that fluctuate wildly but still follow a trend,” Diffenbaugh said. “Think of the U.S. stock market: There have been bull markets and bear markets, but overall it has grown a lot over the past century. What is clear from analyzing the long-term data in a rigorous statistical framework is that, even though climate varies from year-to-year and decade-to-decade, global temperature has increased in the long term, and the recent period does not stand out as being abnormal.”

###

Debunking the climate hiatus

Bala Rajaratnam, Joseph Romano, Michael Tsiang, Noah S. Diffenbaugh

Abstract

The reported “hiatus” in the warming of the global climate system during this century has been the subject of intense scientific and public debate, with implications ranging from scientific understanding of the global climate sensitivity to the rate at which greenhouse gas emissions would need to be curbed in order to meet the United Nations global warming target. A number of scientific hypotheses have been put forward to explain the hiatus, including both physical climate processes and data artifacts. However, despite the intense focus on the hiatus in both the scientific and public arenas, rigorous statistical assessment of the uniqueness of the recent temperature time-series within the context of the long-term record has been limited. We apply a rigorous, comprehensive statistical analysis of global temperature data that goes beyond simple linear models to account for temporal dependence and selection effects. We use this framework to test whether the recent period has demonstrated i) a hiatus in the trend in global temperatures, ii) a temperature trend that is statistically distinct from trends prior to the hiatus period, iii) a “stalling” of the global mean temperature, and iv) a change in the distribution of the year-to-year temperature increases. We find compelling evidence that recent claims of a “hiatus” in global warming lack sound scientific basis. Our analysis reveals that there is no hiatus in the increase in the global mean temperature, no statistically significant difference in trends, no stalling of the global mean temperature, and no change in year-to-year temperature increases.

The paper is open access, read it here


275 Comments
Sasha
September 17, 2015 7:40 am

Keep torturing the data until it says what you want it to say.

emsnews
Reply to  Sasha
September 17, 2015 8:10 am

‘Ve vill make you talk!’

Scott
Reply to  emsnews
September 17, 2015 8:39 am

Statisticians? We don’t need no stinking classic statisticians!
Off to the Gaussian Gulag with you till you confess your coldness to the sanctity of the God of Warm! Good Grief!

RWturner
Reply to  emsnews
September 17, 2015 10:54 am

This paper is a major breakthrough! It used to be garbage in, garbage out. Now it’s garbage in, polished-garbage out!

D Mew
Reply to  emsnews
September 17, 2015 11:48 am

“Lies, damned lies, and statistics”
Benjamin Disraeli

Mike
Reply to  emsnews
September 17, 2015 2:36 pm

The study, titled “Debunking the climate hiatus” and published online this week in the journal Climatic Change

Well, with a title that starts with a word like “debunking” it is blatantly obvious we are dealing with internet trolls and not scientists.
“Debunking” is argumentative and polemic; this is an attempt at political point scoring, not a scientific study. Do they even hope to be taken seriously with a title like that?

simple-touriste
Reply to  emsnews
September 17, 2015 4:23 pm

Debunking is what you do to people who talk through the wrong end, see Moon landing hoaxers, or “Moon Landing is a Hoax…”. You debunk fraudulent made-up stories (like the tales of G.E. Séralini, who uses “encrypted emails” for fear of Monsanto interference, when Monsanto for years failed to take any action against him besides refuting his crappy anti-glyphosate and anti-GMO studies).
Showing that a scientist is in error is not debunking, it is refuting. Showing that someone who pretends to do science is in fact doing tea-leaf reading is a debunking.
Knowing when to debunk and when to refute is really epistemology 101.
It may be that current satellite data is not enough to conclude anything about the climate system… anything.

george e. smith
Reply to  emsnews
September 17, 2015 4:31 pm

The various algorithms of statistical mathematics are thoroughly spelled out in numerous standard text books.
” Average ” is obtained by adding all of the members of the data set, and dividing that total by the number of elements in the data set.
The result is always exact, because all the elements of the data set, are exact real numbers.
The result (the average) is always correct regardless of the elements in the data set; it works for any finite set of real numbers. It works whether the numbers are unrelated to each other in any way, or whether they are calculated from some closed form mathematical equation.
The same goes for all the other algorithms of statistical mathematics. They all give a specific result for any finite data set of exact real numbers; and there is no restriction on what those real numbers are.
Now when I say the numbers of the data set are exact; that is not the same as saying they represent any actual real world value of anything; they are just numbers.
Where the big mistake is made is in asserting that the results of any of those statistical mathematics algorithms actually mean anything.
They don’t mean anything except that which they are defined to be in the textbooks.
So the ” average ” of a data set means just that; it is the average.
The ” median ” of the same data set is calculated from a different algorithm from ” average ” and usually gives a different result which, is the median of that data set; by definition. And it doesn’t mean ANYTHING else.
g
So our new discovery for today is that someone has described a new statistical mathematics algorithm, different from those we have all seen before; so it generally gives a different result for a given data set; but it too still means nothing, except that which it has been defined to compute.
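The mean-versus-median point above is easy to see in miniature (numbers invented for illustration): the two algorithms are different by definition and generally give different results for the same data set.

```python
# Same data set, two different algorithms, two different summaries.
data = [1.0, 2.0, 2.0, 3.0, 17.0]

mean = sum(data) / len(data)        # add all elements, divide by the count

ordered = sorted(data)
median = ordered[len(ordered) // 2] # middle element of an odd-length set

print(mean, median)  # 5.0 2.0
```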

Greg Cavanagh
Reply to  emsnews
September 17, 2015 6:03 pm

Lewis, insurance industries are businesses. The aim of a business is to make money and stay in business. The statistics they use attempt to keep the books balanced in their favour. They need only be sufficiently related to actual world events to reliably keep the business in business.
George is correct in that statistics is only the manipulation of numbers. Read George’s text again; he qualifies it quite specifically.

george e. smith
Reply to  Sasha
September 17, 2015 4:12 pm

“””””…..Statistically, however, this is a hard claim to test because the number of data points for the purported hiatus period is relatively small, and most classical statistical tools require large numbers of data points……”””””
Well, actually, you have precisely one data point: the history from circa 1987/8 to 2015.
We have no way to rerun it to get another data point.
g

ScienceABC123
September 17, 2015 7:45 am

Translation: “The models can’t be wrong! Change the data!”

Leigh
Reply to  ScienceABC123
September 17, 2015 12:55 pm

Exactly!
Adjust the adjustments (UP).
There, all fixed just in time for the Paris “fund raiser”.
How convenient.

george e. smith
Reply to  ScienceABC123
September 17, 2015 4:41 pm

I think they are claiming (in effect) that the surface based data sets do not conform to the Nyquist sampling criterion for sampled data systems.
But we always knew that was so.
And what is this bunk about the ocean buoys being wrong, and the bucket of water on a ship’s deck being correct ??
The ocean buoys showed that water temperature and air temperature are different, and are not correlated.
Well John Christy told us that in Jan 2001.
g
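The Nyquist point is a standard one and easy to demonstrate with toy frequencies (not climate data): a signal sampled at less than twice its frequency is indistinguishable from a slower alias at the sample points.

```python
import numpy as np

# A 1 Hz signal sampled at 1.25 Hz, below the 2 Hz Nyquist rate.
f_true, f_samp = 1.0, 1.25
t = np.arange(0, 20, 1 / f_samp)            # under-sampled sample times
samples = np.sin(2 * np.pi * f_true * t)

# The classic alias at |f_true - f_samp| = 0.25 Hz matches every sample.
f_alias = abs(f_true - f_samp)
alias = np.sin(2 * np.pi * -f_alias * t)

max_diff = np.max(np.abs(samples - alias))
print(max_diff)  # essentially zero: the two signals agree at every sample
```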

dennisambler
Reply to  george e. smith
September 18, 2015 1:35 am

http://www.21stcenturysciencetech.com/articles/ocean.html
The late Oceanographer Dr Robert Stevenson was a “bucket man” and had this to say when writing a critique of Levitus et al (2000):
“Surface water samples were taken routinely, however, with buckets from the deck and the ship’s engine-water intake valve. Most of the thermometers were calibrated into 1/4-degrees Fahrenheit. They came from the U.S. Navy.
Galvanized iron buckets were preferred, mainly because they lasted longer than the wood and canvas. But, they had the disadvantage of cooling quickly in the winds, so that the temperature readings needed to be taken quickly.
I would guess that any bucket-temperature measurement that was closer to the actual temperature by better than 0.5° was an accident, or a good guess. But then, no one ever knew whether or not it was good or bad. Everyone always considered whatever reading was made to be precise, and they still do today.
The archived data used by Levitus, and a plethora of other oceanographers, were taken by me, and a whole cadre of students, post-docs, and seagoing technicians around the world. Those of us who obtained the data, are not going to be snowed by the claims of the great precision of “historical data found stored in some musty archives.”

Joe Crawford
September 17, 2015 7:47 am

Guess I’ll wait to see what Steve McIntyre and friends have to say about this technique. I’m sure they will digest it in detail. Until then, it’s just another climate paper.

Anne Ominous
Reply to  Joe Crawford
September 17, 2015 12:47 pm

Originally the subsampling technique, described here:
http://home.uchicago.edu/~amshaikh/webfiles/subsampling_topics.pdf
was for estimating parameters in sets of data that were simply too large to handle, by using samples of the data. In contrast it appears that here, they used this technique to estimate what larger sets of data MIGHT look like, using relatively little data. In other words, they made a huge leap from making inferences about large sets of data from samples, to inferring properties of a hypothetical large data set from a small one.
This is interesting. From what I can tell, in effect they reversed a statistical sampling technique and used it instead to extrapolate.
I’m not a statistician, but it would take some strong evidence to convince me that the technique is valid for this purpose.

SteveT
Reply to  Anne Ominous
September 18, 2015 4:19 am

This is interesting. From what I can tell, in effect they reversed a statistical sampling technique and used it instead to extrapolate.
**********************************************************************************
What’s all the fuss about, if reversing statistical data is alright for Michael Mann (tiljander) then it’s got to be all right…………oh hang on a minute, let me think this through!
SteveT

Tim
Reply to  Joe Crawford
September 17, 2015 6:20 pm

Yes that is true, but the problem is that as always, they get to grab the headlines first no matter how much BS it is. We are always playing defence.

cirby
September 17, 2015 7:49 am

“Our new technique basically consists of adding one. To almost everything. We get a much happier answer that way. If we used the old techniques, we kept getting the wrong answers, so it’s obvious that pre-additive statistics doesn’t work right with AGW theory.”

Reply to  cirby
September 17, 2015 10:00 am

The part I really like is that you can’t tell the recent period from any previous historical period. So isn’t this an argument for “Earth is recovering from an ice age and has been generally warming ever since the last period of glaciation some 15,000 years ago”? Nothing to see here; take down the bunting on the stage and tell all those dignitaries attending Paris in December to stay home!

September 17, 2015 7:49 am

Is this like ‘find a temperature you like and use it 10 times’?

mpaul
September 17, 2015 7:51 am

Once again, rather than using the high tech Satellite data, they use the less reliable and less accurate surface datasets — all of which are heavily adjusted by partisans. Simple question — if you applied the same method to RSS or UAH what would the results be?

DWR54
Reply to  mpaul
September 17, 2015 9:04 am

Why would scientists who allegedly have a warming agenda (motive) and who are able to adjust the data (opportunity) purposely introduce a pause in warming?

Jimbo
Reply to  DWR54
September 17, 2015 9:52 am

An apparent lull in the recent rate of global warming that has been widely accepted as fact is actually an artifact arising from faulty statistical methods, Stanford scientists say.

What happens to all those papers which cited any of the following papers?
PS I thought we had a consensus on the standstill. So much for consensus eh.

Dr. Judith L. Lean – Geophysical Research Letters – 15 Aug 2009
“…This lack of overall warming is analogous to the period from 2002 to 2008 when decreasing solar irradiance also countered much of the anthropogenic warming…”
doi:10.1029/2009GL038932
__________________
Prof. Shaowu Wang et al – Advances in Climate Change Research – 2010
Does the Global Warming Pause in the Last Decade: 1999-2008?
“…The decade of 1999-2008 is still the warmest of the last 30 years, though the global temperature increment is near zero;….The models did not provide answers to the physical causes for warming pause. The mechanism still remains controversial….”
doi:10.3724/SP.J.1248.2010.00049
__________________
Dr. B. G. Hunt – Climate Dynamics – February 2011
The role of natural climatic variation in perturbing the observed global mean temperature trend
“Controversy continues to prevail concerning the reality of anthropogenically-induced climatic warming. One of the principal issues is the cause of the hiatus in the current global warming trend.”
doi:10.1007/s00382-010-0799-x
__________________
Dr. Robert K. Kaufmann – PNAS – 2nd June 2011
“…Given the widely noted increase in the warming effects of rising greenhouse gas concentrations, it has been unclear why global surface temperatures did not rise between 1998 and 2008. We find that this hiatus in warming coincides…”
doi: 10.1073/pnas.1102467108
__________________
Dr. Gerald A. Meehl – Nature Climate Change – 18th September 2011
“There have been decades, such as 2000–2009, when the observed globally averaged surface-temperature time series shows little increase or even a slightly negative trend1 (a hiatus period)….”
doi:10.1038/nclimate1229
__________________
Met Office Blog – Dave Britton (10:48:21) – 15 October 2012
“We agree with Mr Rose that there has been only a very small amount of warming in the 21st Century. As stated in our response, this is 0.05 degrees Celsius since 1997 equivalent to 0.03 degrees Celsius per decade.”
metofficenews.wordpress.com/2012/10/14/met-office-in-the-media-14-october-2012
__________________
Dr. James Hansen – NASA GISS – 15 January 2013
Global Temperature Update Through 2012
“…The 5-year mean global temperature has been flat for a decade, which we interpret as a combination of natural variability and a slowdown in the growth rate of the net climate forcing…”
columbia.edu/~jeh1/mailings/2013/20130115_Temperature2012.pdf
__________________
Dr. Virginie Guemas – Nature Climate Change – 1 March 2013
“…Despite a sustained production of anthropogenic greenhouse gases, the Earth’s mean near-surface temperature paused its rise during the 2000–2010 period…”
doi:10.1038/nclimate1863
__________________
Professor Masahiro Watanabe – Geophysical Research Letters – 28 June 2013
“The weakening of k commonly found in GCMs seems to be an inevitable response of the climate system to global warming, suggesting the recovery from hiatus in coming decades.”
doi:10.1002/grl.50541
__________________
Met Office – July 2013
The recent pause in global warming, part 3: What are the implications for projections of future warming?
….Executive summary
The recent pause in global surface temperature rise does not materially alter the risks of substantial warming of the Earth by the end of this century.”
Source: metoffice.gov.uk/media/pdf/3/r/Paper3_Implications_for_projections.pdf
__________________
Dr. Yu Kosaka et. al. – Nature – 28 August 2013
Recent global-warming hiatus tied to equatorial Pacific surface cooling
Despite the continued increase in atmospheric greenhouse gas concentrations, the annual-mean global temperature has not risen in the twenty-first century…”
doi:10.1038/nature12534
__________________
Dr. Kevin E. Trenberth – Nature News Feature – 15 January 2014
Climate change: The case of the missing heat
Sixteen years into the mysterious ‘global-warming hiatus’, scientists are piecing together an explanation.
“The 1997 to ’98 El Niño event was a trigger for the changes in the Pacific, and I think that’s very probably the beginning of the hiatus,” says Kevin Trenberth, a climate scientist…
doi:10.1038/505276a
__________________
Dr. Gabriel Vecchi – Nature News Feature – 15 January 2014
“A few years ago you saw the hiatus, but it could be dismissed because it was well within the noise,” says Gabriel Vecchi, a climate scientist……“Now it’s something to explain.”…..
doi:10.1038/505276a
__________________
Dr. Jana Sillmann et al – IopScience – 18 June 2014
Observed and simulated temperature extremes during the recent warming hiatus
“This regional inconsistency between models and observations might be a key to understanding the recent hiatus in global mean temperature warming.”
doi:10.1088/1748-9326/9/6/064023
__________________
Dr. Kevin E. Trenberth et al – Nature Climate Change – 11 July 2014
Seasonal aspects of the recent pause in surface warming
Factors involved in the recent pause in the rise of global mean temperatures are examined seasonally. For 1999 to 2012, the hiatus in surface warming is mainly evident in the central and eastern Pacific…….atmospheric circulation anomalies observed globally during the hiatus.
doi:10.1038/nclimate2341
__________________
Dr. Young-Heon Jo et al – American Meteorological Society – 24 October 2014
Climate signals in the mid to high latitude North Atlantic from altimeter observations
“…..Furthermore, the low-frequency variability in the SPG relates to the propagation of Atlantic meridional overturning circulation (AMOC) variations from the deep-water formation region to mid-latitudes in the North Atlantic, which might have the implications for recent global surface warming hiatus.”
http://dx.doi.org/10.1175/JCLI-D-12-00670.1
__________________
Dr. Hans Gleisner – Geophysical Research Letters – 28 January 2015
Recent global warming hiatus dominated by low latitude temperature trends in surface and troposphere data
Over the last 15 years, global mean surface temperatures exhibit only weak trends…..Omission of successively larger polar regions from the global-mean temperature calculations, in both tropospheric and surface data sets, shows that data gaps at high latitudes can not explain the observed differences between the hiatus and the pre-hiatus period….
http://dx.doi.org/10.1002/2014GL062596
__________________
Dr. Hervé Douville et al – Geophysical Research Letters – 10 February 2015
The recent global-warming hiatus: What is the role of Pacific variability?
The observed global mean surface air temperature (GMST) has not risen over the last 15 years, spurring outbreaks of skepticism regarding the nature of global warming and challenging the upper-range transient response of the current-generation global climate models….
http://dx.doi.org/10.1002/2014GL062775
__________________
Dr. Veronica Nieves – Science – 31 July 2015
Recent hiatus caused by decadal shift in Indo-Pacific heating
Recent modeling studies have proposed different scenarios to explain the slowdown in surface temperature warming in the most recent decade…..
http://www.sciencemag.org/content/349/6247/532.short

JimS
Reply to  DWR54
September 17, 2015 10:27 am

But Jimbo, the 97% consensus is a rather limited one, and does not accommodate every hypothesis in climate science. For instance, one can find paper after paper claiming that climate change is the cause of extreme weather events; but then, you can find an equal number of papers claiming that climate change will moderate weather to such an extent that extreme weather events will disappear.

Catcracking
Reply to  mpaul
September 17, 2015 11:43 am

“Once again, rather than using the high tech Satellite data, they use the less reliable and less accurate surface datasets”
Yes, that is a big problem for credibility with this paper: how can any honest scientist totally ignore the satellite data without at least acknowledging its existence and explaining its impact, or why it is not relevant?
Absent that the study is just a waste of taxpayer $$$, but what is new.

MarkW
Reply to  Catcracking
September 17, 2015 11:53 am

Less accurate and less reliable, but much more manipulated.

adrian smits
Reply to  Catcracking
September 17, 2015 2:17 pm

Especially when the satellite data is backed up by the radiosonde balloon weather system which appears to be in close agreement with the satellites.

David A
Reply to  mpaul
September 17, 2015 12:26 pm

Simple, 1998 was far warmer than any year sense. The “scientists” doing this study appear to think temperature readings before other readings somehow affect current readings. Nonsense, T is what it is, period.
1998 was far warmer than any year sense.

TRM
Reply to  David A
September 17, 2015 12:52 pm

Since …. there is no sense in this study 🙂

David A
Reply to  David A
September 17, 2015 3:42 pm

Yes and yes

Reply to  mpaul
September 17, 2015 7:34 pm

if you applied the same method to RSS or UAH what would the results be?

Or indeed if they analysed from 1970 instead of 1950 what would their results be? I feel this paper is far from “robust”.

September 17, 2015 7:52 am

“By blending fake data with massively adjusted, homogenized, and infilled data, no one can say we reached our conclusion first, then invented some new techniques to prove it.”

Logoswrench
September 17, 2015 7:53 am

Funny how everything “faulty or noisy” only happens in the cooling or neutral direction.
Cut the funding cut the nonsense.

Jimbo
Reply to  Logoswrench
September 17, 2015 9:56 am

If surface temperature starts trending upwards they will say that the pause (that never happened) has now ended. 😉 Heads we win, tails you lose. This is why it’s now known as Climastrology.

Reply to  Jimbo
September 17, 2015 10:20 am

And the practitioners can be called Climate Scientologists…

PiperPaul
Reply to  Jimbo
September 17, 2015 10:22 am

If surface temperature starts trending upwards they will say that the pause (that never happened) has now ended.
I suspect you are right. And then they’ll claim that they never claimed there wasn’t a pause. And the media won’t look into things because memory hole and incompetence.

jl
Reply to  Jimbo
September 17, 2015 5:27 pm

I sorta like Climate Astrology. “You will meet someone interesting” is replaced by “you will experience warmer temperatures………someday.”

David
September 17, 2015 7:53 am

There are statistics and damn lies!! If you have one foot in boiling water and the other foot in iced water, statistically you should be quite comfortable.

Keith Willshaw
Reply to  David
September 17, 2015 8:17 am

The best summation of such methods appeared in the letters page of a newspaper (The National Observer) in 1891:
“Sir, —It has been wittily remarked that there are three kinds of falsehood:
the first is a ‘fib,’ the second is a downright lie, and the third and most aggravated is statistics. It is on statistics and on the absence of statistics that the advocate relies…”

jeanparisot
Reply to  Keith Willshaw
September 17, 2015 11:42 am

and now we have models.

MarkW
September 17, 2015 7:55 am

“Using a novel statistical framework that was developed specifically for studying geophysical processes such as global temperature fluctuations”
Translation, we kept torturing the data until it eventually told us what we wanted to hear.
Reminds me of the “novel” statistical tricks used to create the original hockey stick.

emsnews
Reply to  MarkW
September 17, 2015 8:13 am

‘Ve vill make you hot!’ Yes, torture the thermometer until it gives up.

Reply to  MarkW
September 17, 2015 11:55 am

No, no, the Novel used was “Earth In the Balance.” 😉

Gregory Lawn
September 17, 2015 7:58 am

They seem to ignore the satellite data.
They ignore their own buoys and use what, bucket measurements and ship intake measurements from where?
They still utilize the buggered up data from land based instrument readings with the problems of UHI’s, and cherry picked locations.
Garbage in, garbage out.

latecommer2014
Reply to  Gregory Lawn
September 17, 2015 9:12 am

They use only what’s useful in backing their foregone conclusions and call it “science”. They need to be prosecuted for their crimes against humanity as well as slander and libel against all honest scientists. They cannot truly believe their lies!

MarkW
September 17, 2015 7:58 am

They developed a technique that, when used on unadjusted data, showed no pause.
Then they used it on data that had been adjusted in order to decrease the size of the pause, and once again it showed no pause.
And in their minds this proves that their technique must be valid?

September 17, 2015 8:01 am

This is like them saying we drank one beer on Monday, two beers on Tuesday, and three beers on Wednesday, Thursday and Friday, but because our beer consumption was rising earlier in the week we actually drank four beers on Thursday and five beers on Friday. Let them try getting those expenses through the accounts department without a receipt for those three extra beers.

Alx
September 17, 2015 8:03 am

“the Stanford group’s technique does not rely on strong assumptions to work. If one makes strong assumptions and they are not correct, the validity of the conclusion is called into question,” Rajaratnam said.

My god, the blindness is astounding. The assumptions in his paper are that his “new and improved” method is the indisputably correct method, and that the relationships between all factors describing temperature at a given point in time are completely proven and understood by him.
What a dolt. Until Rajaratnam’s assumptions are proven over time with experimentation and evidence, they remain assumptions, making any conclusions drawn from them pending at best.

MarkW
Reply to  Alx
September 17, 2015 10:20 am

I wonder if there are any actual statisticians in that group.
Climate science has a long history of using “unique” statistical methods without actually bothering to understand statistics.

jclarke341
Reply to  Alx
September 17, 2015 9:47 pm

I loved this quote, because it reminded me of the ‘strong’ assumption of a positive, water vapor feedback that has yet to be found anywhere outside of a theoretical climate model. In an attempt to defend the models, Rajaratnam inadvertently brings up why all of the climate models are crap; the weakness of a strong assumption as the main component of a theory!

Sturgis Hooper
September 17, 2015 8:03 am

The hiatus in the scientific method since 1982 continues.

AnonyMoose
September 17, 2015 8:05 am

I wonder whether their method is able to detect the warming since start of the temperature record, or if the method also thinks that there’s no change in that.

Reply to  AnonyMoose
September 17, 2015 8:44 am

That was my thought.
Particularly the 15 years leading up to Hansen’s testimony to the US Congress.
If it can’t find that then they have officially debunked AGW as an issue requiring specific actions to be taken.
Frankly, I’m surprised they didn’t look.

Hivemind
September 17, 2015 8:06 am

But statistical analysis already had non-Gaussian analysis tools. Why did the authors have to use a tool that was made up by one of the authors (Romano)? Twenty years ago, granted, but still one of the authors’ own pet tools.

September 17, 2015 8:08 am

I am no statistician, but something smells very wrong about the technique described. I eagerly await McIntyre’s input.

J
Reply to  TonyG
September 17, 2015 9:39 am

Amen to that TonyG,
Let’s see what Steve McIntyre at Climate Audit says about this sub-sampling.
All of these bespoke science results are so transparently timed (and created) to influence discussion before the Paris climate get together !

Stephen Richards
Reply to  TonyG
September 17, 2015 12:25 pm

I doubt that SteveMc will bother with such a puerile paper.

Ian Magness
September 17, 2015 8:12 am

So, have I got this right? The modern data taken over the last 20 years with modern instruments is all crap (so we adjust it) but the historical data is more accurate and reliable? Really?
With regard to the whole approach, this is not serious science. Proper science, or indeed mathematics, wouldn’t start with the idea that “we know the solution so let’s adjust the methodology and data until we reach that desired answer”. Proper science would say “let’s look at the hard data and the risk/uncertainty factors inherent in it and see what it’s telling us with any degree of certainty”. Or is that just too simple?

emsnews
Reply to  Ian Magness
September 17, 2015 8:15 am

An army of world leaders insist on lies so they can tax thin air.
So any excuse, any tortured data to justify this is good in their eyes but the problem is, will world populations tolerate these immense taxes on nothing? I doubt it seriously.

Latitude
September 17, 2015 8:15 am

…so they are changing the data again

emsnews
Reply to  Latitude
September 17, 2015 8:16 am

And the goal posts move at warp speed.

ripshin
Editor
September 17, 2015 8:18 am

In the interest of taking this at face value, and of challenging my own assumptions and beliefs, I have some questions. The clearest evidence of the hiatus is the satellite record, which demonstrates some 17 to 18 years without a warming trend. So:
1) Is the satellite data, UAH & RSS, really some sort of statistical analysis? I assumed each month’s temp anomaly was an average over that whole month, which I guess is technically a statistical tool, but hardly the type of analysis that one could argue is “inappropriate”. Am I missing something here?
2) Given that around half of the satellite records shows a lack of warming, how is it possible to claim that this is an insufficient quantity of data points? Is there any legitimacy to this claim?
3) With such a precise measurement system, what type of statistical analysis is really needed? Can’t we just, like, look at the observations and SEE what happened? Am I being naively ignorant here?
Am I missing something obvious here?
rip

DWR54
Reply to  ripshin
September 17, 2015 8:56 am

Re 3: Can’t we just, like, look at the observations and SEE what happened?
_______________
That’s what they say they did and what they accuse others of not having done thoroughly enough. The paper is open access and available for download here: http://link.springer.com/article/10.1007/s10584-015-1495-y

Ockham
Reply to  ripshin
September 17, 2015 8:59 am

From what I remember of my statistics, the only reason to ‘subsample’ a population, is the impossibility of sampling the entire population. The surface temperature record or the buoy temperature record by design and necessity, is already a subsample. These are subsamples of the population of temperatures everywhere on the planet at any given moment in time. Why, methinks, do they have to apply their super-special subsampling technique to that which is already a subsample? Subsampling implies missing information. So, to me it seems, they intentionally lose information to gain a trend.
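The subsampling idea being discussed can be sketched in a few lines. This is only an illustration of generic moving-block subsampling on synthetic data, not the Rajaratnam/Romano procedure itself (whose rescaling and block-size rules are more involved): recompute the trend over many overlapping blocks of the series and look at the spread of the resulting slopes.

```python
import numpy as np

def block_subsample_trends(y, block_len):
    """Recompute the OLS trend over every overlapping block of the
    series; the spread of block trends gives a rough picture of the
    trend's sampling variability (a sketch of moving-block
    subsampling, not the paper's exact method)."""
    t = np.arange(len(y))
    trends = []
    for start in range(len(y) - block_len + 1):
        yy = y[start:start + block_len]
        tt = t[start:start + block_len]
        slope = np.polyfit(tt, yy, 1)[0]  # OLS slope of this block
        trends.append(slope)
    return np.array(trends)

# Toy series: linear warming plus AR(1) autocorrelated noise
rng = np.random.default_rng(0)
noise = np.zeros(200)
for i in range(1, 200):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(scale=0.1)
y = 0.01 * np.arange(200) + noise

trends = block_subsample_trends(y, block_len=60)
lo, hi = np.percentile(trends, [2.5, 97.5])
print(f"spread of block trends: [{lo:.4f}, {hi:.4f}] per step")
```

The point of the technique is exactly what the comment notes: a single 15-year window is itself one block out of many, so its trend alone says little about the long-run behaviour.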

Reply to  ripshin
September 17, 2015 5:33 pm

ripshin September 17, 2015 at 8:18 am says:
“With such a precise measurement system, what type of statistical analysis is really needed? Can’t we just, like, look at the observations and SEE what happened.”
I agree, none is needed. I go with Ernest Rutherford who told us that “…If your experiment needs statistics you should have done a better experiment.”
In our case the better experiment would be using satellites. Ground-based data are corrupted and falsified. Here is an example. In 2008 I was researching satellite data for my book “What Warming.” I accidentally discovered that there had been no warming in the eighties and nineties. It extended from 1979 to 1997, an 18-year stretch, just like the present hiatus. A graph of it is found as figure 15 in my book.

But cross-checking with ground-based sources, I found that they were showing a phony “late twentieth century warming” in its place. That same phony warming is also shown by the Stanford worthies who authored the article. They actually don’t know that there was another hiatus before the current one, nor do they know how it was suppressed. I discovered that a source for that phony warming was HadCRUT3 and put a warning about it into the preface of the book when it came out. Nothing happened.

Later I discovered that GISS and NCDC had been co-conspirators with HadCRUT in this cover-up. They had all used the same computer to adjust their output, and the computer left its footprints in exactly the same places in their publicly available temperature curves. The footprints are still there, since the nineties when the deed was done. They constitute sharp upward spikes that look like noise. Two of them sit directly on top of the super El Niño peak. I have periodically mentioned this but have been entirely ignored. This allegedly scientific organization has no discipline and no ethical guidelines, and their so-called climate “scientists” ignore any complaints from the outside.

RoHa
Reply to  ripshin
September 17, 2015 5:39 pm

“With such a precise measurement system, what type of statistical analysis is really needed? Can’t we just, like, look at the observations and SEE what happened? Am I being naively ignorant here?”
I am naively ignorant, so I too can’t see why any fancy statistics are needed. If the temp is measured the same way each time, and the raw figures show a flat line, what more do we need?

Gary Pearse
September 17, 2015 8:21 am

So the hiatus, as measured by surface thermometers and corroborated by satellite measurements, is a figment of bad statistics by those doing the temp records. Well, indeed, the problem with the 150-year trend has been the very egregious use of subjective data manipulation, as the authors point out: jacking up recent temperatures, shoving down past temperatures and especially submerging the real record period of the 1930s/40s. This of course is to feed the ravens in Paris, but it is going to have unconsidered negative consequences. With the discipline of the satellite record pinning the present temperature levels, they will be forced to flatten the slope of the warming period of the 1990s, particularly if they want to eradicate even a slowdown in temperatures. This will require the 1930s to be lifted halfway back by these methods, and the IPCC’s lower bound on climate sensitivity to become the ‘best estimate’.
To imagine what must be done to erase a significant slowdown during a period of rapid CO2 rise, think of a string laid on the temperature trace and fixed at the ‘present’ end. Now make your adjustments. It seems to me a terrible bargain for them to make for this last-ditch effort for Paris – remove the urgency, constrain climate sensitivity to an unscary level and push off thermageddon by a century or more. Yes, these are desperate times for climate troughers. I suspect a fair contingent of those with vestiges of scruples will be unable to swallow this latest serving, especially when they see that with the new record, the jig is virtually up. There will be defections. Mark Steyn’s book “A Disgrace to the Profession” outed a fair number of scientists who may now have less to lose in voicing dissent, and it will encourage some younger, frightened scientists to step out of line.
This is classic end-game stuff – approaching the “sauve qui peut” stage of a war (“save himself who can”).

DWR54
Reply to  Gary Pearse
September 17, 2015 9:34 am

Re thermometer measurements: Part 3.1 of the paper concentrates on temperature trends 1998-2013. There was no statistically significant warming, but they did detect a warming trend.
As of the moment (to August 2015), the trend in both GISS and NOAA/NCDC shows statistically significant warming (0.124 ±0.109 °C/decade (2σ) in GISS; 0.118 ±0.103 °C/decade (2σ) in NOAA).
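For readers wondering where figures like “0.124 ±0.109 °C/decade (2σ)” come from: they are typically an OLS slope plus roughly twice its standard error. Below is a minimal sketch on synthetic monthly anomalies (the real GISS/NOAA series are not reproduced here); note that this naive i.i.d.-error interval is exactly the kind of assumption the Stanford paper argues is too simple for autocorrelated temperature data.

```python
import numpy as np

def ols_trend_with_2sigma(y):
    """OLS slope per time step and a naive 2-sigma half-width,
    assuming independent, identically distributed errors (real
    anomaly series are autocorrelated, which widens the true
    interval)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    s2 = resid @ resid / (n - 2)                   # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2)) # std. error of slope
    return slope, 2.0 * se

# Illustrative monthly anomalies: small trend plus white noise
rng = np.random.default_rng(1)
y = 0.001 * np.arange(180) + rng.normal(scale=0.1, size=180)
slope, ci = ols_trend_with_2sigma(y)
decadal = slope * 120  # per-month slope -> per-decade
print(f"trend: {decadal:.3f} ± {ci * 120:.3f} per decade")
```

A trend is then called “statistically significant” at 2σ when the interval excludes zero, which is the criterion the comment above is applying.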

Reply to  Gary Pearse
September 17, 2015 10:26 am

I don’t think it’s a case of measured temperatures being a figment of bad statistics. To me, the critical graph is the top panel of figure 3. In essence, it seems to me that the “new improved” statistics make the temperature in 1998 about 0.3 °C lower than it really was, thus permitting a slope to reappear. I have no idea whether the statistics are correct or not, but you have to admit it’s a neat “trick”!
Surely this approach must cast some doubt on the whole of the record? Since the authors accuse previous papers of using “naive statistical methods” and state that “only 15 years of data” is not enough for “classical statistics”, I wonder whether they would like to look at the complete record from 1850 onwards instead of just 1998-2013. I also wonder why they chose just that period, since Lord Monckton now puts it back to 1996.
Finally, I note that the MSM has picked up on this very quickly. It fits perfectly with the agenda and we can expect it to be widely quoted. As long as it’s not disproven before Paris it will have done its job.

PhilC
Reply to  Peter Ward
September 17, 2015 11:18 am

New and improved or not, 15 or so years is not enough data for any statistical test, and relying on it is totally naive. The only useful analysis is to look at the actual, original measurements. The only valid conclusion is that the temperature data record is extremely noisy, so any statistical model will have error ranges nearly equal to the overall change.

James Francisco
Reply to  Peter Ward
September 17, 2015 1:23 pm

“Finally, I note that the MSM has picked up on this very quickly. It fits perfectly with the agenda and we can expect it to be widely quoted. As long as it’s not disproven before Paris it will have done its job.”
It would seem odd that the MSM would say that new evidence shows that the pause didn’t exist, when they never admitted that there was a pause. It would make it apparent that they failed to give us the whole story.

Splice
September 17, 2015 8:22 am

[snip – fake email address, a valid email address is required to comment here -see result -mod]
splice@onet.pl – Result: Bad
MX record about onet.pl exists.
Connection succeeded to mx.poczta.onet.pl SMTP.
220-mx.poczta.onet.pl ESMTP
> HELO technotarget.com
521 5.5.1 Protocol error
> MAIL FROM:
=> RCPT TO:

Reply to  Splice
September 17, 2015 8:29 am

Why did they start in 1970?

Splice
Reply to  David Johnson
September 17, 2015 8:52 am

Because ‘escalator’ started there.

Reply to  David Johnson
September 17, 2015 7:40 pm

They actually started in 1950 to reduce the slope of the increase to 1997. That way it better matches any increases since.
e.g., from the paper:
First, a standard regression of global temperature on time is fitted to both the 1998–2013 hiatus period and the period 1950–1997, with errors assumed to be independently and identically distributed (see Fig. 2 top left panel).

Reply to  Splice
September 17, 2015 8:32 am

If that’s an escalator, the last couple of steps look very much like we’ve hit the landing at the top.

Reg Nelson
Reply to  Splice
September 17, 2015 8:36 am

Wiser men know that only 15% of the globe has surface temperature data, which means that 85% of the data used to make that graph was made up.

Splice
Reply to  Reg Nelson
September 17, 2015 8:51 am

[snip fake email address -mod]

Reg Nelson
Reply to  Reg Nelson
September 17, 2015 9:05 am

Splice September 17, 2015 at 8:51 am
Nope, it means only, that you know nothing about measuring temperature anomalies.
——–
Did you read the article? They admit that sample size of the surface temperature data is too small.
Perhaps you can explain to me how you can measure a temperature anomaly in the middle of the Pacific Ocean where there are no weather stations.

RoHa
Reply to  Reg Nelson
September 17, 2015 5:41 pm

“which means that 85% of the data used to make that graph was made up.”
Isn’t that how we get the best data?

Go Whitecaps!!
Reply to  Splice
September 17, 2015 8:37 am

Of course they cherry-pick the start date in the cold 70s, not the warm 30s/40s. BTW, how are the computer models working?

Splice
Reply to  Go Whitecaps!!
September 17, 2015 9:03 am

> cold 70s not the warm 30-40s
Nope:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1920
The ‘escalator’ has existed since the 70s, and that’s why it’s the starting date.
No one claims it existed before.
> BTW how are the computer models working.
Quite well. As most of the models predict, we currently have a warming rate of about 0.16°C/decade at the Earth’s surface, and about 0.12°C/decade in the lower troposphere.

Latitude
Reply to  Go Whitecaps!!
September 17, 2015 9:39 am

Splice, the graphic you posted is not warming at that rate

Caligula Jones
Reply to  Splice
September 17, 2015 8:40 am

Nice graphic.
Here’s a better one that demonstrates how ridiculous quoting that blog is, unless it’s a “how not to science” post:
http://www.populartechnology.net/2012/03/truth-about-skeptical-science.html

Matt G
Reply to  Splice
September 17, 2015 10:08 am

Splice,
1) Explain how the 2003 peak was warmer than a strong El Niño in 1997/98, because it wasn’t on any world data sets even back in 2005. Nothing supports it apart from deliberate tampering with data, made up by infilling regions with no observations. They can choose whatever they like to warm it up with this method, and they have done so.
2) You have cherry-picked by far the worst global non-data set there is, and it has lost all credibility. The only purpose of using it in a science paper would be to highlight how awful it is.
3) Any peak has a rise and flattens at the top. The non-data graph shows exactly that, so it’s no more than the flat part of the peak.
4) Statistics can show all sorts of rubbish, and the red line only makes it appear to continue rising because the levelled-out area at the top is warmer than the rising part of the peak numerous years before it.
5) It shows a pause at the top, no matter how you spin it. The warming rate has significantly decreased to an almost complete standstill.
Even non-data tampered as far towards warming as they dare shows very little warming over the past 13 years.
http://www.woodfortrees.org/plot/gistemp/from:2002/plot/gistemp/from:2002/trend

Julian Williams in Wales
September 17, 2015 8:23 am

“lies , damned lies and statistics” ?

Mike
September 17, 2015 8:26 am

Last line of the PR “…even though climate varies from year-to-year and decade-to-decade, global temperature has increased in the long term, and the recent period does not stand out as being abnormal.”
Aren’t they supposed to be proving the recent period’s warming is abnormal?

emsnews
Reply to  Mike
September 17, 2015 12:11 pm

They made the very warm 1930s go away, so yes, it looks like even with the pause it is getting warmer and warmer, even though this is utterly false – thus the need to eliminate the 1930s in various ways. Tricky-dicky stuff.
