From the “fighting denial with denial” department comes this desperate ploy: a press release written to snare headlines from gullible media. Meanwhile, just a couple of days ago the UK Met Office said the global warming pause may continue.
Global warming ‘hiatus’ never happened, Stanford scientists say
A new study reveals that the evidence for a recent pause in the rate of global warming lacks a sound statistical basis. The finding highlights the importance of using appropriate statistical techniques and should improve confidence in climate model projections.
An apparent lull in the recent rate of global warming that has been widely accepted as fact is actually an artifact arising from faulty statistical methods, Stanford scientists say.
The study, titled “Debunking the climate hiatus” and published online this week in the journal Climatic Change, is a comprehensive assessment of the purported slowdown, or hiatus, of global warming. “We translated the various scientific claims and assertions that have been made about the hiatus and tested to see whether they stand up to rigorous statistical scrutiny,” said study lead author Bala Rajaratnam, an assistant professor of statistics and of Earth system science.
The finding calls into question the idea that global warming “stalled” or “paused” during the period between 1998 and 2013. Reconciling the hiatus was a major focus of the 2013 climate change assessment by the Intergovernmental Panel on Climate Change (IPCC).
Using a novel statistical framework that was developed specifically for studying geophysical processes such as global temperature fluctuations, Rajaratnam and his team of Stanford collaborators have shown that the hiatus never happened.
“Our results clearly show that, in terms of the statistics of the long-term global temperature data, there never was a hiatus, a pause or a slowdown in global warming,” said Noah Diffenbaugh, a climate scientist in the School of Earth, Energy & Environmental Sciences, and a co-author of the study.
Faulty ocean buoys
The Stanford group’s findings are the latest in a growing series of papers to cast doubt on the existence of a hiatus. Another study, led by Thomas Karl, the director of the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration (NOAA) and published recently in the journal Science, found that many of the ocean buoys used to measure sea surface temperatures during the past couple of decades gave cooler readings than measurements gathered from ships. The NOAA group suggested that by correcting the buoy measurements, the hiatus signal disappears.
While the Stanford group also concluded that there has not been a hiatus, one important distinction of their work is that they did so using both the older, uncorrected temperature measurements as well as the newer, corrected measurements from the NOAA group.
“By using both datasets, nobody can claim that we made up a new statistical technique in order to get a certain result,” said Rajaratnam, who is also a fellow at the Stanford Woods Institute for the Environment. “We saw that there was a debate in the scientific community about the global warming hiatus, and we realized that the assumptions of the classical statistical tools being used were not appropriate and thus could not give reliable answers.”
More importantly, the Stanford group’s technique does not rely on strong assumptions to work. “If one makes strong assumptions and they are not correct, the validity of the conclusion is called into question,” Rajaratnam said.
A different approach
Rajaratnam worked with Stanford statistician Joseph Romano and Earth system science graduate student Michael Tsiang to take a fresh look at the hiatus claims. The team methodically examined not only the temperature data but also the statistical tools scientists were using to analyze the data. A look at the latter revealed that many of the statistical techniques climate scientists were employing were ones developed for other fields such as biology or medicine, and not ideal for studying geophysical processes. “The underlying assumptions of these analyses often weren’t justified,” Rajaratnam said.
For example, many of the classical statistical tools often assume a random distribution of data points, also known as a normal or Gaussian distribution. They also ignore spatial and temporal dependencies that are important when studying temperature, rainfall and other geophysical phenomena that can change daily or monthly, and which often depend on previous measurements. For example, if it is hot today, there’s a higher chance that it will be hot tomorrow because a heat wave is already in place.
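The heat-wave example is the crux: serial dependence breaks the independence assumption behind classical significance tests. As a rough illustration (not from the paper – the AR(1) model, its coefficient, and the sample sizes below are all made-up choices), a naive i.i.d. trend test applied to trendless but autocorrelated data rejects far more often than its nominal 5% level:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_series(n, phi, rng):
    """Trendless AR(1) series: x[t] = phi*x[t-1] + e[t], e ~ N(0, 1)."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def naive_trend_is_significant(y, crit=1.96):
    """OLS slope t-statistic computed as if the errors were i.i.d."""
    t_ax = np.arange(len(y))
    slope, intercept = np.polyfit(t_ax, y, 1)
    resid = y - (slope * t_ax + intercept)
    se = np.sqrt(resid.var(ddof=2) / ((t_ax - t_ax.mean()) ** 2).sum())
    return abs(slope / se) > crit

# On trendless but persistent data, the naive test flags a "significant
# trend" far more often than its nominal 5% rate.
trials = 2000
false_hits = sum(
    naive_trend_is_significant(ar1_series(50, 0.8, rng)) for _ in range(trials)
)
print(false_hits / trials)
```

The printed false-positive rate lands well above 0.05, which is the whole problem with applying off-the-shelf tests to serially dependent geophysical data.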
Global surface temperatures are similarly linked, and one of the clearest examples of this can be found in the oceans. “The ocean is very deep and can retain heat for a long time,” said Diffenbaugh, who is also a senior fellow at the Woods Institute. “The temperature that we measure on the surface of the ocean is a reflection not just of what’s happening on the surface at that moment, but also the amount of trapped heat beneath the surface, which has been accumulating for years.”
While designing a framework that would take temporal dependencies into account, the Stanford scientists quickly ran into a problem. Those who argue for a hiatus claim that during the 15-year period between 1998 and 2013, global surface temperatures either did not increase at all, or they rose at a much slower rate than in the years before 1998. Statistically, however, this is a hard claim to test because the number of data points for the purported hiatus period is relatively small, and most classical statistical tools require large numbers of data points.
There is a workaround, however. A technique that Romano invented in 1992, called “subsampling,” is useful for discerning whether a variable – be it surface temperature or stock prices – has changed in the short term based on a limited amount of data. “In order to study the hiatus, we took the basic idea of subsampling and then adapted it to cope with the small sample size of the alleged hiatus period,” Romano said. “When we compared the results from our technique with those calculated using classical methods, we found that the statistical confidence obtained using our framework is 100 times stronger than what was reported by the NOAA group.”
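The core of Romano-style subsampling can be sketched in a few lines: evaluate the statistic of interest on every contiguous block of the series, so the serial dependence within each block is preserved. This is only the textbook idea applied to synthetic data, not the paper's adapted method; every number below is illustrative:

```python
import numpy as np

def subsample_stat(stat, x, block_len):
    """Evaluate `stat` on every contiguous block of length `block_len`.

    Keeping blocks contiguous preserves the serial dependence in x,
    unlike i.i.d. resampling, which scrambles it."""
    n = len(x)
    return np.array([stat(x[i:i + block_len])
                     for i in range(n - block_len + 1)])

# synthetic autocorrelated series with a gentle upward drift
rng = np.random.default_rng(1)
x = 0.1 * np.cumsum(rng.standard_normal(200)) + 0.02 * np.arange(200)

slope = lambda y: np.polyfit(np.arange(len(y)), y, 1)[0]
dist = subsample_stat(slope, x, block_len=15)  # 186 overlapping 15-point slopes
print(dist.mean(), dist.std())
```

The spread of `dist` gives a dependence-aware picture of how much a 15-point slope can wobble, which is the yardstick against which any single short-window trend would be judged.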
The Stanford group’s technique also handled temporal dependency in a more sophisticated way than in past studies. For example, the NOAA study accounted for temporal dependency when calculating sea surface temperature changes, but it did so in a relatively simple way, with one temperature point being affected only by the temperature point directly prior to it. “In reality, however, the temperature could be influenced by not just the previous data points, but six or 10 points before,” Rajaratnam said.
Pulling marbles out of a jar
To understand how the Stanford group’s subsampling technique differs from the classical techniques that had been used before, imagine placing 50 colored marbles, each one representing a particular year, into a jar. The marbles range from blue to red, signifying different average global surface temperatures.
“If you wanted to determine the likelihood of getting 15 marbles of a certain color pattern, you could repeatedly pull out 15 marbles at a time, plot their average color on a graph, and see where your original marble arrangement falls in that distribution,” Tsiang said. “This approach is analogous to how many climate scientists had previously approached the hiatus problem.”
In contrast, the new strategy that Rajaratnam, Romano and Tsiang invented is akin to stringing the marbles together before placing them into the jar. “Stringing the marbles together preserves their relationships to one another, and that’s what our subsampling technique does,” Tsiang said. “If you ignore these dependencies, you can alter the strength of your conclusions or even arrive at the opposite conclusion.”
When the team applied their subsampling technique to the temperature data, they found that the rate of increase of global surface temperature did not stall or slow down from 1998 to 2013 in a statistically significant manner. In fact, the rate of change in global surface temperature was not statistically distinguishable between the recent period and other periods earlier in the historical data.
The Stanford scientists say their findings should go a long way toward restoring confidence in the basic science and climate computer models that form the foundation for climate change predictions.
“Global warming is like other noisy systems that fluctuate wildly but still follow a trend,” Diffenbaugh said. “Think of the U.S. stock market: There have been bull markets and bear markets, but overall it has grown a lot over the past century. What is clear from analyzing the long-term data in a rigorous statistical framework is that, even though climate varies from year-to-year and decade-to-decade, global temperature has increased in the long term, and the recent period does not stand out as being abnormal.”
###
Debunking the climate hiatus
Bala Rajaratnam, Joseph Romano, Michael Tsiang, Noah S. Diffenbaugh
Abstract
The reported “hiatus” in the warming of the global climate system during this century has been the subject of intense scientific and public debate, with implications ranging from scientific understanding of the global climate sensitivity to the rate at which greenhouse gas emissions would need to be curbed in order to meet the United Nations global warming target. A number of scientific hypotheses have been put forward to explain the hiatus, including both physical climate processes and data artifacts. However, despite the intense focus on the hiatus in both the scientific and public arenas, rigorous statistical assessment of the uniqueness of the recent temperature time-series within the context of the long-term record has been limited. We apply a rigorous, comprehensive statistical analysis of global temperature data that goes beyond simple linear models to account for temporal dependence and selection effects. We use this framework to test whether the recent period has demonstrated i) a hiatus in the trend in global temperatures, ii) a temperature trend that is statistically distinct from trends prior to the hiatus period, iii) a “stalling” of the global mean temperature, and iv) a change in the distribution of the year-to-year temperature increases. We find compelling evidence that recent claims of a “hiatus” in global warming lack sound scientific basis. Our analysis reveals that there is no hiatus in the increase in the global mean temperature, no statistically significant difference in trends, no stalling of the global mean temperature, and no change in year-to-year temperature increases.
The paper is open access; read it here.
They have to have some ‘peer reviewed papers’ denying the pause in press so they can say ‘Recent work has shown that the hiatus was an error’ when they all get to Paris in time for Christmas.
I done this study, like, and, like, taking into account all the then normal levels of community violence and such and disease and accidents from stuff happening and like as well as governments and such attacking their own people and stuff, well sampling the period 1939 to 1945 like, world war 2 never actually happened.
Amazing isn’t it. Who’d a thunk it but statistics don’t lie.
OK. So, based on the marbles analogy, their subsampling technique works by preserving the relationships of sampled points to each other, and they note that a given temperature point can be affected not just by the previous temperature point, but by the previous six to ten points.
I await the review of this paper by someone with more statistical expertise, who can tell me if all they did was a variation on a long-term running average, effectively using the prior trend to “smooth” the hiatus out of existence as a short-term fluctuation.
Go to woodfortrees.org and examine any of the global temperature indices – as opposed to hemispheric, land-only or sea-only ones. A consensus of them shows the pause starting in 2001. The satellite-measured lower-troposphere ones (their UAH series is v5.5, which runs warm during the pause period) show the 1998 peak as a distinct El Niño spike, sitting in the late part of the warming period rather than at the beginning of the pause. Looking at only RSS and HadCRUT3, it also looks like the pause started in 2001.
“The Stanford group’s findings are the latest in a growing series of papers to cast doubt on the existence of a hiatus.”
What garbage science.
The flurry of such flawed scientific studies is the alarmists’ typical daily output until the Paris Conference. Expect one a day, if not more. They are desperate. I saw another such flawed study claiming that if we burn all the fossil fuels currently in the ground, we will thaw Antarctica. They are now doing studies projecting 10,000 years ahead. Making worst-case scenarios 10,000 years out, to support current flawed science that may never come about, speaks to the nonsense that is happening in climate science these days.
Let’s not get caught up in the fine points of what constitutes a pause, hiatus or slowdown. Everyone has their own definition and there will be no consensus on this. The fundamental flaw that remains in AGW science, despite what they now say, is that they predicted unprecedented warming until 2100 as CO2 levels rise, and this has not happened to date and is unlikely to happen in the future. If you monkey with the observable data at will, then all bets are off and the science is a sham.
According to NOAA data, annual temperature anomalies since 2005, i.e. the last 10 years, for all combined GLOBAL LAND areas (149 million sq. km) show a slight decline or flat trend at −0.02 °C/decade.
• The pause is still real for global land, with both land- and satellite-based measurements.
o It is clear that the GLOBAL in “GLOBAL WARMING” is meaningless, as the warming is not globe-wide: entire continents are actually cooling.
o The trend of North American annual land temperature anomalies has been steadily cooling whether you go back to 1998, 2000 or 2005, at −0.20 °C/decade, −0.05 °C/decade and −0.41 °C/decade respectively, according to NOAA.
o The trend of Northern Hemisphere annual land temperature anomalies has been slightly cooling or flat since 2005, at −0.05 °C/decade.
o The trend of Southern Hemisphere annual land temperature anomalies has been slightly warming or essentially flat at +0.06 °C/decade. Africa is also essentially flat at −0.07 °C/decade.
Here is part of an unsolicited email I received today. This is one of the alarmists’ strategies for Paris:
“This year, DeSmog is celebrating our 10th anniversary!
With a decade of experience, our team is best positioned to investigate and report on the people who want to throw a monkey wrench in the critical climate negotiations happening in Paris this December.
We need to raise $30,000 to ramp up our independent journalism efforts at the Paris climate summit and beyond. Will you help us clear the PR pollution?
Even with over 7,000 published articles featuring hard-hitting commentary, in-depth analysis and our constantly expanding research database, it’s hard to believe ten years have passed since DeSmog began.
DeSmogBlog was launched at the COP 11 global climate summit in Montreal in December 2005. A decade later, here we are on the eve of the critical COP 21 summit in Paris, where world leaders must reach an agreement to safeguard our future from global warming pollution.
Can you make a generous donation now to help DeSmog raise $30,000 to support our unflinching reporting from Paris?
Did you know that DeSmog was the first ‘new media’ blog to ever receive full press credentials at a United Nations climate summit? Our unique brand of journalism has shaken up these major events ever since, and we are preparing to hold truth to power again at COP 21 and beyond.
Climate science denial hasn’t gone away, and we need to keep fighting the fossil fuel industry’s PR pollution. Together we can turn delay and distraction into accountability and action.
Will you help support us for this important work? Just click here and chip in whatever you can — anything you can give will help make DeSmog an even more powerful and independent voice.
Fossil fuel companies and the deniers and delayers they fund are hoping that we all sit back and ignore the epic threat of climate change. But their tobacco playbook tactics are no match for DeSmog’s team of independent journalists who clear their PR pollution every day.
Click here to send your most generous donation and help us fight the fossil fuel industry and science deniers by supporting DeSmog’s 10th anniversary fundraiser.
Thank you for your ongoing support,
Brendan DeMelle, Executive Director, DeSmog
P.S. We rely on individual donors like you. Please make a generous donation now to help us celebrate our 10th anniversary.
* Donations are not tax deductible (but our hugs of gratitude are free)!”
Somebody wants a trip to Paris, and he wants other people to pay for it.
Looks like he wants to stay in the George V too.
http://www.fourseasons.com/paris/
Only €1250 a night right now.
Without reading the full paper it sounds like they took the straightforward statistics that have been used to show a significant change in temperature trends (the hiatus) and used more rigorous statistical methods. In this case, more rigorous is not always better. For example, if you have normally distributed data you can use either a standard t-test, OR you can use a non-parametric test (rank-order, etc). The t-test might show a highly significant difference between your groups, while the non-parametric test might not show significance.
This leaves the open question of whether using a different statistical approach is justified or necessary. If it is necessary, they are pretty much arguing that every previous Global Warming study that did not use their statistical approach is invalid.
When scientists are dealing with data that shows only modest differences or has a low signal-to-noise ratio, but is on the edge of statistical significance, they might shop around for a statistical test that gets them “over the finish line”. If they’re doing a t-test, they might use a one-tailed test instead of the standard two-tailed test. All of this is very questionable, and should raise a red flag in the minds of reviewers of these papers. But the fact remains that in science the level of significance is somewhat arbitrary, and some journals give scientists wide latitude in defining what they consider to be significant.
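The point about test choice is easy to demonstrate on a single dataset: the same two samples give different p-values under a two-tailed t-test, a one-tailed t-test, and a rank-based test. A minimal SciPy sketch with made-up numbers (note the one-tailed p is exactly half the two-tailed p when the observed effect is in the hypothesized direction):

```python
import numpy as np
from scipy import stats

# two small made-up samples; b is a shifted up by a constant 0.3
a = np.array([4.8, 5.1, 4.9, 5.0, 4.7, 5.2, 4.9, 5.0])
b = a + 0.3

# parametric: two-tailed vs one-tailed t-test
p_two = stats.ttest_ind(a, b).pvalue
p_one = stats.ttest_ind(a, b, alternative='less').pvalue  # hypothesized a < b

# non-parametric alternative on the same data
p_mwu = stats.mannwhitneyu(a, b, alternative='two-sided').pvalue

print(p_two, p_one, p_mwu)
```

Which of these p-values gets reported is exactly the “shopping around” degree of freedom described above.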
There are numerous recent highly touted Global Warming studies that claim significance of p < 0.10, which is outside the mainstream.
My guess is that this same group could carry out a rigorous statistical analysis of many of the foundational statistical findings of the global warming hypothesis and absolutely trash them. But in this case their goal was to obfuscate the statistics underlying the hiatus. They wanted to replace statistical proof with uncertainty, rather than demonstrate the robustness of the warming claims.
+1
sorry I didn’t read you before commenting below
If the problem was the inaccuracy of the buoys, who checked the accuracy of the ship mounted sensors?
nobody of course … they just wanted to throw out the buoy data …
Ships use seawater internally to cool various ship systems. Seawater intake thermometers used by the US Navy are of four kinds: resistance, pneumatic, bimetallic and liquid level. All have a specified accuracy of plus or minus 2 °F. I understand that merchant ships use the same devices. These thermometers measure the temperature of seawater at the depth of the intake, roughly 20 to 40 feet below the surface for most ships. Their purpose is to provide a measure of seawater temperature so that the proper function of internal ship’s machinery can be assessed. They need be no more accurate than +/- 2 °F to do that. It seems odd that one would use such primitive devices, sampling water at such depths, to deduce air temperature changes near the surface of the oceans that are on the order of 0.1 to 0.2 °C extending over decades. This makes no sense to me.
That’s +/- 2F on the day of manufacture. 10 years later, who knows?
Beyond that, even if the temperature reported was accurate to .01F, there is still the problem of the depth of sampling varying as the ships ballasting changes. Plus the issue of possible heat contamination from the hull and interior structure of the ship.
Well they had to get rid of the 1940’s blip.
As a former calibration tech for the US Navy, I can tell you that +/- 2 °F is the proper accuracy range for most shipborne temperature measuring equipment. The equipment is made to perform in the rigors of working ships; it’s not laboratory-grade equipment or accuracy. The equipment is routinely calibrated to ensure its stated accuracy.
However, during calibrations, not all equipment is found to be within calibration tolerances. That’s why they are calibrated in the first place. When this occurs an out of tolerance notice is generated and all usage of the items must be analyzed for effect on measurement. I have rarely if ever seen a disposition of any report that said it affected data. ‘Use As Is’ is the normal reply.
But it gets better than that. When families of measuring equipment show repeated out-of-tolerance results, recommendations for replacement are made. Now we enter the budgetary realm of operations. The ships have to sail and can’t wait for the bean counters. So accuracy deratings are issued for entire equipment families affectionately known as dogs. Of course these are all reviewed and approved beforehand, but I seriously doubt that the complete pedigree of ship equipment accuracy is ever included in data used by third-party researchers. In 30 years of cal lab work and administration, I’ve never seen such a request or delivered such a report. In practice, these deratings can be as much as +/- 10% of full scale and no one blinks an eye. For a 0-200 degree F bimetallic gauge, that’s 20 degrees.
The data from the ships matched the output of the models. That’s all that is needed to prove the accuracy of the data.
IPCC AR5 text box 9.2 acknowledged the pause/hiatus/lull/stasis and their disappointment in the failure of the GCMs to model it.
I hope Willis takes a look at this study……
I LOVE this one.
I am pretty sure that, using their ad hoc technique, “Our results clearly show that, in terms of the statistics of the long-term global temperature data, there never was a global warming” between 1970 and 2000 !
🙂
So indeed , no global warming, no hiatus …
15 of the 18 years of the pause, occur after the year 2000.
Whoever writes the rebuttal to this will hopefully mention that statistics is a common refuge for scientific ideas under fire – largely because the set of people who are actually fluent in statistics is truthfully only a subset of the scientific community. This approach will permit the scientists to make a last stand which most people will simply not understand. However, it offers no hope of gaining additional adherents.
This approach was wildly successful at undermining Halton Arp, and it was also used to distract people from an apparent temporal anti-correlation between sunspots and solar neutrino production (a clear violation of the Standard Solar Model).
For Arp, the statistical argument was problematic because it didn’t fully rebut his claims of a direct observation of bridges between objects of wildly different redshifts.
For the neutrino anti-correlation, which was not supposed to occur because of an enormous amount of time that is hypothesized to exist between these two phenomena, it may turn out to matter that the Sun apparently has different “modes” for the solar wind where the dominant wind can originate from either its closed magnetic field lines or the open “coronal hole” field lines (???).
Either way, statistical arguments would not be invoked unless the idea was “on the ropes”, to begin with. It’s a sign of the phase we are in with this.
Got it. This never happened.
This statistical analysis doesn’t change the data. All it does is provide cover for those who want to ignore it or pretend it doesn’t exist.
You can still draw a negative trend line through the RSS data since 1997, and no global warming scientist or model predicted that.
That may be the case for the satellite data, but the CMIP5 surface data models are, so far, fairly consistent with surface observations (update is to July 2015; credit Ed Hawkins): http://www.climate-lab-book.ac.uk/wp-content/uploads/fig-nearterm_all_UPDATE_2015b.png
Of course the surface data is consistent with the data models. The surface data was adjusted to match the data models.
Fairly consistent? They have historical data up until the dashed vertical line. The predictive capability of the models since then is virtually nil. Within 3 years, observed temperatures dropped outside the 5-95% confidence interval. They bounced back inside for a bit, then spent another ~3 years falling down outside the confidence interval.
That’s “fairly consistent” with spectacular failure as far as I’m concerned. Once the observations drop out of the confidence interval, the models should be rejected, at least in real science. In made-up fantasy science, I guess anything goes.
Here is the perfect example of statistics gone wild.
As the data steadily diverges from the models, as the model predictions get more and more wrong every year, the “confidence” that CO2 is the global control knob of temperatures goes up and up.
I’d love to see Romano and the rest of this group apply rigorous statistics to the above relationship and show how absurd it truly is.
Not sure what your chart is showing. Is it comparing tropical mid troposphere balloon and satellite data with the modelled global surface projections? If so, then it’s hardly a fair comparison. Also, it stops over 2 years ago, before the very warm years of 2014 and (so far) 2015.
Ed Hawkins has updated the IPCC AR5 Fig 11.25, which uses CMIP5 surface model projections against surface observations, to July 2015: http://www.climate-lab-book.ac.uk/wp-content/uploads/fig-nearterm_all_UPDATE_2015b.png
Looks like he will be updating this chart through the rest of the year: http://www.climate-lab-book.ac.uk/comparing-cmip5-observations/
Also, it stops over 2 years ago…before they re-jiggered the models
…why would anyone use modeled global surface projections?…and present them on a chart as if they have any validity at all?
I’m pretty sure it’s the mid-troposphere for the models as well.
It sounds to me like they break the time series into all possible 15-year periods – years 1-15, 2-16, 3-17, etc. – and then test for a difference between the current 15-year “pause” and the mean of all prior 15-year periods. I don’t see where they account for the trend. There are enough ups and downs in the temperature record that any single 15-year period of flat temperatures will be indistinguishable, in a statistical sense, from the mean of all 15-year periods. I am probably oversimplifying their analysis, but I think I have described the central issue.
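That reading of the method (slide a 15-year window over the record and ask where the final window’s trend falls among all windows’ trends) can be sketched directly. To be clear, this is only the commenter’s interpretation, not the paper’s actual procedure, and the synthetic “record” below is entirely made up:

```python
import numpy as np

def window_trends(y, w=15):
    """OLS slope of every overlapping window of length w."""
    t = np.arange(w)
    return np.array([np.polyfit(t, y[i:i + w], 1)[0]
                     for i in range(len(y) - w + 1)])

# synthetic annual record: a century of steady warming, then 15 flat years
rng = np.random.default_rng(7)
temps = np.concatenate([
    0.01 * np.arange(100) + rng.normal(0, 0.1, 100),  # warming era
    1.0 + rng.normal(0, 0.1, 15),                     # flat "pause"
])

slopes = window_trends(temps)           # 101 overlapping 15-year slopes
pause_slope = slopes[-1]                # the final, "pause" window
percentile = (slopes < pause_slope).mean()
print(pause_slope, percentile)
```

If the final window’s slope sits comfortably inside the spread of all the historical windows, this kind of test calls it indistinguishable – which is exactly the commenter’s worry about a flat period hiding inside the noise.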
I caught the “climate change” part of the Republican debate last night and the candidates who answered the question were unanimous in saying they weren’t going to destroy the economy over it. They didn’t even bother dealing with whether or not it exists as a danger. These scientific fr*uds are pissing in the wind anyway.
We are told: This issue of whether global temps are rising is so simple that NO ONE should be questioning it.
It is so obvious, we are told, that we can just look out the window and see all of these effects of higher global temps.
Yet, the global temps lined up in a row do not reflect this at all, and taking sub-samples of the limited data then applying advanced statistical techniques, developed in recent months, is necessary to show the warming.
Now, let’s just try to get an answer:
Is it blatantly obvious, or is it subtle – the way we could be living with radon poisoning and never know it, subtle but profound?
Well, which is it? And if it is so subtle you cannot see it on the temperature charts, can I be excused for ever doubting you CAGW enthusiasts?
+1
Even the IPCC has acknowledged the ‘pause’; the warmunists have produced over 60 contradictory explanations for it. Geez, fighting denial with denial….
I feel that a paper entitled “Debunking the hiatus” is more of a political than a scientific statement.
The problem with using a block bootstrap (which I am not sure I believe) is that the estimated increase in temperature is very small. Statistically significant effects may not be physically significant. The important issue is that the “pause” exists with a rate of change that is far below that predicted by virtually every climate model.
Suppose there had been a sharp uptick in temperatures above the model forecasts rather than the hiatus over the past 20 years.
Choose the likely explanation from the warmists from the following:
1) This is just an anomaly. The temperature will soon drop down to the values predicted by our models.
2) It’s worse than we thought!
With regard to ships giving warmer temperatures than buoys: generally lots of oil gets burned on a ship.
It really wouldn’t surprise me if some of that energy showed up in temperature measurements. I’d guess the reflectivity of the superstructure is probably different from open ocean as well.
Water temps taken by ships come from sensors in the engine cooling intakes. If the ship itself is warmer than the surrounding water, it cannot help but warm the water it is sampling. Engine rooms are notoriously hot places.
If it had happened, it would have been inconsistent with the global warming hypothesis.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2659755
Perfect comic for this …
I love it! Takes me back to the 7th grade and makes me feel young again.
The Hiatus Debate
“Is not!” “Is so.”
“Is not!” “Is so.”
“Not! Not!” “So! So!”
“Not! Not! Not!” “So! So! So!”
“Moron!” “I know you are, but what am I?”
“I said MORON!”
“I know you are, but what am I?”
I believe the study behind this press release takes the “Is Not” side of the debate.
“Using a novel statistical framework that was developed specifically for studying geophysical processes such as global temperature fluctuations…”
Really? Have they no shame or even sense of irony at long last?
In any case all their shouting, posturing, and ruining the economy of the West are wasted effort, even were man-caused global warming real, AND actually a problem.
There are more than six billion people on the planet – one billion between North America, Europe, and Japan. There are two billion between China and India, both growing technologically and neither giving a toss about warming. That leaves another three billion in the developing world who, while their per-capita CO2 contributions are lower, in total probably come close to the West’s. So, even if the West’s contribution to CO2 were 50% of the total AND it mattered, it would be a lost cause.
The report states that they combined ship measurements with ocean buoy measurements to arrive at a compromise result. OK, that’s fine, but I believe we have been using the RSS satellite feeds as the principal, longest-running, stable (read: unchanged) form of temperature measurement available to arrive at our proof of the hiatus.
Is this from the Holocaust-Never-Happened playbook?
I’m sure under this technique they could prove it never happened.