From the “fighting denial with denial” department comes this desperate ploy: a press release written to snare headlines from gullible media. Meanwhile, just a couple of days ago the UK Met Office said the global warming pause may continue.
Global warming ‘hiatus’ never happened, Stanford scientists say
A new study reveals that the evidence for a recent pause in the rate of global warming lacks a sound statistical basis. The finding highlights the importance of using appropriate statistical techniques and should improve confidence in climate model projections.
An apparent lull in the recent rate of global warming that has been widely accepted as fact is actually an artifact arising from faulty statistical methods, Stanford scientists say.
The study, titled “Debunking the climate hiatus” and published online this week in the journal Climatic Change, is a comprehensive assessment of the purported slowdown, or hiatus, of global warming. “We translated the various scientific claims and assertions that have been made about the hiatus and tested to see whether they stand up to rigorous statistical scrutiny,” said study lead author Bala Rajaratnam, an assistant professor of statistics and of Earth system science.
The finding calls into question the idea that global warming “stalled” or “paused” during the period between 1998 and 2013. Reconciling the hiatus was a major focus of the 2013 climate change assessment by the Intergovernmental Panel on Climate Change (IPCC).
Using a novel statistical framework that was developed specifically for studying geophysical processes such as global temperature fluctuations, Rajaratnam and his team of Stanford collaborators have shown that the hiatus never happened.
“Our results clearly show that, in terms of the statistics of the long-term global temperature data, there never was a hiatus, a pause or a slowdown in global warming,” said Noah Diffenbaugh, a climate scientist in the School of Earth, Energy & Environmental Sciences, and a co-author of the study.
Faulty ocean buoys
The Stanford group’s findings are the latest in a growing series of papers to cast doubt on the existence of a hiatus. Another study, led by Thomas Karl, the director of the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration (NOAA) and published recently in the journal Science, found that many of the ocean buoys used to measure sea surface temperatures during the past couple of decades gave cooler readings than measurements gathered from ships. The NOAA group suggested that by correcting the buoy measurements, the hiatus signal disappears.
While the Stanford group also concluded that there has not been a hiatus, one important distinction of their work is that they did so using both the older, uncorrected temperature measurements as well as the newer, corrected measurements from the NOAA group.
“By using both datasets, nobody can claim that we made up a new statistical technique in order to get a certain result,” said Rajaratnam, who is also a fellow at the Stanford Woods Institute for the Environment. “We saw that there was a debate in the scientific community about the global warming hiatus, and we realized that the assumptions of the classical statistical tools being used were not appropriate and thus could not give reliable answers.”
More importantly, the Stanford group’s technique does not rely on strong assumptions to work. “If one makes strong assumptions and they are not correct, the validity of the conclusion is called into question,” Rajaratnam said.
A different approach
Rajaratnam worked with Stanford statistician Joseph Romano and Earth system science graduate student Michael Tsiang to take a fresh look at the hiatus claims. The team methodically examined not only the temperature data but also the statistical tools scientists were using to analyze the data. A look at the latter revealed that many of the statistical techniques climate scientists were employing were ones developed for other fields such as biology or medicine, and not ideal for studying geophysical processes. “The underlying assumptions of these analyses often weren’t justified,” Rajaratnam said.
For example, many of the classical statistical tools assume that the data points follow a normal, or Gaussian, distribution. They also ignore spatial and temporal dependencies that are important when studying temperature, rainfall and other geophysical phenomena that can change daily or monthly, and which often depend on previous measurements. For example, if it is hot today, there’s a higher chance that it will be hot tomorrow because a heat wave is already in place.
Global surface temperatures are similarly linked, and one of the clearest examples of this can be found in the oceans. “The ocean is very deep and can retain heat for a long time,” said Diffenbaugh, who is also a senior fellow at the Woods Institute. “The temperature that we measure on the surface of the ocean is a reflection not just of what’s happening on the surface at that moment, but also the amount of trapped heat beneath the surface, which has been accumulating for years.”
While designing a framework that would take temporal dependencies into account, the Stanford scientists quickly ran into a problem. Those who argue for a hiatus claim that during the 15-year period between 1998 and 2013, global surface temperatures either did not increase at all, or they rose at a much slower rate than in the years before 1998. Statistically, however, this is a hard claim to test because the number of data points for the purported hiatus period is relatively small, and most classical statistical tools require large numbers of data points.
There is a workaround, however. A technique that Romano invented in 1992, called “subsampling,” is useful for discerning whether a variable – be it surface temperature or stock prices – has changed in the short term based on a limited amount of data. “In order to study the hiatus, we took the basic idea of subsampling and then adapted it to cope with the small sample size of the alleged hiatus period,” Romano said. “When we compared the results from our technique with those calculated using classical methods, we found that the statistical confidence obtained using our framework is 100 times stronger than what was reported by the NOAA group.”
The Stanford group’s technique also handled temporal dependency in a more sophisticated way than in past studies. For example, the NOAA study accounted for temporal dependency when calculating sea surface temperature changes, but it did so in a relatively simple way, with one temperature point being affected only by the temperature point directly prior to it. “In reality, however, the temperature could be influenced by not just the previous data points, but six or 10 points before,” Rajaratnam said.
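To make the lag-order point concrete, here is a minimal sketch (Python/NumPy, not the authors’ code) of fitting an autoregressive model of order p by least squares on lagged values, so one can see whether dependence extends beyond the immediately preceding point; the series and coefficients below are made up purely for illustration.

```python
# Minimal sketch (not the paper's method): fit an AR(p) model to an
# annual temperature-anomaly series by ordinary least squares on lagged
# values, to see whether dependence extends beyond lag 1.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an annual global-mean anomaly series (deg C).
n = 150
noise = rng.normal(scale=0.1, size=n)
anoms = np.zeros(n)
for t in range(2, n):                      # synthetic series with lag-2 memory
    anoms[t] = 0.5 * anoms[t - 1] + 0.25 * anoms[t - 2] + noise[t]

def fit_ar(x, p):
    """Least-squares estimates of AR(p) coefficients for a series x."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

print("AR(1) coefficient :", fit_ar(anoms, 1))
print("AR(6) coefficients:", fit_ar(anoms, 6))
```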
Pulling marbles out of a jar
To understand how the Stanford group’s subsampling technique differs from the classical techniques that had been used before, imagine placing 50 colored marbles, each one representing a particular year, into a jar. The marbles range from blue to red, signifying different average global surface temperatures.
“If you wanted to determine the likelihood of getting 15 marbles of a certain color pattern, you could repeatedly pull out 15 marbles at a time, plot their average color on a graph, and see where your original marble arrangement falls in that distribution,” Tsiang said. “This approach is analogous to how many climate scientists had previously approached the hiatus problem.”
In contrast, the new strategy that Rajaratnam, Romano and Tsiang invented is akin to stringing the marbles together before placing them into the jar. “Stringing the marbles together preserves their relationships to one another, and that’s what our subsampling technique does,” Tsiang said. “If you ignore these dependencies, you can alter the strength of your conclusions or even arrive at the opposite conclusion.”
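For readers who prefer code to marbles, here is a rough sketch of the analogy (not the authors’ actual subsampling framework): pulling 15 values at random destroys the year-to-year dependence, while pulling 15 consecutive values keeps the marbles “strung together.” The series below is synthetic and the numbers are for illustration only.

```python
# Minimal sketch of the marble analogy (not the authors' code):
# compare pulling 15 values at random (ignores dependence) with pulling
# 15 consecutive values as one "string" (preserves dependence).
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual series: a warming trend plus autocorrelated noise.
n = 120
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(scale=0.08)
series = 0.01 * np.arange(n) + noise        # ~0.1 deg C per decade trend

window = 15
years = np.arange(window)

def trend(x):
    return np.polyfit(years, x, 1)[0]       # slope per year over the window

# "Loose marbles": random draws, dependence structure destroyed.
iid_trends = [trend(rng.choice(series, size=window, replace=False))
              for _ in range(2000)]

# "Strung marbles": contiguous blocks, dependence structure preserved.
block_trends = [trend(series[s : s + window])
                for s in range(n - window + 1)]

print("spread of 15-yr trends, loose marbles :", np.std(iid_trends))
print("spread of 15-yr trends, strung marbles:", np.std(block_trends))
```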
When the team applied their subsampling technique to the temperature data, they found that the rate of increase of global surface temperature did not stall or slow down from 1998 to 2013 in a statistically significant manner. In fact, the rate of change in global surface temperature was not statistically distinguishable between the recent period and other periods earlier in the historical data.
The Stanford scientists say their findings should go a long way toward restoring confidence in the basic science and climate computer models that form the foundation for climate change predictions.
“Global warming is like other noisy systems that fluctuate wildly but still follow a trend,” Diffenbaugh said. “Think of the U.S. stock market: There have been bull markets and bear markets, but overall it has grown a lot over the past century. What is clear from analyzing the long-term data in a rigorous statistical framework is that, even though climate varies from year-to-year and decade-to-decade, global temperature has increased in the long term, and the recent period does not stand out as being abnormal.”
###
Debunking the climate hiatus
Bala Rajaratnam, Joseph Romano, Michael Tsiang, Noah S. Diffenbaugh
Abstract
The reported “hiatus” in the warming of the global climate system during this century has been the subject of intense scientific and public debate, with implications ranging from scientific understanding of the global climate sensitivity to the rate at which greenhouse gas emissions would need to be curbed in order to meet the United Nations global warming target. A number of scientific hypotheses have been put forward to explain the hiatus, including both physical climate processes and data artifacts. However, despite the intense focus on the hiatus in both the scientific and public arenas, rigorous statistical assessment of the uniqueness of the recent temperature time-series within the context of the long-term record has been limited. We apply a rigorous, comprehensive statistical analysis of global temperature data that goes beyond simple linear models to account for temporal dependence and selection effects. We use this framework to test whether the recent period has demonstrated i) a hiatus in the trend in global temperatures, ii) a temperature trend that is statistically distinct from trends prior to the hiatus period, iii) a “stalling” of the global mean temperature, and iv) a change in the distribution of the year-to-year temperature increases. We find compelling evidence that recent claims of a “hiatus” in global warming lack sound scientific basis. Our analysis reveals that there is no hiatus in the increase in the global mean temperature, no statistically significant difference in trends, no stalling of the global mean temperature, and no change in year-to-year temperature increases.
The paper is open access; read it here

Keep torturing the data until it says what you want it to say.
‘Ve vill make you talk!’
Statisticians? We don’t need no stinking classic statisticians!
Off to the Gaussian Gulag with you till you confess your coldness to the sanctity of the God of Warm! Good grief.
This paper is a major breakthrough! It used to be garbage in, garbage out. Now it’s garbage in, polished-garbage out!
“Lies, damned lies, and statistics”
Benjamin Disraeli
Well, with a title that starts with a word like “debunking,” it is blatantly obvious we are dealing with internet trolls and not scientists.
“Debunking” is argumentative and polemic; this is an attempt at political point-scoring, not a scientific study. Do they even hope to be taken seriously with a title like that?
Debunking is what you do when people talk out of the wrong end; see Moon landing hoaxers, or “The Moon Landing is a Hoax…”. You debunk fraudulent, made-up stories (like the tales of G.E. Séralini, who uses “encrypted emails” for fear of Monsanto interference, when Monsanto for years failed to take any action against him besides refuting his crappy anti-glyphosate and anti-GMO studies).
Showing that a scientist is in error is not debunking, it is refuting. Showing that someone who pretends to do science is in fact doing tea-leaf reading is a debunking.
Knowing when to debunk and when to refute is really epistemology 101.
It may be that current satellite data is not enough to conclude anything about the climate system… anything.
The various algorithms of statistical mathematics are thoroughly spelled out in numerous standard text books.
“Average” is obtained by adding all of the members of the data set and dividing that total by the number of elements in the data set.
The result is always exact, because all the elements of the data set are exact real numbers.
The result (the average) is always correct regardless of the elements in the data set; it works for any finite set of real numbers. It works whether the numbers are unrelated to each other in any way, or whether they are calculated from some closed-form mathematical equation.
The same goes for all the other algorithms of statistical mathematics. They all give a specific result for any finite data set of exact real numbers, and there is no restriction on what those real numbers are.
Now when I say the numbers of the data set are exact; that is not the same as saying they represent any actual real world value of anything; they are just numbers.
Where the big mistake is made is in asserting that the results of any of those statistical mathematics algorithms actually mean anything.
They don’t mean anything except that which they are defined to be in the textbooks.
So the ” average ” of a data set means just that; it is the average.
The “median” of the same data set is calculated from a different algorithm than the “average” and usually gives a different result, which is the median of that data set, by definition. And it doesn’t mean ANYTHING else.
g
So our new discovery for today is that someone has described a new statistical mathematics algorithm, different from those we have all seen before; so it generally gives a different result for a given data set; but it too still means nothing, except that which it has been defined to compute.
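For what it’s worth, the commenter’s point is easy to see with any data set at all; the numbers below are arbitrary, and the point is simply that each statistic is the number its defining algorithm produces, nothing more.

```python
# Tiny illustration: "average" and "median" are just the outputs of two
# different algorithms applied to the same exact numbers.
import statistics

data = [1.0, 2.0, 2.0, 3.0, 100.0]   # arbitrary exact real numbers
print(statistics.mean(data))          # 21.6
print(statistics.median(data))        # 2.0 (a different algorithm, a different number)
```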
Lewis, insurance industries are businesses. The aim of a business is to make money and stay in business. The statistics they use attempt to keep the books balanced in their favour. They need be only sufficiently related to actual world events to reliably keep the business in business.
George is correct in that statistics is only the manipulation of numbers. Re-read George’s text; he qualifies it quite specifically.
“Statistically, however, this is a hard claim to test because the number of data points for the purported hiatus period is relatively small, and most classical statistical tools require large numbers of data points.”
Well, actually you have precisely one data point: the history from circa 1987/8 to 2015.
We have no way to rerun it to get another data point.
g
Translation: “The models can’t be wrong! Change the data!”
Exactly!
Adjust the adjustments. (UP)
There, all fixed just in time for the Paris “fund raiser”.
How convenient.
I think they are claiming (in effect) that the surface based data sets do not conform to the Nyquist sampling criterion for sampled data systems.
But we always knew that was so.
And what is this bunk about the ocean buoys being wrong, and the bucket of water on a ship’s deck being correct ??
The ocean buoys showed that water temperature and air temperature are different, and are not correlated.
Well John Christy told us that in Jan 2001.
g
http://www.21stcenturysciencetech.com/articles/ocean.html
The late Oceanographer Dr Robert Stevenson was a “bucket man” and had this to say when writing a critique of Levitus et al (2000):
“Surface water samples were taken routinely, however, with buckets from the deck and the ship’s engine-water intake valve. Most of the thermometers were calibrated into 1/4-degrees Fahrenheit. They came from the U.S. Navy.
Galvanized iron buckets were preferred, mainly because they lasted longer than the wood and canvas. But, they had the disadvantage of cooling quickly in the winds, so that the temperature readings needed to be taken quickly.
I would guess that any bucket-temperature measurement that was closer to the actual temperature by better than 0.5° was an accident, or a good guess. But then, no one ever knew whether or not it was good or bad. Everyone always considered whatever reading was made to be precise, and they still do today.
The archived data used by Levitus, and a plethora of other oceanographers, were taken by me, and a whole cadre of students, post-docs, and seagoing technicians around the world. Those of us who obtained the data, are not going to be snowed by the claims of the great precision of “historical data found stored in some musty archives.”
Guess I’ll wait to see what Steve McIntyre and friends have to say about this technique. I’m sure they will digest it in detail. Until then, it’s just another climate paper.
Originally the subsampling technique, described here:
http://home.uchicago.edu/~amshaikh/webfiles/subsampling_topics.pdf
was for estimating parameters in sets of data that were simply too large to handle, by using samples of the data. In contrast it appears that here, they used this technique to estimate what larger sets of data MIGHT look like, using relatively little data. In other words, they made a huge leap from making inferences about large sets of data from samples, to inferring properties of a hypothetical large data set from a small one.
This is interesting. From what I can tell, in effect they reversed a statistical sampling technique and used it instead to extrapolate.
I’m not a statistician, but it would take some strong evidence to convince me that the technique is valid for this purpose.
This is interesting. From what I can tell, in effect they reversed a statistical sampling technique and used it instead to extrapolate.
**********************************************************************************
What’s all the fuss about, if reversing statistical data is alright for Michael Mann (tiljander) then it’s got to be all right…………oh hang on a minute, let me think this through!
SteveT
Yes that is true, but the problem is that as always, they get to grab the headlines first no matter how much BS it is. We are always playing defence.
“Our new technique basically consists of adding one. To almost everything. We get a much happier answer that way. If we used the old techniques, we kept getting the wrong answers, so it’s obvious that pre-additive statistics doesn’t work right with AGW theory.”
The part I really like is that you can’t tell the recent period from any previous historical period. So isn’t this an argument for “Earth is recovering from an ice age and has been generally warming ever since the last period of glaciation some 15,000 years ago”? Nothing to see here; take down the bunting on the stage and tell all those dignitaries attending Paris in December to stay home!
Is this like ‘find a temperature you like and use it 10 times’?
Once again, rather than using the high tech Satellite data, they use the less reliable and less accurate surface datasets — all of which are heavily adjusted by partisans. Simple question — if you applied the same method to RSS or UAH what would the results be?
Why would scientists who allegedly have a warming agenda (motive) and who are able to adjust the data (opportunity) purposely introduce a pause in warming?
What happens to all those papers which cited any of the following papers?
PS I thought we had a consensus on the standstill. So much for consensus eh.
But Jimbo, the 97% consensus is a rather limited one, and does not accommodate every hypothesis in climate science. For instance, one can find paper after paper claiming that climate change is the cause of extreme weather events; but then, you can find an equal number of papers claiming that climate change will moderate weather to such an extent that extreme weather events will disappear.
“Once again, rather than using the high tech Satellite data, they use the less reliable and less accurate surface datasets”
Yes, that is a big problem for credibility with this paper: how can any honest scientist totally ignore the satellite data without at least acknowledging its existence and explaining its impact or why it is not relevant?
Absent that the study is just a waste of taxpayer $$$, but what is new.
Less accurate and less reliable, but much more manipulated.
Especially when the satellite data is backed up by the radiosonde balloon weather system which appears to be in close agreement with the satellites.
Simple, 1998 was far warmer then any year sense. The “scientists” doing this study appear to think temperature readings before other readings somehow affect current readings. Nonsense, T is what it is, period.
1998 was far warmer then any year sense.
Since …. there is no sense in this study 🙂
Yes and yes
Or indeed if they analysed from 1970 instead of 1950 what would their results be? I feel this paper is far from “robust”.
“By blending fake data with massively adjusted, homogenized, and infilled data, no one can say we reached our conclusion first, then invented some new techniques to prove it.”
Funny how everything “faulty or noisy” only happens in the cooling or neutral direction.
Cut the funding cut the nonsense.
If surface temperature starts trending upwards they will say that the pause (that never happened) has now ended. 😉 Heads we win, tails you lose. This is why it’s now known as Climastrology.
And the practitioners can be called Climate Scientologists…
“If surface temperature starts trending upwards they will say that the pause (that never happened) has now ended.”
I suspect you are right. And then they’ll claim that they never claimed there wasn’t a pause. And the media won’t look into things because memory hole and incompetence.
I sorta like Climate Astrology. “You will meet someone interesting” is replaced by “you will experience warmer temperatures………someday.”
There are statistics and damn lies!! If you have one foot in boiling water and the other foot in iced water, statistically you should be quite comfortable.
The best summation of such methods appeared in the letters page of a newspaper (The National Observer) in 1891:
“Sir, —It has been wittily remarked that there are three kinds of falsehood:
the first is a ‘fib,’ the second is a downright lie, and the third and most aggravated is statistics. It is on statistics and on the absence of statistics that the advocate relies…”
and now we have models.
“Using a novel statistical framework that was developed specifically for studying geophysical processes such as global temperature fluctuations”
Translation, we kept torturing the data until it eventually told us what we wanted to hear.
Reminds me of the “novel” statistical tricks used to create the original hockey stick.
‘Ve vill make you hot!’ Yes, torture the thermometer until it gives up.
No, no, the Novel used was “Earth In the Balance.” 😉
They seem to ignore the satellite data.
They ignore their own buoys and use what, bucket measurements and ship intake measurements from where?
They still utilize the buggered up data from land based instrument readings with the problems of UHI’s, and cherry picked locations.
Garbage in, garbage out.
They use only what’s useful in backing their foregone conclusions and call it “science.” They need to be prosecuted for their crimes against humanity, as well as slander and libel against all honest scientists. They cannot truly believe their lies!
They developed a technique that, when used on unadjusted data, showed no pause.
Then they used it on data that had been adjusted in order to decrease the size of the pause, and once again, it showed no pause.
And in their minds this proves that their technique must be valid?
This is like them saying we drank one beer on Monday, two beers on Tuesday, and three beers on Wednesday, Thursday and Friday, but because our beer consumption was rising earlier in the week, we actually drank four beers on Thursday and five beers on Friday. Let them try getting those expenses through the accounts department without a receipt for those three extra beers.
My god, the blindness is astounding. The assumptions in his paper are that his “new and improved” method is the indisputably correct method and the relationships between all factors describing temperature at a given point in time are completely proven and understood by him.
What a dolt. Until Rajaratnam’s assumptions are proven over time with experimentation and evidence, they remain assumptions, making any conclusions from them pending at best.
I wonder if there are any actual statisticians in that group.
Climate science has a long history of using “unique” statistical methods without actually bothering to understand statistics.
I loved this quote, because it reminded me of the ‘strong’ assumption of a positive, water vapor feedback that has yet to be found anywhere outside of a theoretical climate model. In an attempt to defend the models, Rajaratnam inadvertently brings up why all of the climate models are crap; the weakness of a strong assumption as the main component of a theory!
The hiatus in the scientific method since 1982 continues.
I wonder whether their method is able to detect the warming since the start of the temperature record, or if the method also thinks that there’s no change in that.
That was my thought.
Particularly the 15 years leading up to Hansen’s testimony to the US Congress.
If it can’t find that then they have officially debunked AGW as an issue requiring specific actions to be taken.
Frankly, I’m surprised they didn’t look.
But statistical analysis already had non-gaussian analysis tools. Why did the authors have to use a tool that was made up by one of the authors (Romano)? Twenty years ago, granted, but still one of the author’s own pet tools.
I am no statistician, but something smells very wrong about the technique described. I eagerly await McIntyre’s input.
Amen to that TonyG,
Let’s see what Steve McIntyre at Climate Audit says about this sub-sampling.
All of these bespoke science results are so transparently timed (and created) to influence discussion before the Paris climate get-together!
I doubt that SteveMc will bother with such a puerile paper.
So, have I got this right? The modern data taken over the last 20 years with modern instruments is all crap (so we adjust it) but the historical data is more accurate and reliable? Really?
With regard to the whole approach, this is not serious science. Proper science, or indeed mathematics, wouldn’t start with the idea that “we know the solution so let’s adjust the methodology and data until we reach that desired answer”. Proper science would say “let’s look at the hard data and the risk/uncertainty factors inherent in it and see what it’s telling us with any degree of certainty”. Or is that just too simple?
An army of world leaders insist on lies so they can tax thin air.
So any excuse, any tortured data to justify this is good in their eyes but the problem is, will world populations tolerate these immense taxes on nothing? I doubt it seriously.
…so they are changing the data again
And the goal posts move at warp speed.
In the interest of taking this a face value, and in challenging my own assumptions and beliefs, I have some questions. The clearest evidence of the hiatus is the satellite record, which demonstrates some 17 to 18 years without a warming trend. So:
1) Is the satellite data, UAH & RSS, really some sort of statistical analysis? I assumed each month’s temp anomaly was an average over that whole month, which I guess is technically a statistical tool, but hardly the type of analysis that one could argue is “inappropriate”. Am I missing something here?
2) Given that around half of the satellite records shows a lack of warming, how is it possible to claim that this is an insufficient quantity of data points? Is there any legitimacy to this claim?
3) With such a precise measurement system, what type of statistical analysis is really needed? Can’t we just, like, look at the observations and SEE what happened? Am I being naively ignorant here?
Am I missing something obvious here?
rip
Re 3: Can’t we just, like, look at the observations and SEE what happened?
_______________
That’s what they say they did and what they accuse others of not having done thoroughly enough. The paper is open access and available for download here: http://link.springer.com/article/10.1007/s10584-015-1495-y
From what I remember of my statistics, the only reason to ‘subsample’ a population is the impossibility of sampling the entire population. The surface temperature record, or the buoy temperature record, is by design and necessity already a subsample. These are subsamples of the population of temperatures everywhere on the planet at any given moment in time. Why, methinks, do they have to apply their super-special subsampling technique to that which is already a subsample? Subsampling implies missing information. So, to me it seems, they intentionally lose information to gain a trend.
ripshin September 17, 2015 at 8:18 am says:
“With such a precise measurement system, what type of statistical analysis is really needed? Can’t we just, like, look at the observations and SEE what happened.”
I agree, none is needed. I go with Ernest Rutherford who told us that “…If your experiment needs statistics you should have done a better experiment.”
In our case the better experiment would be using satellites. Ground-based data are corrupted and falsified. Here is an example. In 2008 I was researching satellite data for my book “What Warming.” I accidentally discovered that there had been no warming in the eighties and nineties. It extended from 1979 to 1997, an 18-year stretch, just like the present hiatus. A graph of it is found as figure 15 in my book. But cross-checking with ground-based sources, I found that they were showing a phony “late twentieth century warming” in its place. That same phony warming is also shown by the Stanford worthies who authored the article. They actually don’t know that there was another hiatus before the current one, nor do they know how it was suppressed. I discovered also that a source for that phony warming was HadCRUT3, and put a warning about it into the preface of the book when it came out. Nothing happened. Later I discovered that GISS and NCDC had been co-conspirators with HadCRUT in this cover-up. They had all used the same computer to adjust their output, and the computer left its footprints in exactly the same places in their publicly available temperature curves. They are still there, since the nineties when the deed was done. They constitute sharp upward spikes that look like noise. Two of them sit directly on top of the super El Niño peak. I have periodically mentioned this but have been entirely ignored. This allegedly scientific organization has no discipline and no ethical guidelines, and their so-called climate “scientists” ignore any complaints from the outside.
“With such a precise measurement system, what type of statistical analysis is really needed? Can’t we just, like, look at the observations and SEE what happened? Am I being naively ignorant here?”
I am naively ignorant, so I too can’t see why any fancy statistics are needed. If the temp is measured the same way each time, and the raw figures show a flat line, what more do we need?
So the hiatus, as measured by surface thermometers and corroborated by satellite measurements, is a figment of bad statistics by those doing the temp records. Well, indeed, the problem with the 150-year trend has been the very egregious use of subjective data manipulation, as the authors point out: jacking up recent temperatures, shoving down past temperatures and especially submerging the real record period of the 1930s/40s. This of course is to feed the ravens in Paris, but it is going to have unconsidered negative consequences. With the discipline of the satellite record pinning the present temperature levels, they will be forced to flatten the slope of the warming period of the 1990s, particularly if they want to eradicate even a slowdown in temperatures. This will require the 1930s to be lifted halfway back by these methods and the IPCC’s lower bound on climate sensitivity to become the ‘best estimate’.
To imagine what must be done to erase a significant slowdown during a period of rapid CO2 rise, think of a string laid on the temperature trace and fixed at the ‘present’ end. Now make your adjustments. It seems to me a terrible bargain for them to make for this last-ditch effort for Paris: remove the urgency, constrain climate sensitivity to an unscary level and push off thermageddon by a century or more. Yes, these are desperate times for climate troughers. I suspect a fair contingent of those with vestiges of scruples will be unable to swallow this latest serving, especially when they see that, with the new record, the jig is virtually up. There will be defections. Mark Steyn’s book “Disgrace” outed a fair number of scientists who may now have less to lose in voicing dissent, and it will encourage some younger, frightened scientists to step out of line.
This is classic end-game stuff – approaching the “sauve qui peut” stage of a war (“save himself who can”).
Re thermometer measurements: Part 3.1 of the paper concentrates on temperature trends 1998-2013. There was no statistically significant warming, but they did detect a warming trend.
As of the moment (to August 2015), the trend in both GISS and NOAA/NCDC shows statistically significant warming (0.124 ±0.109 °C/decade (2σ) in GISS; 0.118 ±0.103 °C/decade (2σ) in NOAA).
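For context, a figure quoted as “0.124 ±0.109 °C/decade (2σ)” is typically an ordinary least-squares slope of monthly anomalies against time, with a naive standard error that assumes independent residuals. The sketch below (a guess at the usual recipe, using a made-up series, not GISS or NOAA data) shows one common way such a number is produced; the autocorrelation the Stanford paper emphasises would widen an interval computed this way.

```python
# Sketch of how a figure like "0.12 +/- 0.10 deg C/decade (2-sigma)" is
# typically produced: OLS slope of monthly anomalies vs. time, with the
# naive standard error that assumes independent residuals.
import numpy as np

def trend_per_decade(anomalies, start_year):
    t = start_year + np.arange(len(anomalies)) / 12.0      # decimal years
    slope, intercept = np.polyfit(t, anomalies, 1)
    resid = anomalies - (slope * t + intercept)
    dof = len(anomalies) - 2
    se = np.sqrt(np.sum(resid**2) / dof / np.sum((t - t.mean())**2))
    return 10 * slope, 10 * 2 * se                          # per decade, 2-sigma

# Usage with a made-up monthly anomaly series (deg C) starting Jan 1998.
rng = np.random.default_rng(2)
fake = 0.01 * np.arange(200) / 12 + rng.normal(scale=0.1, size=200)
print(trend_per_decade(fake, 1998.0))
```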
I don’t think it’s a case of measured temperatures being a figment of bad statistics. To me, the critical graph is the top panel of figure 3. In essence, it seems to me that the “new improved” statistics makes the temperature in 1998 about 0.3degC lower than it really was, thus permitting a slope to reappear. I have no idea whether the statistics are correct or not, but you have to admit it’s a neat “trick”!
Surely this approach must cast some doubt on the whole of the record? Since the authors accuse previous papers of using “naive statistical methods” and state that “only 15 years of data” is not enough for “classical statistics”, I wonder whether they would like to look at the complete record from 1850 onwards instead of just 1998-2013. I also wonder why they chose just that period, since Lord Monckton now puts it back to 1996.
Finally, I note that the MSM has picked up on this very quickly. It fits perfectly with the agenda and we can expect it to be widely quoted. As long as it’s not disproven before Paris it will have done its job.
New and improved or not, 15 or so years is not enough data for any statistical test, and treating it as such is totally naive. The only useful analysis is to look at the actual, original measurements. The only valid conclusion is that the temperature data record is extremely noisy, so any statistical model will have error ranges nearly equal to the overall change.
“Finally, I note that the MSM has picked up on this very quickly. It fits perfectly with the agenda and we can expect it to be widely quoted. As long as it’s not disproven before Paris it will have done its job.”
It would seem odd that the MSM would say that new evidence shows that the pause didn’t exist, when they never admitted that there was a pause. It would make it apparent that they failed to give us the whole story.
[snip – fake email address, a valid email address is required to comment here -see result -mod]
Why did they start in 1970?
Because ‘escalator’ started there.
They actually started in 1950 to reduce the slope of the increase to 1997. That way it better matches any increases since.
eg from the paper…
First, a standard regression of global temperature on time is fitted to both the 1998–2013 hiatus period and the period 1950–1997, with errors assumed to be independently and identically distributed (see Fig. 2 top left panel).
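As a rough illustration of that quoted baseline step only (not the paper’s subsampling framework), one could fit separate least-squares trends, with errors assumed independent, to the pre-1998 period and the 1998–2013 window and compare the slopes. The anomaly series below is fabricated for illustration.

```python
# Sketch of the baseline step quoted above (not the subsampling framework):
# fit separate least-squares trends to 1950-1997 and to the 1998-2013
# "hiatus" window, assuming independent errors, and compare the slopes.
import numpy as np

def fit_trend(years, temps):
    slope, intercept = np.polyfit(years, temps, 1)
    return slope

# Hypothetical annual global-mean anomalies for 1950-2013 (deg C).
rng = np.random.default_rng(3)
years = np.arange(1950, 2014)
temps = 0.012 * (years - 1950) + rng.normal(scale=0.09, size=len(years))

early = years < 1998
late = ~early
print("1950-1997 trend (deg C/yr):", fit_trend(years[early], temps[early]))
print("1998-2013 trend (deg C/yr):", fit_trend(years[late], temps[late]))
```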
If that’s an escalator, the last couple of steps look very much like we’ve hit the landing at the top.
Wiser men know that only 15% of the globe has surface temperature data, which means that 85% of the data used to make that graph was made up.
[snip fake email address -mod]
Splice September 17, 2015 at 8:51 am
Nope, it means only that you know nothing about measuring temperature anomalies.
——–
Did you read the article? They admit that sample size of the surface temperature data is too small.
Perhaps you can explain to me how you can measure a temperature anomaly in the middle of the Pacific Ocean where there are no weather stations.
“which means that 85% of the data used to make that graph was made up.”
Isn’t that how we get the best data?
Of course they cherry-pick the start date to the cold 70s, not the warm 30s–40s. BTW, how are the computer models working?
> cold 70s not the warm 30-40s
Nope:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1920
The ‘escalator’ has existed since the 70s, and that’s why it’s the starting date.
No one claims it existed before.
> BTW how are the computer models working.
Quite well. As most models predict, we currently have a warming rate of about 0.16 °C/decade at the Earth’s surface, and about 0.12 °C/decade in the lower troposphere.
Splice, the graphic you posted does not show warming at that rate.
Nice graphic.
Here’s a better one that demonstrates how ridiculous quoting that blog is, unless its a “how not to science” post:
http://www.populartechnology.net/2012/03/truth-about-skeptical-science.html
Splice,
1) Explain how the 2003 peak was warmer than the strong El Niño of 1997/98, because it wasn’t in any world data sets even back in 2005. Nothing supports it apart from deliberate tampering with data, made up by infilling regions with no observations. They can choose whatever they like to warm it up with this method, and they have done.
2) You have cherry picked by far the worst global non-data set there is and has lost all credibility. Only purpose to use it in a science paper is to highlight how awful it is.
3) Any peak rises and then flattens at the top. The non-data graph actually shows that, so it’s no more than the flat part of a peak.
4) Statistics can show all sorts of rubbish, and the red line only makes it appear to continue rising because the levelled-out area at the top is warmer than the rising part of the peak numerous years before it.
5) It shows a pause at the top, no matter how you spin it. The warming rate has significantly decreased to an almost standstill.
Even with non-data tampered as deliberately as possible towards warming, it shows very little warming over the past 13 years.
http://www.woodfortrees.org/plot/gistemp/from:2002/plot/gistemp/from:2002/trend
“lies , damned lies and statistics” ?
Last line of the PR “…even though climate varies from year-to-year and decade-to-decade, global temperature has increased in the long term, and the recent period does not stand out as being abnormal.”
Aren’t they supposed to be proving the recent period’s warming is abnormal?
They made the very warm 1930s go away, so yes, it looks like even with the pause it is getting warmer and warmer, even though this is utterly false; thus the need to eliminate the 1930s in various ways. Tricky-dicky stuff.