Super-Heated Air from Climate Science on NOAA’s “Hottest” Year

Guest Post by Roman Mureika

It was bound to happen eventually. We could see it coming – a feeding frenzy moving from “really, it is still getting warmer” to “we told you so: this is proof positive that the science is settled and we will all boil or fry!” The latest numbers are in, and they show the “hottest” year since temperature records became available, depending on which data set you look at.

The cheerleader this time around seems to have been AP science correspondent Seth Borenstein. Various versions of his essay on the topic have permeated most of America’s newspapers, including my own hometown Canadian paper. In his articles, e.g. here and here, he throws enormous numbers at us, involving probabilities actually calculated (and checked!) by real statisticians, which purport to show that temperatures are still rising and spiraling out of control:

Nine of the 10 hottest years in NOAA global records have occurred since 2000. The odds of this happening at random are about 650 million to 1, according to University of South Carolina statistician John Grego. Two other statisticians confirmed his calculations.

I was duly impressed by this and other numbers in Seth’s article and asked myself what else of this extremely unlikely nature one might find in the NOAA data. With a little bit of searching I was able to locate an interesting tidbit that they clearly missed. If we wind the clock back to 1945 and look at the previous temperatures, we notice that they also rose somewhat rapidly and new “hot” records were created. In fact, the graphic below shows that the highest 8 temperatures of the 65-year series to that point in time all belonged to the years 1937 to 1944. Furthermore, in that span of eight years, five of these were each a new record! How unlikely is that?

Using the techniques of the AP statisticians, a simple calculation indicates that the chance of those particular eight years being the highest is 1 in 5,047,381,560 – almost eight times as unlikely as what occurred in the most recent years! Not to mention the five records…
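Under that same naive independence assumption, the figure is just the reciprocal of the number of ways to choose 8 years out of 65. A minimal sketch of the arithmetic (Python here, purely illustrative):

```python
from math import comb

# If every year is equally likely to rank anywhere (the AP assumption),
# each 8-year subset has the same chance of occupying the top 8 spots of
# a 65-year series. The chance that it is exactly the span 1937-1944:
ways = comb(65, 8)
print(f"1 in {ways:,}")  # 1 in 5,047,381,560
```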

By now, most readers will be mumbling “Nonsense, all these probabilities are meaningless and irrelevant to real-world temperature series” … and they would be absolutely correct! The above calculations were done under the assumption that the temperature in any one year is independent of the temperature in every other year. If that were genuinely the case in the real world, a plot of the NOAA series would look like the gray curve in the plot shown below, which was generated by randomly re-ordering the actual temperatures (in red) from the NOAA data.

For a variety of physical reasons, measured real-world global temperatures have a strong statistical persistence. They do not jump up and down erratically by large amounts, and because of this property they are strongly auto-correlated over a considerable period of time. Annual changes are relatively small, and when the series has reached a particular level, it may tend to stay around that level for a period of years. If the initial level is a record high, then subsequent levels will also be similarly high even if the cause for the initial warming is reduced or disappears. For that reason, making the assumption that yearly temperatures are “independent” leads to probability calculation results which can bear absolutely no relationship to reality. Mr. Borenstein (along with some of the climate scientists he quoted) was unable to understand this and touted them as having enormous importance. The statisticians would probably have indicated what assumptions they had made to him, but he would very likely not have recognized the impact of those assumptions.

How would I have approached the problem of modelling the behaviour of the temperature series? My starting point would be to look first at the behaviour of the changes from year to year, rather than the original temperatures themselves, to see what information they might provide.

Plot the annual difference series:

[Figure: time series of the annual temperature changes]

Make a histogram:

Calculate some statistics:

Mean = 0.006 = (Temp_2014 – Temp_1880)/134

Median = 0.015

SD = 0.098

# Positive = 71, # Negative = 59, # Equal to 0 = 4

Autocorrelations: Lag1 = -0.225, Lag2 = -0.196, Lag3 = -0.114, Lag4 = 0.217
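These summary statistics can be computed mechanically from the annual anomaly series. A sketch of the bookkeeping (the `temps` array below is a randomly generated stand-in, not the actual NOAA data, so the printed numbers will differ from those above):

```python
import numpy as np

rng = np.random.default_rng(0)
temps = np.cumsum(rng.normal(0.006, 0.098, size=135))  # stand-in for 1880-2014

d = np.diff(temps)  # the 134 annual changes
# The mean change telescopes: it always equals (last - first) / 134,
# which is why Mean = (Temp_2014 - Temp_1880) / 134 in the post.
print("Mean   =", round(float(d.mean()), 3))
print("Median =", round(float(np.median(d)), 3))
print("SD     =", round(float(d.std(ddof=1)), 3))
print("Pos/Neg/Zero:", int((d > 0).sum()), int((d < 0).sum()), int((d == 0).sum()))

def acf(x, k):
    """Lag-k autocorrelation of a 1-D series."""
    x = x - x.mean()
    return float(x[:-k] @ x[k:] / (x @ x))

print("Autocorrelations:", [round(acf(d, k), 3) for k in (1, 2, 3, 4)])
```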

The autocorrelations could use some further investigation; however, the plots indicate that it might not be unreasonable to assume that the annual changes are independent of each other and of the initial temperature. Now, one can examine the structure of the waiting time from one record year to the next. This can be done with a Monte Carlo procedure using the observed set of 134 changes as a “population” of values to estimate the probability distribution of that waiting time. In that procedure, we randomly sample the change population (with replacement) and continue until the cumulative total of the selected values is greater than zero for the first time. The number of values selected is the number of years it has taken to set a new record, and the total also tells us the amount by which the record would be broken. This is repeated a very large number of times (in this case, 10,000) to complete the estimation process.
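A minimal sketch of that Monte Carlo procedure. Since the 134 observed changes are not listed in the post, this sketch draws changes from a normal distribution with the observed mean (0.006) and SD (0.098); as the post notes further below, that substitution gives very similar results:

```python
import numpy as np

rng = np.random.default_rng(42)

def record_wait(rng, mean=0.006, sd=0.098, cap=10_000):
    """Sample annual changes until the cumulative total first exceeds zero.

    Returns (years waited for the new record, margin by which it is broken).
    """
    total, years = 0.0, 0
    while years < cap:
        total += rng.normal(mean, sd)
        years += 1
        if total > 0:
            return years, total
    return cap, total  # give up after `cap` years (rarely needed, given drift)

waits = np.array([record_wait(rng)[0] for _ in range(10_000)])
print("P(wait = 1) :", (waits == 1).mean())   # near 0.52
print("P(wait >= 5):", (waits >= 5).mean())   # compare with the 0.24 reported below
print("longest wait:", int(waits.max()))
```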

The results are interesting. The probability of a new record in the year immediately following a record temperature is simply the probability that the change between the two years is positive (71 / 134 = 0.530). Given an initial record year, a run of three or more consecutive record years would then occur about 28% of the time, and a run of four or more about 15% of the time.
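As a quick back-of-envelope check (not the full Monte Carlo), if each annual change is an independent draw with probability p = 71/134 of being positive, those run probabilities are just powers of p:

```python
p = 71 / 134        # chance that a record year is immediately followed by another record
print(f"{p:.3f}")   # 0.530
print(f"{p**2:.2f}")  # 0.28 -> run of three or more consecutive record years
print(f"{p**3:.2f}")  # 0.15 -> run of four or more
```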

The first ten values of the probability distribution of the waiting time for a return to a new record as estimated by the Monte Carlo procedure look like this:

Years   Probability
  1       0.520
  2       0.140
  3       0.064
  4       0.039
  5       0.027
  6       0.022
  7       0.016
  8       0.012
  9       0.012
 10       0.009

Note the rapid drop in the probabilities. After the occurrence of a global record, the next annual temperature is also reasonably likely to be a record; however, once the temperature series drops down, it can often take a very long time for it to return to the record level. The probability that it will take at least 5 years is 0.24, at least 18 years is 0.10, and for 45 years or more it is 0.05. The longest return time in the 10,000-trial MC procedure was 1661 years! This is due to the persistence characteristics inherent in the model, similar to those of a simple random walk or a Wiener process. However, unlike these stochastic processes, the temperature changes contain a positive “drift” of about 0.6 degrees per century because the mean change is not zero, thus guaranteeing a somewhat shorter return time to a new record. A duplication of the same MC analysis using changes taken from a normal distribution with mean equal to zero (i.e. no “warming drift”) and standard deviation equal to that of the observed changes produces results very similar to those above.

The following graph shows the probabilities that the wait for a new record will be a given number of years or longer.

This shows the distribution of the amount by which the old record would be exceeded:

For a more complete analysis of the situation, one would need to take into account the relationships within the change sequence as well as the possible correlation between the current temperature and the subsequent change to the next year (correlation = -0.116). The latter could be a partial result of the autocorrelation in the changes or an indication of negative feedbacks in the earth system itself.

Despite these caveats, it should be very clear that the probabilities calculated for the propaganda campaign to hype the latest record warming are pure nonsense with no relationship to reality. The behaviour of the global temperature series from NOAA in the 21st century is probabilistically unremarkable and consistent with the persistence characteristics of the temperature record as observed in the previous century. Treating assertions such as “the warmest x of y years were in the recent past” or “there were z records set” as evidence of the continuation of previous warming, when the temperatures had already reached their current level before the 2000s, is false and shows a lack of understanding of the character of the underlying situation. Any claims of an end to the “hiatus” based on a posited 0.04 C increase (which is smaller than the statistical uncertainty of the measurement process) are merely unscientifically motivated assertions with no substantive support. That these claims also come from some noted climate scientists indicates that their science takes a back seat to their activism and reduces their credibility on other matters as a result.

I might add that this time around I was pleased to see some climate scientists who were willing to publicly question the validity of the propaganda probabilities in social media such as Twitter. As well, the (sometimes reluctant) admissions that the 2014 records of other temperature agencies are in a “statistical tie” with their earlier records seem to be a positive step towards a more honest future discussion of the world of climate science.

The NOAA annual data and monthly data can be downloaded from the linked locations.

Note: AP has added a “clarification” of various issues in the Seth Borenstein article:

In a story Jan. 16, The Associated Press reported that the odds that nine of the 10 hottest years have occurred since 2000 are about 650 million to one. These calculations, as the story noted, treated as equal the possibility of any given year in the records being one of the hottest. The story should have included the fact that substantial warming in the years just prior to this century could make it more likely that the years since were warmer, because high temperatures tend to persist.

237 Comments
January 23, 2015 8:27 am

Thanks rowan!!

Reply to  Steven Mosher
January 23, 2015 8:28 am

Roman
. Dang auto correct

Reply to  Steven Mosher
January 23, 2015 10:37 am

My auto correct drives me nuts because I switch languages. It insistes u are Moshiri

Joe Crawford
Reply to  Steven Mosher
January 23, 2015 11:35 am

I’ve found it works better for me when it’s turned off.

Jeff Alberts
Reply to  Steven Mosher
January 23, 2015 4:44 pm

As I’ve pointed out many times, you have to be smarter than the device.

January 23, 2015 8:35 am

Borenstein is recycling the same turd that was published by German “scientists”/kiddies in 2008. My first or second post on WUWT at the time was the one word “Infantile”. I’d never heard of Tamino at the time but, when I saw that he was supposedly a statistician but supporting the paper, I knew within 5 seconds of being on his site (for the first and last time) that he was a fr*ud. This cannot be ascribed to scientific incompetence. It’s way too far below the threshold, and so has to be willful attempted deception ….
…. or could Borenstein really be that fkn stupid ??
Nice to see it disemboweled again. Thanks Roman.

timg56
Reply to  philincalifornia
January 23, 2015 1:17 pm

phil,
Seth Borenstein is really that stupid.

DavidMartin
January 23, 2015 8:36 am

Very nice to see a guest post from Roman here, I hope we can look forward to more in the future. One of my favourite regular contributors to Climate Audit. Excellent stuff.

Statistician
January 23, 2015 8:37 am

The media does not understand the structure and meaning of statistical tests.
The statistical results do NOT say there is a 650,000,000:1 likelihood the planet is warming – which is what most people read.
Rather the results say there is a 650,000,000:1 likelihood that temperatures are not willy-nilly random (a white noise process) – which we already knew anyway.
Consider for a minute that the stated test would have given the exact same probability to a sequence of record lows, or to a sequence of nearly identical temperatures.
Any systematic behavior (such as ENSO cycles or global warming) will generate these “impossible” statistical probabilities.

January 23, 2015 8:48 am

Previous, very excellent discussion on this topic. I learned a lot about the random walking problem from this discussion, it was very educational.
http://wattsupwiththat.com/2014/04/24/extreme-times/

Rex
January 23, 2015 10:06 am

‘Hottest’ indeed. At 14.6C I don’t think we’re in danger
of bursting into flames just yet.

Hugh
Reply to  Rex
January 23, 2015 11:02 am

At Arctic amplification region, hotter, like no more -17°C but -15 °C. Hot indeed. Kills people. Drowns Santa and deers. /s
But.
it should be very clear that the probabilities calculated for the propaganda campaign to hype the latest record warming are pure nonsense with no relationship to reality
Kinda they are supposed to be nonsense, they say the probability is one to zillion, yet it happened, so their naive model is wrong, which in their mind proves another model true.
This is a mind boggling jump – a jump to ‘proving’ CAGW based on average temperatures rising 0.3°C couple of decades, which is not unprecedented or dangerous.

January 23, 2015 10:15 am

When you catch a liar out, they invariably go bigger.
Pointman

Matthew R Marler
January 23, 2015 10:25 am

I might add that this time around I was pleased to see some climate scientists who were willing to publicly question the validity of the propaganda probabilities in social media such as Twitter. As well, the (sometimes reluctant) admissions that the 2014 records of other temperature agencies are in a “statistical tie” with their earlier records seems to be a positive step towards a more honest future discussion of the world of climate science.
I agree. It is a step in the right direction.
The odds of this happening at random are about 650 million to 1, according to University of South Carolina statistician John Grego. Two other statisticians confirmed his calculations.
That was a foolish claim. I am glad you wrote this essay highlighting the problems.

Howarth
January 23, 2015 10:29 am

How is the zero point determined for temperature deviations in the NOAA graphs? Does it not change with each new year of temperature recordings? It seems to me NOAA could have set a zero point arbitrarily low which would make each years data set to be seemingly high.

January 23, 2015 10:31 am

This is a very, very important and well written defenestration of the propaganda-lies from Borenstein and NOAA/GISS claims.
This should be a top, sticky post for a few days to week, IMO. Other bloggers should repost or link to this Roman Mureika analysis as well.
Joel

Shub Niggurath
January 23, 2015 10:34 am

RomanM,
The statisticians contacted by Borenstein were specifically asked by him to calculate probabilities as though the years were independent. See the WSJ article by Holman Jenkins – he contacted one of them.

RomanM
Reply to  Shub Niggurath
January 23, 2015 1:07 pm

Shub, I knew that. As I wrote in the post:

For that reason, making the assumption that yearly temperatures are “independent” leads to probability calculation results which can bear absolutely no relationship to reality. Mr. Borenstein (along with some of the climate scientists he quoted) was unable to understand this and touted them as having enormous importance. The statisticians would probably have indicated what assumptions they had made to him, but he would very likely not have recognized the impact of those assumptions.

Had they been aware of the statistical properties of the annual temperature series, they should have indicated to Mr.Borenstein that such a calculation is really meaningless in this context and perhaps have suggested a different approach such as the one that I took. This probably would not have been satisfactory to Seth since the obvious intent was to produce a scenario which would convince people who didn’t know any better that the global temperatures were behaving in an extremely erratic fashion.
I have done these types of calculations for newspapers and radio over the years and I always made sure that I did the calculations correctly and that what I did was relevant and appropriate for the problem I was addressing.

Shub Niggurath
Reply to  Shub Niggurath
January 24, 2015 4:25 am

Thanks RomanM. I did miss that part in your post. With Jenkins contacting the statisticians it becomes clear that it was Borenstein who pushed for this meaningless calculation. When I read the initial claims, I was flabbergasted that any statistician would independently make such nonsensical assertions.
The stack of lies is growing impressively towards Paris.

Bill Parsons
January 23, 2015 10:38 am

Holman Jenkins of WSJ wrote to the statistician “used” by Seth Borenstein:

Mr. Grego tells me AP specifically instructed him to assume “all years had the same probability of being ‘selected’ as one of the 10 hottest years on record.” This is akin to assuming that, because you weighed 195 pounds at some point in your life, there should be an equal chance of you weighing 195 pounds at any point in your life, even when you were a baby.

There is, as even I understand it, inertia governing the fluctuations in the Earth’s climate, which makes it highly likely that a 13 degree centigrade year will be followed by average temperatures in the same neighborhood the following year. Whether Borenstein was selectively overlooking this concept, or just obfuscating it, he might want to simply climb on his bathroom scale and meditate on the following: if his resolution to lose 30 pounds is governed by a completely random result, he has at least a 50% chance of ending 2015 as the “world’s biggest loser”.

Bill Parsons
Reply to  Bill Parsons
January 23, 2015 10:41 am

Ah, as noted above by Shub Niggurath…

basicstats
Reply to  Bill Parsons
January 23, 2015 11:17 am

Thank you for this comment (and Shub Niggurath above). It was hard to believe any university statistics professor would not know the difference between a temperature series and what amounts to ‘white noise’. But agenda-driven journalists can be devious in their questions. As Nancy Oreskes (New York Times) put it, “playing dumb with climate..”.

basicstats
Reply to  basicstats
January 23, 2015 11:20 am

Correction: Naomi Oreskes, of course

January 23, 2015 11:41 am

Thanks Roman, good statistical work.

January 23, 2015 11:43 am

Samuel C Cogar January 23, 2015 at 7:25 am
Isn’t LESS COLD what we would expect as a consequence of increased CO2? I don’t know the physics other than what I have read here but I have downloaded a pile of temperature records. What I see is mostly flat trends in the maximum and mean maximums (and often a down trend, [temperature moderation?]). But the minimum and mean minimums frequently show an upward trend (less cold) resulting in an increase in the “average” temperature.
I can’t help but think that for most life forms that less cold makes life a little easier, especially in the winter. The trend in the minimum does not appear to be seasonal. Now, having said that, it looks like we might have a record high in Calgary, Alberta this weekend … But given the number of years of records, and the frequency of “Chinooks” in this part of the world, it may or may not reflect long term climate.

Reply to  Wayne Delbeke
January 24, 2015 5:10 am

The issue is confounded by urban heat sources near the sites, also adding to higher minimums.

Samuel C Cogar
Reply to  Wayne Delbeke
January 24, 2015 8:26 am

Wayne Delbeke: January 23, 2015 at 11:43 am
Only the proponents of CAGW would expect “less cold” as a consequence of increased CO2. But there is no scientific evidence whatsoever that even remotely confirms their expectations.

Joe Crawford
January 23, 2015 11:51 am

Thanks RomanM, I always enjoyed watching you and SteveMac take down the team statistics. I guess that’s getting harder to do now since they’ve had their hands slapped so many times. Now if y’all can just straighten out the media idiots like Seth.

RWturner
January 23, 2015 12:05 pm

Every time I read a Borenstein article I ask myself, “is this guy really that dumb or does he purposely mislead?” I will give him the benefit of doubt and assume he is misleading for the cause.

Reply to  RWturner
January 23, 2015 12:32 pm

Noble Cause Corruption is rampant in the media and Climate Science today. It is as Roman mentioned above, “That these claims also come from some noted climate scientists indicates that their science takes a back seat to their activism and reduces their credibility on other matters as a result.”
I would say it doesn’t reduce their credibility on other matters. Their activism under the guise of climate science completely destroys any credibility they think they have. Since they likely associate and communicate with others in their beliefs on climate change and activism, they do not see how others see them as liars.

January 23, 2015 12:18 pm

Thanks! I love statistics and am further underwhelmed by contemporary university statisticians – glad I retired. Can’t imagine discussions at faculty meetings.

Sun Spot
January 23, 2015 12:35 pm

Canada’s 2014 was the coldest since 1996; now how does that correlate with the claim that the northern hemisphere is the most affected by AGW? Canadians reading about the warmest year ever are simply astounded at the porkies coming out of the White House.

john cooknell
January 23, 2015 1:21 pm

Despite all the comments above, it gives me great comfort that Al Gore and his fellow IPCC Climate Science Nobel prize winners have arrived at exactly this point in human civilisations history, so that they can save us from our own inevitable self destruction. It does appear to be an improbable piece of luck, so perhaps probability is the key to all of this.
I did once think that perhaps the changes humans have already made to the planet, during earlier parts of human history may have sealed our doom, its just we didn’t have anyone to warn us, or tell us what they were or are.

Reply to  john cooknell
January 23, 2015 1:58 pm

… some people are beyond help.

R. de Haan
January 23, 2015 1:48 pm

Freaking Gore at the Davos World Economic Forum and Ban Ki-moon of the UN receive unexpected support for a global CO2 tax from businesses like DSM’s CEO Siebersma, an incredible hack and hypocrite, in an attempt to keep global temps under 2 degrees Celsius of warming.
The time has come to boycott companies like DSM but also Google, Apple and other big money grabbing, freedom destroying, zealous and moronic businesses that have lost track of their core business of serving their customers with good products and services and are instead destroying the very basis of their existence.
Screw those eco fascist bastards.
Never do any business with them for the rest of your life and let them know why you boycott them.
Next we have a Dutch minister inviting people to do a check using their postal code to see how high the water level will be when the dykes break due to the rise of sea levels.
The primary responsibility of any government is to take care of the security of their populations.
Instead they scare the hell out of people with their climate lies and doom scenarios.
The time has come to mobilize some serious legal guns to sue officials like Hennes for failing their primary obligations.
What a bunch of morons.

Bill Parsons
January 23, 2015 1:52 pm

The tendency to look at (or perhaps just portray) anything from a perspective that’s too “up-close” leads to these kinds of errors in thinking. “Tomorrow” means something on a gut level. “A decade hence” means little. A generation hence?? (Only to the science fiction writers.) I think everybody is like that. It has been the success of the anthropogenic climateers that they have been able to get a certain contingent thinking incredibly short-term… end of the world Thursday at 8 pm.
Roman’s essay helps blow that misconception apart.
The present dismal failure being suffered by the movement (in spite of presidents, pundits and politician/scientists) is that the visual information (much of it graph-based) continues to reach the public.

January 23, 2015 1:54 pm

It’s obvious that whoever contracted University of South Carolina statistician John Grego to do this work failed to inform him that global temperatures for any one year can be heavily influenced by the conditions of the previous year.
The global climate is a bit like a roller coaster ride. If the track is marked off evenly with year markers instead of meters, then as the car travels and passes a marker it may be on a high section of the track, or a low section, or halfway up or halfway down. Each mark is totally dependent on the former level it transitioned from; the track cannot be divided up into sections that can be ridden independently. It’s all or nothing, and not reliant on the passengers’ funding.
The only difference of course is that the climate is not predetermined. The models have proved this beyond doubt.

January 23, 2015 5:11 pm

Reblogged this on Centinel2012 and commented:
As NOAA plays games with the numbers sometimes they get caught at least by some of us!

Merrick
January 23, 2015 5:40 pm

Yes, and in 1918 you could *truly* (unlike in the case of our friend John Grego) say that 10 of the 10 coldest years on record had occurred since 1900 (based on the NOAA data shown in the first figure above). So I guess the odds of that must be something like 1 in 650 billion?
How did we survive?

Reply to  Merrick
January 23, 2015 6:11 pm

Maybe the Earth’s natural forces acted as they should when humans couldn’t take in all the premises and variables needed?
When will they ever learn?
Btw, one of the so-called scholars must have grown old – he forgot that he sent me a copy of some raw data long ago, with notes of every correction made afterward… no statistical significance can be found that explains the corrections, neither between nor within, nor any reasonable explanation…

highflight56433
January 23, 2015 6:20 pm

…wondering why it is still snowing….just wondering…yep…. Forecast: -41 F in Fairbanks 01-25-2015 …scorching….

Mac the Knife
Reply to  highflight56433
January 24, 2015 3:13 pm

Whew! Slap on the SPF 45 and order up some ‘boat drinks’!
http://youtu.be/KSfONJYIyGM