Guest Post by Roman Mureika
It was bound to happen eventually. We could see it coming – a feeding frenzy from “really, it is still getting warmer” to “we told you so: this is proof positive that the science is settled and we will all boil or fry!” The latest numbers are in, and they show the “hottest” year since temperature records became available, depending on which data set you look at.
The cheerleader this time around seems to have been AP science correspondent Seth Borenstein. Various versions of his essay on the topic have permeated most of America’s newspapers, including my own hometown Canadian paper. In his articles, e.g. here and here, he throws enormous numbers at us involving probabilities actually calculated (and checked!) by real statisticians, which purport to show that the temperatures are still rising and spiraling out of control:
Nine of the 10 hottest years in NOAA global records have occurred since 2000. The odds of this happening at random are about 650 million to 1, according to University of South Carolina statistician John Grego. Two other statisticians confirmed his calculations.
I was duly impressed by this and other numbers in Seth’s article and asked myself what else of this extremely unlikely nature one might find in the NOAA data. With a little bit of searching I was able to locate an interesting tidbit that they clearly missed. If we wind the clock back to 1945 and look back at the previous temperatures, we notice that they also rose somewhat rapidly and new “hot” records were created. In fact, the graphic below shows that the highest 8 temperatures of the 65-year series to that point in time all belonged to the years 1937 to 1944. Furthermore, in that span of eight years, five were each a new record! How unlikely is that?

Using the techniques of the AP statisticians, a simple calculation indicates that the chance of all eight years being the highest is 1 in 5,047,381,560 – nearly 8 times as unlikely as what occurred in the most recent years! Not to mention the five records…
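For the curious, the arithmetic behind that figure is just combinatorics: under the independence assumption, every 8-year subset of the 65 years is equally likely to hold the top 8 spots. A quick check in Python (my reconstruction of the arithmetic, not the statisticians’ actual code):

```python
from math import comb

# Under the independence assumption, the chance that one particular
# run of 8 years out of 65 holds all of the top 8 spots is 1 / C(65, 8).
ways = comb(65, 8)
print(ways)                 # 5047381560
print(ways / 650_000_000)   # ~7.8, i.e. nearly 8 times the AP's 650-million-to-1 odds
```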
By now, most readers will be mumbling “Nonsense, all these probabilities are meaningless and irrelevant to real-world temperature series” … and they would be absolutely correct! The above calculations were done under the assumption that the temperature from any one year is independent of the temperature from any other year. If that were genuinely the case in the real world, a plot of the NOAA series would look like the gray curve in the plot shown below, which was produced by randomly re-ordering the actual temperatures (in red) from the NOAA data.

For a variety of physical reasons, measured real-world global temperatures have a strong statistical persistence. They do not jump up and down erratically by large amounts, and because of this property they are strongly autocorrelated over a considerable period of time. Annual changes are relatively small, and when the series has reached a particular level, it may tend to stay around that level for a period of years. If the initial level is a record high, then subsequent levels will also be similarly high even if the cause for the initial warming is reduced or disappears. For that reason, assuming that yearly temperatures are “independent” leads to probability calculations which can bear absolutely no relationship to reality. Mr. Borenstein (along with some of the climate scientists he quoted) failed to understand this and touted the numbers as having enormous importance. The statisticians would probably have told him what assumptions they had made, but he very likely did not recognize the impact of those assumptions.
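To illustrate the point, here is a small simulation (my own sketch, not part of the original analysis): in an independent series the top-8 years scatter across the whole record, while in a persistent series — modelled here as AR(1) with φ = 0.9, a value chosen purely for illustration — they bunch together, which is exactly why clustered records are unremarkable under persistence.

```python
import numpy as np

def top8_span(series):
    """Span (in years) covered by the positions of the 8 largest values."""
    idx = np.argsort(series)[-8:]
    return idx.max() - idx.min()

rng = np.random.default_rng(0)
n_years, n_trials, phi = 65, 2000, 0.9

iid_spans, ar1_spans = [], []
for _ in range(n_trials):
    iid = rng.standard_normal(n_years)
    ar1 = np.empty(n_years)
    ar1[0] = rng.standard_normal()
    for t in range(1, n_years):            # x_t = phi * x_{t-1} + noise
        ar1[t] = phi * ar1[t - 1] + rng.standard_normal()
    iid_spans.append(top8_span(iid))
    ar1_spans.append(top8_span(ar1))

print(np.mean(iid_spans))   # top-8 years scattered widely across the record
print(np.mean(ar1_spans))   # top-8 years bunched tightly together
```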
How would I have considered the problem of modelling the behaviour of the temperature series? My starting point would be to first look at the behaviour of the changes from year to year rather than the original temperatures themselves to see what information that might provide.
Plot the annual difference series:
Make a histogram:

Calculate some statistics:
Mean = 0.006 = (Temp_2014 – Temp_1880)/134
Median = 0.015
SD = 0.098
# Positive = 71, # Negative = 59, # Equal to 0 = 4
Autocorrelations: Lag1 = -0.225, Lag2 = -0.196, Lag3 = -0.114, Lag4 = 0.217
The autocorrelations could use some further investigation; however, the plots indicate that it might not be unreasonable to assume that the annual changes are independent of each other and of the initial temperature. Now, one can examine the structure of the waiting time from one record year to the next. This can be done with a Monte Carlo procedure using the observed set of 134 changes as a “population” of values to estimate the probability distribution of that waiting time. In that procedure, we randomly sample the change population (with replacement) and continue until the cumulative total of the selected values is greater than zero for the first time. The number of values selected is the number of years it has taken to set a new record, and the total also tells us the amount by which the record would be broken. This is repeated a very large number of times (in this case, 10000) to complete the estimation process.
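A minimal sketch of that procedure in Python (using a synthetic stand-in for the 134 observed changes, since the actual NOAA difference series is not listed here; the stand-in’s mean and SD match the statistics quoted above):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the 134 observed annual changes
# (mean ~0.006, SD ~0.098, per the statistics above).
changes = rng.normal(0.006, 0.098, 134)

def waiting_time(changes, rng):
    """Resample the changes (with replacement) until the cumulative
    total first exceeds zero: the number of draws is the waiting time
    in years, and the total is the margin over the old record."""
    total, years = 0.0, 0
    while total <= 0:
        total += rng.choice(changes)
        years += 1
    return years, total

waits = np.array([waiting_time(changes, rng)[0] for _ in range(10000)])

# The one-year probability is just the fraction of positive changes;
# the probabilities then drop off rapidly.
for k in range(1, 6):
    print(k, round(float(np.mean(waits == k)), 3))
```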
The results are interesting. The probability of a new record in the year following a record temperature will obviously be the probability that the change between the two years is positive (71 / 134 = 0.530). A run of three or more consecutive record years would then occur about 28% of the time and a run of four or more about 15% of the time given an initial record year.
The first ten values of the probability distribution of the waiting time for a return to a new record as estimated by the Monte Carlo procedure look like this:
Years Probability
1 …….. 0.520
2 …….. 0.140
3 …….. 0.064
4 …….. 0.039
5 …….. 0.027
6 …….. 0.022
7 …….. 0.016
8 …….. 0.012
9 …….. 0.012
10…….. 0.009
Note the rapid drop in the probabilities. After the occurrence of a global record, the next annual temperature is also reasonably likely to be a record; however, when the temperature series drops down, it can often take a very long time for it to return to the record level. The probability that it will take at least 5 years is 0.24, at least 18 years is 0.10, and for 45 years or more it is 0.05. The longest return time in the 10000-trial MC procedure was 1661 years! This is due to the persistence characteristics inherent in the model, similar to those of a simple random walk or a Wiener process. However, unlike those stochastic processes, the temperature changes contain a positive “drift” of about 0.6 degrees per century, because the mean change is not zero, thus guaranteeing a somewhat shorter return time to a new record. A duplication of the same MC analysis, using changes taken from a normal distribution with mean equal to zero (i.e. no “warming drift”) and standard deviation equal to that of the observed changes, produces results very similar to those above.
The following graph shows the probabilities that the wait for a new record will be a given number of years or longer.

This shows the distribution of the amount by which the old record would be exceeded:

For a more complete analysis of the situation, one would need to take into account the relationships within the change sequence as well as the possible correlation between the current temperature and the subsequent change to the next year (correlation = -0.116). The latter could be a partial result of the autocorrelation in the changes or an indication of negative feedbacks in the earth system itself.
Despite these caveats, it should be very clear that the probabilities calculated for the propaganda campaign to hype the latest record warming are pure nonsense with no relationship to reality. The behaviour of the global temperature series from NOAA in the 21st century is probabilistically unremarkable and consistent with the persistence characteristics of the temperature record as observed in the previous century. Assertions that “the warmest x of y years were in the recent past” or that “there were z records set” provide no evidence of a continuation of previous warming when the temperatures had already reached their pre-2000s starting level; such claims are false and show a lack of understanding of the character of the underlying situation. Any claims of an end to the “hiatus” based on a posited 0.04 C increase (which is smaller than the statistical uncertainty of the measurement process) are merely unscientifically motivated assertions with no substantive support. That these claims also come from some noted climate scientists indicates that their science takes a back seat to their activism and reduces their credibility on other matters as a result.
I might add that this time around I was pleased to see some climate scientists who were willing to publicly question the validity of the propaganda probabilities in social media such as Twitter. As well, the (sometimes reluctant) admissions that the 2014 records of other temperature agencies are in a “statistical tie” with their earlier records seems to be a positive step towards a more honest future discussion of the world of climate science.
The NOAA annual data and monthly data can be downloaded from the linked locations.
Note: AP has added a “clarification” of various issues in the Seth Borenstein article:
In a story Jan. 16, The Associated Press reported that the odds that nine of the 10 hottest years have occurred since 2000 are about 650 million to one. These calculations, as the story noted, treated as equal the possibility of any given year in the records being one of the hottest. The story should have included the fact that substantial warming in the years just prior to this century could make it more likely that the years since were warmer, because high temperatures tend to persist.
![change-time-series[1]](https://wattsupwiththat.files.wordpress.com/2015/01/change-time-series1.jpeg?resize=666%2C665&quality=83)
I would like to see the same analysis of the Raw data.
And me !!
What raw data? For one site?
For all sites and then… amalgamate them somehow?
It isn’t clear that there is any raw data for global temperature. There are only the datasets (HadCRUT, GISS, etc.) that we have adopted as our measure of global warming.
I acquired the raw station data used in GISS up to 2013 and am building a new global land station dataset.
It will be some time in the coming, I just barely got the coarse equal-area grid system worked out for the thousands of stations. I am now working out the algorithms to determine the average recorded temperature for each cell while not excluding the influence from nearby stations in other cells and taking altitude and other terrain concerns into account.
“It isn’t clear that there is any raw data for global Temperature. “
I keep an index (TempLS) based on ERSST and unadjusted GHCN. Adjustment doesn’t make much difference. I’ve plotted the progress of the record hot year since 1880 here for various indices, including TempLS. Again, little difference. Here is the plot for the NOAA index:
http://www.moyhu.org.s3.amazonaws.com/misc/timeseries/rex3.png
“Nine of the 10 hottest years in NOAA global records have occurred since 2000. The odds of this happening at random are about 650 million to 1…”
And nine out of ten of the highest places on planet earth can be found on Mt Everest.
And nine out of ten of the lowest places on planet earth can be found in the Challenger Deep, off the Marianas.
The missing spots haven’t been identified yet.
Highs tend to congregate near peaks, and lows tend to congregate near troughs.
We dunno whether we are at a peak; Lord Monckton hasn’t told us yet.
ps that NOAA plot is in °C relative to the 1961-90 baseline that they publish, so it is shifted from the 20Cen base used in their statements.
Wrong.
One of the key datasets is GHCN-D:
daily data,
unadjusted.
It’s a superset of the ones you cite.
Didn’t Jones/HadCRUT say that all the raw data was gone and the world had to use their adjusted data instead?
the climategate emails provide the answer:
from: Phil Jones
subject: Re: For your eyes only
to: “Michael E. Mann”
Mike,
… The two MMs have been after the CRU station data for years. If they ever hear there
is a Freedom of Information Act now in the UK, I think I’ll delete the file rather than
send to anyone. …
Yes, all conversations about past temperatures are screwed up due to NOAA/NASA/IPCC tampering with the data. If one still takes this wretched mess and makes it into graphs, it STILL doesn’t show hockey stick global warming.
But the fact is, it is cooling, not warming. Ice is growing, it is colder.
No,
over 95% of CRU data came from GHCN.
Willis’ first FOIA sought to determine which stations they used.
all that data is still there
the remaining 5% of CRU data came from NWS.
they claimed confidentiality on this data, when we sought it.
We FOIA for the agreements.
with some of the NWS data Jones had raw files which were then adjusted.
The “lost” raw data was never lost. The originators (NWS) still produce it.
Now CRU does not take raw data from NWS they only take adjusted data.
Here is a hint.
CRU uses about 5K stations.
You can drop those 5K stations from your database and calculate the answer without this data.
The answer doesn’t change.
You can take GHCN-D, which is all daily, unadjusted data, and randomly pick 1000 stations.
The answer doesn’t change.
You can use those 1000 stations to predict the other 20K stations in GHCN-D. You’ll have a good prediction.
The LIA was real
Dunno what Roman’s association with statistical analysis is; doesn’t really matter. But his study here is quite intriguing.
The idea of “re-ordering” the data, timewise, is very interesting. One thing that you learn about the statistical analysis of ANY DATA SET is that the result you get is quite exact. You are simply applying algorithms that are specifically described in textbooks, and there is NO uncertainty in the calculated results.
The next thing you learn is that the result of such computations, tells you exactly nothing about ANY number that is not in that data set. If there is to be a NEXT NUMBER to be learned at some future time, your analysis cannot tell you whether that NEXT number, will be Higher, or Lower, or Same as the latest number in your set, or ANY other number in your set.
Consider the first draft lottery back in the 1960-70 Viet Nam era.
A SINGLE drawing relating to the set of integers from 1 to 366, representing the calendar days of a leap year, was made. One “ball” at a time drawn until all had been drawn.
The odds against that drawing resulting in the calendar sequence Jan 1, Jan 2, Jan 3, … Dec 29, Dec 30, Dec 31 are 366!:1.
That is 366 x 365 x 364 x ….. x 3 x 2 x 1. That’s a huge number; close enough to infinity for any practical purpose.
If you feel like it, you can calculate it from the approximation formula :
n! ≈ n^n · e^(−n) · sqrt(2·pi·n), where n in this case is 366
My calculator gives me about 9.3 x 10^780.
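Python’s exact integer arithmetic makes 366! easy to check against Stirling’s formula:

```python
import math

exact = math.factorial(366)            # exact arbitrary-precision value
digits = len(str(exact))
print(digits)                          # 781 digits, i.e. roughly 9.3 x 10^780

# Stirling: n! ~ n^n * e^-n * sqrt(2*pi*n), computed in log10
# to avoid overflow.
n = 366
log10_stirling = (n * math.log10(n) - n * math.log10(math.e)
                  + 0.5 * math.log10(2 * math.pi * n))
print(log10_stirling)                  # ~780.97, agreeing with the exact digit count
```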
Izzat a big enough number for you ??
Well fortunately, that was NOT the result that happened in that first draft lottery.
A completely different sequence of numbers came out of the drawing.
…BUT… the actual result of that first draft lottery drawing was itself a totally remarkable and unprecedented result.
Absolutely nobody could have predicted what that result would be; it was quite unprecedented.
The odds against that actual result having occurred are 366!:1, just the same as for the strict calendar sequence.
So don’t kid yourselves.
No matter how astronomical the odds against any event happening (assuming it is physically possible), that “improbable” event can happen tomorrow.
Statistics tells you NOTHING about any single event.
And wouldn’t you know it. Back then in the VN days, those mathematical statistical experts declared, on the basis of 1 in 9.3 x 10^780 possibilities, that the result was non-random, and was biased.
Talk about poppycock garbage.
Bozos like Seth Borenstein should be tarred and feathered for propagating such patent nonsense. Well, maybe pelting with rotting tomatoes would do.
Thanks Roman, for exposing this foolishness, by a total ignoramus.
G
#287 and that was the last time I heard from the draft board
The problem with all of these “Hottest Year” studies is that they fail to explain why the change is taking place. If CO2 were the culprit, the changes, according to their models, would be much larger. The observed changes, even if you assume that they are correct, are too small, even if you attribute the entire increase to CO2. According to the satellite records, there has been no significant warming since October 1996 – over 200 months. The CAGW alarmists have moved the goal posts from catastrophic temperature rise to any rise at all, however small. If you look at their forecasts versus actual data, they have been wrong for 200 months in a row. What is the chance of tossing a coin 200 times and getting 200 heads?
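For a fair, independent coin that chance is easy to pin down (whether 200 months of forecast misses really behave like independent coin flips is, of course, the commenter’s analogy, not a given):

```python
# Probability of 200 heads in 200 independent fair tosses:
p = 0.5 ** 200
print(p)   # about 6.2e-61
```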
I lost a fun friend once by calculating that he would be better off to flip a coin than to choose courses of action deliberately – his life was indeed a woeful shambles, but he was a good guitarist to play bluegrass banjo with.
Reminds me of the Seinfeld episode in which George decides to do the exact opposite of what his judgment tells him he should do. The hapless George says by way of explanation that if every decision he’s made in life has been wrong so far, then it stands to reason that his instincts are not to be trusted. The joke of course is that, as cockamamie as it sounds, his life quickly turns around as he gets his dream job with the New York Yankees and starts picking up beautiful women.
The big problem is that most of the public, including journalists, are completely ignorant of probability and statistics; they can barely manage arithmetic. I had an acquaintance who, in a discussion about probability, claimed that if you tossed a coin 5 times and it came up heads every time, you should bet on tails next time, as the “law of averages” meant it had to come up tails. I told him Vegas gets rich off of people like him.
If anything, you should consider betting on heads because one could question the fairness of the coin.
“If anything, you should consider betting on heads because one could question the fairness of the coin.”
That’s what Taleb said in The Black Swan.
Severian
I was interviewed numerous times by journalists about our budget. Their ignorance about financial and accounting matters explained why they were in the profession they were in.
“The problem with all of these “Hottest Year” studies is that they fail to explain why the change is taking place.”
I see it as a much larger and more scientific problem. Even using the GISS or NCDC data analysis, there is a very obvious problem with the type of warming/change we see in the anomalies of the monthly mean temperatures each year. Especially with the difference in the surface/lower troposphere data compared to the ocean data. And the seasonal changes.
Global warming theory, which is the basis of claiming current warming fits with an increase in the greenhouse effect, predicts the “most warming” or largest change will be in NH winters, greater over land than oceans, and cold nights will warm faster than warm days.
Even the GISS data shows this is not happening. In fact, quite the reverse is being observed. It’s obvious in all the data analysis: even where the different “temperature analyses” don’t agree on the amounts of change, they all show the same “kind” of change occurring. And it does not fit at all with the consensus enhanced greenhouse effect theory.
Indeed, it is cooling off which is why land is cooler than oceans which cool slower and heat slower. So if the oceans are ‘warm’ compared to continents, then this indicates cooling is happening.
“The problem with all of these “Hottest Year” studies is that they fail to explain why the change is taking place. …”
The problem is that the preferred theories of how climate works all have assumed that it must be predictable. Even most dissenters’ preferred theories presume “cycles” are the culprits. That leads to some real problems, since Edward Lorenz’s work in the 1960s shows that weather is a mathematically chaotic phenomenon that follows a strange attractor. Attempting to define “climate” in some way that subsumes weather as a dependent variable caused by “climate” means that the strange attractor has to be a property of climate rather than weather, which puts a leaden ball through the middle of the “it’s just weather” litany. Self-similarity is not going to vanish merely because of a scale change.
Would you have any thoughts on what strange attractor(s) might be causing an apparent periodicity in glaciation?
You are absolutely right, Walt, they don’t tell you because they are incompetent. Let’s take the “nine out of ten warmest years” claim. Hansen was first to make that exact claim in 2010, when the first decade of this century ended. The only hot year that was not part of the twenty-first century then was the super El Nino year 1998. (It still is, by the way.) For Hansen that was proof that carbon dioxide did it, period. Utter nonsense of course, but that is impressed into his skull. He and others making similar claims were and are simply ignorant of recent global temperature history.

The only way to learn about it is by using satellite data, because GISS, NCDC, and Hadcrut all lie about it. Their specific claim is that the eighties and the nineties were a warming period when in fact there was no warming. Global mean temperature stayed constant from 1979 till the beginning of the super El Nino in 1997. I determined that when I wrote my book, and you will find it as figure 15 in it. That is an 18-year stretch, equivalent to the hiatus we are living through now. ENSO was active and created a row of five El Ninos there. Their amplitude was about half a degree Celsius. The super El Nino that followed was higher – a full Celsius degree – and is obviously not part of ENSO. On each side of that super El Nino sits a La Nina valley. 1999, the La Nina year that followed it, gave rise to a step warming that raised global temperature by a third of a degree Celsius in only three years and then stopped. That is measured from the mean of the eighties and nineties. It peaked in 2002. This makes that step warming the only warming we have experienced since 1979. What is even more interesting is that global temperature stayed at that level and thereby created the hiatus we are experiencing now. Again, we know that from satellite data.
The above-mentioned three ground-based data sources have continued to keep raising temperatures to the point where the 2010 El Nino is now higher than the super El Nino, which is impossible. Because of this step warming, every year of the 21st century stands above every year of the 20th or 19th century. Those temperature geniuses know that and have rigged their comparisons to make use of this fact. You might as well use the ice age as a temperature baseline and get the same rankings. Their phony warming in the eighties and nineties makes it impossible to tell that a step warming even took place from 1999-2002. The correct way to compare the twenty-first century temperatures is to create a baseline temperature going back no further than the year 2002. This will show what minuscule temperature changes have actually happened in recent years. You will note that there was a La Nina in 2009 and an El Nino in 2010, but their effects on temperature neutralize one another, as you would expect the ENSO oscillation to behave. And what about 2014? It looks to me like a borderline case of an El Nino that did not quite make it. It could be like the first seven years of the 21st century. If so, I suspect that it may be followed by a La Nina-like temperature drop.
It’s really hard to believe that there’s a group of folks who are seemingly rejoicing when they see such results. It’s even harder to believe that many of these folks are scientists.
Really, the truth is that they are no longer scientists. In reality the name “scientist” has very little to do with degrees or current occupation, and much more to do with how one approaches a problem. If you do not use the proper scientific method you are not a scientist. You give them too much credit and stature.
That would be a sad story: too many powerful entities with liberal doses of taxpayer money to lure the uninitiated, Gruberesque so-called scientists? Just consider what a quorum of Gruberesque true believers means. By default, everyone else is DD&S?
“The probability of a new record in the year following a record temperature will obviously be the probability that the change between the two years is positive (71 / 134 = 0.530).”
… which is consistent with the fact that the best uninformed estimate of a value x(t) in a time series X is x(t-1)
Anyway: the temperatures are staying within their long-time boundaries. Not too high, not too low. No sharp acceleration or deceleration. Always the same as thousands of years ago. Even if it was a bit hotter… nothing to worry about.
http://jonova.s3.amazonaws.com/source/lia-mwp/christiansen-2000-year-temp-reconstruction-cf-fig-5.gif
The above graph got rid of the Medieval Warm Period so it is bogus. There are so many things being tampered with now.
Look before you leap into accusations. The MWP is right there around 1000CE, followed by the LIA around 1650CE. The plot is from a well-known paper which used 32 proxies to reconstruct extra-tropical NH temperatures going back to 1CE:
Christiansen, B. and Ljungqvist, F. C.: Reconstruction of the extra-tropical NH mean temperature over the last millennium with a method that preserves low-frequency variability, J. Climate, 24,6013–6034, 2011.
http://www.clim-past.net/8/765/2012/cp-8-765-2012.pdf
Most of the coolest years are clustered around 1600-1620! That’s statistically unlikely! That’s proof that steam power actually cools the Earth! /s
Calibration as per the above graph is nonsense statistics. It is better known as “selecting on the dependent variable”. Google it if you have any doubts.
https://www.google.ca/?gws_rd=ssl#q=selecting+on+the+dependent+variable
The underlying premise of statistics is that your sample is random. As soon as you “calibrate”, your sample is no longer random, which violates the assumption on which your statistics relies.
You cannot “calibrate” proxies based on temperature, because it is temperature you are trying to determine. You introduce a circular dependency into the data that is not allowed for in statistics, so any statistics you get out the back end are screwed-up garbage.
The social sciences and medical sciences are filled with nonsense statistical results from “selecting on the dependent variable”. Not wishing to miss a trick, Climate Science climbed on the bandwagon with “calibration”, ignoring the effect on statistics.
The alarmists were playing Roulette whilst reality was playing Blackjack. The odds change and have to be recalculated with every card; they don’t just reset to 36/1 when the ball spins again!
The “scientific” alarmists know that very well, which is why their public statements are so reprehensible.
Interesting analysis, Roman. You were kind enough to use the much-meddled-with temperature trace ‘as is’ of the “Agencies”, without comment on it, to make your excellent statistical point. Since there was so much talk about chaotic behaviour of climate (which I have criticized to some degree), I once speculated on ‘records’ such as snowfall and floods as random events to see what I would get. Treating the first measurement as the first record, I found a rough fit to ln N (years elapsed): ln 10 yields 2 records; ln 50, 4 records; ln 100, 4 to 5 records; ln 200, 5 records in that time. To get 6 records, N = 400-500 yrs – beyond any interest to us in terms of worrying about it.
I wonder if, were we to get rid of autocorrelation by binning 5-year averages, plus subtracting the long-term 0.6C slope, we might find that ln N (bins) would give us a gross idea of what may lie ahead. Of course, there are apparent cycles from ice age-interglacial and smaller, but the idea might have some validity in stretches of short enough duration. Or, if we wanted to leave in the “cycles”, we could switch to cold records from the peak! If climate science weren’t already such a fanciful enterprise, I would never have thought of such an absurd idea!!
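The rough fit to ln N is no accident: for independent, identically distributed observations, the expected number of records among N values is the harmonic sum 1 + 1/2 + … + 1/N, which is approximately ln N + 0.577. A quick simulation (my sketch, not the commenter’s calculation):

```python
import numpy as np

rng = np.random.default_rng(1)

def count_records(series):
    """Count running maxima, treating the first value as a record."""
    best, records = -np.inf, 0
    for x in series:
        if x > best:
            best, records = x, records + 1
    return records

results = {}
for n in (10, 50, 100, 200):
    sim = np.mean([count_records(rng.standard_normal(n))
                   for _ in range(5000)])
    harmonic = sum(1.0 / k for k in range(1, n + 1))  # expected record count
    results[n] = (sim, harmonic)
    print(n, round(float(sim), 2), round(harmonic, 2))
```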
Aside from the stupid assumption regarding independence, there is also the underlying assumption that the “9 of the 10 hottest years” occurred since 2000 is true. That’s only true if all of NASA’s adjustments to the historical record are correct and if you ignore error margins – both of which invalidate the statistical calculation as well.
Had not seen your ‘scientific trends’ analogy before Mike. Simple, yet clever – especially the hidden ‘drop off’ when the old guy shrinks. 10/10.
If you’re lucky, you too can be an “old guy” at 60. The alternative isn’t too appealing.
60 is [not] old at all, it is just about middle age.
I like how the red line goes from the baby’s knee to the old man’s head. 😉
You forgot year 100 where the red line goes to 6 feet under.
🙂
Can we ‘adjust’ that ‘6 feet under’ thingy out?
Like the MWP….or LIA?
Nice. You could go one step further for the Climate Science Alarmist Trend and readjust all the heights posthumous.
What we see here is a common occurrence in climate science: assuming a population is random and not autocorrelated, and proceeding to use the “valid statistical methods” for those narrow conditions without first checking the population to make sure the assumptions apply.
Where do you see that? In the post, or in the work it critiques?
Well, it seems to have spurred this:
Doomsday Clock moved closer to midnight
“Efforts at reducing global emissions of heat-trapping gases have so far been entirely insufficient to prevent unacceptable climate disruption,” said the Bulletin’s Richard Somerville.
http://www.cnn.com/2015/01/23/us/feat-doomsday-clock-three-minutes-midnight/index.html
Considering the number of times the world has actually ended during the lifetime of this clock… don’t you think they may need to reset it a bit?
Or maybe they just need better clockmakers.
M Courtney is right: the doomsday clock has been wrong more often than runaway global warming predictions.
Those ‘concerned scientists’ are a bunch of pessimists. If I were setting the clock, it would be at about 11:00 am. The developing world is rising, people are getting healthier and wealthier, the global population is peaking and will begin to decline, and governments are so self-serving that they are not about to pull the trigger on a nuclear war.
What would that get them? They have it *very* good now; they are growing like Topsy, their bureaucrats are smug and self-satisfied, and their rules only apply to the little people. Why ruin such a good thing with a nuclear war?
Harold Camping was a Climate Scientist?
And why should we give a [snip] about some clock?
The Union of Concerned Scientists is a clown union. They just don’t wear the wigs, rubber nose and floppy shoes.
Not all of ’em- as proof, Kenji- tirelessly sniffing out UoCS methane bloviation:
http://wattsupwiththat.com/2013/02/26/kenji-sniffs-out-stupid-claims-by-the-union-of-concerned-scientists/
It is interesting to note that the reviewers of the Technical Support Document for EPA’s Endangerment Finding, included Tom Karl, Director of NCDC and Gavin Schmidt, Director of GISS. The other reviewer was Susan Solomon, at that time Senior Scientist at NOAA, and co-chair, IPCC Working Group 1, 2002-2008. She is now at MIT: Ellen Swallow Richards Professor of Atmospheric Chemistry & Climate Science, MIT, 2012-present.
MikeB – that is awesome. I’m stealing it 🙂
The interannual correlation does make a string of records (or X highest out of Y years) more likely, but the comparison of the 2000s with the 1940s is not fair because there were far fewer years in the record back then. Nonetheless, the probability of 0.009 is still very small — why do so many people like to bet on that? Oh yeah, because they’re betting with other people’s money.
Barney, who is betting with other people’s money? Do you ever question your fearless leaders with their massive personal carbon footprints? Or are they not betting with other people’s money?
What’s with the questions? You just cast your eyes downward and obey!
/s
So it seems that Barry might advocate “redistribution” of temperatures from other epochs “just to make it fair”. Just as Socialists are so compassionate about spending other folks’ money in support of other advocacies.
/s
What I don’t understand about the CACA perspective is, they are only using the GISS (surface) measurements and not the satellite measurements, as if the troposphere has no connection with the surface. If CO2 is the well mixed gas that it is why would you not measure the whole atmosphere?
They then turn around and talk about total heat content of the oceans. How does CO2 heat the deep oceans but not the troposphere?
Probably (to put it in perspective) because “Caca” is Spanish slang for crap!
Hi Ann. You may see this reply as condescending – but I don’t mean it to be. You ask “If CO2 is the well mixed gas that it is, why would you not measure the whole atmosphere?” As you probably know, CO2 is heavier per square metre than most of its atmospheric cousins. Contrary to the ‘well mixed’ science (based on entropy, diffusion & turbulence), and due to its density, CO2 is actually less evenly distributed and more concentrated in the lower 70% of air below 13 kilometres (8 miles), down to the surface. Saying this, some CO2 molecules do exist as high up as the Exosphere and Thermosphere.
Figures in pounds per square metre.
Light Gas: Hydrogen 0.09, Helium 0.17 and Methane 0.72
Medium Gas: Neon 0.90, Oxygen 1.43, Nitrogen 1.25, Argon 1.78 and Carbon Dioxide 1.97
Heavy Gas: Krypton 3.73 & Xenon 5.89
Despite all of this, one of the important things to remember is that CO2 is not as ‘evil’ as the climate sophists (proponents) want us to believe; in total it represents an insignificant 0.04% of Earth’s entire atmosphere, whilst all the other gases make up 99.96%. Quite a contrast – and CO2 therefore cannot be held solely accountable for fluctuations in temperature, be it naturally occurring or anthropogenic.
PS: I guess, also, welcome to the climate rationalist (opponents) community. (can’t recall seeing you post on WUWT before).
Pounds per cubic meter? At standard temperature and pressure?
I’m glad to see non-atmospheric scientists step into the field and make themselves look silly. I’m not even going to argue global warming. I just ask: did the statisticians even consider the high heat capacity of the oceans, which happen to cover 71% of Earth? It’s like computing the odds of +40F anomalies in July using the winter average as a baseline, without taking the seasonal cycle into account.
We are living in truly unusual times. All of the record US stock market highs standing as of Dec 31, 2014 were set during 2014. Many dozens of them. What is the chance of that? Nil, according to Seth and his statisticians’ logic. (And that’s without “adjustments”.)
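The stock-market analogy can be made concrete with a small simulation (a toy sketch of my own; the drift and noise values below are arbitrary assumptions, not fitted to any data): in a series with even a gentle persistent trend, records clustered at the end are the norm, not a one-in-hundreds-of-millions fluke.

```python
import random

random.seed(0)

def top_indices(xs, k=10):
    """Indices of the k largest values in xs."""
    return sorted(range(len(xs)), key=xs.__getitem__)[-k:]

n, window, trials, hits = 135, 15, 1000, 0
for _ in range(trials):
    # a gently trending series: +0.008/yr drift with year-to-year noise, sd 0.1
    series = [0.008 * i + random.gauss(0.0, 0.1) for i in range(n)]
    # does this run put 9+ of its 10 highest values in the last 15 "years"?
    if sum(i >= n - window for i in top_indices(series)) >= 9:
        hits += 1

freq = hits / trials  # a large fraction of trials, not 1 in 650 million
```

Under the i.i.d. null the same event has probability around 1.5e-9; with a modest trend it happens in a sizable share of trials. Clustered records mostly tell you the series trends, not how unusual the trend is.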
To paraphrase Maynard Keynes: Stock markets can remain irrational longer than the average punter can remain solvent.
Just like climate ‘science’.
Brilliant piece of work. Carolina statistician John Grego needs to go back to school, along with the 2 other statisticians that confirmed his work.
Quite obviously, if 8 out of the last 9 years in 1945 were the hottest on record, the odds of 9 out of the last 10 years in 2015 being the hottest are pretty damn close to 1-1.
If you fall down drunk 8 out of 9 times, it is a pretty good bet you will sooner or later fall down drunk 9 out of 10 times.
If one looks at most any multi-year Annual Average Temperature graph it will show an increase in the Average Temperatures for the specified time frame, …… but how does one know if said increase is due to an increase in the Average Winter Temperatures or an increase in the Average Summer Temperatures?
If the Average Winter Temperatures were steadily getting less cold (warmer) over the past 60 years …. which we know is an observational fact …… and the Average Summer Temperatures remained about the same, ……. then wouldn’t that produce an increase in Average Temperatures over said 60 year time frame? ABSOLUTELY IT WOULD.
And if so, wouldn’t that rule out the presumed “greenhouse” effect of atmospheric CO2? ABSOLUTELY IT WOULD.
If the atmospheric CO2 is increasing but the Summer temperatures are not getting hotter then atmospheric CO2 is not affecting near-surface temperatures.
If the Average Summer Temperatures had been increasing at the same rate as the Average Winter Temperatures, which they should have been if atmospheric CO2 is the culprit, then 100+ degree F days would now be commonplace throughout the United States during the Summer months. But they are not commonplace and still only rarely happen except in the desert Southwest where they have always been commonplace.
Now, instead of saying that “the Earth is warming” it is more technically correct to say “the earth has not been cooling off as much during its cold/cool periods or seasons”.
One example of said “short term” non-cooling occurs quite frequently and is commonly referred to as “Indian Summer”. Read more @ http://en.wikipedia.org/wiki/Indian_summer
Given the above, anytime the earth’s average calculated temperature fails to decrease to the temperature recorded for the previous year(s), it will cause an INCREASE or spike in the Average Temperature Calculation results for that period ….. which is cause for many people to falsely believe “the earth is getting hotter”.
The “fuzzy math” calculations and reporting of increases or decreases in/of percentages and percent change …… can make “true believers” out of the naive, gullible and/or miseducated.
Public Educators have expanded, to a new art form, their “fuzzy math” calculations of student “test” grades to include and report “percentiles” …… simply to impress the hell out of the parents, the school boards and the general populace.
“Little Johnnie/Janie is in the 80th percentile of his/her Class”
So what, ….. he/she could still be dumb as a box of rocks.
See comment at 11:43am. This issue is confounding to me also. Now, in our area, less cold does have an impact on the viability of Pine Beetles so we do like to see some minus 40 in November to kill the little beasties.
It’s not unreasonable that, if the mean of T^4 around the world increases and the increase is spread evenly, you would see more warming in the colder regions.
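That follows from the Stefan–Boltzmann relation F = σT⁴: for a fixed extra flux, the required temperature rise is larger where T is low. A quick back-of-envelope check (my own illustration; the 3.7 W/m² figure is the commonly quoted forcing for doubled CO2, used here only as a round number):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def delta_t(t_kelvin, extra_flux=3.7):
    """Warming needed for a blackbody at t_kelvin to radiate extra_flux more W/m^2."""
    return (t_kelvin**4 + extra_flux / SIGMA) ** 0.25 - t_kelvin

cold = delta_t(250.0)  # polar-ish surface: about 1.0 K
warm = delta_t(300.0)  # tropical-ish surface: about 0.6 K
```

The same radiative increment implies roughly 1.0 K of warming at a cold 250 K surface but only about 0.6 K at a warm 300 K one, so evenly spread forcing shows up more strongly in cold regions and seasons.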
I noticed that the HadSST3 series for the NH shows a larger difference between the summer and winter months after 2000. http://www.woodfortrees.org/graph/hadsst3nh/from:2008.9/compress:3
http://s5.postimg.org/rjv1jifaf/SST_difference.jpg
It might be a problem with the data, but while average summer SSTs in the NH show not much warming overall, the summers are getting warmer relative to the winters.
I don’t think anyone disagrees that the global temperature has risen about a degree in the last one hundred years. But I also think most people would agree that if the scientists hadn’t calculated it no one would have known.
100% correct. Nothing has changed. Winters went from -15 degrees to -14 degrees in Ohio, and there is still plenty of snow. Crops continue to grow in the Summer. Tornadoes continue to hit in the Spring and Summer. Listen to the CAGW crew and you’d think Ohio would look like Florida in 50 years. In 50 years it will be -13.6 degrees in the Winter.
I wouldn’t disagree about temperature possibly having increased about a degree over the last hundred years, BUT given the amount of fiddling that has been done with the records, I would no longer agree either. The activism has poisoned that well too many times.
I have to agree. Those idiots have screwed up the ‘science’ so badly it’s going to take years to straighten out. I just hope that, when the public finally figures out how much they have been lied to, the activists haven’t totally poisoned the well of public funding for science in general. If that is the case, we may never get it corrected, at least not within the next couple of generations.
Ralph, quite profound . . . .
” . . . . most people would agree that if the scientists hadn’t calculated [one degree rise in the last century], no one would have known.”
Your comment should be elevated to ‘quote of the week’.
Here’s the thought experiment I will introduce at the SC Piedmont Humanists discussion of climate science this Sunday. Yes, something like 12 of the last 15 years have been the hottest global temperatures of the last 150+ year instrumental record.
What if, for the next 1000 years, the global temperature remained exactly the same as 2014? We could then, one thousand years from now, correctly say that 1012 of the past 1015 years were the hottest on record AND that there had been no global warming in over 1000 years!
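The arithmetic of that thought experiment is easy to verify with a toy series (purely illustrative numbers of my own, not a reconstruction of any dataset):

```python
# Toy record: 150 years warming by 0.01/yr to a "2014" peak, then a flat plateau.
history = [i * 0.01 for i in range(150)]   # strictly rising, peak = history[-1]
plateau = [history[-1]] * 1000             # 1000 years exactly at the peak value
series = history + plateau

peak = max(series)
hottest_tied = sum(1 for t in series if t == peak)   # the peak year + 1000 ties
warming_after_peak = series[-1] - series[149]        # zero warming over the plateau
```

Every plateau year ties the record, so the count of “hottest years on record” keeps piling up even though there is zero warming after the peak.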
I see this increasing temperature trend as a positive result of CO2 emissions. As long as it doesn’t get too hot and sea level doesn’t rise too much, it’s much better than another ice age. If CO2 isn’t causing it we need to research how to keep warm in the future. And if it’s CO2 then we need to learn how to use it to set the thermostat to keep comfortable.
So, the statistics games the government and the media use don’t really worry me. I think we should all expect politicians to lie from a little bit to a whole lot. And the media sure seems to play ball with the government. In some cases we see media like Fox going one way and MSNBC going hard in the other direction. But sometimes all of them pull hard and row together. And they can be wrong as heck.
Take that 97 % line we hear from the USA president…that’s BS. It’s what people use when they don’t have a solid argument…”97 of 100 gurus prefer incense from Bombay”. Big deal. If they have it so right then they need to explain why do we see so much baloney about giant hurricanes and mega droughts.
And I’m not falling for that Philippines trick of blaming CO2 for hurricane damage when they need to stop building on low ground right next to the coast. What do they think? That everybody can afford to do something as stupid as building New Orleans below sea level and expect it to stay dry forever?
As long as it doesn’t get too hot.
Since only S. Carolina (in 2012) has broken, and S. Dakota (in 2006) tied, its all-time high temperature mark, it doesn’t look like it is really getting hotter.
http://www.ncdc.noaa.gov/extremes/scec/records
The BoM forecast record highs, again, but once more, they were wrong: http://pindanpost.com/2015/01/23/not-the-hottest/
Jonova also tackles the record that wasn’t. http://joannenova.com.au/2015/01/marble-bars-hidden-history-120f-thats-49-1c-in-the-shade-in-1905-and-1922/#more-40577
If you were to plot the number of analyses based on measurements against the number of analyses based on statistics since the Halt began in 1997…
Thanks, Roman. I was hoping someone would address this topic.
So, what is up with Mitt Romney going all in on the Climate Change fraud?
Did John Holdren hypnotize him back when Mitt had him on his payroll?
It seems impossible that Romney, with his smarts, wouldn’t know better.
Did the IRS catch him in a huge illegal tax deal and they have something on him?
Or is there that much gold in this fraud?
“So, what is up with Mitt Romney going all in on the Climate Change fraud.”
I think sophisticated warmist strategists realized that getting prominent Republicans on board would be their highest-payoff move. They’ve accordingly set up Risky Business and they’ve held intense tete-a-tetes between big-name climatologists and prominent Republicans who’ve joined it in which they’ve provided SkS-style refutations of contrarian points. These one-sided presentations have apparently been effective.
Contrarians should organize “red teams” and offer alarmed Republican politicians the opportunity to hear them out, or to read their online replies to the statements these Republicans have assented to. (This could take several back-and-forths.) Also, red teams could offer to engage in live in-person debates with alarmists’ teams.