Guest essay by Dr Tim Ball
“Never try to walk across a river just because it has an average depth of four feet.” – Milton Friedman
“Statistics: The only science that enables different experts using the same figures to draw different conclusions.” – Evan Esar
I am not a statistician. I took university level statistics because I knew, as a climatologist, I needed to know enough to ask statisticians the right questions and understand the answers. I was mindful of what the Wegman Committee later identified as a failure of those working on the Intergovernmental Panel on Climate Change (IPCC) paleoclimate reconstructions.
“It is important to note the isolation of the paleoclimate community; even though they rely heavily on statistical methods they do not seem to be interacting with the statistical community.”
Apparently they knew their use and abuse of statistics and statistical methods would not bear examination. It was true of the “hockey stick”, an example of misuse and creation of ‘unique’ statistical techniques to predetermine the result. Unfortunately this is an inherent danger in statistics. A statistics professor told me that the more sophisticated the statistical technique, the weaker the data. Anything beyond basic statistical techniques was ‘mining’ the data and moving further from reality and reasonable analysis. This is inevitable in climatology because of inadequate data. As the US National Research Council Report of Feb 3, 1999 noted,
“Deficiencies in the accuracy, quality and continuity of the records place serious limitations on the confidence that can be placed in the research results.”
Methods in Climatology by Victor Conrad is a classic text that identified most of the fundamental issues in climate analysis. Its strength is its recognition that the amount and quality of the data are critical, a theme central to Hubert Lamb’s founding of the Climatic Research Unit (CRU). In my opinion, statistics as applied in climate has advanced very little since. True, we now have other techniques such as spectral analysis, but all of those techniques are meaningless if you do not accept that cycles exist or do not have records of adequate quality and length.
Ironically, some techniques, such as moving averages, remove data. Ice core records are a good example. The Antarctic ice core graphs, first presented in the 1990s, illustrate statistician William Briggs’ admonition:
“Now I’m going to tell you the great truth of time series analysis. Ready? Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses! If the data is measured with error, you might attempt to model it (which means smooth it) in an attempt to estimate the measurement error, but even in these rare cases you have to have an outside (the learned word is ‘exogenous’) estimate of that error, that is, one not based on your current data.” (His bold)
A 70-year moving average was applied to the Antarctic ice core records. It eliminates a large amount of what Briggs calls “real data” and replaces it with “fictional data” created by the smoothing. The smoothing also diminishes a major component of basic statistics, the standard deviation of the raw data. That is partly why standard deviation has received so little attention in climate studies, even though it is a crucial factor in the impact of weather and climate on flora and fauna; the focus on averages and trends is also responsible. More important from a scientific perspective is its value for identifying mechanisms.
Figure 1: (Partial original caption) Reconstructed CO2 concentrations for the time interval ca. 8700 to ca. 6800 calendar years B.P., based on CO2 extracted from air in Antarctic ice of Taylor Dome (left curve; ref. 2; raw data available via www.ngdc.noaa.gov/paleo/taylor/taylor.html) and SI data for fossil B. pendula and B. pubescens from Lake Lille Gribso, Denmark. The arrows indicate accelerator mass spectrometry 14C chronologies used for temporal control. The shaded time interval corresponds to the 8.2-ka-B.P. cooling event.
Source: Proc. Natl. Acad. Sci. USA, 17 September 2002; 99(19): 12011–12014.
Figure 1 shows a determination of atmospheric CO2 levels for a 2000-year span comparing data from a smoothed ice core (left) and stomata (right). Regardless of the efficacy of each method of data extraction, it is not hard to determine which plot is likely to yield the most information about mechanisms. Where is the 8.2-ka-BP cooling event in the ice core curve?
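As a rough illustration of why, here is a minimal Python sketch. The numbers are synthetic, not the Taylor Dome data; only the 70-year window is taken from the text. It shows how a long moving average shrinks the standard deviation of a series and erases anything shorter than the window, which is exactly what happens to a short cooling event in a heavily smoothed record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual series: a slow cycle plus year-to-year variability.
years = np.arange(2000)
raw = 10 * np.sin(2 * np.pi * years / 1000) + rng.normal(0, 5, years.size)

# 70-year moving average, analogous to the smoothing applied to the
# Antarctic ice core records (window length is illustrative).
window = 70
kernel = np.ones(window) / window
smoothed = np.convolve(raw, kernel, mode="valid")

print(f"std of raw series:      {raw.std():.2f}")
print(f"std of smoothed series: {smoothed.std():.2f}")
# The smoothed series retains the slow cycle, but the standard deviation,
# and any event shorter than the window, is largely gone.
```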
At the beginning of the 20th century statistics was applied to society. Universities previously divided into the Natural Sciences and Humanities, saw a new and ultimately larger division emerge, the Social Sciences. Many in the Natural Sciences view Social Science as an oxymoron and not a ‘real’ science. In order to justify the name, social scientists began to apply statistics to their research. A book titled “Statistical Package for the Social Sciences” (SPSS) first appeared in 1970 and became the handbook for students and researchers. Plug in some numbers and the program provides results. The suitability of the data, such as the difference between continuous and discrete numbers, and of the technique was little understood or simply ignored, yet it affected the results.
Most people know Disraeli’s comment, “There are three kinds of lies: lies, damn lies and statistics”, but few understand how the application of statistics affects their lives. Beyond inaccurate application of statistics is the elimination of anything beyond one standard deviation, which removes the dynamism of society. McDonald’s typifies the application of statistics – they have perfected mediocrity. We sense it when everything sort of fits everyone, but doesn’t exactly fit anyone.
Statistics in Climate
Climate is the average of the weather over time or in a region, and until the 1960s averages were effectively the only statistic produced. The ancient Greeks used average conditions to identify three global climate regions, the Torrid, Temperate, and Frigid Zones, created by the angle of the sun. Climate research involved calculating and publishing average conditions at individual stations or in regions. Few understand how meaningless a measure the average is, although Robert Heinlein implied it when he wrote, “Climate is what you expect, weather is what you get.” Mark Twain also appears to have been aware with his remark that “Climate lasts all the time, and weather only a few days.” A farmer once asked me about the chances of an average summer. He was annoyed with the answer “virtually zero” because he didn’t understand that ‘average’ is a statistic. A more informed question is whether the summer will be above or below average, but that requires knowledge of two other basic statistics, the variation and the trend.
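To see why “virtually zero” is the honest answer, here is a small Python sketch with invented numbers; the 18 C mean summer, the modest variability, and the slight trend are assumptions for illustration, not data. It computes the three statistics in question and counts how many individual summers land anywhere near the long-term average.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic mean summer temperatures for 100 years: a small trend
# plus natural year-to-year variation (illustrative values only).
years = np.arange(100)
summer = 18.0 + 0.01 * years + rng.normal(0, 1.5, years.size)

mean = summer.mean()
std = summer.std(ddof=1)
trend = np.polyfit(years, summer, 1)[0]   # degrees per year, least squares

print(f"mean: {mean:.2f} C, std dev: {std:.2f} C, trend: {trend:.4f} C/yr")

# How many individual summers fall within 0.1 C of the 'average summer'?
near_average = np.abs(summer - mean) < 0.1
print(f"summers within 0.1 C of the average: {near_average.sum()} of {years.size}")
```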
After WWII, the demand for predictions for planning and social engineering in postwar societies triggered the development of simple trend analysis. It assumed that once a trend started it would continue. That mentality persists despite evidence of downturns and upturns; in climate it seems to be part of the rejection of cycles.
The study of trends in climate essentially began in the 1970s with the prediction of a coming mini ice age as temperatures declined from 1940. When temperatures increased in the mid-1980s, the claim became that this new trend would continue unabated. Political users of climate jumped on what I called the trend wagon. The IPCC made the trend seem inevitable by saying human CO2 was the cause and would continue to increase as long as industrial development continued. Like all previous trends, it did not last, as temperatures trended down after 1998.
For year-to-year living and business, variability is very important. Farmers know you don’t plan next year’s operation on last year’s weather, but reduced variability reduces risk considerably. The most recent change in variability is normal and explained by known mechanisms, but it is exploited as abnormal by those with a political agenda.
John Holdren, Obama’s science Tsar, used the authority of the White House to exploit increased variation in the weather and a mechanism little known to most scientists, let alone the public: the circumpolar vortex. He created an inaccurate propaganda release about the Polar Vortex to imply it was something new, not natural, and therefore due to humans. Two of the three Greek climate zones are very stable, the Tropics and the Polar regions. The Temperate zone has the greatest short-term variability because of seasonal variations. It also has longer-term variability as the Circumpolar Vortex cycles through Zonal and Meridional patterns. The latter creates increased variation in weather statistics, as has occurred recently.
IPCC studies and prediction failures were inevitable because they lack data, manufacture data, lack knowledge of mechanisms, and exclude known mechanisms. Reduction or elimination of the standard deviation leads to loss of information and further distortion of the natural variability of weather and climate, both of which continue to occur within historic and natural norms.
“… the more sophisticated the statistical technique, the weaker the data.”
When I was in university even the simplest of statistical techniques was going to involve days or even weeks of laborious calculations with slide rule and log tables… so we would consult our prof or faculty advisor to make damn sure that what we were doing was correct, valid, and applicable to what we thought we were doing.
Nowadays statistical toolkits on fast computers allow people to keep applying different algorithms in trivial amounts of time until they happen on one that they think outputs the answer they want… and they don’t even need to understand more than the most basic statistics or math.
A very good piece that most climate change supporters will neither understand nor believe.
The fundamental measure of quality control is standard deviation.
It doesn’t matter if your cars are on average “good”. What matters is how many are lemons. US car makers ignored this truism. Asian car makers, on the other hand, embraced standard deviation. The result? Tens of thousands of abandoned houses in Detroit.
allow people to keep applying different algorithms in trivial amounts of time until they happen on one that they think outputs the answer they want
==============
If three tests show your theory is wrong and one test shows your theory is right, you can be pretty sure the theory is wrong. However, in the struggle to publish or perish, the temptation to throw away the three tests that show you are wrong is often the only way to survive.
Agreed. In the current paradigm of pay-to-play GRANT science, the three will be lost, just as Mann and East Anglia University lost the tainted, questioned data and emails.
Follow the money and you will find the director of the choir.
At some point in virtually any statistical analysis, you have to model the process you are measuring somehow. (Do you assume a 1st-order Markov process? 2nd or 3rd order? How many inputs back? Etc.) In something as complex as temperatures over the entire globe and over decades and centuries, statistical analysis is completely entwined in modeling and assumptions. They become inseparable.
Hence, NASA’s constant (monthly) adjustments to temperatures going back to 1880. What happened in May of 2014 to make the temperatures in 1880 change? In NASA’s “data”, they do. It’s models all the way down.
And they’re not being honest about that. When we see plots of “data”, we are not seeing measurements. We are seeing assumptions.
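As a rough sketch of that entanglement, here is a Python example on synthetic data; it is not NASA’s procedure, and the coefficients are invented. Fitting the same series as a 1st-order versus a 3rd-order autoregressive process yields different persistence and residual estimates, and any trend-significance claim built on top inherits whichever order was assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "temperature anomaly" series: a small trend plus persistent
# (autocorrelated) noise. Illustrative values only.
n = 500
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0, 0.1)
series = 0.002 * np.arange(n) + noise

def fit_ar(x, order):
    """Least-squares fit of an AR(order) model; returns coefficients and residual std."""
    rows = [x[order - k - 1 : len(x) - k - 1] for k in range(order)]  # lags 1..order
    design = np.column_stack(rows)
    target = x[order:]
    coeffs, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ coeffs
    return coeffs, resid.std(ddof=order)

for order in (1, 3):
    coeffs, resid_std = fit_ar(series, order)
    print(f"AR({order}): coefficients {np.round(coeffs, 3)}, residual std {resid_std:.4f}")
# The estimated persistence, and hence the uncertainty attached to any trend,
# depends on which model order you assumed before you started.
```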
They cannot go back to many of the sites they used, as some were closed or moved, and vast areas of the nation and globe were not covered. There was no standard of measurement nor standardization of equipment. So it is all suspect well into modern times.
The use of ice cores is not a global measurement, as it is specific to a location. Volcanic action or huge fires could have altered the trapped gases. Not all science is global; all science is first local, and only if it is significant in quantity might it go global.
A lot of people don’t seem to understand that the same mean temperature of 50 degrees can result whether the range is from 40 to 60 degrees or from 0 to 100 degrees, but the experience is qualitatively very different.
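A two-series illustration of that point, with arbitrary numbers:

```python
import numpy as np

mild = np.array([40, 45, 50, 55, 60], dtype=float)     # range 40-60
extreme = np.array([0, 25, 50, 75, 100], dtype=float)  # range 0-100

for name, temps in (("mild", mild), ("extreme", extreme)):
    print(f"{name}: mean {temps.mean():.0f}, std dev {temps.std(ddof=1):.1f}, "
          f"range {temps.min():.0f}-{temps.max():.0f}")
# Both have a mean of 50 degrees; only the spread tells you what it felt like.
```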
Okay, from the very first, as soon as I read that the IPCC agreed that the Climate on earth was non-linear, complex, chaotic, I knew that we would never be able to predict, with any degree of accuracy, the future climate of earth. Just not possible unless we know, within a great degree of accuracy, the exact value of every variable and the exact functioning of every process that makes up the ‘climate’.
As a statistician I have to disagree with Briggs’s comment about never smoothing data. It can be a useful technique to eliminate the effect of known cycles. For example, if we are worried about global warming, it doesn’t help to use daily data, since then we have to understand the error distribution, including autocorrelation, on that daily basis. It is much better to use annual means, which wipe away that portion of the error distribution.
Rich.
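A rough sketch of the trade-off Rich describes, using synthetic daily data; the seasonal cycle and the persistence coefficient are invented for illustration. Averaging to annual means largely removes the day-to-day autocorrelation, at the price of discarding everything that happens within the year.

```python
import numpy as np

rng = np.random.default_rng(3)

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a series."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# 30 years of synthetic daily temperatures: a seasonal cycle plus
# persistent (autocorrelated) weather noise. Illustrative only.
n_years, n_days = 30, 365
t = np.arange(n_years * n_days)
weather = np.zeros(t.size)
for i in range(1, t.size):
    weather[i] = 0.8 * weather[i - 1] + rng.normal(0, 1.0)
daily = 10 * np.sin(2 * np.pi * t / 365) + weather

annual = daily.reshape(n_years, n_days).mean(axis=1)

print(f"lag-1 autocorrelation, daily:  {lag1_autocorr(daily):.2f}")
print(f"lag-1 autocorrelation, annual: {lag1_autocorr(annual):.2f}")
```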
“A statistics professor told me that the more sophisticated the statistical technique, the weaker the data. ”
False. The data and its quality are separate and distinct from the method. In fact, one cannot normally assess the quality of data without a method, some of which are very sophisticated. Of course, when you admit to not being a statistician and then appeal to authority (“my professor said”), you end up saying all manner of things.
dccowboy: actually, where mathematical chaos theory is concerned, it is not just “a great degree of accuracy” that is needed, but an impossibly high (infinite) degree of accuracy. However, within chaos theory there are “attractors”, and it is possible for a system to wander around for quite a long while near an attractor, approximately following reasonable-looking laws. During that time frame perhaps good predictions _can_ be made.
We’re just waiting for the IPCC to make any such.
Rich.
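A toy demonstration of both halves of that point, using the logistic map (a standard chaotic toy system, not a climate model): two trajectories that start almost identically stay close for a while, so short-range prediction is possible, and then diverge completely.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n), chaotic at r = 4.
r = 4.0
x_a, x_b = 0.400000000000, 0.400000000001   # nearly identical starting values

for step in range(1, 61):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:3d}: |difference| = {abs(x_a - x_b):.2e}")
# The difference stays tiny for a while (useful short-range prediction),
# then grows to the size of the attractor itself (no long-range prediction).
```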
This is a great essay Dr. Ball and made even stronger in my eyes by your use of my favorite quote from statistician William Briggs. His admonition is spot on.
The level of statistical incompetence in “climate science” is mind boggling. Thanks for your post pointing that out in such a clear manner.
A few decades back, I asked a PhD statistician
Q: what is the ‘dream job’ for a statistician ?
A: The Tobacco Institute
profitup10, your comment on ice cores is right on! First, just because CO2 might be elevated in core samples in Antarctica and Siberia does not mean anything at the global level. I also agree on the issue of volcanic activity. Dr. Dixy Lee Ray explained that the reason the ozone layer is thinner at the South Pole is that there is a volcano in Antarctica that has been continuously erupting for one hundred years. This might also explain the presence of gases in the ice samples there.
I am not a scientist, but I instinctively understood the concept of the polar vortex, something I called cyclonic action, as another possible reason atmospheric layers were thinner in the polar regions. I came up with this theory decades before I heard the term polar vortex.
A forest fire in Siberia can taint an ice core sample. A volcano in Antarctica can do the same. The same goes for Michael (The Jerry Sandusky of climate science) Mann and his tree ring studies. These are indirect measurements that mean nothing on the global scale and are open to manipulation.
dccowboy says: June 15, 2014 at 2:24 pm
“Just not possible unless we know, within a great degree of accuracy, the exact value of every variable and the exact functioning of every process that makes up the ‘climate’.”
Even then, the computer you use to make teraflops of calculations will round at each step and push any prediction away from the “true” path. There will be no way of knowing whether the final result is realistic, let alone reasonable, even though the numbers might fit some theory.
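A minimal illustration of the rounding point, again with the logistic map as a stand-in (no climate model involved): the same recursion run from the same nominal starting value in 32-bit and 64-bit floating point drifts apart through representation and round-off error alone, and in a chaotic system that drift grows just like an error in the initial conditions.

```python
import numpy as np

r = 4.0
x64 = np.float64(0.4)
x32 = np.float32(0.4)   # same nominal start, stored at lower precision

for step in range(1, 41):
    x64 = r * x64 * (1 - x64)
    x32 = np.float32(r) * x32 * (np.float32(1) - x32)   # keep the arithmetic in float32
    if step % 10 == 0:
        print(f"step {step:2d}: float64 {float(x64):.6f}  float32 {float(x32):.6f}  "
              f"|diff| {abs(float(x64) - float(x32)):.2e}")
```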
How does one predict the path of infinite variables when one does not even know how many variables are involved, nor understand how many individual variables combine to create more variables? It is voodoo, not science.
Statisticians – please explain how you normalize the sun’s activity?
Mosh…seems he was making a joke along the lines of “torturing the data”…
sound familiar?
A new statistical technique?
BIG NEWS Part I: Historic development — New Solar climate model coming
http://joannenova.com.au/2014/06/big-news-part-i-historic-development-new-solar-climate-model-coming/
Ah yes, the mythological Global Average Temperature. Irrelevant, but gives us a lot of fun to argue over.
Adjusting that “historical” temperature record UP over the last four or five decades and adjusting the previous four or five decades DOWN really “highlights” your case for industrial driven global warming.
Doesn’t it!
We in Australia, or should I say Jo Nova and her fellow learned ones, are right at this minute locked in a battle to the “death” with the Australian BOM to get it to release how and why it has adjusted Australia’s historical temperature records, while it continues to release press statements about how temperature records are being broken.
She is being blocked at every turn.
Don’t think for one instant that Australia is an isolated case by these fraudsters.
“Universities previously divided into the Natural Sciences and Humanities, saw a new and ultimately larger division emerge, the Social Sciences.”
This sentence violates the “no comma after subject clause” rule. It is not a difficult rule, so I do not understand why I see it violated so often.
Either the comma after “Humanities” is superfluous, or a comma should be placed after “Universities” to make the section between the two commas into a subordinate clause.
English majors need a forum… who cares about the use of commas in an informal discussion? Find some real issues and join the discussion.
The IPCC further ignores the international standard Guide to the Expression of Uncertainty in Measurement (GUM) from BIPM and appears oblivious to Type B uncertainty.
It is vitally important to recognise the 2 main error components in any measurement set, these being bias and precision. Precision – loosely the scatter about the mean, leading to ways to calculate standard deviation within a set – is most commonly reported.
As the graphical ice core and stomata proxy example above indicates, bias is the deviation from some best value, be it known or estimated by related methods. In the graphed example there is obvious bias, since the two methods produce vastly different results. Yet that seems not to have affected the acceptance of one or the other of the outcomes. This is very poor practice in proper science.
There are far too many instances where bias is not even considered. If it is not, talk of standard deviations is not just superfluous, it is misleading because it does not tell the full story.
What is the bias in a typical, conventional thermometer measurement from year 1900 or thereabouts? Anyone know of a paper examining this in forensic detail?
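To make the bias/precision distinction concrete, here is a small Python sketch with made-up readings from two hypothetical instruments measuring a known reference value; neither the numbers nor the instruments correspond to any real record.

```python
import numpy as np

reference = 15.0  # 'true' value, e.g. from a calibration standard

# Made-up readings from two hypothetical instruments.
instrument_a = np.array([15.4, 15.6, 15.5, 15.5, 15.4])  # tight but offset
instrument_b = np.array([14.2, 15.9, 14.8, 15.6, 14.5])  # centred but scattered

for name, readings in (("A", instrument_a), ("B", instrument_b)):
    bias = readings.mean() - reference       # systematic offset from the reference
    precision = readings.std(ddof=1)         # scatter about the instrument's own mean
    print(f"instrument {name}: bias {bias:+.2f}, precision (std dev) {precision:.2f}")
# Reporting only the standard deviation would make instrument A look better,
# while hiding its half-degree systematic error.
```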
David L. Hagen says: June 15, 2014 at 5:29 pm
David, your post appeared as I sent mine off. They say much the same in different prose, but your reference to BIPM is most timely. Here, at the fingertips of researchers using statistics, is an exhaustive and learned exposition of method. It should be compulsory reading for every aspiring researcher using statistics.
It seems not to be much used. It rarely appears in the references section of climate science papers. Yet it is so important.
Neo says: June 15, 2014 at 2:40 pm
A few decades back, I asked a PhD statistician
Q: what is the ‘dream job’ for a statistician ?
A: Today, the dream job is Catastrophic Loss Modeling.
Consideration of the standard deviation in weather stats would be a nice start. However, the underlying importance involved is one of the degree of certainty concerning how likely your sample is to reflect reality.
Since virtually ALL pre-automation weather station data consists of a perhaps-discontinuous sample of ONE minimum and ONE maximum non-random temperature per day, the certainty concerning these values is fairly low /sarc .
Clearly if we want to actually estimate temperature at one spot on the globe as representative of some sub-segment of the globe (eg a cell), we would need the appropriate number of replicated random samples based on some estimate of the variability of the temperature value. The current data sets, while pretty much useless for the purpose of accurately estimating global temperature changes, actually can provide a useful basis for estimating the number of random replicates required for the desired certainty (confidence interval). So we have that going for us [rofl].
So how about, instead of spending all those billions currently wasted funding studies like how brain size is affected by global warming, we start collecting data associated with some estimate of accuracy, variance, and standard deviation, with actual confidence intervals that reflect reality rather than wishful thinking?
NAH !
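For scale, here is a back-of-the-envelope version of the replicate estimate mentioned a few lines up, using the standard n = (z·σ/E)² sample-size formula; the within-cell variability of 2 C is an assumed number, not a measured one.

```python
import math

def samples_needed(sigma, margin, z=1.96):
    """Independent samples needed for the mean to lie within ±margin at ~95% confidence."""
    return math.ceil((z * sigma / margin) ** 2)

# Assumed spatial/temporal variability of temperature within a grid cell (deg C).
sigma = 2.0

for margin in (1.0, 0.5, 0.1):
    print(f"margin ±{margin} C -> {samples_needed(sigma, margin)} independent samples")
# Two fixed (min and max) readings per day at one non-random site fall far
# short of this for the tighter margins that climate claims rely on.
```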
Does anyone see the political irony here ??
The left (generally speaking) are proponents of AGW … and also proponents of average, as in bring everyone to average level economically.
The right (generally speaking) are skeptical of AGW …. and also proponents of standard deviation, as in people naturally have a large range of potential & outcomes, and they embrace these differences.
Interesting… I am sure a political sociologist or psychologist could dig into that much deeper.