Standard Deviation, The Overlooked But Essential Climate Statistic

Guest essay by Dr Tim Ball

“Never try to walk across a river just because it has an average depth of four feet.” Milton Friedman

“Statistics: The only science that enables different experts using the same figures to draw different conclusions.” Evan Esar

I am not a statistician. I took university level statistics because I knew, as a climatologist, I needed to know enough to ask statisticians the right questions and understand the answers. I was mindful of what the Wegman Committee later identified as a failure of those working on the Intergovernmental Panel on Climate Change (IPCC) paleoclimate reconstructions.

It is important to note the isolation of the paleoclimate community; even though they rely heavily on statistical methods they do not seem to be interacting with the statistical community.

Apparently they knew their use and abuse of statistics and statistical methods would not bear examination. This was true of the “hockey stick”, an example of misusing and inventing ‘unique’ statistical techniques to predetermine the result. Unfortunately, this is an inherent danger in statistics. A statistics professor told me that the more sophisticated the statistical technique, the weaker the data. Anything beyond basic statistical techniques was ‘mining’ the data and moving further from reality and reasonable analysis. This is inevitable in climatology because of inadequate data. As the US National Research Council Report of February 3, 1999 noted,

“Deficiencies in the accuracy, quality and continuity of the records place serious limitations on the confidence that can be placed in the research results.”

Methods in Climatology by Victor Conrad is a classic text that identified most of the fundamental issues in climate analysis. Its strength is its recognition that the amount and quality of the data are critical, a theme central to Hubert Lamb’s establishment of the Climatic Research Unit (CRU). In my opinion, statistics as applied in climate has advanced very little since. True, we now have other techniques, such as spectral analysis, but all of those techniques are meaningless if you do not accept that cycles exist or do not have records of adequate quality and length.

Ironically, some techniques, such as moving averages, remove data. Ice core records are a good example. The Antarctic ice core graphs, first presented in the 1990s, illustrate statistician William Briggs’ admonition:

Now I’m going to tell you the great truth of time series analysis. Ready? Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses! If the data is measured with error, you might attempt to model it (which means smooth it) in an attempt to estimate the measurement error, but even in these rare cases you have to have an outside (the learned word is “exogenous”) estimate of that error, that is, one not based on your current data. (His bold)

A 70-year moving average was applied to the Antarctic ice core records. It eliminates a large amount of what Briggs calls “real data”, replacing it with “fictional data” created by the smoothing. The smoothing also diminishes a major component of basic statistics, the standard deviation of the raw data. This is partly why standard deviation has received little attention in climate studies, even though it is a crucial factor in the impact of weather and climate on flora and fauna; the focus on averages and trends is also responsible. More important from a scientific perspective is its role in determining mechanisms.
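Briggs’ warning can be sketched numerically. The following is a toy Python illustration, not ice core data; every number is invented. It builds an annual series with a slow cycle, year-to-year noise, and a short sharp excursion, then applies a 70-year moving average and compares the spread and the depth of the event before and after smoothing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual series (not ice core data): a slow cycle plus
# year-to-year noise, with a brief sharp 10-year excursion in the middle.
years = np.arange(2000)
series = np.sin(2 * np.pi * years / 500) + rng.normal(0.0, 1.0, years.size)
series[1000:1010] -= 3.0  # a short event, like the 8.2-ka cooling

# A 70-year centred moving average, as applied to the ice core records.
window = 70
smoothed = np.convolve(series, np.ones(window) / window, mode="valid")

print(f"raw SD:      {series.std():.2f}")
print(f"smoothed SD: {smoothed.std():.2f}")
print(f"deepest raw value:      {series.min():.2f}")
print(f"deepest smoothed value: {smoothed.min():.2f}")
```

The smoothed series keeps the slow cycle, but its standard deviation collapses and the short event is almost erased, which is exactly the loss of “real data” described above.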


Figure 1: (Partial original caption) Reconstructed CO2 concentrations for the time interval ca. 8700 to ca. 6800 calendar years B.P., based on CO2 extracted from air in Antarctic ice of Taylor Dome (left curve; ref. 2; raw data available via www.ngdc.noaa.gov/paleo/taylor/taylor.html) and SI data for fossil B. pendula and B. pubescens from Lake Lille Gribso, Denmark. The arrows indicate accelerator mass spectrometry 14C chronologies used for temporal control. The shaded time interval corresponds to the 8.2-ka-B.P. cooling event.

Source: Proc. Natl. Acad. Sci. USA, 2002 September 17; 99(19): 12011–12014.

Figure 1 shows a determination of atmospheric CO2 levels for a 2000-year span comparing data from a smoothed ice core (left) and stomata (right). Regardless of the efficacy of each method of data extraction, it is not hard to determine which plot is likely to yield the most information about mechanisms. Where is the 8.2-ka-BP cooling event in the ice core curve?

At the beginning of the 20th century, statistics was applied to society. Universities, previously divided into the Natural Sciences and Humanities, saw a new and ultimately larger division emerge, the Social Sciences. Many in the Natural Sciences view Social Science as an oxymoron and not a ‘real’ science. In order to justify the name, social scientists began to apply statistics to their research. A book titled Statistical Package for the Social Sciences (SPSS) first appeared in 1970 and became the handbook for students and researchers. Plug in some numbers and the program provides results. The suitability of the data, such as the difference between continuous and discrete numbers, and of the technique was little known or ignored, yet affected the results.

Most people know Disraeli’s comment, “There are three kinds of lies: lies, damn lies and statistics”, but few understand how the application of statistics affects their lives. Beyond inaccurate application of statistics is the elimination of anything beyond one standard deviation, which removes the dynamism of society. McDonald’s typifies the application of statistics: they have perfected mediocrity. We sense it when everything sort of fits everyone, but doesn’t exactly fit anyone.

Statistics in Climate

Climate is an average of the weather over time or in a region, and until the 1960s averages were effectively the only statistic developed. The ancient Greeks used average conditions to identify three global climate regions, the Torrid, Temperate, and Frigid Zones, created by the angle of the sun. Climate research involved calculating and publishing average conditions at individual stations or in regions. Few understand how meaningless a measure the average is, although Robert Heinlein implied it when he wrote, “Climate is what you expect, weather is what you get.” Mark Twain also appeared aware of it with his remark that “Climate lasts all the time, and weather only a few days.” A farmer asked me about the chances of an average summer. He was annoyed with the answer “virtually zero” because he didn’t understand that ‘average’ is a statistic. A more informed question is whether it will be above or below average, but that requires knowledge of two other basic statistics, the variation and the trend.

After WWII, the demand in postwar societies for predictions for planning and social engineering triggered the development of simple trend analysis. It assumed that once a trend started it would continue. The mentality persists despite evidence of downturns and upturns; in climate it seems to be part of the rejection of cycles.

Study of trends in climate essentially began in the 1970s with the prediction of a coming mini ice age as temperatures declined from 1940. When temperatures increased in the mid-1980s, the new trend was predicted to continue unabated. Political users of climate adopted what I called the trend wagon. The IPCC made the trend inevitable by saying human CO2 was the cause and would continue to increase as long as industrial development continued. Like all previous trends, it did not last, as temperatures trended down after 1998.

For year-to-year living and business, the variability is very important. Farmers know you don’t plan next year’s operation on last year’s weather, but reduced variability reduces risk considerably. The most recent change in variability is normal and explained by known mechanisms, but it is exploited as abnormal by those with a political agenda.

John Holdren, Obama’s science Tsar, used the authority of the White House to exploit increased variation of the weather and a mechanism little known to most scientists, let alone the public: the circumpolar vortex. He created an inaccurate propaganda release about the Polar Vortex to imply it was something new and unnatural, and therefore due to humans. Two of the three Greek climate zones, the Tropics and the Polar regions, are very stable. The Temperate zone has the greatest short-term variability because of seasonal variations. It also has longer-term variability as the Circumpolar Vortex cycles through Zonal and Meridional patterns. The latter creates increased variation in weather statistics, as has occurred recently.

IPCC studies and prediction failures were inevitable because they lack data, manufacture data, lack knowledge of mechanisms, and exclude known mechanisms. Reduction or elimination of the standard deviation leads to loss of information and further distortion of the natural variability of weather and climate, both of which continue to occur within historic and natural norms.

Comments
GlynnMhor
June 15, 2014 1:12 pm

“… the more sophisticated the statistical technique, the weaker the data.”
When I was in university even the simplest of statistical techniques was going to involve days or even weeks of laborious calculations with slide rule and log tables… so we would consult our prof or faculty advisor to make damn sure that what we were doing was correct, valid, and applicable to what we thought we were doing.
Nowadays statistical toolkits on fast computers allow people to keep applying different algorithms in trivial amounts of time until they happen on one that they think outputs the answer they want… and they don’t even need to understand more than the most basic statistics or math.

June 15, 2014 1:19 pm

A very good piece that most climate change supporters will neither understand nor believe.

ferdberple
June 15, 2014 1:32 pm

the fundamental measure of quality control is standard deviation.
it doesn’t matter if your cars are on average “good”. what matters is how many are lemons. US car makers ignored this truism. Asian car makers on the other hand embraced standard deviation. The result? Tens of thousands of abandoned houses in Detroit.

ferdberple
June 15, 2014 1:35 pm

allow people to keep applying different algorithms in trivial amounts of time until they happen on one that they think outputs the answer they want
==============
if three tests show your theory is wrong, and one test shows your theory is right, you can be pretty sure the theory is wrong. However, in the struggle to publish or perish, the temptation to throw away the three tests that show you are wrong is often the only way to survive.

Reply to  ferdberple
June 15, 2014 1:39 pm

Agreed that in the current paradigm of pay-to-play GRANT science, the 3 will be lost just as Mann and East Anglia University lost the tainted, questioned data and emails.
Follow the money and you will find the director of the choir.

June 15, 2014 1:39 pm

At some point in virtually any statistical analysis, you have to model the process you are measuring somehow. (Do you assume a 1st-order Markov process? 2nd or 3rd order? How many inputs back? Etc.) In something as complex as temperatures over the entire globe and over decades and centuries, statistical analysis is completely entwined in modeling and assumptions. They become inseparable.
Hence, NASA’s constant (monthly) adjustments to temperatures going back to 1880. What happened in May of 2014 to make the temperatures in 1880 change? In NASA’s “data”, they do. It’s models all the way down.
And they’re not being honest about that. When we see plots of “data”, we are not seeing measurements. We are seeing assumptions.

Reply to  Randall Hoven
June 15, 2014 1:50 pm

They cannot go back, as many of the sites they used were closed or moved, and vast amounts of the nation and globe were not covered. There was no standard of measurement nor standardization of equipment. So it is all suspect well into modern times.
The use of ice cores is not a global measurement, as it is specific to a location. Volcanic action or huge fires could have altered the trapped gases. Not all science is global; all science is first local, and if it is significant enough in quantity it might go global.

Quinx
June 15, 2014 2:04 pm

A lot of people don’t seem to understand that the same mean temperature of 50 degrees can result whether the range is from 40 to 60 degrees or from 0 to 100 degrees, but the experience is qualitatively very different.
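Quinx’s point is easy to verify with Python’s standard library; the two made-up temperature lists below share a mean of 50 but have very different standard deviations.

```python
import statistics

# Two invented sets of readings with the same mean but very different spread.
mild = [40, 45, 50, 55, 60]    # range 40-60 degrees
harsh = [0, 25, 50, 75, 100]   # range 0-100 degrees

print(statistics.mean(mild), statistics.mean(harsh))  # both 50
print(round(statistics.pstdev(mild), 1))              # about 7.1
print(round(statistics.pstdev(harsh), 1))             # about 35.4
```

Reporting only the mean throws away exactly the number that distinguishes the two experiences.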

DC Cowboy
Editor
June 15, 2014 2:24 pm

Okay, from the very first, as soon as I read that the IPCC agreed that the climate on earth was non-linear, complex, and chaotic, I knew that we would never be able to predict, with any degree of accuracy, the future climate of earth. It is just not possible unless we know, to a great degree of accuracy, the exact value of every variable and the exact functioning of every process that makes up the ‘climate’.

See - owe to Rich
June 15, 2014 2:27 pm

As a statistician I have to disagree with Briggs’s comment about never smoothing data. It can be a useful technique to eliminate the effect of known cycles. For example, if we are worried about global warming, it doesn’t help to use daily data, since then we have to understand the error distribution, including autocorrelation, on that daily basis. It is much better to use annual means, which wipe away that portion of the error distribution.
Rich.
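A toy sketch of the trade-off Rich describes, using an invented AR(1) “daily anomaly” series (all parameters are made up): averaging days into years removes most of the day-to-day autocorrelation, at the cost of far fewer data points.

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

random.seed(7)

# Toy daily anomaly series: strongly autocorrelated AR(1) "weather".
daily = [0.0]
for _ in range(30 * 365 - 1):          # 30 "years" of daily values
    daily.append(0.8 * daily[-1] + random.gauss(0, 1))

# Annual means wipe out most of the day-to-day autocorrelation.
annual = [sum(daily[y * 365:(y + 1) * 365]) / 365 for y in range(30)]

print(round(lag1_autocorr(daily), 2))   # daily autocorrelation
print(round(lag1_autocorr(annual), 2))  # annual autocorrelation
```

The daily series is dominated by day-to-day persistence; the 30 annual means are far less correlated, which is the simplification of the error distribution Rich is after.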

June 15, 2014 2:29 pm

“A statistics professor told me that the more sophisticated the statistical technique, the weaker the data. ”
false.
The data and its quality are separate and distinct from the method.
in fact one cannot normally assess the quality of data without a method some of which are very sophisticated.
Of course when you admit to not being a statistician and then appeal to authority.. “my professor said”
you end up saying all manner of things

See - owe to Rich
June 15, 2014 2:33 pm

dccowboy: actually, where mathematical chaos theory is concerned, it is not just “a great degree of accuracy” that is needed, but an impossibly high (infinite) degree of accuracy. However, within chaos theory there are “attractors”, and it is possible for a system to wander around for quite a long while near an attractor, approximately following reasonable-looking laws. During that time frame perhaps good predictions _can_ be made.
We’re just waiting for the IPCC to make any such.
Rich.

June 15, 2014 2:39 pm

This is a great essay Dr. Ball and made even stronger in my eyes by your use of my favorite quote from statistician William Briggs. His admonition is spot on.
The level of statistical incompetence in “climate science” is mind boggling. Thanks for your post pointing that out in such a clear manner.

Neo
June 15, 2014 2:40 pm

A few decades back, I asked a PhD statistician
Q: what is the ‘dream job’ for a statistician ?
A: The Tobacco Institute

Leonard Jones
June 15, 2014 2:55 pm

profitup10, your comment on ice cores is right on! First, just because CO2 might be elevated in core samples in Antarctica and Siberia does not mean anything on the global level. I also agree on the issue of volcanic activity. Dr. Dixy Lee Ray explained that the reason the ozone layer is thinner at the South Pole is that there is a volcano in Antarctica that has been continuously erupting for one hundred years. This might also explain the presence of gasses in the ice samples there.
I am not a scientist, but I instinctively understood the concept of the polar vortex, something I called cyclonic action, as another possible reason atmospheric layers were thinner in the polar regions. I came up with this theory decades before I heard the term polar vortex.
A forest fire in Siberia can taint an ice core sample. A volcano in Antarctica can do the same. The same goes for Michael (The Jerry Sandusky of climate science) Mann and his tree ring studies. These are indirect measurements that mean nothing on the global scale and are open to manipulation.

son of mulder
June 15, 2014 3:01 pm

” dccowboy says: June 15, 2014 at 2:24 pm
Just not possible unless we know, within a great degree of accuracy, the exact value of every variable and the exact functioning of every process that makes up the ‘climate’. ”
Even then the computer you use to make teraflops of calculations will round each step and deviate any prediction from the “true” path. There will be no way of knowing that the final result is realistic let alone reasonable even though the numbers might fit some theory.

Reply to  son of mulder
June 15, 2014 3:08 pm

How does one predict the path of infinite variables when one does not even know how many variables are involved? Nor do they understand how many individual variables combine to create more variables. It is voodoo, not science.
Statisticians: please explain how you normalize the sun’s activity?

Latitude
June 15, 2014 3:33 pm

Mosh…seems he was making a joke along the lines of “torturing the data”…
sound familiar?

Mike Jowsey
June 15, 2014 4:31 pm

A new statistical technique?
BIG NEWS Part I: Historic development — New Solar climate model coming
http://joannenova.com.au/2014/06/big-news-part-i-historic-development-new-solar-climate-model-coming/

Eamon Butler
June 15, 2014 4:48 pm

Ah yes, the mythological Global Average Temperature. Irrelevant, but gives us a lot of fun to argue over.

Leigh
June 15, 2014 5:12 pm

Adjusting that “historical” temperature record UP over the last four or five decades and adjusting the previous four or five decades DOWN really “highlights” your case for industrial driven global warming.
Doesn’t it!
We in Australia, or should I say Jo Nova and her fellow learned ones, are right at this minute locked in a battle to the “death” with the Australian BOM to get it to release how and why it has adjusted Australia’s historical temperature records.
All the while the BOM continues to release press statements about how temperature records are being broken.
She is being blocked at every turn.
Don’t think for one instant that Australia is an isolated case by these fraudsters.

RoHa
June 15, 2014 5:25 pm

“Universities previously divided into the Natural Sciences and Humanities, saw a new and ultimately larger division emerge, the Social Sciences.”
This sentence violates the “no comma after subject clause” rule. It is not a difficult rule, so I do not understand why I see it violated so often.
Either the comma after “Humanities” is superfluous, or a comma should be placed after “Universities” to make the section between the two commas into a subordinate clause.

Reply to  RoHa
June 15, 2014 5:46 pm

English majors need a forum . . who cares about the use of commas in an informal discussion? Find some real issues and join the discussion.

David L. Hagen
June 15, 2014 5:29 pm

The IPCC further ignores the international standard Guide to the Expression of Uncertainty in Measurement (GUM) from BIPM and appears oblivious of Type B error.

June 15, 2014 5:33 pm

It is vitally important to recognise the two main error components in any measurement set, these being bias and precision. Precision, loosely the scatter about the mean, leading to ways to calculate standard deviation within a set, is most commonly reported.
As the graphical ice core and stomata proxy example above indicates, bias is the deviation from some best value, be it known or estimated by related methods. In the graphed example there is obvious bias, since the two methods produce vastly different results. Yet nothing seems to have affected the acceptance of one or other of the outcomes. This is so, so poor in proper science.
There are far too many instances where bias is not even considered. If it is not, talk of standard deviations is not just superfluous, it is misleading because it does not tell the full story.
What is the bias in a typical, conventional thermometer measurement from year 1900 or thereabouts? Anyone know of a paper examining this in forensic detail?

June 15, 2014 5:39 pm

David L. Hagen says: June 15, 2014 at 5:29 pm
David, your post appeared as I sent mine off. They say much the same in different prose, but your reference to BIPM is most timely. Here, at the fingertips of researchers using statistics, is an exhaustive and learned exposition of method. It should be compulsory reading for every aspiring researcher using statistics.
It seems not to be much used. It rarely appears in the references section of climate science papers. Yet it is so important.

hunter
June 15, 2014 5:39 pm

@ Neo says:
June 15, 2014 at 2:40 pm
A few decades back, I asked a PhD statistician
Q: what is the ‘dream job’ for a statistician ?
A: Today, the dream job is Catastrophic Loss Modeling.

BioBob
June 15, 2014 5:40 pm

Consideration of the standard deviation in weather stats would be a nice start. However, the underlying importance involved is the degree of certainty concerning how likely your sample is to reflect reality.
Since virtually ALL pre-automation weather station data consists of a perhaps-discontinuous sample of ONE minimum and ONE maximum non-random temperature per day, the certainty concerning these values is fairly low /sarc .
Clearly, if we want to estimate the temperature at one spot on the globe as representative of some sub-segment of the globe (e.g. a cell), we would need the appropriate number of replicated random samples, based on some estimate of the variability of the temperature value. The current data sets, while pretty much useless for the purpose of accurately estimating global temperature changes, can actually provide a useful basis for estimating the number of random replicates required for the desired certainty (confidence interval). So we have that going for us [rofl].
So how about we take all those billions currently wasted funding studies like how brain size is affected by global warming, and start collecting data with some estimate of accuracy, variance, and standard deviation, with actual confidence intervals that reflect reality rather than wishful thinking.
NAH !
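BioBob’s replicate count can be sketched with the standard textbook sample-size formula n = (z * sigma / E)^2; the sigma and margin below are invented purely for illustration.

```python
import math

def replicates_needed(sigma, margin, z=1.96):
    """Random replicates needed for the sample mean to sit within
    +/- margin of the true value at roughly 95% confidence (z = 1.96)."""
    return math.ceil((z * sigma / margin) ** 2)

# Made-up numbers: temperature at a site scattering with sigma = 2.0 C,
# and a target confidence interval of +/- 0.1 C on the mean:
print(replicates_needed(2.0, 0.1))   # 1537
```

Even these toy numbers show why one min and one max per day is nowhere near a defensible sample for tight confidence intervals.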

June 15, 2014 5:51 pm

Does anyone see the political irony here ??
The left (generally speaking) are proponents of AGW … and also proponents of the average, as in bringing everyone to an average level economically.
The right (generally speaking) are skeptical of AGW … and also proponents of the standard deviation, as in people naturally having a large range of potential and outcomes, and they embrace these differences.
Interesting … I am sure a political sociologist or psychologist could dig into that much deeper.

Brent Walker
June 15, 2014 6:23 pm

I am just a humble actuary. Although the variance of a distribution is important I like to look much further. I often describe distributions in terms of their skewness and kurtosis. These statistics help me understand what I am working with. Unfortunately no-one seems to want to use the higher levels of understanding anymore. It seems to be all about the quick media bite.
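For readers unfamiliar with the higher moments Brent mentions, here is a minimal Python sketch on synthetic data: skewness and ordinary (non-excess) kurtosis, where a normal sample sits near skewness 0 and kurtosis 3, while an exponential sample shows its long right tail.

```python
import random
import statistics

def skewness(xs):
    """Third standardized moment of a sample."""
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

def kurtosis(xs):
    """Fourth standardized moment (non-excess; normal is ~3)."""
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return sum((x - m) ** 4 for x in xs) / (len(xs) * s ** 4)

random.seed(1)
normal_like = [random.gauss(0, 1) for _ in range(100_000)]
long_tailed = [random.expovariate(1.0) for _ in range(100_000)]

print(round(skewness(normal_like), 1))   # near 0: symmetric
print(round(kurtosis(normal_like), 1))   # near 3: normal tails
print(round(skewness(long_tailed), 1))   # near 2: long right tail
```

Two distributions can share a mean and even a variance while differing sharply in these higher moments, which is exactly the information a quick media bite discards.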

kadaka (KD Knoebel)
June 15, 2014 6:40 pm

From RoHa on June 15, 2014 at 5:25 pm:

“Universities previously divided into the Natural Sciences and Humanities, saw a new and ultimately larger division emerge, the Social Sciences.”
This sentence violates the “no comma after subject clause” rule. It is not a difficult rule, so I do not understand why I see it violated so often.

Here it gathers up the subject to avoid confusion. Without the comma it may be read (with an implied comma) as “Universities previously divided into the Natural Sciences, and Humanities saw a new and ultimately larger division emerge, the Social Sciences.”
Social Sciences emerging from Humanities (apparently) makes sense, the Natural Sciences are divided at universities, etc. Using the comma to gather up the “they” improves readability.
Your suggested use of a comma after “Universities”, could also work.
Note the comma after subject, is good for dramatic effect, making for a “pause” that brings the written words closer to how the writer, would have spoken them.

mebbe
June 15, 2014 7:14 pm

profitup10 says:
June 15, 2014 at 5:46 pm
English majors need a forum . . who cares about use of commas in a informal discussion? Find some real issues and join the discussion.
——————————————————————
Perhaps, people that don’t want to waste their time re-reading some sloppy prose on the off-chance that there was something worthwhile to be read.

RoHa
June 15, 2014 7:21 pm

“Without the comma it may be read (with an implied comma) as “Universities previously divided into the Natural Sciences, and Humanities saw a new and ultimately larger division emerge, the Social Sciences.”
Since “divided into” leads the reader to predict “A and B”, “divided into the Natural Sciences, ” (A alone) would be a perverse reading. Placing a comma after a subject clause is confusing, since it implies (incorrectly) that part of the preceding clause is a subordinate clause.
And, profitup10, as the story about the panda shows, misuse of commas can be an obstacle to even informal communication.

June 15, 2014 7:39 pm

not possible unless we know, within a great degree of accuracy
The accuracy is insufficient and the computers you use do not have long enough words. And I don’t care about your accuracy nor about the length of your computer words. In a chaotic system it is never enough. If nothing else quantum fluctuations will get you. You can’t measure it close enough. Ever. Fundamentally.

June 15, 2014 7:46 pm

Jeff L says:
June 15, 2014 at 5:51 pm

It tends to depend on the environment they live in. In general the right are country folk and the left are city folk.
Behavioral Sink Behavior And Thermodynamics
A thermodynamic explanation of politics

kadaka (KD Knoebel)
June 15, 2014 8:08 pm

From RoHa on June 15, 2014 at 7:21 pm:

Since “divided into” leads the reader to predict “A and B”, “divided into the Natural Sciences, ” (A alone) would be a perverse reading.

Bacteria previously divided into the numerous flora, and Climatosis saw a new and ultimately larger division emerge, the Grantia Suckus.
Bacteria previously divided into the numerous flora and Climatosis saw a new and ultimately larger division emerge, the Grantia Suckus.

And where is this predicting of A and B?

June 15, 2014 8:31 pm

Quinx says:
June 15, 2014 at 2:04 pm
A lot of people do
+++++++++++++++++++++++
I agree with you. Averages often leave out critical information. Averages of averages get worse. An average day doesn’t tell you about the high and low trends. Average the days into months, the months into years, the years into decades, and make a trend. But the trend can be deceiving if you don’t know how it was made. Is it an average of averages, or is it a true mean? I assume we are still grading on a bell curve. It would be interesting to see how the degree of difficulty has trended versus the bell curve. Just kidding. Although I did have the opportunity to train a number of university graduates in my career since their academic training …
I love claims that a fraction of a degree change on average is disrupting the environment … an environment with a diurnal variation of 20 degrees C or more.
But in reality, there may be a local impact of 5 degrees C up or down in regional areas resulting in an “average” of a fraction of a degree. It cuts both ways and we don’t know what the “averaging” algorithm has done.
Lots to learn.
Happy Fathers day.

ossqss
June 15, 2014 8:42 pm

Latitude says:
June 15, 2014 at 3:33 pm
Mosh…seems he was making a joke along the lines of “torturing the data”…
sound familiar?
——————————————————–
Ha! nice……
Live the “dash” as they say.
The data can be tortuous in the end…………..
Regards, Ed

richard verney
June 15, 2014 8:51 pm

Steven Mosher says:
June 15, 2014 at 2:29 pm
/////////////////////////
We all know what Dr Tim Ball was seeking to convey by the statement quoted. It is in effect part of the ‘truism’ observed by Lord Rutherford, namely:
“If your experiment needs statistics, you ought to have done a better experiment.”
If the signal in the data is significant, you do not need some fancy statistical manipulation to identify it. The data can stand on its own. If it can’t stand on its own, chances are that you are merely looking at noise.

Greg Goodman
June 15, 2014 9:01 pm

An interesting article; however, one phrase seemed odd.
” A more informed question is whether it will be above or below average, but that requires knowledge of two other basic statistics, the variation and the trend.”
It is very common to see statements like “the trend is” , presented as if it is a fundamental fact of the data and without any recognition of the fact that this is ASSUMING a linear model is applicable to the data and has some predictive value.
Another way of looking at the “trend” is as the average rate of change, so it could be called the “expectation value” (the statistician’s term for the mean) of the rate of change. But what about higher orders: what is the acceleration? Is it speeding up or slowing down? And why are we now fitting higher order polynomial models to the data? Is that suitable, or would some other model (perhaps periodic) be more appropriate? Is that implicit choice or ASSUMPTION even being recognised?
Rate of change may be relevant to auto-regressive data like temperature, for example. There is not much sense in taking the “expectation value” of the last 150 years of SST and using it as a “best estimate” for next year’s SST. Just taking last year’s value would be a lot better. Adding the average annual change to last year’s SST may be better still (i.e. using the “trend” rather than the mean).
That may seem like common sense, but we are already applying some physical knowledge of the system to know that this will be better than just taking the mean as our guess for next year. A model is being applied to the data.
How often are we presented with statements like “if the current trend continues, by the year 2100 …. bhah….blah.” with the implication that this “if” somehow makes sense, that it is at least a likely outcome.
Without presenting some reason for the choice this has no more validity than “if the current average persists, by the year 2100, it will be the same as today”.
The other problem is extrapolation beyond the data. Any reasonable model may give a good estimate for next year. If you project ten years out, there’s a fair chance you will be badly off (even if you spend 30 years and billions of dollars making your model). If you project a complex system 100 years hence, based on at most 150 years of very poor quality data, you have as much chance as guessing next week’s lottery numbers.
The exercise is meaningless. No honest scientist would make such a projection.
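Greg’s point that persistence beats the long-run mean for autocorrelated data can be demonstrated on a toy AR(1) series; all numbers below are invented and stand in for nothing real.

```python
import random

random.seed(42)

# Toy autocorrelated "anomaly" series standing in for annual SST.
n = 300
x = [0.0]
for _ in range(n - 1):
    x.append(0.9 * x[-1] + random.gauss(0, 0.3))

mean_x = sum(x) / n

# One-step-ahead absolute errors for two naive forecasts:
errs_mean = [abs(x[t] - mean_x) for t in range(1, n)]       # long-run mean
errs_persist = [abs(x[t] - x[t - 1]) for t in range(1, n)]  # last value

print(round(sum(errs_mean) / len(errs_mean), 3))
print(round(sum(errs_persist) / len(errs_persist), 3))
```

The persistence forecast is clearly better here, which is the implicit model choice Greg describes: even “use last year’s value” smuggles in physical knowledge about the system.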

John F. Hultquist
June 15, 2014 9:06 pm

Seems like a genuinely fine essay. However, the little bit about “Statistics in Climate” jumps from the Greeks to Mark Twain. This ignores the interesting work of Wladimir Köppen and Rudolf Geiger that began with the concept that native vegetation is the best expression of climate. This was first published in 1884; temperature records are easier to obtain than the field work necessary to find the boundaries of biomes (ecotone was not in use then). Think of vegetation as an integrator of weather.
The modern “climate-is-average-temperature” is a hoax.

John F. Hultquist
June 15, 2014 9:19 pm

Greg Goodman says:
June 15, 2014 at 9:01 pm
It is very common to see statements like “the trend is”, presented as if it is a fundamental fact of the data and without any recognition of the fact that this is ASSUMING a linear model is applicable to the data and has some predictive value.

In the WSJ for June 14-15 (p. C6) there is a review [by Mario Livio] of a book you might like:
“How Not to Be Wrong”
by Jordan Ellenberg
Here is a quote from the review: “But as Mr. Ellenberg quickly points out, the relation between prosperity and social services could very well be non-linear.” Then an inverted U form is mentioned.

ferdberple
June 15, 2014 9:33 pm

Steven Mosher says:
June 15, 2014 at 2:29 pm
“A statistics professor told me that the more sophisticated the statistical technique, the weaker the data. ”
false.
The data and its quality are separate and distinct from the method.
=============
I was also troubled by Dr Ball’s comment in this specific example. I believe the correct answer is:
“A statistics professor told me that the more sophisticated the statistical technique, the weaker the RESULT. ”
Perhaps Dr Ball could review this point for us. Thanks.

ferdberple
June 15, 2014 9:40 pm

or alternatively:
“A statistics professor told me that the more sophisticated the statistical technique, the weaker the CONCLUSION.”
I expect Dr Ball’s original quote was also correct, but not for the obvious reason. Rather, what his professor meant was that when someone needs to use sophisticated statistics, it is because their underlying data is weak. If you have good data there is no need for sophisticated statistics; the simplest of statistics will suffice.

ferdberple
June 15, 2014 9:42 pm

Steven Mosher says:
June 15, 2014 at 2:29 pm
The data and its quality are separate and distinct from the method.
======
thinking more on this question, I believe you are incorrect. Good data will reveal itself with simple methods, while bad data requires sophisticated methods to extract the signal from the noise.

george e. smith
June 15, 2014 9:47 pm

“””””…..Never try to walk across a river just because it has an average depth of four feet. Martin Friedman
“Statistics: The only science that enables different experts using the same figures to draw different conclusions.“ Evan Esar……””””””
Well I wouldn’t try to walk across a river that had an average depth (at that location) of two feet; I wouldn’t even step into a river that was 4 feet deep, right where I was (not) going to step. A lake maybe; but not a river.
Standard deviation wouldn’t help much; you only get one experiment, in the case where your first try is six sigma out on the deep end.
As for the second quip, by Evan Esar, whoever he is; the problem is in drawing ANY conclusions, from the results of a statistical calculation. The statistics is about numbers (figures), that you already know. Doesn’t tell you anything about any numbers you DON’T already know.

ferdberple
June 15, 2014 9:52 pm

The modern “climate-is-average-temperature” is a hoax.
==================
With one foot in the freezer and one foot in the oven you are on average comfortable. climatic science 101. everything else is dialogue.

Streetcred
June 15, 2014 9:54 pm

Seems to me that a cabal of ‘climatologists’, not a big number, decided to work the numbers to create a funding pipeline and a bit of power … Yeah, those dumbass politicians and bureaucrats were easily convinced to satisfy the craving. Seems also to me that a couple of astute politicians came upon this and recognised that they could gain political sway and riches by scaring the bejeezus out of weak-minded citizens, and then a few ‘industrialists’ got together with a few socialists to hold sway over the errant ‘climatologists’ and the politicians. The result is now the Great Climate Scam … ‘data’ and ‘statistics’ are just tools for manipulation.

Reply to  Streetcred
June 16, 2014 6:10 am

Good points and in my opinion, a bullseye.

george e. smith
June 15, 2014 9:59 pm

“””””……profitup10 says:
June 15, 2014 at 5:46 pm
English majors need a forum . . who cares about use of commas in a informal discussion? Find some real issues and join the discussion…….””””””
Well Dr. Richard Lederer; the world’s leading authority on the English language, says to put a comma anywhere you would “pause” in normal speech. (Most people do have to pause for breath reasons).
Yes he can parse anything you can write; but he says that “language is for communicating.”
I put them wherever I darn well please.

Reply to  george e. smith
June 16, 2014 6:07 am

Ys und my bt is taht lla can rd and comperehend tis setnece? We communicate in many ways, while some prefer long detailed narratives. Most people prefer getting to the point and make it short not flowery.

ferdberple
June 15, 2014 10:03 pm

I often describe distributions in terms of their skewness and kurtosis.
=========
baby steps. start with standard deviation and the normal distribution. then show how this is not the real world. rather, how we can sample the real world to arrive at the normal distribution, and thereby simplify the math to analyze the results in terms of the standard deviation.
while your data may not be normal, you can sample it in such a way that the sample is normal. and thus can be analyzed by standard methods. or you can skip this step, and analyze non-normal data as though it was normal, and the result is almost certainly misleading. this is what we affectionately call climate science 101.
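The "sample your way to normality" step is just the central limit theorem, and it can be demonstrated in a few lines of Python (a sketch with made-up numbers): draw from a strongly skewed population, and the means of modest-sized samples come out close to normally distributed, which is what licenses the standard-deviation machinery.

```python
import random
import statistics

random.seed(0)

# A deliberately non-normal "population": exponential, strongly right-skewed.
population = [random.expovariate(1.0) for _ in range(100_000)]

def skewness(xs):
    # Simple moment-based skewness: near 0 for a symmetric distribution.
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

# Means of repeated samples of size 50: the central limit theorem pulls
# their distribution toward the normal, so the skew largely disappears.
sample_means = [statistics.fmean(random.sample(population, 50))
                for _ in range(2_000)]

pop_skew = skewness(population)      # near 2 for an exponential
means_skew = skewness(sample_means)  # near 0

print(pop_skew, means_skew)
```

The point of the exercise: the normal-theory toolkit applies to the sample means, not to the raw skewed data, and skipping that distinction is exactly the "analyze non-normal data as though it was normal" trap.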

george e. smith
June 15, 2014 10:09 pm

When I went to school, every single exam, regardless of subject, was also an English test. If you couldn’t write reasonably correct English, you were docked marks, even if the answers related to the subject, were quite correct.
If you are content to be around nitwits, who, like, can’t even write a sensible sentence, I know plenty of places to go and seek their company.
WUWT is not such a place.

ferdberple
June 15, 2014 10:25 pm

She is being blocked at every turn.
Don’t think for one instance that Australia is an isolated case by these fraudsters.
=======
the adjacent pair adjustment is biased to perceive short term trend as artificial while ignoring long term trends. this assumes that any long term trend (logging, agriculture, urbanization) is natural while any short term trend (volcanoes) is caused by humans.
the adjacent pair adjustment to US temperatures is then used to reduce observed temperatures so that they match climate models, proving that the models are “correct”. the definition of correctness lies in the ability of reality to match model prediction. any observation that states otherwise must be noise and thereby must be corrected.

kadaka (KD Knoebel)
June 15, 2014 10:32 pm

Statistics convinces us with great certainty by comparing similar data that we eat with and breathe through our short muzzles.

BioBob
June 15, 2014 11:20 pm

kadaka (KD Knoebel) says:
June 15, 2014 at 10:32 pm
Statistics convinces us with greatestimated certainty by comparing similar data that we eat with and breathe through our short muzzles.
There…I fixed it. 8>P

Old England
June 15, 2014 11:52 pm

A little-known failure of Government statistics in the UK affected the house-building projections made in the 1970s and 1980s to determine how many houses needed to be built. The statisticians had all the basic data they thought they needed on births, deaths, household sizes and divorce, but they missed one critical piece of data: they knew nothing about re-marriages.
As a result, with a steady increase in the rate of divorce throughout the 1960s, their modelled projections produced future figures which unwittingly assumed that a time would be reached where there were no more married couples… only single adult families. Future housing need was based on these fatally flawed projections.
That, however, is something of an irrelevance today in the UK where there is a chronic housing shortage resulting entirely from the millions of immigrants in the last 10 years – but maybe it helps illustrate how basic errors of thought, judgment and understanding can produce meaningless results from statistical analysis of data.
Having always, instinctively, distrusted analysis of data using smoothing, running averages etc., I find this article a breath of fresh air, and it explains to me why the ‘smoothed’ data is so often at odds with the real, unadulterated data when viewed side by side in graphs.
As climate scientists are only too well aware, you can get whatever outcome you want from statistics if you play with the data in the ‘right’ way.

June 16, 2014 12:00 am

on data averaging :
The CET (the longest record available) shows some of the pitfalls of averaging. Climatologists told us to expect Mediterranean kinds of summers on the basis of the CET’s upward annual trend, without even looking at the summer and winter temperature trends separately.
http://www.vukcevic.talktalk.net/MidSummer-MidWinter.htm
As you can see, all the warming took place in the winter time, with the summers showing a near-zero trend. If anything, summers got cooler rather than hotter.
I commented about this 2-3 years ago on the RealClimate blog; it caused a mighty row with Tamino (the ‘grand’ AGW statistician) and Bailey from SkSc, to the extent that Gavin had to delete some of Tamino’s comments. Tamino went off in a huff, absenting himself for weeks.

Man Bearpig
June 16, 2014 1:01 am

Yes, thank you Mr Ball…
Something I have wondered about for a long while is the NSIDC use of std dev in their Sea Ice charts; are they std deviations based on the individual data points or averages/mean/whatever of groups of data?
Not only that, they use 2 std dev where one would normally expect to see 3.
e.g. this chart :
http://nsidc.org/data/seaice_index/images/daily_images/N_stddev_timeseries.png

johnmarshall
June 16, 2014 1:26 am

Many thanks Dr Ball, excellent post.

toorightmate
June 16, 2014 1:56 am

I do not like “statistics”.
However, “statistics” is an essential part of any research – in any field of endeavour.
I am amazed at the number of climatologists who do not understand “stats”, let alone the lack of understanding by the journos.
I guess “stats” just don’t figure in any tertiary arts curriculum.

Reply to  toorightmate
June 16, 2014 6:18 am

Statistics is basic math described by a new set of words, that is, jargon designed and used to create the illusion of correctness. http://en.wikipedia.org/wiki/Do-si-do = many spellings of the same dance steps.

June 16, 2014 2:08 am

Reblogged this on The GOLDEN RULE and commented:
This is a significant post.
If you don’t understand statistics fully, and I don’t, you should still understand what is spelt out here.
How inappropriate use of statistics has led to many people believing incorrect conclusions and, therefore, to incorrect decision-making.
A very significant element in the serious errors that relate to the “global warming” acceptance without justification.

Frederick Colbourne
June 16, 2014 2:24 am

Sorry to disagree with Dr Ball. But the validity of “standard deviation” depends on the distribution of the data.
Standard deviation is a valid statistic when the data used for the calculations is a random sample drawn from a population that is normally distributed, usually because the underlying process that generated the data was an arithmetic process.
However, not all data is normally distributed. In particular, data generated by a multiplicative process more closely approximates a lognormal distribution. If you transform a lognormally distributed variable by taking logs, you can then validly calculate the standard deviation of the transformed variable.
Wikipedia has an introductory article on the subject under “Data transformation (statistics)”.
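For readers who want to see the effect, here is a small Python sketch (all numbers invented): for lognormal data the raw "mean minus two standard deviations" bound is an impossible negative value, while the same summary on the log scale is well behaved.

```python
import math
import random
import statistics

random.seed(1)

# Data from a multiplicative process: products of many positive factors
# are approximately lognormal, not normal.
data = [math.exp(random.gauss(0.0, 1.0)) for _ in range(50_000)]

# On the raw scale, "mean - 2 sd" dips below zero even though every
# observation is positive: the summary is misleading here.
raw_mean = statistics.fmean(data)
raw_sd = statistics.pstdev(data)
lower_raw = raw_mean - 2 * raw_sd

# After a log transform the data are close to normal, and the usual
# mean / standard deviation summary behaves sensibly.
logs = [math.log(x) for x in data]
log_mean = statistics.fmean(logs)
log_sd = statistics.pstdev(logs)

print(lower_raw, log_mean, log_sd)
```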
It is true that different transformations can sometimes be used to illustrate different perspectives of the same data. For example, you can treat personal income data as having either a lognormal distribution or a Pareto distribution. The lognormal distribution is useful for incomes up to about the 90th percentile but is best up to the lower 50th percentile. So if you wanted to show the change in income for the middle class over time, you would use the lognormal distribution. But if you want to focus on the top 10 per cent of incomes, say for tax purposes, you would use the Pareto distribution.
For time series, such as we have in Earth science and climatology, whatever approach you use has problems, because the data are not random owing to auto-correlation. The risk of spurious correlation between variables is high because the data is not stationary.
In my opinion, most statistical analysis of climate data is worthless because the analysts have insufficient knowledge of the statistical tools they are using. The result is very much like what you get when an unskilled home handyman builds a piece of furniture with a hand drill and hand saw.
As an example of an attempt to use modern statistics to examine the claims of climate alarmists, I offer this paper by an Israeli group that concluded, “We have shown that anthropogenic forcings do not polynomially cointegrate with global temperature and solar irradiance. Therefore, data for 1880–2007 do not support the anthropogenic interpretation of global warming during this period.”
Reference: Beenstock, Reingewertz, and Paldor, “Polynomial cointegration tests of anthropogenic impact on global warming”, Earth Syst. Dynam. Discuss., 3, 561–596, 2012.
URL: http://www.earth-syst-dynam-discuss.net/3/561/2012/esdd-3-561-2012.html
The paper stirred some controversy, as you might imagine, and was later amended slightly. The conclusion was softened, but not so much that the authors did not make their point about the folly of using standard statistical methods to evaluate time series.
We all know that correlation does not imply causality. But we must add another caveat.
CAVEAT: Correlation of time series data is an unreliable way to demonstrate that two variables are related except in a spurious manner.
For time series, polynomial cointegration may reveal that two variables are related, or it may not. Or it may give either false positives or false negatives.
Whatever method you use and however pleased you are that the result supports your preconceived opinions, don’t develop public policy and pass legislation that will cost the country a trillion dollars based on climate statistics. If you do, those of us who survive this madness will come and spit on your grave.

profitup10
Reply to  Frederick Colbourne
June 16, 2014 6:21 am

More dosado; are you attempting to say that a negative times a negative is never a positive?

AJB
June 16, 2014 2:46 am

Here’s a case in point, topical of late.
MSL (seasonal signals retained). Compare with this.
MSL (seasonal signals removed). Compare with this.
Signal the same within a gnat’s whisker regardless of all the holes in the series.
Comparison to the UAH LT Global Ocean Annual Signal: interesting on the right.
Hopefully the usual suspects can see beyond the running mean eye-candy and ~3:1 difference in underlying resolution.

Michel
June 16, 2014 3:23 am

There is always a risk in presenting oipinion on a filed of study (satistics) and at the same time discussing one factual aspect of it.
The less useful part of this essay is the jokes and quotes about statistics and statisticians. And worse: this aspect captures most of the [useless] discussion.
The more interesting part is in the example, where smoothed time series are presented in comparison with single observations. This underlines the necessity, when massaging data by averaging or smoothing, to also have a look at the raw data and to apply statistical techniques to distinguish what may be significant from what lies within the boundaries of random variations or noise.
And, subjacent to the whole, it also is a case for serious peer review of publications where conclusions are drawn from complex statistical analysis. Who controls the quality of the job of the reviewers? Can it be done in a deep and serious manner when it is mostly a benevolent activity within scientific societies?

Reply to  Michel
June 16, 2014 6:25 am

Statistical techniques? Does that translate to twist the numbers to fit your desired end result? Maybe all uses of statistical analysis should include a confidence factor? Based of course on the techniques used in the smoothing.

June 16, 2014 3:26 am

… opinions on a field of study (statistics) …
sorry for the typos it went out too fast.

kowalk
June 16, 2014 3:57 am

Rich says
” It is much better to use annual means, which wipe away that portion of the error distribution.”
This may be true from a statistical point of view. However, it is important to know how much more the winters are warming compared to the summers. When, as the CET data shows, the winters have warmed by about 1.3°C while the summers have warmed by only 0.3°C over the last 350 years, then a distinct interpretation of these results is necessary, since much less alarmism is possible.
Perhaps, averaging should be made over the same months or seasons over several years. But I’m no statistician, so there may be other techniques to answer corresponding questions.
As many others have said already, there is little sense in averaging temperature or other measures over times and regions. We need to analyse climate zones as well as seasons to get sensible results.

June 16, 2014 3:58 am

Q. What do you call a rebellious statistician?
A: A standard deviant.
BTW if you think that statistics can produce any result that you want, then you don’t know anything about statistics. It is like saying that water is poisonous just because people drown.

Man Bearpig
June 16, 2014 4:59 am

Oh yes, said this before but here goes: I have more than the average number of arms, oh yes, and more than the average number of legs, eyes, ears, etc.
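The joke has a real statistical core: for a skewed count variable the mean sits below the mode, so nearly everyone is "above average". A toy Python example with an invented population:

```python
import statistics

# Hypothetical population of 10,000: almost everyone has 2 arms,
# a handful have fewer, nobody has more.
arms = [2] * 9_990 + [1] * 8 + [0] * 2

mean_arms = statistics.fmean(arms)                     # just under 2
above_average = sum(1 for a in arms if a > mean_arms)  # almost everyone

print(mean_arms, above_average)
```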

June 16, 2014 6:42 am

Regarding commas:
george e. smith says:
June 15, 2014 at 9:59 pm
I put them wherever I darn well please.

I can’t help but wonder, statistically speaking, how often commas are used in written English?
But, maybe, that’s just me.
🙂

June 16, 2014 6:44 am

Man Bearpig says:
June 16, 2014 at 4:59 am
Oh yes, said this before but here goes, I have more than the average number of arms, oh yes, and more than the average number of; legs, eyes, ears, etc.

Whoa! So do I!
Does that make us “standard deviants”?
🙂

kadaka (KD Knoebel)
June 16, 2014 6:51 am

BioBob said on June 15, 2014 at 11:20 pm:

kadaka (KD Knoebel) says:
June 15, 2014 at 10:32 pm
Statistics convinces us with greatestimated certainty by comparing similar data that we eat with and breathe through our short muzzles.
There…I fixed it. 8>P

No. No you did not, not in any way. And I loathe it when smug bass turds pull that juvenile prank and pretend they did something clever rather than use the acceptable “Should have said”. I devoted a great deal of my dwindling supply of pre-sleep brainwaves to get that just right. You no more “fixed” that than a dog gets “fixed”, try convincing him he used to be broken. “But they’ll live longer, they’ll be happier for it.” Well then whip out your set, bud, and flop them on the table. Afterwards you can thank the doc for making your life so much better. He fixed it for you!
And I state that with 95% confidence.

kadaka (KD Knoebel)
June 16, 2014 7:10 am

profitup10 on June 16, 2014 at 6:21 am:

More dosado; are you attempting to say that a negative times a negative is never a positive?

Two wrongs do not make a right. But three lefts usually do. Sometimes it takes more lefts to make a right, involving a Supreme Court ruling and/or an Executive Order. Soon they might announce the right to be free of the tyranny of the right, which three or more lefts agree is the right thing to do.

June 16, 2014 8:12 am

george e. smith says:
June 15, 2014 at 10:09 pm
=====
Do they still diagram sentences?

Phil C
June 16, 2014 8:26 am

english majors need a forum who cares about use of commas in a informal discussion find some real issues and join the discussion well dr richard lederer the world’s leading authority on the English language says to put a comma anywhere you would pause in normal speech most people do have to pause for breath reasons yes he can parse anything you can write but he says that language is for communicating i put them wherever i darn well please
That sort of illustrates why the Greeks finally started to use punctuation. Instead of criticizing a bit of poor grammar or punctuation, why don’t all you grammar/punctuation nags rewrite the text the way you feel makes sense?
For everybody else, including the poor folks who have suffered through an education recently: get a copy of “The Elements of Style” by William Strunk Jr. of Cornell University. It’s now in its fourth edition for a good reason. It’s a quick, easy read on how to write clear, concise English, as opposed to the many half-page, one-sentence paragraphs seen in scientific papers.

June 16, 2014 8:29 am

Dr Ball says that up to the 1960s there was basically the mean and little else in statistics. It is a shame he chose not to check that because he might have found that many statistical techniques predate the 1960s by a good distance in time. Lines of best fit – 1800s. Bayes’ theorem – 1760s. Student’s t test – 1900s. Correlation – 1880s. I thought the piece was about the missing standard deviation. The piece was actually about telling us that things vary, therefore it can’t be caused by humans. That’s tired and stale and not true. Chalk one up to ignorance on Dr Ball’s part.
Statistics began as an attempt to make sense of data. It grew out of the ideas of probability. Statistical techniques are really about measuring probability still. I believe in most areas of science one standard deviation isn’t enough. Two or more is the gatekeeper. Five in some areas of physics. What an ingenious pursuit all of this is!

June 16, 2014 9:26 am

Geoff, “What is the bias in a typical, conventional thermometer measurement from year 1900 or thereabouts? Anyone know of a paper examining this in forensic detail?”
It was never measured, Geoff. Not only that, but no national meteorological service, nor the WMO, has ever set up an experiment to estimate the bias or the uncertainty in the surface air temperature record. No one knows how accurate the record is, but climate scientists and official organizations are nevertheless perfectly willing to tout unprecedented rates and extents of warming.

kadaka (KD Knoebel)
June 16, 2014 9:33 am

From Margaret Hardman on June 16, 2014 at 8:29 am:

Dr Ball says that up to the 1960s there was basically the mean and little else in statistics. It is a shame he chose not to check that because he might have found that many statistical techniques predate the 1960s by a good distance in time.

But who was really using them, before the advent of sufficient available computing power? And even then, for a while: if you had to choose between having an assistant knock out the means on a desk calculator and then publishing quickly, or fighting with department heads for funding, getting the analysis programmed, and scheduling a job on the mainframe to obtain a more in-depth statistical treatment that not many would care about, which would you choose?

Tom Asiseeitnow
June 16, 2014 9:44 am

Is the average temperature on a given day the midpoint between the high and the low, or is it the sum of the temperature for every minute of the day divided by 1,440? And why is the average temp never correlated with altitude, humidity, and wind speed? Much ado about nothing important in terms of future predictability.
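The two definitions genuinely differ whenever the daily temperature curve is asymmetric, which it usually is. A Python sketch with an invented diurnal curve:

```python
import math

# Hypothetical asymmetric diurnal cycle: base 15 C plus two harmonics,
# so the curve's maximum and minimum are not symmetric about its mean.
def temp(minute):
    theta = 2 * math.pi * (minute / 60.0 - 9) / 24
    return 15 + 8 * math.sin(theta) + 3 * math.cos(2 * theta)

readings = [temp(m) for m in range(1440)]  # one reading per minute

midrange = (max(readings) + min(readings)) / 2  # the usual (Tmax + Tmin)/2
true_mean = sum(readings) / 1440                # mean of all 1,440 minutes

print(midrange, true_mean)
```

For this shape the minute-by-minute mean is 15 C while the (Tmax+Tmin)/2 "average" sits a couple of degrees away from it; which one a record reports matters.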

Stephen Rasey
June 16, 2014 12:37 pm

Furthermore, the real uncertainty is almost always greater than or equal to that estimated from the sampled data. For how do you know the uncertainty in the data you did not sample? I can plunge a thermometer 300 m at 50 deg N, 20 deg W on July 1, 1972, and do it again on Sept 1, 1973. Those two readings do not remotely define the uncertainty in temperatures for the entire North Atlantic for the decade of the 1970s.
But to read Levitus 2009, the uncertainty in the Ocean Heat Content prior to 2003 is based on precisely such poor spatial and temporal sampling of ocean temperatures, with unrealistically narrow uncertainty bands. Prior to 1996, ocean temperature profiles were primarily done for antisubmarine warfare research, and thus concentrated around submarine patrol areas, leaving huge ocean volumes entirely unsampled. See Figure 1, ocean temperature data coverage (maps: b = 1960, c = 1985), from Abraham, J. P., et al. (2013) (pdf)

Reply to  Stephen Rasey
June 16, 2014 3:54 pm

I have a Professor friend at UCSD and Scripps Institute who was a climate change skeptic until the amount of money for grant science EXPLODED; thereafter he changed, and he is now a world lecturer on the need for more research money . . . They have no hard evidence; it is all OPINION . . . The individual I spoke of was an El Nino and La Nina researcher before the money went to AGW and a global tax to control individual actions.
It is difficult to actually find hard evidence of any climate change other than the normal ice age, warming, hot and freezing stages of the geological history as we now see it. Do any of you remember what you did and said in 8th grade? Well, how hot was it when you graduated from high school?

Duster
June 16, 2014 1:49 pm

RoHa says:
June 15, 2014 at 5:25 pm
“Universities previously divided into the Natural Sciences and Humanities, saw a new and ultimately larger division emerge, the Social Sciences.”
This sentence violates the “no comma after subject clause” rule. It is not a difficult rule, so I do not understand why I see it violated so often.
Either the comma after “Humanities” is superfluous, or a comma should be placed after “Universities” to make the section between the two commas into a subordinate clause.

Long ago my high school English instructor explained the distinction between what he referred to as strict constructionists and what he called “relativists.” He himself was a strict constructionist and explained that although we might very well have been taught “last year” by a relativist, this year we would need to follow the rules. The sentence you address reflects the confusion these alternating standards created in many students. One of my teachers would have called for a comma immediately after “Universities,” while another teacher I had would have eliminated the comma after “Humanities” and replaced the one following “emerge” with a colon. That one considered commas more an irritant than an aid to written communication.

Duster
June 16, 2014 2:00 pm

Margaret Hardman says:
June 16, 2014 at 8:29 am
Dr Ball says that up to the 1960s there was basically the mean and little else in statistics. It is a shame he chose not to check that because he might have found that many statistical techniques predate the 1960s by a good distance in time. Lines of best fit – 1800s. Bayes’ theorem – 1760s. Student’s t test – 1900s. Correlation – 1880s. I thought the piece was about the missing standard deviation. The piece was actually about telling us that things vary, therefore it can’t be caused by humans. That’s tired and stale and not true. Chalk one up to ignorance on Dr Ball’s part.
Statistics began as an attempt to make sense of data. It grew out of the ideas of probability. Statistical techniques are really about measuring probability still. I believe in most areas of science one standard deviation isn’t enough. Two or more is the gatekeeper. Five in some areas of physics. What an ingenious pursuit all of this is!

Margaret, you honestly need to read articles with an eye to the actual subject, which, in this case, was not statistics in general but rather how statistics are purportedly employed in climate “science.” The number of SDs regarded as significant tends to vary, as you say, by discipline, but the fact is that two SDs, while treated as a “Gate Keeper,” still fall within “Las Vegas odds,” and many operators actually avoid implications of significance in less picky fields than physics.
As concerns the SD, in climate data, it is often missing, as Dr. Ball says.

Margaret Hardman
June 16, 2014 3:05 pm

Duster, I had no mote in my eye when I read Dr Ball’s piece. It was transparent enough. It wasn’t about the missing standard deviation, which got one mention right at the end, but about the idea of variability which, as I suspect you well know, is a tired and discredited attempt to cast doubt on the causes of climate change.
Kadaka, as a student before the use of personal computing, I carried out many statistical tests (Student’s t, chi squared, linear regression, correlation tests) by hand, on paper, using a pen and a slide rule. If I could do it at eighteen, I can’t see why others weren’t doing it. Fancy computers weren’t necessary, just time and the formulae.
My point, which perhaps I need to reiterate, is that Dr Ball did not bother to check the facts. He chose to make an unevidenced assertion that turns out to be wrong. In fact, I’d say it’s a whopper.
REPLY: Margaret, the “grandma with an opinion” per her own description, typically only sees what she wishes: that climate scientists are right and pure, and that climate skeptics are stupid/lying, etc., in this case, her vision is that Dr. Ball is stupid and/or deceitful. That’s fine, she’s entitled to her opinion, wrong as it may be. But here’s the thing, from the article, and there is really no way of getting around this:
It is important to note the isolation of the paleoclimate community; even though they rely heavily on statistical methods they do not seem to be interacting with the statistical community.
As the Wegman report documented, and as Steve McIntyre and others have shown repeatedly, even today with his latest post on Abrams et al, climate science has had this unfortunate tendency to be very shoddy with statistics, and to forge its own paths with statistical methods that nobody else seems to use the same way. These “made to order” stats methods (like those Mann has created for his own papers) have no basis elsewhere, and seem to fall apart badly when inspected. Margaret will of course think the same of me as she does of Dr. Ball for saying this, but frankly my dear, I don’t give a damn. What matters is whether the special brand of stats we see in climate science holds up outside of that venue, and as we’ve seen time and again it doesn’t.
What’s funny is that until I protested about it, NSIDC didn’t include time/date stamps or standard deviation in their sea-ice plots: typically pretty important stuff. – Anthony

Adam
June 16, 2014 6:32 pm

The bottom line is a plot of the predictions the IPCC made in the past against the actual data which has been observed. There is no statistical analysis required. The two diverge greatly, and it is plain for all to see.

Stephen Rasey
June 16, 2014 7:06 pm

A cruel, but often overlooked, reality of statistics is that ALL processing done on a data series, such as removing seasonal changes, ALWAYS adds to the uncertainty in the final result.
You can subtract means to get an anomaly. But whatever you do to the data, you add to the variance of the resulting data: (the variance that was in the data) + (the variance in the average quantity you subtracted from the data).
(Edit: it was my intention that these comments came before the comment that measured uncertainty is likely an underestimate of real uncertainty.)
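That variance bookkeeping can be checked directly in Python (a hypothetical simulation): an anomaly formed by subtracting an estimated baseline mean has variance equal to the data's own variance plus the variance of the baseline estimate.

```python
import random
import statistics

random.seed(3)

sigma = 1.0   # true spread of individual observations
n_base = 10   # observations used to estimate the baseline mean

# Each trial: estimate a baseline mean from n_base noisy observations,
# then form the anomaly of one further observation against that estimate.
anomalies = []
for _ in range(100_000):
    baseline = statistics.fmean(random.gauss(0, sigma) for _ in range(n_base))
    obs = random.gauss(0, sigma)
    anomalies.append(obs - baseline)

# Var(anomaly) = Var(obs) + Var(baseline estimate) = sigma^2 * (1 + 1/n_base)
observed = statistics.variance(anomalies)
expected = sigma ** 2 * (1 + 1 / n_base)

print(observed, expected)
```

The anomaly is noisier than the raw observation, by exactly the variance of the subtracted mean.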

Peter Sable
June 17, 2014 12:24 am

Rich “It is much better to use annual means”.
No, it is not better, because annual means assume exact 365-day cycles (note the 28-day lunar cycle, and the 0.25-day shift corrected every 4th year except by some oddball rule I forget, as just two simple examples), and from a signal-processing standpoint running means produce frequency-dependent phase shifts, distorting the signal and preventing useful comparisons.
Applying one dimensional statistical techniques to time series data is wrong. Numerous previous posts about this all over WUWT.
Statisticians should be banned from doing signal processing 😛

Robin Edwards
June 17, 2014 4:54 am

There’s been some common sense and some nonsense in the replies to Dr Ball’s article. His recommendations on smoothing just happen to be the same as mine, and so I welcome his statement and hope that others in the climate industry will take note of it. The statement on loss of information by applying any “treatment” to original data is self-evidently correct, but smoothing (a general term for many methods of “simplifying” data by special sorts of averaging, primarily for the benefit of journalists and politicians, it seems to me) has its uses. Again, I do not smooth observational data, but I do recognise that observations are not error free on some scale. One has to take a position (arbitrarily, I think) on what constitutes a discrepancy that has a noticeable effect on one’s opinion of the real-world situation. For example, is a temperature change of 0.1 C of any significance to practical people like farmers, who will know that a single field can have temperature differences between its various parts of noticeably more than 0.1, and that half an hour’s time difference can produce changes of whole degrees, not tenths, in the temperature of a specific site?
Incidentally, the idea of someone doing statistical calculations using a slide rule strikes me as rather funny. Stats is a digital discipline. I used to use a mechanical calculator in the 1950s to sum squares and products, and very time consuming it was too. But I did get the right answer, which you wouldn’t with a slide rule!

G P Hanner
June 17, 2014 5:19 am

“A statistics professor told me that the more sophisticated the statistical technique, the weaker the data.”
And that is the exact reason why I gave up on econometrics.

Robin Edwards
June 17, 2014 6:50 am

Frederick Colbourne writes about the “validity of the standard deviation”. I understand that the Std Dev is simply an estimate (there are others) of the dispersion of a set of data about its mean. It is perfectly valid for any type of data, underlying distribution notwithstanding. It is simple to calculate and is very widely understood. Frederick's concern about its “validity” is unwarranted. I think he may really be concerned about computing inferential statistics (confidence intervals, for example) from a computed standard deviation if the underlying distribution, from which one's sample is presumed to have been randomly drawn, happens to be substantially non-normal. This is a frequent occurrence. In fact, I have yet to learn of a naturally occurring distribution that is truly “normal”, though many approximate to it. If one uses the t distribution in computing confidence intervals from the sample standard deviation, they will be realistic for approximately normal data, but could be badly wrong if the data are heavily skewed or exhibit kurtosis. If the data are from certain Weibull or log-normal distributions, confidence intervals computed on the assumption that the data are approximately normal can be nonsensical. It follows that one should attempt to characterize the data, possibly by plotting, before pronouncing authoritatively on its inferential statistics.
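The point about t-based intervals going wrong for skewed data is easy to demonstrate with a coverage simulation (numpy assumed; sample size, trial count, and the lognormal parameters are arbitrary illustrative choices, and t(0.975, df=9) = 2.262 is taken from tables):

```python
import numpy as np

# How often does the nominal 95% t-interval for the mean actually cover
# the true mean?  Normal samples vs. heavily skewed lognormal samples.
rng = np.random.default_rng(1)
n, tcrit, trials = 10, 2.262, 20_000     # t_{0.975, df=9} = 2.262

def coverage(draw, true_mean):
    hits = 0
    for _ in range(trials):
        x = draw(n)
        half = tcrit * x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean() - true_mean) <= half
    return hits / trials

cov_normal = coverage(lambda m: rng.normal(0, 1, m), 0.0)
cov_lognorm = coverage(lambda m: rng.lognormal(0, 1, m), np.exp(0.5))  # lognormal mean = e^0.5
print(cov_normal, cov_lognorm)
```

For normal data the empirical coverage sits at the advertised 95%; for the lognormal it falls well short, which is exactly the "nonsensical intervals" warning in the comment above.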

June 17, 2014 7:48 am

Anthony, thank you for your comment. It was nothing less than I expected. I cannot tell whether Dr Ball is being deceitful or stupid. What I was pointing out, as I am sure you are bored of hearing, is that he made an easily checkable mistake. The rest of what he wrote was pretty much an opinion piece unsullied by facts. I am glad I am entitled to my opinion. Now that I have your approval, I shall wear it with pride.
I do not buy your assertion that McIntyre or Wegman have demonstrated what you say they have. For example: http://www.csicop.org/si/show/strange_problems_in_the_wegman_report.

Solomon Green
June 17, 2014 8:16 am

Robin Edwards
“Incidentally, the idea of someone doing statistical calculations using a slide rule strikes me as rather funny. Stats is a digital discipline. I used to use a mechanical calculator in the 1950s to sum squares and products, and very time consuming it was too. But I did get the right answer, which you wouldn’t with a slide rule!”.
I, too, used a mechanical calculator, but I have also used a slide rule and even log tables. It depended on how much data was being used and how accurate it was. It was no use calculating a mean to two decimal places if the data was not accurate to one decimal place.
But I agree with every word of Robin Edwards’s second post. Too many compute means, standard deviations, variances and confidence levels from data without first having ascertained whether the distribution that they are assuming is valid or even approximately valid.
Margaret Hardman says:
“…as a student before the use of personal computing, I carried out many statistical tests, student t, chi squared, linear regression, correlation tests, by hand, on paper, using a pen and a slide rule.”
Yes that was how students of other disciplines were taught statistics but those who were really going to use statistics were first taught how to gather the most accurate data (usually by sampling), then to study the data (and as much other relevant material as possible) to guess the underlying distribution and then to test whether that distribution was even a remotely accurate depiction of the real distribution. Only then were we allowed to play around with statistical tests. And even then we were admonished to treat our own estimates with discretion.

Steve
June 17, 2014 8:43 am

Excellent article, thank you for sharing this. People do tend to remember a statistic when it agrees with what they already believe, and find some reason to disregard it when it does not. Nowhere is this more evident than in the world of climate science news. It is a human behavior best summed up by the expression “People use statistics like a drunk uses a lamp post: for support, not illumination.”

June 17, 2014 9:55 am

The total corruption of climate ‘science’ pal review was exposed in the Climategate email dump. If there was any honesty Mann’s MBH98/99 would have been thrown out. Mann has yet to produce his data and methodologies, although McIntyre & McKittrick reverse engineered most of it. They showed that Mann’s reconstructions were bunkum.
Long-accepted reconstructions, even by the IPCC, showed the MWP and the LIA. But the devious little Mann erased them.

June 17, 2014 12:04 pm

Db, still letting the smoke get in your eyes. Climategate shown to be trash. Mann supported by later studies. You’ll be quoting the Oregon nonsense soon.

RACookPE1978
Editor
June 17, 2014 1:28 pm

Margaret Hardman says:
June 17, 2014 at 12:04 pm

Db, still letting the smoke get in your eyes. Climategate shown to be trash. Mann supported by later studies.

Odd. Seems that those “other studies” you are referring to were written by …. (dramatic pause) .. co-authors of Mann who were PROFITING (through exposure, publication, and co-operation) because of their incestuous relationship with Mann and his crew!
Further, who have “no idea” who the “peer-reviewed” co-conspirators in the cabal were either.
But, of course, “any” evil oil money contaminates “any” actual information, doesn’t it? How much so-called “science” can you buy for 1.3 trillion in tax money from a government-paid “climate” shill (er, scientist)?

June 17, 2014 2:28 pm

Margaret, all those supporting studies merely showed there’s more than one wrong way to get a hockey stick.
More to the point, shape-supporting or not has no relevance to whether Mann’s particular constructions were honest, or not. They were not. No number of quasi-independent hockey sticks will change that judgment.
It was not Climategate that showed the dishonesty. It was not Wegman’s report. It was the contents of Mann’s own “Back to 1400 CENSORED” directory, which showed that he knew his 1400 reconstruction step failed its verification test, and that he knew his invented short-centering method falsely elevated the White Mts. bristle cone series into PC1. Despite knowing the results were false, he published anyway. In anyone’s book, that’s dishonest; anyone perhaps except those dedicated to the perversion of science in which a lie decorated with mathematics serves a “higher truth.”
And all of that is beside the basic point that consensus proxy temperature reconstructions are no more than pseudo-science. They have no physical basis whatever.

MACK1
June 17, 2014 4:09 pm

Two main scourges of life since about 1980 have been recreational drugs, and computer power in the hands of the statistically illiterate – and some people seem to be involved in both at the same time.

June 17, 2014 11:06 pm

I find it risible that there is such an obsession with Mann, as if he were the only climatologist on the planet. I still await a clear indication that any errors he made change his results (cf Tol). I find it doubly amusing that the recursive (look the word up if you don’t know what it means) behaviour displayed here is much the same as a dog chasing its tail. Fun while it lasts but gets you nowhere. Meantime real scientists get on with finding out what’s really what (I expect to see that in block quotes). Or perhaps someone will quote me the cut and pasted Wegman pages or the three quotes from 5000 stolen emails.

June 18, 2014 3:06 am

Margaret Hardman says:
I find it risible that there is such an obsession with Mann, as if he were the only climatologist on the planet.
Let’s restate:
“I find it risible that there is such an obsession with Hitler, as if he were the only dictator on the planet.”
See? Mann deserves the opprobrium because he is a charlatan. He still refuses to produce all his data and methodologies after sixteen years! Margaret is trying to defend the indefensible.
Next:
I find it doubly amusing that the recursive (look the word up if you dont know what it means) behaviour displayed here…
“Recursive” as in “Recursive Fury”? Ah, that’s from the odious Mr Lewandowsky. It figures. Birds of a feather, etc.
Next:
…perhaps someone will quote me the cut and pasted Wegman pages or the three quotes from 5000 stolen emails.
Yet another ad hominem fallacy. What Prof Wegman’s paper disclosed was that Mann is a charlatan. Nothing about that has changed. And if Margaret wants three Climategate quotes to be doubled and squared, well, we can do that. I will be happy to open that can of worms again. Shall we?
Finally, “stolen” emails? Prove it, Margaret. Again, Margaret uses the “Oh, look! A kitten!” tactic to divert from the self-incrimination of the charlatan clique. Those emails have never been disputed by their authors. They prove conclusively that they are self-serving promoters of the debunked CAGW scam.
The CAGW clique, led by Mann, is plainly dishonest. They are despicable censors, who have gotten honest scientists fired for the crime of having a different scientific opinion, and they are still at it. No wonder Margaret is upset. She is no different. The really amusing thing is, they are at least feeding at the grant trough, while Margaret is spinning her wheels for nothing.

June 18, 2014 8:21 am

Db, as ever you have surpassed yourself in splenetic wonderment. Your behaviour is recursive in the correct sense of the word. Forget the bogeyman Lewandowsky – I don’t think he will come to get you. Nor do I think Mann will but you never know since I think he might not like being called a charlatan but that’s up to him.
By the way, your rewriting of my sentence about Mann changes the meaning. I’ll leave it to your genius to spot what you did wrong. And can I bring in my “There’s a squirrel” since I think it is fairly clear that the Wegman report is littered with barely concealed plagiarism and sadly, that is no ad hominem. Now if I had said Wegman was a charlatan then that would be an ad hominem. At the risk of sounding patronising, don’t you check anything. I thought that was what skeptics did.
As for being upset, well, bored by the repetitive nature of the comments here and the articles. Spinning the wheels provides some entertainment, but the smoke coming from the wheels of the d-listers here just clouds the vision of some people. My conscience is clean on this matter. I hope that yours is too. As for getting honest scientists fired for the crime of having a different scientific opinion, please name some names. Salby – running off with the university credit card when he should have been teaching. Bengtsson – not sacked. I’m sure there’s others who have lost their jobs for the sin of being no good at it. I thought there were no conspiracy theories allowed on this site. I can hear the sound of scissors.

June 18, 2014 8:47 am

Margaret H says:
Forget the bogeyman Lewandowsky
You quoted him. Lewandowsky doesn’t bother me. Every time I see his name, I recall the Aus student in his class who commented, “Get a bath, grub,” and laugh.
But Lew has obviously colonized your mind. And re: Mann, it is my opinion that he is ia scientific charlatan, because he does not allow other scientists the opportunity to falsify his hokey stick nonsense. They can both come and get me for having that opinion. I clearly understand that if you could, you would censor my opinion, too, just like ScAm and others have done. You climate alarmist propagandists are all alike in that way. You do not want the truth disseminated. So you censor.
Next, Margaret, it is no surprise that you don’t get it. I intended to change the meaning of your sentence, to show the world that it is, in fact, you who has the obsession. Your misdirection regarding Prof Wegman always avoids the points he made about Mann and his clique. You are trying to make Wegman the issue, when the central issue is Mann and his pals.
Regarding your last paragraph, you have no conscience. No propagandist does. Your confirmation bias and your overriding Noble Cause Corruption rule you. If you actually believe [which I doubt] that scientists are not fired or marginalized by Mann’s clique, then I advise you to start reading the Climategate emails, where they explicitly state that is what they are doing. You label skeptics as believing in a ‘conspiracy theory’, when that conspiracy has been proven beyond any doubt. It is no longer a theory, it is a proven fact.
I used to wonder how some people could be so deluded. But I’m past that now; I understand that a large fraction of humanity will make a casual decision, such as watching the Algore movie and then assuming he was being truthful, and then altering their life around defending it. That’s you. And when you have other ‘issues’, it further clouds your thinking process.

June 18, 2014 10:28 am

Margaret wrote, “I still await a clear indication that any errors he made change his results.” Old stuff, Margaret. Here you go: remove Mann’s false centering and the hockey stick goes away.
Other wonderfulness among the proxies you love include the misuse of the Yamal series, and the new science of using upside-down proxies to get them to show “warming.”
And that’s all apart from the lovely method of surgically removing the bits of proxies that disagree with the “narrative.” Tendentious Proxectomy — the new ‘proof’ of AGW science.
And all of that is quite apart from the fact that scaling tree ring, coral, or ice core series to match the temperature record is physically unjustifiable. It isn’t science at all.
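The "short centering" claim that runs through this thread can be illustrated with a toy sketch (numpy assumed; this is not Mann's actual code or data, just the decentering effect McIntyre and McKitrick described, with arbitrary sizes). Centre pure red-noise series on a late "calibration" window instead of their full-length means, and PC1 latches onto whichever noise series happen to drift in that window:

```python
import numpy as np

rng = np.random.default_rng(2)
n_series, n_years, cal = 70, 600, 100   # proxies, length, calibration window

noise = rng.normal(size=(n_years, n_series))
proxies = np.zeros_like(noise)
for i in range(1, n_years):             # AR(1) "red noise" -- no climate signal
    proxies[i] = 0.9 * proxies[i - 1] + noise[i]

# "Short centering": subtract the mean of the calibration window only,
# rather than each series' full-length mean.
short_centered = proxies - proxies[-cal:].mean(axis=0)
_, _, vt = np.linalg.svd(short_centered, full_matrices=False)
pc1 = short_centered @ vt[0]

# By construction the calibration window averages to zero, so any mean
# offset in PC1 is forced into the pre-calibration "shaft".
print(pc1[:-cal].mean(), pc1[-cal:].mean())
```

Even though the input is signal-free noise, the decentering guarantees a level difference between the calibration "blade" and the earlier "shaft" of PC1, which is the mechanism behind hockey sticks mined from noise.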

June 18, 2014 10:43 am

DBStealey
Quoted Lewandowsky – wrong. I used a perfectly good word from the many available in the English language.
Obsession with Mann – wrong. Read his paper, read the critiques, know which one is more convincing.
Wegman – wrong. Shown to have plagiarised. End of.
No conscience – wrong. I think I know my conscience better than you but obviously you know best.
Propagandist – wrong. I dislike hypocrisy.
Firing of climate scientists – wrong. But you tell me what I think. Isn’t that the job of a propagandist?
Casual decision – wrong. My decision was based on the egregious mistakes, if I may be so charitable, provided by sites like this one, and the fact that the so called skeptical side cannot converge on a single coherent idea but splinters faster than the Iraqi army.
Watched algore movie (sic) – no, but I did watch that Channel 4 Truth About Global Warming thing a few years back because I thought it might be the case that global warming wasn’t true but it was so transparently wrong and, hey, propagandist, that it and the laughable Rose journalist at the Daily Mail convinced me otherwise. Luckily I have enough scientific training to understand when someone tries to take me for a scientific ride.
Climategate emails – wrong. Read rather a lot of the yawn inducing things and the shock horror ones in context and see a different reading to you. Strangely, my opinion is shared by all the official inquiries but I suppose there is a flag on that play.
Conspiracy theory – for goodness sake. Wrong in neon letters. Proven beyond doubt. Supporters of the birther movement, 911 truthers and other fake conspiracies would say the same about their pet conspiracies. Not a theory, it’s a fact. Send that one to Bill Maher or Jon Stewart. ROTFLMFAO.
I used to wonder how people could be so deluded – wrong. Here’s how I work. I don’t take anyone’s word for it. I check. I read. I ask myself the questions that I think are important. Then I make my decision. Like I said, my conscience is clear.
As for you, I don’t know how you arrived at your decision. What I do know is you seem to assume too much. But then again, I could be wrong. I don’t take anything for granted.
Other ‘issues’ – don’t know what they might be but since you know more about me than I do myself, perhaps you will enlighten me.
In the meantime, I’m looking forward to Spain v Chile.

June 18, 2014 11:06 am

Margaret Hardman says:
Quoted Lewandowsky – wrong. I used a perfectly good word from the many available in the English language.
You lie like a child. Point out where you used “recursive” here prior to Lew’s book coming out.
Obsession with Mann – wrong.
Oh, but exactly right.
Wegman – wrong. Shown to have plagiarised.
Playing the man, not the ball = ad hominem fallacy.
I think I know my conscience better than you but obviously you know best.
As explained, you have no conscience.
My decision was based on the egregious mistakes, if I may be so charitable, provided by sites like this one…
Mistakes at WUWT are corrected via discussion. You are now being corrected, and that will continue indefinitely, until you understand the issues — which so far, you do not.
I thought it might be the case that global warming wasn’t true…
Scientific skeptics have always known for a fact that global warming has been happening since the LIA. Only Mann’s acolytes believe that the climate never changed until the Industrial Revolution — Mann’s Hokey Stick shows a flat T until then. He dishonestly erased the MWP and the LIA. And you bought that nonsense hook, line and sinker. Mann took you for a pseudo-scientific ride, and CAGW is now your religion.
Climategate emails – wrong.
A stupid assertion that is contrary to reams of evidence. Those incriminating emails have never been denied by their authors.
Supporters of the birther movement, 911 truthers and other fake conspiracies…
Margaret is falling down on the propaganda job: she forgot to mention white supremacists and creationists. Only the deluded Margaret would mention the off-topic movements that she did. Earth to Mr. Margaret: as stated above, the Climategate emails prove beyond any doubt that Mann and his clique conspired to get people fired. That is a verifiable fact — and it was done. Only a deluded True Believer would impotently try to lump fake conspiracies in with a verifiable conspiracy. Hope you’re not a lawyer, Margaret, because if that’s how you think you wouldn’t win a case.
I check. I read. I ask myself the questions…
It is clear that you don’t ask the right questions. You are afraid to ask the right questions. Your mind is closed tighter than a submarine hatch. CAGW is your religion, and mile-thick glaciers could descend once again over Chicago. You would still be parroting: “Global warming! Global warming! AWK!!”

tadchem
June 18, 2014 11:52 am

Statistics provides a wonderful analytical tool. As a predictive tool it is totally useless.
It allows you to summarize what you already know but cannot tell you anything you don’t already know.

June 18, 2014 11:53 am

Db, hope you don’t come into any legal firm wanting help because you might struggle to find the exit door.
I note you have resorted to ad hominems. Since you cannot know whether I have a conscience or not beyond what I have said, what I have told you, well, Flywheel, Schyster and Flywheel might defend you but others, I’m not so sure. I find it strangely reassuring that you know better than me what I think and Anthony Watts tells me I am allowed to have an opinion, even if it is wrong. Gee, thanks. As they say in God’s greatest continent, bonzer. I didn’t know I needed to have a licence.
As for religion, no thanks. I can see why you say it but you once again don’t have a clue what I actually think and why I think it. You hope you do but that is your modus operandi. In fact, that seems to be the manner by which this site works. It’s like bread and circuses here, isn’t it? But then you know the rules, as a moderator, and know how ventriloquism is one way to keep the discussion going and one way to ensure that those who come with a rationalist philosophy quickly get outnumbered. No wonder you come across as so sure that you are right. There’s always that trembling concern that when the New York subway becomes a submarine way, you’ll still be quoting the Oregon Petition as if science is done by collecting signatures.
As for questions, I ask them, the important ones, daily. Having weighed the evidence, I come to my own decision. I don’t propagandise because I trust people to make their own decisions. Perhaps this site could do the same.
Oh, well. Back to work at Sou, Grabbit & Runne, Attorneys at Law.

June 18, 2014 12:02 pm

Forgot to mention the 9 (count ’em) investigations into the so-called Climategate emails and how the skeptical crowd came out on top in, erm, none of them.

June 18, 2014 12:09 pm

To cement her position as a gullible lemming, Margaret cites so-called ‘investigations’, in which no hostile witnesses were called — and in which Michael Mann was allowed to confer with the committee beforehand in order to decide what questions to ask, and what not to ask!
That is the ultimate appeal to a corrupt authority. It is like the police investigating a robbery, but refusing to question eyewitnesses. Only a fool would label those so-called ‘investigations’ as anything other than a whitewash.

June 18, 2014 12:27 pm

[cut the personal insults. .mod]

June 18, 2014 1:23 pm

[Margaret, you are welcome to dial back your hatred, and resubmit this comment, or not. Either way, you’re gonna blow a vein if you keep it up. – Anthony]

ShrNfr
June 18, 2014 3:33 pm

I will just speak of standard deviations and their like. Those, along with correlations and some other things, are second-order statistics. They only exist in a space with a dimension greater than or equal to L2. You can always compute a sample “standard deviation”, “correlation”, etc., but that does not guarantee that the thing you are sampling lives in that L2 or higher space. Many things do, but there are some that simply do not. You have to estimate the dimension of the space using some other tools to be confident that the “standard deviation” will not diverge to infinity as your sample size increases. The Cotton Futures market was shown by Mandelbrot to probably live in around L1.8. [See Cootner, MIT Press, The Random Character of Stock Market Prices] Compute the sample statistic, but please be aware that it may well be that your anticipated 100-year flood occurs every 25 years or so.
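The divergence warning above is easy to see numerically (numpy assumed; the standard Cauchy stands in for a heavy-tailed law with no finite second moment, and the sample sizes are arbitrary). The sample standard deviation always computes, but for Cauchy data it never settles down, while for normal data it converges:

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = (1_000, 100_000, 10_000_000)

# Sample standard deviation vs. sample size for a distribution that
# lives comfortably in L2 (normal) and one that does not (Cauchy).
normal_sd = {n: rng.normal(size=n).std() for n in sizes}
cauchy_sd = {n: rng.standard_cauchy(n).std() for n in sizes}
print(normal_sd)   # all close to the true value, 1.0
print(cauchy_sd)   # large, and typically still growing with n
```

The normal column homes in on 1.0; the Cauchy column keeps wandering upward as rare huge draws dominate the sum of squares, which is the practical meaning of the "100-year flood every 25 years" caution.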

george e. smith
June 18, 2014 9:00 pm

@Robin Edwards: “Incidentally, the idea of someone doing statistical calculations using a slide rule strikes me as rather funny. Stats is a digital discipline. I used to use a mechanical calculator in the 1950s to sum squares and products, and very time consuming it was too. But I did get the right answer, which you wouldn’t with a slide rule!”
Statistical calculations are all precisely defined, in a myriad of math textbooks. Mostly it is pretty much all simple arithmetic. So any four year old could do it. Well, you might have to coach them about square roots.
But a slide rule is simply a tool for doing arithmetic, and many can do even trigonometry too.
It is based on logarithms, and other book-tabulated data. The cognoscenti know how to do slide-rule math to about 0.1% precision, if you have the right slide rule (K&E).
So when I first started actually doing lens designing (seriously), computation was very expensive, with little in the way of mechanical assistance.
So you didn’t do any calculations you didn’t need to do.
So early researchers extensively studied the general theory of optical imaging, which resulted in the theory of the Seidel aberrations. As a result, it was quite possible to completely design a very good cemented achromatic doublet objective lens, suitable for world-class binoculars (7 x 50), by tracing just three rays in two different colors. The finished manufacturable lens could be specified from just those three rays.
The ray tracing process, was reduced to a cyclic spread sheet routine, done using 4 figure log tables; logs of numbers and logs of trig functions.
Any good slide rule could do the same design, to three digits.
Nowadays, people call themselves lens designers, who don’t know a thing about the Seidel aberrations. They are mostly mechanical packaging engineers, who understand that some construction materials actually transmit “light”.
So you use a computer, to do not what was done years ago, by slide rule or pen and paper, plus log tables, but simply trace a lot of rays very cheaply.
I can do that too. If I’m designing something like an LED light “bulb” to replace a 60 Watt incandescent Edison lamp, I can trace a hundred million light rays, with full Fresnel polarized ray optics, through a few surfaces, refractive or reflective, and plot surface illumination at some arbitrary location; perhaps in false color mapping, or actual spread sheet tables.
So the tools don’t matter much; they do the math quickly; a slide rule is a bit slower, but plenty adequate for doing statistics.
The trouble with lens design programs, is they wouldn’t know a good lens, if it came crashing through your living room window.
And if the design is no good, or maybe not even physically realizable, the computer can’t tell you anything about what needs to be changed. Well, you can tell the computer what a good lens is, and it can hunt for a possible candidate, or even improve not-so-good ones.
Trouble with that scenario is that then YOU have to know about lens design, just like the good old days, and the computer whizzes don’t.
I can’t imagine that calculation of weather/climate statistics, requires anything more than a good slide rule, and how to use it. When was the last time you saw 0.1% precision, in any weather report ??

June 19, 2014 11:31 am

Key facts about “climate change” which are ignored by true believers.
1. The concentration of CO2 in the global atmosphere is lower today, even including human emissions, than it has been during most of the existence of life on Earth.
2. The global climate has been much warmer than it is today during most of the existence of life on Earth. Today we are in an interglacial period of the Pleistocene Ice Age that began 2.5 million years ago and has not ended.
3. There was an Ice Age 450 million years ago when CO2 was about 10 times higher than it is today.
4. Humans evolved in the tropics near the equator. We are a tropical species and can only survive in colder climates due to fire, clothing and shelter.
5. CO2 is the most important food for all life on earth. All green plants use CO2 to produce the sugars that provide energy for their growth and our growth. Without CO2 in the atmosphere carbon-based life could never have evolved.
6. The optimum CO2 level for most plants is about 1600 parts per million, four times higher than the level today. This is why greenhouse growers purposely inject the CO2-rich exhaust from their gas and wood-fired heaters into the greenhouse, resulting in a 40-80 per cent increase in growth.
7. If human emissions of CO2 do end up causing significant warming (which is not certain) it may be possible to grow food crops in northern Canada and Russia, vast areas that are now too cold for agriculture.
8. Whether increased CO2 levels cause significant warming or not, the increased CO2 levels themselves will result in considerable increases in the growth rate of plants, including our food crops and forests.
9. There has been no further global warming for nearly 18 years during which time about 25 per cent of all the CO2 ever emitted by humans has been added to the atmosphere. How long will it remain flat and will it next go up or back down? Now we are out of the realm of facts and back into the game of predictions.

Robin Edwards
June 19, 2014 3:12 pm

George E Smith writes with knowledge and experience about the merits and utility of slide rules. The point I was wanting to make without actually stating it is that slide rules are not very good for simple addition and subtraction. For these essential statistical operations you need something else. An abacus would be ideal if you know how to drive one, I’m sure. True, all the hard stuff like multiplications and long divisions are in the realm of slide-rule technology. I used spiral slide rules for years, and really liked them, but for adding up I used pencil and paper until the Monroe appeared, with its 20 (?) digit keyboard. Sums of squares and products of two- or three-digit numbers became easy using the (x+y) squared = x squared + 2xy + y squared recipe. Magic! Now I simply use 1st, a stats program I wrote years ago, aimed originally at interactive multiple regression but now with loads of other stuff. Much easier, and always correct if the original data are reliable and not wrongly entered!
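The Monroe-era recipe in the comment above can be sketched in a couple of lines: a machine that can only add, subtract and square still yields products, because (x + y)² = x² + 2xy + y², so xy = ((x + y)² − x² − y²) / 2.

```python
# Multiplication by three squarings, the mechanical-calculator trick:
# xy = ((x + y)^2 - x^2 - y^2) / 2  (always an exact integer for integer x, y)
def product_by_squares(x, y):
    return ((x + y) ** 2 - x ** 2 - y ** 2) // 2

# 37 * 24: 61^2 - 37^2 - 24^2 = 3721 - 1369 - 576 = 1776; 1776 / 2 = 888
print(product_by_squares(37, 24))   # 888
```

On a calculator with a fast squaring action this turns every cross-product in a sum of squares and products into three table-free squarings and one halving, which is exactly why the recipe felt like magic.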