Bringing Skillful Observation Back To Science

Guest post by Steve Goddard

Wikipedia image: Isaac Newton (portrait by Godfrey Kneller, 1689)

Archimedes had his eureka moment while sitting in the bathtub. Newton made a great discovery sitting under an apple tree. Szilárd conceived the nuclear chain reaction while waiting at a red light.

There was a time when observation was considered an important part of science. Climate science has gone in the opposite direction, with key players rejecting observation when reality disagrees with computer models and statistics. Well-known examples include making the MWP disappear, and claiming that temperatures continue to rise according to IPCC projections – in spite of all evidence to the contrary.

Here is a simple exercise to demonstrate how absurd this has become.  Suppose you are in a geography class and are asked to measure the height of one of the hills in the Appalachian Plateau Cross Section below.

Image from Dr. Robert Whisonant, Department of Geology, Radford University

How would you go about doing it? You would visually identify the lowest point in the adjacent valley and the highest point on the hill, and take the difference between their elevations. Dividing that difference by the horizontal distance between the two points would give you the average slope. However, some in the climate science community would argue that this is “cherry picking” the data.

They might argue that the average slope across the plateau is zero, therefore there are no hills.

Or they might argue that the average slope across the entire graph is negative, so the cross section represents only a downwards slope. Both interpretations are ridiculous. One could just as easily say that there are no mountains on earth, because the average slope of the earth’s surface is zero.
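The “look at the graph” measurement described above takes only a few lines of code. The profile values below are invented for illustration; they are not read from the Radford cross section.

```python
# Hypothetical cross-section points: (km along section, elevation in metres).
profile = [(0, 300), (2, 180), (4, 520), (6, 200), (8, 480)]

valley_km, valley_m = min(profile, key=lambda p: p[1])  # lowest point in the valley
peak_km, peak_m = max(profile, key=lambda p: p[1])      # highest point on the hill

height = peak_m - valley_m                 # relief: highest minus lowest elevation
run_m = abs(peak_km - valley_km) * 1000    # horizontal distance, converted to metres
slope = height / run_m                     # average slope, rise over run

print(height, slope)
```

No statistics are needed: the hill's height and slope come straight from the extreme points, exactly as one would read them off the figure by eye.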

Now let's apply the same logic to the graph of Northern Hemisphere snow cover.

It is abundantly clear that there are “peaks” on the left and right sides of the graph, that there is a “valley” in the middle, and that there is a “hill” from 1989-2010. Can we infer that snow cover will continue to increase? Of course not. But it is ridiculous to claim that snow extent has not risen since 1989, based on the logic that the linear trend from 1967-2010 is neutral. That is an abuse of statistics, defies the scientific method, and is a perversion of what science is supposed to be.

Tamino objects to the graph below because it has “less than 90% confidence” using his self-concocted “cherry picking” analysis.

So what is wrong with his analysis?  Firstly, 85% would be a pretty good number for betting.  A good gambler would bet on 55%.  Secondly, the confidence number is used for predicting future trends.  There is 100% confidence that the trend from 1989-2010 is upwards.  He is simply attempting to obfuscate the obvious fact that the climate models were wrong.
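For readers who want to check a trend claim themselves, here is a minimal least-squares sketch. The snow-extent values below are synthetic stand-ins, not the Rutgers data; whether a window such as 1989-2010 is a fair choice is exactly what the argument above is about.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line; returns (slope, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, 1 - ss_res / ss_tot

# Invented annual winter extents (million km2) with a rise plus "weather" noise.
years = list(range(1989, 2011))
extent = [45.0 + 0.1 * (y - 1989) + (0.4 if y % 3 == 0 else -0.3) for y in years]

slope, r2 = linear_fit(years, extent)
print(round(slope, 3), round(r2, 3))
```

The fitted slope says which way the line tilts over the chosen window; R² says how much of the variance the line explains. The dispute in the post is not about this arithmetic but about which window one is entitled to fit it over.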

Science is for everyone, not just the elite who collect government grant money.  I’m tired of my children’s science education being controlled by people with a political agenda.

February 24, 2010 5:09 pm

Steve Goddard (16:44:08) :
The models don’t show any decline prior to about 1990.
Some do. http://www.leif.org/research/Snow-Cover-1850-2100-Model.png
So isn’t it reasonable to start the measurements there?
No, for many reasons [one being that the model showed decline since the 1930s].
the late 1960s and the 1970s were exceptionally snowy periods, and we have returned to that level.
No, I count 8 above 46 million km2 during the first half of the data and only 3 above during the last half [if we include 2010].
http://climate.rutgers.edu/snowcover/chart_seasonal.php?ui_set=nhland&ui_season=1
The point is that a few years of anything does not make any significant difference to the long-term trend.

Steve Goddard
February 24, 2010 5:41 pm

Leif,
The earth is neither covered with snow, nor is it bare, so the long-term trend has to be approximately flat. If you pick a long enough time period to measure snow extent you will definitely see no trend. Do you think there is anything surprising about that?
The point of the article is that climate models predicted decline for the last twenty years. Get it? That is the interesting period of time.

Steve Goddard
February 24, 2010 5:44 pm

Leif,
When Rutgers updates their chart, it will show 1978, 2010, 2008, 2003 as the top 4.

February 24, 2010 6:15 pm

Steve Goddard (17:41:56) :
The point of the article is that climate models predicted decline for the last twenty years.
No, http://www.leif.org/research/Snow-Cover-1850-2100-Model.png
Steve Goddard (17:44:03) :
When Rutgers updates their chart, it will show 1978, 2010, 2008, 2003 as the top 4.
If it does that, they have calculated the values incorrectly. Did you understand my calculation:
For NA, the numbers are [month, monthly mean extent, days in month, mean × days; the trailing figure on the January lines is the day-weighted DJF average]:
2008:02 17.76 29 515.04
2008:01 17.89 31 554.59 17.83494505
2007:12 17.85 31 553.35
1978:02 18.94 28 530.32
1978:01 18.23 31 565.13 17.83433333
1977:12 16.44 31 509.64
and why not?
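Leif's day-weighted winter (DJF) average can be reproduced directly from the numbers he posted: each monthly mean is weighted by its number of days, summed, and divided by the total days.

```python
def winter_average(months):
    """months: list of (monthly mean extent, days in month)."""
    total = sum(mean * days for mean, days in months)
    return total / sum(days for _, days in months)

# North America monthly means from Leif's comment above.
djf_2008 = [(17.85, 31), (17.89, 31), (17.76, 29)]  # Dec 2007, Jan 2008, Feb 2008
djf_1978 = [(16.44, 31), (18.23, 31), (18.94, 28)]  # Dec 1977, Jan 1978, Feb 1978

print(round(winter_average(djf_2008), 8))  # ≈ 17.83494505
print(round(winter_average(djf_1978), 8))  # ≈ 17.83433333
```

The two winters differ only far down in the decimals, which is Leif's point: the 1978-vs-2008 ranking hinges on how the seasonal average is computed.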

B.D.
February 24, 2010 6:18 pm

Re: Steve Goddard (06:57:42) :
Agreed – so much for looming catastrophe.
Re: Leif Svalgaard (16:26:57) :
Any model output prior to 2001 is not a prediction. From 2001 on, the model does not predict well at all. The model does not work.

February 24, 2010 6:22 pm

Steve Goddard (17:44:03) :
When Rutgers updates their chart, it will show 1978, 2010, 2008, 2003 as the top 4.
Grrr, you switched again between NH and NA. We do not disagree for NH.

February 24, 2010 6:25 pm

Steve Goddard (17:41:56) :
If you pick a long enough time period to measure snow extent you will definitely see no trend.
On the contrary, that is where the real trend will show up. Over centuries and millennia. If you pick short enough time periods you will definitely see lots of trends, but they will be spurious.

February 24, 2010 6:51 pm

B.D. (18:18:34) :
Any model output prior to 2001 is not a prediction. From 2001 on, the model does not predict well at all. The model does not work.
Then the decline since the 1930s is not a prediction, but an observed fact, no?
Of course, the models don’t work. That is not the point. The point is that the ‘trend’ in snow cover is not statistically significant [ http://www.leif.org/research/Snow-Cover-1966-2010-NA-Winter.png ], and even then it matches that of a nonsense model [ http://www.leif.org/research/Snow-Cover-1850-2100-Overlay2.png ].

B.D.
February 24, 2010 7:06 pm

Re: Leif Svalgaard (18:51:16) :
Of course, the models don’t work. That is not the point.
That was exactly the point of Steve’s post. Just because it devolved into all of the irrelevant nonsense about statistics does not change that.

February 24, 2010 7:18 pm

B.D. (19:06:58) :
“Of course, the models don’t work. That is not the point.”
That was exactly the point of Steve’s post. Just because it devolved into all of the irrelevant nonsense about statistics does not change that.

The models don’t work because they disagree among themselves. Steve’s post is not relevant for this, because he postulates a trend that is not significant. I found one of the models in fair agreement with that [spurious] trend. Steve has not invalidated anything, because his analysis is not correct. And no amount of ‘looking out the window’, or quibbling about whether 1978 or 2008 is higher down in the fifth decimal digit, will change that. I would say that he should stop shoveling.

Steve Goddard
February 24, 2010 7:57 pm

Leif,
How can there be a long-term trend for snow cover? If there were one, snow would eventually either completely disappear or completely cover the hemisphere, depending on the sign of the trend. NH winter extent has increased by nearly five million km2 over the last twenty years. At that rate it wouldn’t take more than a few hundred years for snow to become ubiquitous. Clearly the trend will have to reverse at some point in the not-too-distant future and start declining again, as it did in the 1980s.
The only trend that makes sense over the long term is something cyclical centered around a mean.

Steve Goddard
February 24, 2010 8:06 pm

Leif,
Quiz for you. Tell me if this trend is statistically significant.
https://spreadsheets.google.com/oimg?key=0AnKz9p_7fMvBdHNjaU1kb1pxb3RfdlFJSEExcnF1c3c&oid=6&v=1267070652337
It has an R^2 value of 2.79, less than the NH data.

February 24, 2010 8:20 pm

Steve Goddard (19:57:01) :
NH winter extent has increased by nearly five million km2 over the last twenty years. At that rate it wouldn’t take more than a few hundred years for snow to become ubiquitous.
I think that just shows that your ‘trend’ is spurious to begin with. To call 22 years a trend interval is not right. The trend shows up over 100 years. The models show such a trend out to year 2100 [that they likely are wrong (or right for the wrong reason – as the case may be) is another point]. If the ‘trend’ from 2008 to 2009 held up all snow would be gone in 22 years. Clearly that is nonsense, so we must increase the time span to more than 1 year. Statistics can tell us how many more years [depends on the variance and the autocorrelation at various lags]. That is what statistics is for.
There has surely been a trend downwards the last 15,000 years until about 7,000 years ago, and there will be one upwards the next 90,000 years or so. So on a 100,000 years scale the trend is indeed cyclic.
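Leif's point, that short windows of a trendless noisy series routinely show large apparent “trends” while the full series shows almost none, can be demonstrated on a synthetic series. The quasi-periodic noise below is a deterministic stand-in, not real snow data.

```python
import math

def ols_slope(ys):
    # Least-squares slope of ys against 0, 1, 2, ...
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    sxx = sum((x - mx) ** 2 for x in range(n))
    return sxy / sxx

# 100 "years" of a flat climate with quasi-periodic weather noise (no real trend).
flat_series = [math.sin(0.9 * t) + 0.5 * math.sin(2.3 * t) for t in range(100)]

full_slope = ols_slope(flat_series)
window_slopes = [ols_slope(flat_series[i:i + 5]) for i in range(96)]  # 5-"year" windows

print(round(full_slope, 4), round(max(abs(s) for s in window_slopes), 4))
```

The full-series slope is essentially zero, while the steepest 5-point window slope is more than an order of magnitude larger: every short segment of pure “weather” contains an impressive-looking but meaningless trend.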

February 24, 2010 8:22 pm

Steve Goddard (20:06:07) :
It has an R^2 value of 2.79, less than the NH data.
R^2 is a number between 0 and 1 [both inclusive].

February 24, 2010 8:26 pm

Steve Goddard (20:06:07) :
It has an R^2 value of 2.79, less than the NH data.
Post the data

anna v
February 24, 2010 9:34 pm

In particle physics we have great experience in trying to squeeze meaning out of a few data points.
We do things differently.
1) We do not join measured points with lines to create a cardiogram-like plot. It confuses the issues.
2) We plot histograms if the y axis is a direct number measure whose statistical error can be given by simple calculation. If it is a convolution and the error comes another way, we give the error bar on each point. If there is a systematic error in addition, we plot that, indicating it on the bars.
3) Model outputs drawn on the same plots, could be either computer fits to the data, or direct predictions. In the first case the parameters fitted have the errors with their significance, and in the second the models have an error band.
If I plotted this data plot, my histograms would have tiny errors on top. If I could fit a model its parameters would be highly constrained. Since there is no question of a GCM model fitting the data computationally, the model output should be given in a similar histogram to the data with an error band. The error bands for the GCM models are enormous making moot the choice of particular wiggles and any real one to one comparison with data. It is meaningless. The spaghetti model plots taken together show a downtrend, but it is a choice trend not a statistically given trend, as if one drew it by hand as long as there is no error band on each output curve.
This leaves us with the snow plot. If this were cross section versus energy, for example, there are too few high points to constrain a trend that would definitively say “cross sections are increasing at the x-sigma level”.
More data are needed, more winters in this case.
BTW the sun seems to be really getting out of the slump :).

February 24, 2010 10:02 pm

anna v (21:34:29) :
We do not join measured points with lines to create a cardiogram like plot.
My plot of this looks like this:
http://www.leif.org/research/Snow-Cover-1966-2010-NH-Winter.png
My error bands were picked to produce two outliers [marked by red crosses] on either side.
As you say, not enough data.

Steve Goddard
February 24, 2010 10:18 pm

Decimal point in the wrong position. 0.279

Steve Goddard
February 24, 2010 10:19 pm

2100 45906814.4414759
2101 49024311.5922443
2102 48728902.7733808
2103 44604745.8706235
2104 43931558.6416875
2105 47023540.9121646
2106 44483031.473973
2107 46779228.4531988
2108 43938557.5740536
2109 50910102.9993307
2110 45141125.1358875
2111 45787005.3961709
2112 48959117.3375803
2113 51987871.6211294
2114 45167912.9101635
2115 46557349.1996
2116 48252292.2840534
2117 54106557.6805669
2118 50421001.6557413
2119 49232275.8428081
2120 52676808.0614631

Steve Goddard
February 24, 2010 10:25 pm

Leif,
What it shows is that snow cover has increased by about 5 million km2 over the last 20 years, which was not predicted by any of the nine GCMs, all of which predicted declining snow cover. You can protest all you want, but the change over the last 20 years has been in the wrong direction.

February 24, 2010 11:53 pm

Steve Goddard (22:18:15) :
Decimal point in the wrong position. 0.279
Steve Goddard (22:19:58) :
2100 45906814.4414759
[…]
2120 52676808.0614631

With a t-stat at 2.71 this would ordinarily be significant at the 98.6% level, provided that the end points are selected at random. As an example of what happens with non-random selection, I now decide to exclude the first three points. That pumps R^2 up to 0.46 and increases the statistical significance considerably: by throwing away the three points, the remaining data is now significant at the 99.8% level. So with less data I get a much better result.
Similarly, if I throw away data before 1995 on your NH graph, I pump R^2 up to 0.334. This may be a subtle point, but the criterion for selecting the data should not be derived from looking at the data. The models in the paper start their predictions around 2000, so that would have been an un-biased starting point, but now we begin to run out of data points, so even if R^2 is high, the significance goes down. After all, with only two points left R^2 is 1, but the significance is zero.
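The selection effect Leif describes, dropping early points to inflate R², is easy to demonstrate. The series below is invented for illustration; it is not the model output posted above.

```python
def r_squared(ys):
    """R^2 of an ordinary least-squares line fit to ys against 0, 1, 2, ..."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    sxx = sum((x - mx) ** 2 for x in range(n))
    slope = sxy / sxx
    b = my - slope * mx
    ss_res = sum((y - slope * x - b) ** 2 for x, y in enumerate(ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# A noisy start followed by a clean rise.
series = [5.0, 1.0, 4.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5]

print(round(r_squared(series), 3))       # full series: modest R^2
print(round(r_squared(series[3:]), 3))   # first three points discarded: R^2 jumps
```

No new information was added; the fit only looks better because the inconvenient points were excluded after looking at the data, which is exactly the non-random selection being criticized.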

Alex Heyworth
February 25, 2010 1:13 am

All this arguing about statistics and statistical significance is in one sense irrelevant: the level of snow coverage is, as a matter of fact, either higher, lower or exactly the same as it was at some time in the past. Looking at Steve’s graph from 1989 to the present, it is clear that currently there is more snow (unless we distrust the measurements).
The issue of statistical significance only arises if we have some model in mind (for example, that snow is increasing at the rate shown by the straight line shown in the same graph). If we test that model (as Tamino apparently did) we find that it fails tests of statistical significance.
Steve, Leif and others, you would do well to read Matt Briggs’s article at http://wmbriggs.com/blog/?p=1958#comments. So would Tamino.

February 25, 2010 1:54 am

Steve Goddard (22:25:40) :
What it shows is that snow cover has increased by about 5 million km2 over the last 20 years, which was not predicted by any of the nine GCMs, all of which predicted declining snow cover. You can protest all you want, but the change over the last 20 years has been in the wrong direction.
It irks me a bit that you don’t respond to my comments, while I try hard to respond in detail to every one of yours. You say: “increased … the last 20 years, which was not predicted by any of the nine GCMs” and ignore completely my demonstration that model INM-CM3.0, for example, does predict something very similar to the observations:
http://www.leif.org/research/Snow-Cover-1850-2100-Overlay2.png
Check out the blue curve. Is it not much higher at the right-hand side of the box? You confuse weather with climate. That a few years are higher or lower does not mean that the climate has changed. The models try to predict climate, not weather. Many of them don’t do a good job, but even if they were correct, the deviations caused by weather are normal and expected.
I have a feeling that you don’t even do me the courtesy to cast a minimal glance at what I labor to produce, e.g. this one: http://www.leif.org/research/Snow-Cover-1966-2010-NH-Winter.png
As proof that you have seen it, quote me the number #NNNN shown in the lower right-hand corner.
As Alex Heyworth (01:13:46) points out:
“The issue of statistical significance only arises if we have some model in mind (for example, that snow is increasing at the rate shown by the straight line shown in the same graph). If we test that model (as Tamino apparently did) we find that it fails tests of statistical significance.”
the model he is talking about is not the models of the paper, but your assumption of a trend.

Steve Goddard
February 25, 2010 5:35 am

Alex Heyworth,
Tamino calculated 99% significance on that trend line, but then applied an undocumented “cherry picking” correction which he claimed reduced it to “less than 90%.” I was (not) very surprised to find that he found a way to disagree with my assertion that the climate models were failing.

Brian G Valentine
February 25, 2010 6:06 am

Daily fun trivia:
The reeding (milled edging) around coins such as the US dime and quarter dollar is an invention of Sir Isaac from his time as Master of the Royal Mint; it was done to deter the clipping of metal from coins in circulation.