Guest post by Steve Goddard

Archimedes had his eureka moment while sitting in the bathtub. Newton made a great discovery sitting under an apple tree. Szilárd conceived of the nuclear chain reaction while waiting at a red light.
There was a time when observation was considered an important part of science. Climate science has gone in the opposite direction, with key players rejecting observation when reality disagrees with computer models and statistics. Well-known examples include making the MWP disappear, and claiming that temperatures continue to rise in line with IPCC projections – in spite of all evidence to the contrary.
Here is a simple exercise to demonstrate how absurd this has become. Suppose you are in a geography class and are asked to measure the height of one of the hills in the Appalachian Plateau Cross Section below.

Image from Dr. Robert Whisonant, Department of Geology, Radford University
How would you go about doing it? You would visually identify the lowest point in the adjacent valley and the highest point on the hill, and take the difference. Dividing that by the horizontal distance between those two points would give you the average slope. However, some in the climate science community would argue that this is “cherry picking” the data.
They might argue that the average slope across the plateau is zero, therefore there are no hills.
Or they might argue that the average slope across the entire graph is negative, so the cross section represents only a downward slope. Both interpretations are ridiculous. One could just as easily say that there are no mountains on earth, because the average slope of the earth’s surface is zero.
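For the record, the geography student’s arithmetic is trivial. A minimal sketch in Python, with hypothetical elevation numbers (not read off the actual cross section):

```python
# Hypothetical elevation numbers -- NOT read off the actual cross section.
valley_elevation_ft = 900.0      # lowest point in the adjacent valley
hilltop_elevation_ft = 2100.0    # highest point on the hill
horizontal_distance_ft = 8000.0  # distance between those two points

height_ft = hilltop_elevation_ft - valley_elevation_ft
average_slope = height_ft / horizontal_distance_ft

print(f"hill height:   {height_ft:.0f} ft")   # 1200 ft
print(f"average slope: {average_slope:.2f}")  # 0.15, i.e. a 15% grade
```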
Now let’s apply the same logic to the graph of Northern Hemisphere snow cover.
It is abundantly clear that there are “peaks” on the left and right sides of the graph, that there is a “valley” in the middle, and that there is a “hill” from 1989-2010. Can we infer that snow cover will continue to increase? Of course not. But it is ridiculous to claim that snow extent has not risen since 1989, based on the logic that the linear trend from 1967-2010 is neutral. That is an abuse of statistics, defies the scientific method, and is a perversion of what science is supposed to be.
Tamino objects to the graph below because it has “less than 90% confidence” using his self-concocted “cherry picking” analysis.
So what is wrong with his analysis? Firstly, 85% would be a pretty good number for betting. A good gambler would bet on 55%. Secondly, the confidence number is used for predicting future trends. There is 100% confidence that the trend from 1989-2010 is upwards. He is simply attempting to obfuscate the obvious fact that the climate models were wrong.
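To make the “confidence” number concrete, here is a minimal sketch of the standard trend-significance calculation, using made-up yearly anomalies rather than the actual Rutgers snow data:

```python
import numpy as np
from scipy import stats

# Made-up yearly snow-extent anomalies for 1989-2010 -- illustrative only,
# NOT the actual Rutgers snow-lab data discussed above.
years = np.arange(1989, 2011)
rng = np.random.default_rng(0)
anomalies = 0.05 * (years - 1989) + rng.normal(0.0, 0.4, years.size)

result = stats.linregress(years, anomalies)
print(f"slope   = {result.slope:+.3f} per year")
print(f"p-value = {result.pvalue:.3f}")
# The "confidence" figure quoted in arguments like Tamino's is roughly
# (1 - p-value) for the fitted trend:
print(f"confidence ~ {100.0 * (1.0 - result.pvalue):.0f}%")
```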
Science is for everyone, not just the elite who collect government grant money. I’m tired of my children’s science education being controlled by people with a political agenda.


With all due respect, this post appears to have little to do with observation as a part of science. Someone skilled in the scientific art of observation would simply objectively observe and possibly record data points. All the line-drawing stuff and subsequent analysis would be left to others.
So, if the publisher shows increasing snow cover and uses that to falsify global warming, then someone like Tamino is correct to object on the grounds of confidence levels.
Well, yes and no.
Yes, if that is the only evidence provided. No, if more evidence is available.
If the bulk of the evidence points towards one conclusion, then we can act on that. The fact that each individual piece might only be at 80% becomes less important. (This assumes the evidence is “independent” in a statistical sense.)
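A quick back-of-the-envelope illustration of that independence point (the 80% figure is hypothetical):

```python
# If three statistically independent lines of evidence each support a
# conclusion with 80% confidence, the chance that ALL of them are wrong is:
p_each_wrong = 0.20
p_all_wrong = p_each_wrong ** 3
print(f"{p_all_wrong:.3f}")  # 0.008 -- combined confidence of about 99%
```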
So if we wish to decide whether AGW climate models have been accurate in their predictions, then a snow-cover trend established at 80% confidence is acceptable evidence. Not proof, merely evidence. But then no-one is really claiming that it is proof of the failure of the AGW theory.
The warmistas will have the Devil’s own job of proving AGW using only evidence above the 95% level. They don’t even try: do you reckon their calculated glacier-melt trends are 95% accurate? How about sea-level trends?
So why should the sceptics be held to much greater evidence? We’re not the ones proposing the outlandish scenario.
Suppose you are in a geography class and are asked to measure the height of one of the hills.
This sounds like an ideal reason to take the students on a surveying expedition! Take an assortment of modern and classical surveyor’s tools and some appropriate camping gear, and engage in some real hands-on field work. But I guess I’m too old-fashioned, since a few minutes with Google Earth would probably give a good-enough-for-climate-science value. ;->
Speaking of observations, it’s quite clear to me – but apparently to no one else – that the jet streams drifted poleward from 1975 to 2000 but have been drifting back equatorward ever since.
Now, if that simple observation is correct, then there are implications – but the whole climate establishment seems to be ignoring them.
However, the warmists were happy to announce that the poleward movement was consistent with CO2 forcing and all our fault.
“You usually have a point to make.”
Yes, I do. The point is that science being for everybody does not mean that it does not require knowledge and skill to do effectively. Everything worth doing has a learning curve, and to take a basic part of science like separating trends from noise and dismiss it as “elitist” because you don’t immediately understand how it works is, to borrow a phrase from Rush, retarded.
@Dave: I lost you when you started speculating about what climate models predicted for snow extent, while admitting you didn’t know. If you care, look it up. Don’t ask me to respond to a controversy that exists only in your imagination.
Re Vincent (13:32:01) What is the point?
The point was that the climate models all predicted a decrease in snow cover over this same period, and it is 100% certain that snow cover in the NH increased.
The models were wrong, and if the trend went in the opposite direction, then the CAGW proponents would be making a big deal of this. http://wattsupwiththat.com/2010/02/19/north-america-snow-models-miss-the-mark/
The BBC’s latest ‘Science in Action’ programme continues on its merry AGW way:
Public perception of science
Can we trust science and scientists? It’s a question that is increasingly being asked by the media, and the public, after some high-profile apparent mistakes. Did UK scientists manipulate data on global warming? Is the Intergovernmental Panel on Climate Change credible after it admitted that it had made a mistake in asserting that Himalayan glaciers could disappear by 2035? In the past few years there have also been false claims about stem cells, and erroneous warnings about vaccines. Michael Specter is the author of “Denialism”, where he asks why we have begun to fear scientific advances instead of embracing them. He was speaking at TED – Technology, Entertainment, and Design – a conference in California billed as a gathering of some of the biggest thinkers coming together to spread ideas. This year the theme was “what the world needs now”. Jon Stewart went along to find out more…
If you’d like to attend TED, the next one is in Oxford, England, in July – there are more details on the Science in Action website. There is a fellowship programme, which focuses on attracting people with world-changing ideas who are living or working in Asia, Africa, the Caribbean, Latin America and the Middle East.
Ocean acidification
The oceans are becoming more acidic, and at a faster rate than previously measured. This could lead to a massive extinction in the deep seas, according to new research. The ocean is what is known as a carbon sink – it has taken up between a quarter and a third of all atmospheric CO2 since the start of the industrial revolution. A study published in the journal Nature Geoscience shows that the increase in carbon dioxide in the atmosphere is leading to a similar increase in the oceans today. But it is believed that the process is making the seas much more acidic, which is damaging the delicate shells of organisms that are critical to the marine food chain. In fact the rate of acidification is now the highest in 55 million years. Daniela Schmidt from the University of Bristol, one of the scientists behind the work, joins us on the programme.
http://www.bbc.co.uk/programmes/p00673xy
By the way, Specter didn’t mention AGW, in spite of the opening remarks of presenter Jon Stewart. He talked of vaccines and GM food and said “trust the scientists”.
Nature Geoscience: Past constraints on the vulnerability of marine calcifiers to massive carbon dioxide release
Andy Ridgwell & Daniela N. Schmidt
http://www.nature.com/ngeo/journal/vaop/ncurrent/abs/ngeo755.html
Stephen Cauchi, in Australia’s Age newspaper, reported the following, which sounds like a Greens’ response to Geoffrey Lean’s ‘rally the green troops’ piece in the UK Telegraph last week:
21 Feb: Quadrant: Doomed Planet
Assault on reason
The Age reports on a closed meeting by wealthy Green groups to plan their attack on climate sceptics:
Australian green groups have called a strategy meeting to devise ways to hit back at the climate sceptics movement, amid fears they are losing the PR war.
The groups, including Greenpeace, the Wilderness Society, World Wide Fund for Nature, Australian Conservation Foundation and Friends of the Earth, have acknowledged that the public mood has shifted following the collapse of the Copenhagen climate talks and blows to the credibility of the IPCC.
James Norman, of the Australian Conservation Foundation, said the strategy of ignoring climate change sceptics had not worked as it had been taken as confirmation of their claims. ”The stakes are too high to remain silent or disorganised in the face of this systemic disinformation campaign,” Mr Norman said.
He said the global campaign was being funded by anti-climate-change think tanks such as the American Atlas Economic Research Foundation and the British International Policy Network, which had both received grants from oil company ExxonMobil.
”I wouldn’t be surprised if they (ExxonMobil) have connections here in Australia as well,” he said.
Think tank the Climate Institute, lobby group Get Up, and the Liquor Hospitality and Miscellaneous Union will also attend the Sydney meeting, which is not open to the public or the media.
Greenpeace spokesman James Lorenz said the meeting was ”a good opportunity for environmental organisations to put their heads together and have a think about what’s going on”…
http://www.quadrant.org.au/blogs/doomed-planet/2010/02/assault-on-reason
As most skillfully explained by Lewis Carroll:
Through the Looking-Glass, Ch. 6, p. 364
The question is, will we choose McGraw-Hill’s or Webster’s definition of Science, or Humpty Dumpty’s? Who shall be the master?
@steven mosher (13:23:57) :
A trend that doesn’t reach 95% confidence (or sometimes 90%) is not statistically significant by definition. Your question is akin to asking if I demand that all triangles have interior angles totaling 180 degrees. We’re talking definitions here.
David (14:15:34) :
Re Vincent (13:32:01) What is the point?
The point was that the climate models all predicted a decrease in snow cover over this same period
The models [as shown by Steve’s post] show a decline since the 1960s, so ‘this same period’ needs to be since the 1960s, no?
Stuff like this is why ‘climate’ scientists don’t design machines. Machines have to actually work. That ain’t easy in the real world.
Leif –
I choose as my base the Roman Warm Period. If you want, you can choose the middle of the last ice age. I get a slight downward slope. You get one devil of an upward slope. Steve’s points across the last several days have been fairly good. They are not there to extrapolate any trends – but I don’t believe he has claimed to do so. Unlike our AGW friends….
The difference is, Steve Goddard, that time series have peculiar properties even when the input data is random, and so special care has to be taken to distinguish that which may be a signal from random noise.
So it’s no use complaining about Tamino on this issue. If the time series has significant autocorrelation (and lots of climate data does, especially tree rings and hydrological series like the Nilometer record), then a random series can display spurious trends over short periods of time which mean precisely nothing.
That’s the problem with your series: it’s truncated for no reason, it has significant autocorrelation, and the R² is low. The forecast based on it has to be taken with a massive amount of salt.
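A minimal simulation of that autocorrelation point: an AR(1) series with no underlying trend at all will still show apparently “significant” trends over short windows if the autocorrelation is ignored (illustrative parameters, not fitted to any real climate series):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
phi = 0.8        # lag-1 autocorrelation -- an illustrative choice
n_years = 200
window = 20      # length of the "short period" examined

# Generate a trendless AR(1) series: x[t] = phi * x[t-1] + white noise
x = np.zeros(n_years)
for t in range(1, n_years):
    x[t] = phi * x[t - 1] + rng.normal()

# Fit a naive linear trend to every 20-year window and count how many
# look "significant" when the autocorrelation is ignored.
n_windows = n_years - window
n_significant = 0
for start in range(n_windows):
    seg = x[start:start + window]
    r = stats.linregress(np.arange(window), seg)
    if r.pvalue < 0.05:
        n_significant += 1

# Far more than the nominal 5% of windows show a "significant" trend,
# even though the true trend is exactly zero.
print(f"{n_significant}/{n_windows} windows significant at p < 0.05")
```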
Steve,
“…it is ridiculous to claim that snow extent has not risen since 1989, based on the logic that the linear trend from 1967-2010 is neutral.”
Nobody has claimed that the historical numbers are wrong, or that snow extent did not increase from a local minimum in the data. You have put up the weakest of straw men here.
But why do you continue to use a plot of northern hemisphere winter snow extents to back your claim that the data refutes the models? The models predict the January North America extent only, which the data shows has no significant trend:
http://climate.rutgers.edu/snowcover/chart_anom.php?ui_set=1&ui_region=nam&ui_month=1
Nor do the models over this period. There’s no contradiction.
Tamino has a good sense of humor. His t-tests are based on untenable assumptions and can be dismissed.
Robert (14:15:01) :
Of course. A 24th mag galaxy is lost in the background noise of 25th mag skyglow the same way that sea-level rise of the past 100 years is lost in the noise of the tides. Both are fuzzy pictures.
An observer looking at 2 pictures taken 100 years apart cannot tell that a sea-level rise event has taken place.
And a current trend is lost among the trees, so that the forest cannot be seen.
The best one can do is to roll with the current punches… adapt.
Only the current punches are shoving the place towards paralysis of adaptation through a very greedy Climate Bill that is a diode allowing only one direction of change.
i.e. – the big threat is not the climate, it’s the monolith of Agenda.
That’s not science, it’s policy, and the outcome is not uncertain, given the eventuality of the swings of climate, should Agenda prevail in the name of science.
Right!
Last night I saw the excellent presentation by Dr. Lindzen, and I would like to ask him a question. I would therefore really appreciate it if you, Anthony, could forward this question to Dr. Lindzen.
Dear Dr. Lindzen,
In your presentation you stated that a couple of months or years of accurate observations of climate dynamics and radiation balance could possibly reveal the most important mechanisms in our climate. Now, given that many ocean cycles are multidecadal, how could it be possible to determine the governing mechanisms of the climate with a fraction of the samples required to reproduce the various climate signals?
http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
My opinion has been that at least 100 years of accurate Argo ocean data would be required to reveal the characteristics of our climate, and I would be most happy if you could explain why this is not required.
Yours Sincerely
Invariant
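Invariant’s sampling concern can be illustrated in a few lines: a short record of a pure multidecadal oscillation is indistinguishable from a linear trend. A minimal sketch with a synthetic 60-year cycle (illustrative only):

```python
import numpy as np
from scipy import stats

# A pure 60-year oscillation with no long-term trend -- illustrative only.
t = np.arange(0.0, 120.0)                # 120 years of annual samples
signal = np.sin(2.0 * np.pi * t / 60.0)  # 60-year cycle

# Now look at only the first 10 years, as a short observing campaign would:
r = stats.linregress(t[:10], signal[:10])
print(f"apparent trend over 10 years: {r.slope:+.3f} per year")
# The short record shows a strong, steady "trend" that is really just a
# fragment of the cycle; only a record comparable in length to the cycle's
# period can tell the two apart.
```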
“Robert (12:09:01) :
A confidence interval is about distinguishing a random distribution from a pattern. By convention, you need to be 95% confident in your trend in order to reject the null hypothesis. 90% is on the bleeding edge of acceptable. Less than 90% is not statistically significant by any measure”
I am sorry, that is completely false. Firstly, you have to know the distribution of your data and the distribution of your error(s). It is quite possible to have a signal with a Poisson distribution while the error follows a normal distribution. Moreover, it is more than likely that your data and error are non-normal and that the two distributions are different; in the case of measuring average temperature, the two distributions probably vary with the seasons.
Confidence – or better, “confidence intervals” (as I was taught to call them) – relates to the confidence one has in a given probability distribution. In this case, there is a probability of 1.00 that each measured point of historical snow cover extent is correct (assuming each has been measured correctly – if not, then each point has its own probability distribution describing the measurement error – but let’s heroically assume no measurement errors). Note that there’s no confidence interval (or confidence) involved in historical data. The straight line drawn through the graph is a simple first-order regression (a least-squares fit, minimizing the sum of squared differences between each measurement and the fitted line).
Now, when you project that straight line into the future, for each future time period you calculate a max and min value, based on the historical variation of the observed points from the regression line. The max and min are calculated at a given “confidence level”, conventionally 90%. This means you’re 90% confident that the future observed value for any time period will fall between the calculated min and max values for that period. (Btw, this assumes that future variability is the same as historical variability, but let’s assume – perhaps more heroically – that it is.)
The thing that really annoys me about much of the AGW reporting is the way that the more alarming of the two bounds is always quoted: for example, max values for temperatures; min values for Arctic ice extent. That’s not science, it’s flim-flam.
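A minimal sketch of the max/min calculation just described – a standard OLS prediction interval on synthetic data, assuming (heroically, as above) that future variability matches the past:

```python
import numpy as np
from scipy import stats

# Synthetic historical series -- illustrative only.
rng = np.random.default_rng(1)
x = np.arange(30, dtype=float)              # 30 historical time periods
y = 0.1 * x + rng.normal(0.0, 1.0, x.size)  # trend + noise

n = x.size
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s = np.sqrt(np.sum(resid**2) / (n - 2))     # residual standard error

x_future = 40.0                             # one future time period
y_hat = slope * x_future + intercept

# 90% prediction interval for a single future observation
t_crit = stats.t.ppf(0.95, df=n - 2)        # two-sided 90% level
se_pred = s * np.sqrt(1.0 + 1.0 / n
                      + (x_future - x.mean())**2 / np.sum((x - x.mean())**2))
lo, hi = y_hat - t_crit * se_pred, y_hat + t_crit * se_pred
print(f"projection: {y_hat:.2f}, 90% interval: [{lo:.2f}, {hi:.2f}]")
```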
Hey Leif,
From where I am standing, your question, when measured against the responses it engendered, was way over their heads. How high is that?
As an aside for Leif Svalgaard – I researched a comment you made in an earlier post and found your assessment to be correct. Sorry for misunderstanding – Mk
I’m not sure how clear this will come out, but I have to try. Symon, your statement:
“It’s not beyond the wit of man to realise that rising temperatures could possibly cause more snowfall, with more water able to be carried in warmer air”
is correct, but not totally germane to the graphs in question. The total volume of precipitation does depend on the amount of water vapor in the air, which warming could increase – thus “more snow.” However, the graphs in question are of snow extent, which measures how far south the snow reaches, not the amount of snow. I live in northern Arkansas. The precipitation fronts that come through my area in the winter often have rain in the southern portion, ice (and downed power lines) in the middle, and snow in the northern portion. Whether we get our precipitation as rain, ice, or snow depends not on the volume of precipitation, but on how far south the freezing temperatures get. So, in my part of the state, and areas south of me at least, warming is very unlikely to cause more snow. It could cause more rain, but we have to be colder to get more snow.
KW
In inferential statistical testing, a 95% confidence level means that if an experiment were repeated 100 times, a result like the one observed would be expected to arise by chance only about five times. A 0.05 p-value expresses the same thing as a probability, and is the usual threshold value for psychology experiments. If a hypothesis is supported by the experimental evidence at the 0.05 level then we can say it is moderately reliably supported; we can trust the findings. For experiments testing the safety of new medicines, much more stringent p-values are required – 0.01 or 0.001 – because people’s lives are at risk. Such simple stats deal in how often your results could have arisen by chance – as others have said, how much ‘noise’ versus how much ‘signal’.
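For a concrete instance of those thresholds, a minimal sketch with made-up measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(10.0, 2.0, 50)   # made-up control measurements
treated = rng.normal(11.0, 2.0, 50)   # made-up treated measurements

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"p = {p_value:.4f}")
# p < 0.05  -> "moderately reliable" by the convention described above
# p < 0.01  -> the stricter threshold used when lives are at risk
```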
The big problem with the alarmists is that we can’t apply such tests to their data, because their theories do not generate testable hypotheses. Much like the theories of Freud – a typical ‘expanding bag’ theory, as I call it: whatever observations or results you toss into the bag, it just expands a little more and is proclaimed to explain everything. Heavy snow is generated by global warming, heat waves are generated by global warming – much as in Freudian theory you may be diagnosed with a desire to sleep with your mother, and when you reject that diagnosis you are told you are in denial, that whatever you say simply goes to prove this incestuous desire.
Don’t even get me started on the gigantic expanding bag which is religion…
Leif Svalgaard (14:00:19) :
A common device [‘outlier suppression’] to check if a trend is ‘robust’ is to omit the n lowest and n highest points and see if the trend persists, where n can vary from 0 up to a reasonable number, e.g. 10% of the total number of points. This gives you an idea about how much is ‘weather’ vs. how much is ‘climate’.
When I was working as an applied mathematician in space sciences, the software would always eliminate the outliers so that we could get a valid trajectory analysis. We were working to at least 8 decimal places – far more precision than is needed for climate studies. (I have always wondered how one can get accuracy to a 10th or 100th of a degree when the majority of the data is accurate to no better than half a degree.)
As a data administrator, I always built software tools to analyze the data and identify records that were suspect. I spent probably 20 times as much time on validation and verification as I did on collection, computation and publishing results.
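A minimal sketch of the outlier-suppression check Leif describes, on synthetic data – drop the n lowest and n highest points and see whether the fitted slope persists:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = np.arange(50, dtype=float)
y = 0.05 * x + rng.normal(0.0, 1.0, x.size)   # synthetic trend + "weather"

def trimmed_slope(x, y, n):
    """Drop the n lowest and n highest y-values, then refit the trend."""
    if n == 0:
        keep = np.arange(y.size)
    else:
        order = np.argsort(y)
        keep = np.sort(order[n:-n])
    return stats.linregress(x[keep], y[keep]).slope

for n in range(6):                            # up to ~10% of 50 points
    print(f"n = {n}: slope = {trimmed_slope(x, y, n):+.4f}")
# If the slope barely moves as n grows, the trend is robust -- it is not
# being carried by a handful of extreme "weather" points.
```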
I love this post. It is a classic.
Sceptics have spent a huge amount of effort trying to show that evidence of global warming is not significant. But now there might be a cooling trend and “85% would be a pretty good number for betting. A good gambler would bet on 55%.”
So which is it? Does evidence for climate change have to be proven beyond any doubt, or should we take action on a 55% probability?
“Secondly, the confidence number is used for predicting future trends.”
Wrong.