
Previously on WUWT, I covered this contest. At that time, Doug J. Keenan stated:
There have been many claims of observational evidence for global-warming alarmism. I have argued that all such claims rely on invalid statistical analyses. Some people, though, have asserted that the analyses are valid. Those people assert, in particular, that they can determine, via statistical analysis, whether global temperatures are increasing more than would be reasonably expected by random natural variation. Those people do not present any counter to my argument, but they make their assertions anyway.
In response to that, I am sponsoring a contest: the prize is $100,000. In essence, the prize will be awarded to anyone who can demonstrate, via statistical analysis, that the increase in global temperatures is probably not due to random natural variation.
Doug J. Keenan writes today:
In November 2015, I launched a Contest, with a $100,000 prize: to spot trends in time series—series that were similar to the global temperature series. You blogged about it: “Spot the trend: $100,000 USD prize to show climate & temperature data is not random”.
The Contest has now ended. The Solution and some Remarks have been posted. Briefly, no one came close to winning. Some of the people who entered the Contest are well-known researchers.
Many people have claimed that the increase in global temperatures (since 1880) can be shown, statistically, to be more than just random noise. Such claims are wrong, as the Contest has effectively demonstrated. From the perspective of statistics, the increase in temperatures might well be random natural variation.
From his blog: http://www.informath.org/Contest1000.htm
18 August 2016
A paper by Lovejoy et al. was published in Geophysical Research Letters. The paper is about the Contest.
The paper is based on the assertion that the Contest “used a stochastic model with some realism”; the paper then argues that the Contest model has inadequate realism. The paper provides no evidence that I have claimed that the Contest model has adequate realism; indeed, I do not make such a claim. Moreover, my critique of the IPCC statistical analyses (discussed above) argues that no one can choose a model with adequate realism. Thus, the basis for the paper is invalid. I pointed that out to the lead author of the paper, Shaun Lovejoy, but Lovejoy published the paper anyway.
When doing statistical analysis, the first step is to choose a model of the process that generated the data. The IPCC did indeed choose a model. I have only claimed that the model used in the Contest is more realistic than the model chosen by the IPCC. Thus, if the Contest model is unrealistic (as it is), then the IPCC model is even more unrealistic. Hence, the IPCC model should not be used. Ergo, the statistical analyses in the IPCC Assessment Report are untenable, as the critique argues.
For an illustration, consider the following. Lovejoy et al. assert that the Contest model implies a typical temperature change of 4 °C every 6400 years—which is too large to be realistic. Yet the IPCC model implies a temperature change of about 41 °C every 6400 years. (To confirm this, see Section 8 of the critique and note that 0.85 × 6400/133 ≈ 41, where 0.85 °C is the IPCC trend over the 133 years 1880–2012, extrapolated linearly.) Thus, the IPCC model is far more unrealistic than the Contest model, according to the test advocated by Lovejoy et al. Hence, if the test advocated by Lovejoy et al. were adopted, then the IPCC statistical analyses would be untenable.
I expect to have more to say about this in the future.
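To make the two scalings concrete, here is a minimal sketch in Python (an editor's illustration, not code from Keenan or Lovejoy et al.). It assumes the IPCC model's trend component extrapolates linearly at 0.85 °C per 133 years, and that the Contest model behaves like a random walk, whose typical change grows with the square root of time:

```python
# Minimal sketch of the scaling argument above. Assumptions: a linear trend
# of ~0.85 degC per 133 years for the IPCC model, and random-walk (square-
# root-of-time) scaling for the Contest model, per Lovejoy et al.'s figure.
import math

YEARS = 6400

# Linear-trend model: the change grows linearly with time.
ipcc_change = (0.85 / 133) * YEARS        # ~40.9 degC over 6400 years

# Random-walk-type model: the typical change grows with sqrt(time), so a
# 4 degC typical change over 6400 years implies a yearly step size of:
step_sd = 4.0 / math.sqrt(YEARS)          # ~0.05 degC per year

print(f"Trend model over {YEARS} years: {ipcc_change:.1f} degC")
print(f"Random-walk yearly step implied by 4 degC/{YEARS} yr: {step_sd:.3f} degC")
```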
01 December 2016
Regarding the 1000 series that were generated with the weak PRNG (prior to 22 November 2015), the ANSWER, the PROGRAM (Maple worksheet), and the function to produce the file Answers1000.txt (with the random seed being the seventh perfect number minus one) are now available.
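For the curious, that seed is easy to reproduce. Every even perfect number has the form 2^(p−1) × (2^p − 1) where 2^p − 1 is a Mersenne prime; the first seven Mersenne exponents are 2, 3, 5, 7, 13, 17, and 19:

```python
# Compute the seventh perfect number minus one (the stated random seed).
MERSENNE_EXPONENTS = [2, 3, 5, 7, 13, 17, 19]
p = MERSENNE_EXPONENTS[6]                # exponent for the 7th perfect number
perfect = 2 ** (p - 1) * (2 ** p - 1)    # 137438691328
print(perfect - 1)                       # 137438691327
```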
“For an illustration, consider the following. Lovejoy et al. assert that the Contest model implies a typical temperature change of 4 °C every 6400 years—which is too large to be realistic.”
I hear that as “That would be too large an effect to expect us to detect.”
Ron,
I think it means, “that would be so large that it couldn’t possibly be real”. As in “not only would and could we detect it, but it would appear as large as a billboard, no one could miss it, and it would be so out of proportion to our expectations that it simply could not be based on current physics.”
But then one must ask: if the “experts” competing in the Contest could not find a definitive, conclusive, undeniable signal in a model that implies a temperature change of 4 °C every 6400 years, then how on Earth could any scientist claim to have found one in ANY model that assumes a LESSER temperature change?
What a bunch of mumbo jumbo! Before I even think about MAN-made global warming as even a slight possibility, tell me how the glaciers that covered North America melted long before there were any planes, trains, automobiles or cow flatulence!
Moderation? For this? Tell me how the glaciers that covered North America melted long before there were any planes, trains, automobiles or cow flatulence!
Tell me how the glaciers that covered North America melted long before there were any planes, trains, automobiles or power plants….
Some serious people seem to have taken this challenge seriously. But to me it seems to exchange hypothesis testing of whether temperatures differ significantly from ‘natural variation’ for the considerably more difficult problem of multiple (binary) classification. Even a ‘good test’ capable of picking out a 1 °C trend with a small p-value would translate into a relatively small probability of identifying more than 900 out of 1000 time series correctly. It is not surprising that no one was successful.
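To put a rough number on that intuition, here is a minimal sketch in Python (my construction, not the commenter's). It assumes each of the 1000 series is classified independently with per-series accuracy p, and uses the Contest's threshold of at least 900 correct identifications:

```python
# Probability of winning the Contest under independent per-series
# classification with accuracy p: a binomial tail probability.
from math import comb

def prob_at_least(n, k, p):
    """Tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for p in (0.85, 0.90, 0.95):
    print(f"per-series accuracy {p:.2f}: "
          f"P(>=900 of 1000 correct) = {prob_at_least(1000, 900, p):.3g}")
```

Even 90% accuracy on each individual series gives only about even odds of clearing the 900 threshold; at 85% the chance is negligible.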
Andy: Doug’s contest is meaningless. Random-walk statistical models may indeed appear superior to AR(1) models for fitting 20th-century temperature. However, anyone with half a brain should be able to recognize that a random-walk model is inappropriate for the Earth, even if it appears reasonable for the 20th century. In a random-walk model, the most likely change grows with the square root of the number of time steps: a likely change of 1 K in 10^2 years (the 20th century) implies a change of 10 K in 10^4 years, 100 K in 10^6 years, and 1000 K in 10^8 years. Absurd! The fact that global temperature has remained within about +/-10 K for the last 100 million years tells us that the physics of climate should never be analyzed using a random-walk model, EVEN when it performs well for a single century.
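That square-root scaling is easy to check by simulation. A minimal sketch (an editor's illustration, with the yearly step size calibrated so the typical change over a century is about 1 K):

```python
# The RMS endpoint of a Gaussian random walk grows as step_sd * sqrt(n_steps).
import math
import random

def rms_displacement(n_steps, step_sd, n_trials=1000):
    """Monte Carlo estimate of the RMS endpoint of a Gaussian random walk."""
    total = 0.0
    for _ in range(n_trials):
        x = 0.0
        for _ in range(n_steps):
            x += random.gauss(0.0, step_sd)
        total += x * x
    return math.sqrt(total / n_trials)

STEP_SD = 0.1  # K per year: gives a typical change of ~1 K per century
print("simulated 100-year RMS change:",
      round(rms_displacement(100, STEP_SD), 2), "K")

# For longer horizons the analytic scaling suffices:
for years in (10_000, 1_000_000, 100_000_000):
    print(f"{years:>11,} years: ~{STEP_SD * math.sqrt(years):,.0f} K")
```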
Furthermore, science never advances by selecting the best statistical model for analyzing data. Let’s imagine we drop spherical objects with different densities from an airplane and measure how altitude changes with time. What statistical model should we use to analyze the data? Is there any chance we would discover the correct physics by statistical analysis? Of course not. Our knowledge of physics, however, tells us that the downward force will be mg and that the force that resists motion (air resistance) varies with the square of the derivative of altitude with respect to time (velocity). Doug’s statistical models would never uncover the right equation. We’ve learned these lessons from simpler experiments under carefully controlled conditions. All of these experiments began with a HYPOTHESIS about the physical law that governed the phenomena – they didn’t begin with data and a search for a statistical model.
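For contrast, here is what the physics-first route looks like for that falling-sphere example: a toy integration (an editor's sketch; the mass and drag coefficient are made-up values) of dv/dt = g − (k/m)·v², which settles at the terminal velocity √(mg/k):

```python
# Euler integration of a falling sphere with quadratic air resistance.
g = 9.81    # m/s^2
m = 1.0     # kg (hypothetical sphere)
k = 0.02    # kg/m (hypothetical drag coefficient; depends on size/density)
dt = 0.01   # s, integration time step

v = 0.0
for _ in range(10_000):  # 100 seconds of fall
    v += (g - (k / m) * v * v) * dt

# The velocity settles at the analytic terminal value sqrt(m*g/k).
print(f"simulated: {v:.2f} m/s, analytic: {(m * g / k) ** 0.5:.2f} m/s")
```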
Climate science is about physical models, not statistical models. Like it or not, AOGCMs try to build on simpler physics – hypotheses that have survived all attempts to invalidate them and become accepted theories. It is these models that tell us the planet is warming and what statistical model to use to analyze that warming. There are certainly significant problems with GCMs, but if we relied only on statistics, we would still be stuck in the dark ages.
It seems people are telling us that they don’t need statistical analysis because they know the answer. But they don’t know the answer. What they know is a body of speculation about the future based on theories that might not be right. The whole enterprise of CAGW is built on the ability of models to predict future climate. In order to make good policy, officials need predictions that they can count on to be accurate. But the accuracy of none of the models has been shown to be even halfway acceptable. And by accuracy I’m referring to skill at predicting real-world data in the future, not the ability to match other models. Estimating the uncertainty in model inputs and outputs is all about statistical analysis. Climate scientists have ignored the very large errors that can develop in model outputs as a result of systematic error. It’s the same thing that keeps meteorologists from being able to develop models that can predict the weather accurately out past 5 or 10 days.
In any case, in order to prove that a given factor has a significant effect on the climate, we must first eliminate the possibility that the changes we see in the data are due to random variation. Time-series analysis of the HadCRUT4 global data from 1850 to the present finds no trend significantly different from zero. There is therefore no correlation to be made with any factor, except perhaps in the short term.
Not true; there are actual measurements that show the rate of cooling changing with relative humidity.
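For readers who want to try the model comparison at the heart of this thread, here is a minimal sketch (an editor's illustration; it assumes statsmodels is installed and uses a synthetic stand-in series where the real HadCRUT4 annual anomalies would go). It pits a linear trend with AR(1) errors, of the kind attributed to the IPCC above, against a driftless ARIMA(3,1,0) of the kind Keenan has advocated elsewhere:

```python
# Compare a trend-plus-AR(1) fit with a driftless ARIMA(3,1,0) fit by AIC.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for ~170 years of annual anomalies; substitute the
# real HadCRUT4 series here to run the comparison that actually matters.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.0, 0.1, 170))

trend_ar1 = ARIMA(y, order=(1, 0, 0), trend="ct").fit()  # intercept + trend + AR(1)
driftless = ARIMA(y, order=(3, 1, 0), trend="n").fit()   # driftless ARIMA(3,1,0)

# Lower AIC = better fit after penalizing extra parameters; which family
# wins on the real data is precisely the point in dispute.
print("trend + AR(1) AIC:", round(trend_ar1.aic, 1))
print("ARIMA(3,1,0)  AIC:", round(driftless.aic, 1))
```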