Guest Post by Willis Eschenbach
Much has been made of the argument that natural forcings alone are not sufficient to explain the 20th Century temperature variations. Here’s the IPCC on the subject:
I’m sure you can see the problems with this. The computer model has been optimized to hindcast the past temperature changes using both natural and anthropogenic forcings … so of course, when you pull a random group of forcings out of the inputs, it will perform more poorly.
Now, both Anthony and I often get sent the latest greatest models that purport to explain the vagaries of the historical global average temperature record. The most recent one used a cumulative sum of the sunspot series, plus the Pacific Decadal Oscillation (PDO) and the North Atlantic Oscillation (NAO), to model the temperature. I keep pointing out to the folks sending them that this is nothing but curve fitting … and in that most recent case, it was curve fitting plus another problem: using as an input something which is part of the target. The NAO and the PDO are each part of what makes up the global average temperature. As a result, it is circular to use them as inputs to a model of that same average.
But I digress. I started out to show how not to model the temperature. In order to do this, I went looking for the simplest model I could find which a) did not use greenhouse gases, and b) used only the forcings used by the GISS model in the Coupled Model Intercomparison Project Phase 5 (CMIP5). These were:
[1,] “WMGHG” [Well Mixed Greenhouse Gases]
[5,] “SnowAlb_BC” [Snow Albedo (Black Carbon)]
[6,] “Orbital” [Orbital variations involving the Earth’s orbit around the sun]
[7,] “TropAerDir” [Tropospheric Aerosol Direct]
[8,] “TropAerInd” [Tropospheric Aerosol Indirect]
After a bit of experimentation, I found that I could get a very good fit using only Snow Albedo and Orbital variations. That’s one natural and one anthropogenic forcing, but no greenhouse gases. The model uses the formula
Temperature = 2012.7 * Orbital – 27.8 * Snow Albedo – 2.5
and the result looks like this:
The red line is the model, and dang, how about that fit? It matches up very well with the Gaussian smooth of the HadCRUT surface temperature data. Gosh, could it be that I’ve discovered the secret underpinnings of variations in the HadCRUT temperature data?
And here are the statistics of the fit:
Coefficients:
                               Estimate Std. Error t value Pr(>|t|)
(Intercept)                     -2.4519     0.1451 -16.894  < 2e-16 ***
hadbox[, c(9, 10)]SnowAlb_BC   -27.7521     3.2128  -8.638 5.36e-14 ***
hadbox[, c(9, 10)]Orbital     2012.7179   150.7834  13.348  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.105 on 109 degrees of freedom
Multiple R-squared: 0.8553, Adjusted R-squared: 0.8526
F-statistic: 322.1 on 2 and 109 DF, p-value: < 2.2e-16
I mean, an R^2 of 0.85 and a p-value less than 2.2E-16, that’s my awesome model in action …
So does this mean that the global average temperature really is a function of orbital variations and snow albedo?
Don’t be daft.
All that it means is that it is ridiculously easy to fit variables to a given target dataset. Heck, I’ve done it above using only two real-world variables and three tunable parameters. If I add a few more variables and parameters, I can get an even better fit … but it will be just as meaningless as my model shown above.
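That point is easy to demonstrate for yourself. Here's a minimal Python sketch (the "temperature" and "forcing" series below are synthetic stand-ins, not real data): fit a trending target on two smooth random series by ordinary least squares, then toss in a third. In sample, the fit can only improve as you add tunable parameters, which is exactly why a good-looking hindcast proves so little.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 112  # roughly the number of annual values behind the fit above

# A stand-in "temperature" series: linear trend plus noise (NOT HadCRUT)
target = 0.007 * np.arange(n) + rng.normal(0, 0.1, n)

# Two smooth random walks playing the role of "forcings"
f1 = np.cumsum(rng.normal(0, 1, n))
f2 = np.cumsum(rng.normal(0, 1, n))

def r_squared(predictors, y):
    """OLS fit of y on the predictors (plus an intercept); return in-sample R^2."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_two = r_squared([f1, f2], target)
# Add a third random "forcing": the in-sample fit cannot get worse
f3 = np.cumsum(rng.normal(0, 1, n))
r2_three = r_squared([f1, f2, f3], target)
print(round(r2_two, 3), round(r2_three, 3))
```

Because the two-predictor model is nested inside the three-predictor model, least squares guarantees the larger model fits at least as well in sample, regardless of whether the extra "forcing" means anything at all.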
Please note that I don’t even have to use real-world data. I can fit the historical temperature record with nothing but sine waves … Nicola Scafetta keeps doing this over and over and claiming that he is making huge, significant scientific strides. In my post entitled “Congenital Cyclomania Redux”, I pointed out the following:
So far, in each of his previous three posts on WUWT, Dr. Scafetta has said that the Earth’s surface temperature is ruled by a different combination of cycles depending on the post:
First Post: 20 and 60-year cycles. These were supposed to be related to some astronomical cycles which were never made clear, although there was much mumbling about Jupiter and Saturn.
Second Post: 9.1, 10-11, 20 and 60-year cycles. Here are the claims made for these cycles:
9.1 years: this was justified as being sort of near to a calculation of (2X+Y)/4, where X and Y are lunar precession cycles.
“10-11” years: he never said where he got this one, or why it’s so vague.
20 years: supposedly close to an average of the sun’s barycentric velocity period.
60 years: kinda like three times the synodic period of Jupiter/Saturn. Why three times? Why not?
Third Post: 9.98, 10.9, and 11.86-year cycles. These are claimed to be
9.98 years: slightly different from a long-term average of the spring tidal period of Jupiter and Saturn.
10.9 years: may be related to a quasi 11-year solar cycle … or not.
11.86 years: Jupiter’s sidereal period.
The latest post, however, is simply unbeatable. It has no less than six different cycles, with periods of 9.1, 10.2, 21, 61, 115, and 983 years. I haven’t dared inquire too closely as to the antecedents of those choices, although I do love the “3” in the 983-year cycle.
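The cycle game is just as easy to play. In this sketch (the periods are borrowed from the list above, but the “temperature” series is synthetic, just a trend plus noise), each added period contributes a sine/cosine pair, i.e. two more tunable parameters, and the in-sample fit never gets worse:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2012)
# Synthetic stand-in for the temperature record: trend plus noise
temp = 0.007 * (years - years[0]) + rng.normal(0, 0.1, len(years))

def fit_cycles(periods, t, y):
    """Least-squares fit of y by an offset plus one sine/cosine pair per period.

    Each period contributes two tunable parameters (amplitude and phase,
    expressed as sine and cosine coefficients). Returns in-sample R^2."""
    cols = [np.ones_like(t, dtype=float)]
    for p in periods:
        cols.append(np.sin(2 * np.pi * t / p))
        cols.append(np.cos(2 * np.pi * t / p))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_small = fit_cycles([9.1, 61], years, temp)                     # 2 cycles, 5 parameters
r2_big = fit_cycles([9.1, 10.2, 21, 61, 115, 983], years, temp)   # 6 cycles, 13 parameters
print(round(r2_small, 3), round(r2_big, 3))
```

Since the two-cycle model is nested inside the six-cycle model, the bigger model is guaranteed to fit the sample at least as well. None of which says anything about whether any of those cycles is physically real.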
I bring all of this up to do my best to discourage this kind of bogus curve fitting, whether it is using real-world forcings, “sunspot cycles”, or “astronomical cycles”. Why is it “bogus”? Because it uses tuned parameters, and as I showed above, when you use tuned parameters it is bozo simple to fit an arbitrary dataset using just about anything as input.
But heck, you don’t have to take my word for it. Here’s Freeman Dyson on the subject of the foolishness of using tunable parameters:
When I arrived in Fermi’s office, I handed the graphs to Fermi, but he hardly glanced at them. He invited me to sit down, and asked me in a friendly way about the health of my wife and our newborn baby son, now fifty years old. Then he delivered his verdict in a quiet, even voice. “There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.” I was slightly stunned, but ventured to ask him why he did not consider the pseudoscalar meson theory to be a self-consistent mathematical formalism.
He replied, “Quantum electrodynamics is a good theory because the forces are weak, and when the formalism is ambiguous we have a clear physical picture to guide us. With the pseudoscalar meson theory there is no physical picture, and the forces are so strong that nothing converges. To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics.”
In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, “How many arbitrary parameters did you use for your calculations?” I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” With that, the conversation was over. I thanked Fermi for his time and trouble, and sadly took the next bus back to Ithaca to tell the bad news to the students.
So, you folks who are all on about how this particular pair of “solar cycles”, or this planetary cycle plus the spring tidal period of Jupiter, or this group of forcings miraculously emulates the historical temperature with a high R^2, I implore you to take to heart Enrico Fermi’s advice before trying to sell your whiz-bang model in the crowded marketplace of scientific ideas. Here’s the bar that you need to clear:
“One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”
So … if you look at your model and indeed “You have neither”, please be as honest as Freeman Dyson and don’t bother sending your model to me. I can’t speak for Anthony, but these kinds of multi-parameter fitted models are not interesting to me in the slightest.
Finally, note that I’ve done this hindcasting of historical temperatures with a one-line equation and two forcings … so should we really be amazed that a hugely complex computer model using ten forcings can hindcast historical temperatures?
My regards to you all on a rainy, rainy night,
The Usual Polite Request: Please quote the exact words that you are discussing. It prevents all kinds of misunderstandings. Only gonna ask once. That’s all.