How Not To Model The Historical Temperature

Guest Post by Willis Eschenbach

Much has been made of the argument that natural forcings alone are not sufficient to explain the 20th Century temperature variations. Here’s the IPCC on the subject:

[Figure: IPCC comparison of model simulations of 20th Century temperature using natural forcings only versus natural plus anthropogenic forcings]

I’m sure you can see the problems with this. The computer model has been optimized to hindcast the past temperature changes using both natural and anthropogenic forcings … so of course, when you pull a random group of forcings out of the inputs, it will perform more poorly.

Now, both Anthony and I often get sent the latest, greatest models that purport to explain the vagaries of the historical global average temperature record. The most recent one used a cumulative sum of the sunspot series, plus the Pacific Decadal Oscillation (PDO) and the North Atlantic Oscillation (NAO), to model the temperature. I keep pointing out to the folks sending them that this is nothing but curve fitting … and in that most recent case, it was curve fitting plus another problem: they are using as an input something which is part of the target. The NAO and the PDO are each a part of what makes up the global temperature average. As a result, it is circular to use them as inputs.

But I digress. I started out to show how not to model the temperature. To do this, I wanted to find the simplest model I could which a) did not use greenhouse gases, and b) used only the forcings used by the GISS model in the Coupled Model Intercomparison Project Phase 5 (CMIP5). These were:

[1,] “WMGHG” [Well Mixed Greenhouse Gases]

[2,] “Ozone”

[3,] “Solar”

[4,] “Land_Use”

[5,] “SnowAlb_BC” [Snow Albedo (Black Carbon)]

[6,] “Orbital” [Orbital variations involving the Earth’s orbit around the sun]

[7,] “TropAerDir” [Tropospheric Aerosol Direct]

[8,] “TropAerInd” [Tropospheric Aerosol Indirect]

After a bit of experimentation, I found that I could get a very good fit using only Snow Albedo and Orbital variations. That’s one natural and one anthropogenic forcing, but no greenhouse gases. The model uses the formula

Temperature = 2012.7 * Orbital – 27.8 * Snow Albedo – 2.5

and the result looks like this:

[Figure: the two-forcing model (red) overlaid on the Gaussian-smoothed HadCRUT global surface temperature record]

The red line is the model, and dang, how about that fit? It matches up very well with the Gaussian smooth of the HadCRUT surface temperature data. Gosh, could it be that I’ve discovered the secret underpinnings of variations in the HadCRUT temperature data?
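For reference, a Gaussian smooth of the kind used on the HadCRUT data above can be sketched in a few lines. This is in Python rather than the R used below, and the kernel width and the toy series here are my own illustrative choices, not the ones behind the figure:

```python
import numpy as np

def gaussian_smooth(y, sigma=3.0):
    """Smooth a series by convolving with a normalized Gaussian kernel."""
    radius = int(4 * sigma)
    kernel = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
    kernel /= kernel.sum()
    # Reflect at the edges so the smoothed result covers the whole record.
    padded = np.pad(y, radius, mode="reflect")
    return np.convolve(padded, kernel, mode="valid")

# A made-up 112-point "record": a slow wiggle plus noise.
rng = np.random.default_rng(3)
series = np.sin(np.linspace(0, 6, 112)) + 0.2 * rng.standard_normal(112)
smooth = gaussian_smooth(series)
```

The smooth keeps the slow wiggle and suppresses the year-to-year noise, which is exactly what it does to the HadCRUT record.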

And here are the statistics of the fit:

Coefficients:
                              Estimate Std. Error t value Pr(>|t|)
(Intercept)                    -2.4519     0.1451 -16.894  < 2e-16 ***
hadbox[, c(9, 10)]SnowAlb_BC  -27.7521     3.2128  -8.638 5.36e-14 ***
hadbox[, c(9, 10)]Orbital    2012.7179   150.7834  13.348  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.105 on 109 degrees of freedom
Multiple R-squared:  0.8553,	Adjusted R-squared:  0.8526
F-statistic: 322.1 on 2 and 109 DF,  p-value: < 2.2e-16

I mean, an R^2 of 0.85 and a p-value less than 2.2E-16, that’s my awesome model in action …

So does this mean that the global average temperature really is a function of orbital variations and snow albedo?

Don’t be daft.

All that it means is that it is ridiculously easy to fit variables to a given target dataset. Heck, I’ve done it above using only two real-world variables and three tunable parameters. If I add a few more variables and parameters, I can get an even better fit … but it will be just as meaningless as my model shown above.
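The point about tunable parameters can be demonstrated in a few lines. The sketch below (in Python rather than the R used for the regression above, and with entirely made-up data) fits a synthetic "temperature" ramp against a growing pile of pure-noise "forcings" — random walks, which trend entirely by accident. Since the models are nested, the in-sample R² can only go up as parameters are added:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 112                         # same length as the 112-point fit above (109 df + 3 params)
temps = np.linspace(0.0, 0.8, n) + 0.1 * rng.standard_normal(n)   # synthetic "temperature"
walks = np.cumsum(rng.standard_normal((n, 20)), axis=0)           # 20 pure-noise "forcings"

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on X plus an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# R^2 as we feed in more and more meaningless regressors: it never goes down.
r2 = [r_squared(walks[:, :k], temps) for k in range(1, 21)]
```

None of those regressors contains any information about the target, yet the fit "improves" with every one you add — which is all that a high in-sample R² from a tuned model tells you.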

Please note that I don’t even have to use data. I can fit the historical temperature record with nothing but sine waves … Nicola Scafetta keeps doing this over and over and claiming that he is making huge, significant scientific strides. In my post entitled “Congenital Cyclomania Redux“, I pointed out the following:

So far, in each of his previous three posts on WUWT, Dr. Scafetta has said that the Earth’s surface temperature is ruled by a different combination of cycles depending on the post:

First Post: 20 and 60-year cycles. These were supposed to be related to some astronomical cycles which were never made clear, albeit there was much mumbling about Jupiter and Saturn.

Second Post: 9.1, 10-11, 20 and 60-year cycles. Here are the claims made for these cycles:

9.1 years: this was justified as being sort of near to a calculation of (2X+Y)/4, where X and Y are lunar precession cycles,

10-11 years: he never said where he got this one, or why it’s so vague.

20 years: supposedly close to an average of the sun’s barycentric velocity period.

60 years: kinda like three times the synodic period of Jupiter/Saturn. Why three times? Why not?

Third Post:  9.98, 10.9, and 11.86-year cycles. These are claimed to be

9.98 years: slightly different from a long-term average of the spring tidal period of Jupiter and Saturn.

10.9 years: may be related to a quasi 11-year solar cycle … or not.

11.86 years: Jupiter’s sidereal period.

The latest post, however, is simply unbeatable. It has no less than six different cycles, with periods of 9.1, 10.2, 21, 61, 115, and 983 years. I haven’t dared inquire too closely as to the antecedents of those choices, although I do love the “3” in the 983-year cycle.
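To see why fitting sine waves proves nothing, note that with enough cycles you can reproduce any record exactly: the discrete Fourier basis spans every possible dataset of a given length. A minimal Python sketch, using an arbitrary synthetic "record" — not Dr. Scafetta's data or his actual method:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 112                                   # a 112-year synthetic "record"
temps = np.cumsum(0.05 * rng.standard_normal(n))   # an arbitrary random walk

t = np.arange(n)
cols = [np.ones(n)]                       # constant term
for k in range(1, n // 2):                # one sine/cosine pair per "cycle"
    cols.append(np.sin(2 * np.pi * k * t / n))
    cols.append(np.cos(2 * np.pi * k * t / n))
cols.append(np.cos(np.pi * t))            # Nyquist-frequency cosine
X = np.column_stack(cols)                 # n columns spanning all of R^n

coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
fit = X @ coef                            # a "perfect" fit that explains nothing
```

With fewer sinusoids and tunable periods the fit is merely very good instead of perfect — but the lesson is the same: matching the record with cycles is a mathematical certainty, not a discovery.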

I bring all of this up to do my best to discourage this kind of bogus curve fitting, whether it is using real-world forcings, “sunspot cycles”, or “astronomical cycles”. Why is it “bogus”? Because it uses tuned parameters, and as I showed above, when you use tuned parameters it is bozo simple to fit an arbitrary dataset using just about anything as input.

But heck, you don’t have to take my word for it. Here’s Freeman Dyson on the subject of the foolishness of using tunable parameters:

When I arrived in Fermi’s office, I handed the graphs to Fermi, but he hardly glanced at them. He invited me to sit down, and asked me in a friendly way about the health of my wife and our newborn baby son, now fifty years old. Then he delivered his verdict in a quiet, even voice. “There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.” I was slightly stunned, but ventured to ask him why he did not consider the pseudoscalar meson theory to be a self-consistent mathematical formalism.

He replied, “Quantum electrodynamics is a good theory because the forces are weak, and when the formalism is ambiguous we have a clear physical picture to guide us. With the pseudoscalar meson theory there is no physical picture, and the forces are so strong that nothing converges. To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics.”

In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, “How many arbitrary parameters did you use for your calculations?” I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” With that, the conversation was over. I thanked Fermi for his time and trouble, and sadly took the next bus back to Ithaca to tell the bad news to the students.

So, you folks who are all on about how this particular pair of “solar cycles”, or this planetary cycle plus the spring tidal period of Jupiter, or this group of forcings miraculously emulates the historical temperature with a high R^2, I implore you to take to heart Enrico Fermi’s advice before trying to sell your whiz-bang model in the crowded marketplace of scientific ideas. Here’s the bar that you need to clear:

“One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”

So … if you look at your model and indeed “You have neither”, please be as honest as Freeman Dyson and don’t bother sending your model to me. I can’t speak for Anthony, but these kinds of multi-parameter fitted models are not interesting to me in the slightest.

Finally, note that I’ve done this hindcasting of historical temperatures with a one-line equation and two forcings … so do we think it’s amazing that a hugely complex computer model using ten forcings can hindcast historical temperatures?
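The honest test of any such tuned model is out-of-sample: hold back part of the record, fit on the rest, and see how the prediction fares. Here is a deliberately cartoonish Python sketch with made-up data — a "forcing" that happens to track the record perfectly over the calibration period and then wanders off:

```python
import numpy as np

t = np.arange(112.0)
temps = t / 111.0                          # synthetic "temperature": a steady rise
forcing = np.where(t < 56, t, 111.0 - t)   # matches the rise early, reverses later

half = 56
X = np.column_stack([np.ones(half), forcing[:half]])
beta, *_ = np.linalg.lstsq(X, temps[:half], rcond=None)   # tune on the first half

def r_squared(y, yhat):
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

pred = beta[0] + beta[1] * forcing
r2_in = r_squared(temps[:half], pred[:half])    # ~1.0: a "perfect" hindcast
r2_out = r_squared(temps[half:], pred[half:])   # strongly negative: a useless forecast
```

A perfect hindcast and a worthless forecast from the same two-parameter model — which is why hindcast skill, on its own, is no evidence of anything.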

My regards to you all on a rainy, rainy night,

w.

The Usual Polite Request: Please quote the exact words that you are discussing. It prevents all kinds of misunderstandings. Only gonna ask once. That’s all.

 

[Comment posted March 29, 2018, 1:29 pm]

Astroclimate, I think you make some good points.
The problems as I see them:
1) The computer models the IPCC built are ridiculous. The numerical error that the models accumulate over time makes it impossible to believe they produce a reasonable result.
2) It is obvious there are many factors, and modeling everything would be tantamount to a Grand Unified Theory of physics.
3) Ultimately, we do have to have a model or formula, because the nature of science is to create something that predicts. That’s the point.
The definition of science could be “producing a mathematical model that predicts consistently.” If someone says “I understand,” we should be skeptical until they can make a prediction and that prediction can be tested. Repeated predictions and correct answers gain trust, and eventually we say it is good enough to be “science.” This is true whether we are talking about psychology, physics, or sociology. If we can’t predict, we aren’t doing science. If our predictions are not useful, or are in error, then we are not doing science. We call something a science when the predictions are reliable enough that we can build, plan, and offer advice with some certainty.
I think this is a good reason to say climate “science” is misnamed, because until there are proven relationships that can be measured and depended on, we can’t call it a science. So far, to my knowledge, there is nothing they can actually predict at all. That is the reason to doubt this “science.” What are the things they think they can predict? Why? I don’t see it.
In my opinion, to become a science, climate people need to start with basics. They need measurable, predictable things that are repeatable. That is hard but not impossible. I would focus on that instead of computer models, which are obviously bogus. Of course, since we all know this is politically motivated, that is not where they want to go. They are doing everything they can to justify their politics and almost NO real science. Real science would be:
1) Demonstrate that CO2 is excited by radiation and how it acts in closed environments with similar gas mixtures at different pressures. Be able to show repeatable results.
2) Demonstrate precisely how clouds are formed, and simulate producing them in controlled environments.
3) Get a LOT more data about the ocean.
4) Study the Earth’s physical shape as modified by gravity, and how this affects volcanoes, earthquakes, and underwater sea vents.
5) Try to make predictions about how different types of solar radiation affect the ocean and land. Try to understand the sun a lot better.
6) Abandon surface thermometers and put up more satellites to measure everything. The idea of taking a small number of thermometers with incredibly limited coverage and trying to extrapolate worldwide is too problematic and inconsistent.
7) Instead of building giant supercomputers to run bogus, impossible models, spend the money on large, really large, environments to simulate experimental conditions.
You can see my blog: https://logiclogiclogic.wordpress.com/category/climate-change/