How Not To Model The Historical Temperature

Guest Post by Willis Eschenbach

Much has been made of the argument that natural forcings alone are not sufficient to explain the 20th Century temperature variations. Here’s the IPCC on the subject:

[Figure: IPCC comparison of simulated global temperature using natural forcings only versus natural plus anthropogenic forcings]

I’m sure you can see the problems with this. The computer model has been optimized to hindcast the past temperature changes using both natural and anthropogenic forcings … so of course, when you pull a random group of forcings out of the inputs, it will perform more poorly.
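GCMs are not regressions, but the same logic shows up in miniature in ordinary least squares: drop predictors from a tuned fit, and the in-sample fit can only get worse, no matter what the predictors contain. Here's an illustrative sketch in R, with made-up data:

# Toy illustration: removing predictors from a tuned fit can only
# lower the in-sample R-squared, whatever the predictors are.
set.seed(1)
y  <- rnorm(100)            # made-up "temperature"
x1 <- rnorm(100)            # a "natural" input
x2 <- rnorm(100)            # an "anthropogenic" input
full    <- lm(y ~ x1 + x2)  # tuned on everything
reduced <- lm(y ~ x1)       # "natural only"
summary(full)$r.squared >= summary(reduced)$r.squared  # always TRUE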

Now, both Anthony and I often get sent the latest greatest models that purport to explain the vagaries of the historical global average temperature record. The most recent one used a cumulative sum of the sunspot series, plus the Pacific Decadal Oscillation (PDO) and the North Atlantic Oscillation (NAO), to model the temperature. I keep pointing out to the folks sending them that this is nothing but curve fitting … and in that most recent case, it was curve fitting plus another problem: they are using as an input something which is part of the target. The NAO and the PDO are each part of what makes up the global average temperature, so it is circular to use them as inputs.

But I digress. I started out to show how not to model the temperature. In order to do this, I wanted to find the simplest model I could which a) did not use greenhouse gases, and b) used only the forcings used by the GISS model in the Coupled Model Intercomparison Project Phase 5 (CMIP5). These were:

[1,] “WMGHG” [Well Mixed Greenhouse Gases]
[2,] “Ozone”
[3,] “Solar”
[4,] “Land_Use”
[5,] “SnowAlb_BC” [Snow Albedo (Black Carbon)]
[6,] “Orbital” [Orbital variations involving the Earth’s orbit around the sun]
[7,] “TropAerDir” [Tropospheric Aerosol Direct]
[8,] “TropAerInd” [Tropospheric Aerosol Indirect]

After a bit of experimentation, I found that I could get a very good fit using only Snow Albedo and Orbital variations. That’s one natural and one anthropogenic forcing, but no greenhouse gases. The model uses the formula

Temperature = 2012.7 * Orbital - 27.8 * Snow Albedo - 2.5

and the result looks like this:

[Figure: the two-forcing "bogus model" (red) overlaid on the Gaussian-smoothed HadCRUT surface temperature data]

The red line is the model, and dang, how about that fit? It matches up very well with the Gaussian smooth of the HadCRUT surface temperature data. Gosh, could it be that I’ve discovered the secret underpinnings of variations in the HadCRUT temperature data?

And here are the statistics of the fit:

Coefficients:
                              Estimate Std. Error t value Pr(>|t|)
(Intercept)                    -2.4519     0.1451 -16.894  < 2e-16 ***
hadbox[, c(9, 10)]SnowAlb_BC  -27.7521     3.2128  -8.638 5.36e-14 ***
hadbox[, c(9, 10)]Orbital    2012.7179   150.7834  13.348  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.105 on 109 degrees of freedom
Multiple R-squared:  0.8553,	Adjusted R-squared:  0.8526
F-statistic: 322.1 on 2 and 109 DF,  p-value: < 2.2e-16

I mean, an R^2 of 0.85 and a p-value less than 2.2E-16, that’s my awesome model in action …
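For the curious, the entire "model" is one call to lm(). Here's a minimal sketch in R; the data frame and column names are placeholders for whatever forcing and temperature series you load, not necessarily the ones I used:

# Minimal sketch of the two-forcing fit above. 'hadbox' holds the
# GISS forcings plus a 'temp' column with the smoothed HadCRUT
# series -- these names are illustrative only.
fit <- lm(temp ~ SnowAlb_BC + Orbital, data = hadbox)
summary(fit)                      # coefficients and R-squared as shown
plot(hadbox$temp, type = "l")     # the data
lines(fitted(fit), col = "red")   # the "model"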

So does this mean that the global average temperature really is a function of orbital variations and snow albedo?

Don’t be daft.

All that it means is that it is ridiculously easy to fit variables to a given target dataset. Heck, I’ve done it above using only two real-world variables and three tunable parameters. If I add a few more variables and parameters, I can get an even better fit … but it will be just as meaningless as my model shown above.
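Don't take my word for it, try it with inputs that you know contain no information at all. Here's an illustrative sketch in R that fits a trending series using nothing but random walks … the R-squared will routinely look publication-worthy, which is exactly the problem:

# Fit a trending "temperature" with six random walks as "forcings".
set.seed(42)
n    <- 112                             # about a century of annual data
temp <- cumsum(rnorm(n, 0.005, 0.1))    # stand-in temperature series
junk <- replicate(6, cumsum(rnorm(n)))  # random walks, no information
fit  <- lm(temp ~ junk)                 # lm() tunes the coefficients
summary(fit)$r.squared                  # routinely "impressive"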

Please note that I don’t even have to use data. I can fit the historical temperature record with nothing but sine waves (there’s a sketch of just how easy that is after the list below) … Nicola Scafetta keeps doing this over and over and claiming that he is making huge, significant scientific strides. In my post entitled “Congenital Cyclomania Redux”, I pointed out the following:

So far, in each of his previous three posts on WUWT, Dr. Scafetta has said that the Earth’s surface temperature is ruled by a different combination of cycles depending on the post:

First Post: 20 and 60-year cycles. These were supposed to be related to some astronomical cycles which were never made clear, albeit there was much mumbling about Jupiter and Saturn.

Second Post: 9.1, 10-11, 20 and 60-year cycles. Here are the claims made for these cycles:

9.1 years: this was justified as being sort of near to a calculation of (2X+Y)/4, where X and Y are lunar precession cycles,

10-11 years: he never said where he got this one, or why it’s so vague.

20 years: supposedly close to an average of the sun’s barycentric velocity period.

60 years: kinda like three times the synodic period of Jupiter/Saturn. Why three times? Why not?

Third Post:  9.98, 10.9, and 11.86-year cycles. These are claimed to be

9.98 years: slightly different from a long-term average of the spring tidal period of Jupiter and Saturn.

10.9 years: may be related to a quasi 11-year solar cycle … or not.

11.86 years: Jupiter’s sidereal period.

The latest post, however, is simply unbeatable. It has no less than six different cycles, with periods of 9.1, 10.2, 21, 61, 115, and 983 years. I haven’t dared inquire too closely as to the antecedents of those choices, although I do love the “3” in the 983-year cycle.
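And here's the sine-wave sketch promised above. In R, you pick whatever periods take your fancy, hand lm() a sine and a cosine at each period so it can tune the amplitude and phase, and out comes a lovely fit … the periods below are arbitrary placeholders:

# Fit an arbitrary set of "cycles" to a series. A sine/cosine pair
# at each period lets the regression tune amplitude and phase.
set.seed(42)
yr      <- seq_len(112)
temp    <- cumsum(rnorm(112, 0.005, 0.1))  # any series you like
periods <- c(9.1, 10.2, 21, 61)            # pick your favorites
X <- do.call(cbind, lapply(periods, function(p)
       cbind(sin(2 * pi * yr / p), cos(2 * pi * yr / p))))
fit <- lm(temp ~ X)
summary(fit)$r.squared                     # behold, "cycles"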

I bring all of this up to do my best to discourage this kind of bogus curve fitting, whether it is using real-world forcings, “sunspot cycles”, or “astronomical cycles”. Why is it “bogus”? Because it uses tuned parameters, and as I showed above, when you use tuned parameters it is bozo simple to fit an arbitrary dataset using just about anything as input.

But heck, you don’t have to take my word for it. Here’s Freeman Dyson on the subject of the foolishness of using tunable parameters:

When I arrived in Fermi’s office, I handed the graphs to Fermi, but he hardly glanced at them. He invited me to sit down, and asked me in a friendly way about the health of my wife and our newborn baby son, now fifty years old. Then he delivered his verdict in a quiet, even voice. “There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.” I was slightly stunned, but ventured to ask him why he did not consider the pseudoscalar meson theory to be a self-consistent mathematical formalism.

He replied, “Quantum electrodynamics is a good theory because the forces are weak, and when the formalism is ambiguous we have a clear physical picture to guide us. With the pseudoscalar meson theory there is no physical picture, and the forces are so strong that nothing converges. To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics.”

In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, “How many arbitrary parameters did you use for your calculations?” I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” With that, the conversation was over. I thanked Fermi for his time and trouble, and sadly took the next bus back to Ithaca to tell the bad news to the students.

So, you folks who are all on about how this particular pair of “solar cycles”, or this planetary cycle plus the spring tidal period of Jupiter, or this group of forcings miraculously emulates the historical temperature with a high R^2, I implore you to take to heart Enrico Fermi’s advice before trying to sell your whiz-bang model in the crowded marketplace of scientific ideas. Here’s the bar that you need to clear:

“One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”

So … if you look at your model and indeed “You have neither”, please be as honest as Freeman Dyson and don’t bother sending your model to me. I can’t speak for Anthony, but these kinds of multi-parameter fitted models are not interesting to me in the slightest.

Finally, note that I’ve done this hindcasting of historical temperatures with a one-line equation and two forcings … so is it really amazing that a hugely complex computer model using ten forcings can hindcast historical temperatures?

My regards to you all on a rainy, rainy night,

w.

The Usual Polite Request: Please quote the exact words that you are discussing. It prevents all kinds of misunderstandings. Only gonna ask once. That’s all.

 

HAS
March 24, 2018 11:10 pm

Amen to that

Germonio
March 24, 2018 11:12 pm

Willis,
There is, I think, a difference between modelling and fitting that you are ignoring. You have provided a two-parameter (or function) FIT to the temperature. What a climate model does is take the forcing as the inputs to a set of nonlinear partial differential equations. There is no reason to expect the output of such a simulation to match the historical record unless the underlying physics is correct. You have two free parameters in your fit – the amplitude of the orbital parameters and the snow albedo. A climate model would not have the option to change the amplitude of the forcing and so has fewer free parameters than your fit.

Germonio
Reply to  Willis Eschenbach
March 24, 2018 11:27 pm

Yes. But the forcings aren’t one of them. They are the inputs to the simulations. Again, climate models do not fit the data; they attempt to model the underlying physics, and the way they tell they are approximately correct is that their output is similar to the historical record. This is very different from fitting the data using a set of sine waves.

ferdberple
Reply to  Willis Eschenbach
March 25, 2018 12:06 am

tunable parameters
===========
and they are adjusted after the fact to improve the fit to the model builders’ expectations.
in the end the models predict what the model builders expect will happen. else the parameters will be adjusted.

HAS
Reply to  Willis Eschenbach
March 25, 2018 12:14 am

Germonio, to model from the underlying physics GCMs would need to model at that scale. They can’t do that in most cases because of computational constraints. Instead they use lower resolution approximations (thousands of cubic kms in size). Getting to these approximations involves parameter estimation within the relevant small scale models that are used to describe the behaviour of these voxels (3D pixels) that are the smallest element in the GCM. A lot goes on below that scale.
If you see what I mean.

WXcycles
Reply to  Willis Eschenbach
March 25, 2018 12:21 am

” …. they >> attempt << to model the underlying physics … "
—-
Via tuning parameters!
Which is approximating to fit the past data, then projecting this preferred fit.
The data is very limited, but your projection is anything you tune.
And then call it 'science'.
It's shameless rubbish … examining the entrails.

Crispin in Waterloo but really in Ulaanbaatar
Reply to  Willis Eschenbach
March 25, 2018 6:21 am

Yes Germonio, they try to model the underlying physics, but have three massive problems:
1. Clouds, as in the Eschenbach thunderstorm hypothesis, are missing,
2. The feedback error described in detail by Monckton a few days ago in his Amicus Brief
3. The assumption that an atmosphere with no GHG’s would have, just above the ground, a temperature the same as the surface of the moon, in spite of clear sky surface heating of it.
Item 1 has been dealt with and demonstrated to hold a great deal of water.
Item 2 has been soundly addressed by Monckton; however, he also makes the mistake of accepting the claim that a non-GHG atmosphere would be as cold as a naked moon. He called it ‘the native state’ and pretty clearly implies that means ‘an atmosphere without GHG’s’. Gavin Schmidt is more specific, saying explicitly that, given the same albedo, the atmosphere absent all GHG’s would be the same -18 C as the average temperature of the moon ‘because of having no GHG’s’. Trenberth and others, including the IPCC, make the more serious error of saying explicitly that a naked moon is -18 C, that the Earth’s atmosphere is approximately +15 C, and that the difference is due to GHG’s only, not the atmosphere – i.e. the conduction of heat to the atmosphere from the surface (aka the urban heat island effect) would cease in the absence of GHG’s. Implicit in that is the claim that a bald moon and a non-GHG Earth with an atmosphere would be the same temperature, either at the surface or just above it, respectively.
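(For reference, the -18 C figure being argued over is just the Stefan-Boltzmann effective radiating temperature at Earth’s albedo. A quick check in R:)

# Effective radiating temperature at Earth's albedo, no greenhouse effect
S      <- 1361                  # solar constant, W/m^2
albedo <- 0.3
sigma  <- 5.670e-8              # Stefan-Boltzmann constant, W/m^2/K^4
T_eff  <- ((S * (1 - albedo)) / (4 * sigma))^0.25
T_eff - 273.15                  # roughly -18 C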
So the models are tuned to replicate the temperatures while having to overcome not only those three logic-killing errors, they also are set to have a large heating value for CO2, even for the first few ppm.
In fact, a surface (convection) heated transparent atmosphere would be significantly cooled by the addition of a few ppm of CO2 doing what the non-GHG’s can’t: radiating IR.
Imagine then, how many parameters must be tuned to overcome these fundamental errors and still claim a large heating role for CO2? It is mathematical gibberish. It is an elephant that wiggles its trunk and takes mud baths with a tiger.
Willis shows it is trivial to pick a few data sets and create a parameterized match for the temperature series. If GISS edited the temperatures, Willis could tune his multipliers and reproduce the new curve.
Interestingly, and in slight opposition to Willis’ current article, if 5 or 6 celestial cycles reproduce the past temperatures pretty well and can accurately predict the future temperature profile of the atmosphere then it is a useful tool.
In 2004 a solar cycles-obsessed guy predicted that there would be a major drought in the Continental US this year. If the prediction proves correct, then it is a useful tool. If Willis’ model can match CMIP with a handheld calculator, we can save a lot of money on super computers.

Latitude
Reply to  Willis Eschenbach
March 25, 2018 7:00 am

“and they are adjusted after the fact to improve the fit to the model builders’ expectations.”
obviously….how can anyone combine those two models’ results….and have the end product go up?

MarkW
Reply to  Willis Eschenbach
March 25, 2018 9:01 am

Germonio has demonstrated that he has never actually studied the models he is claiming to defend.
The models do not calculate everything from first principles. They can’t. Not when they use grid cells that are 100km on a side.
Crucial things like clouds are completely parameterized. In other words they are nothing more than variables that are tweaked until the output satisfies the modelers.

TRM
Reply to  Willis Eschenbach
March 25, 2018 9:27 am

“Crispin in Waterloo but really in Ulaanbaatar March 25, 2018 at 6:21 am
Item 1 has been dealt with and demonstrated to hold a great deal of water.”
Groooooaaaaaannnnnnnn. I hope that pun was intended cause it was a good one.

Kristi Silber
Reply to  Willis Eschenbach
March 25, 2018 12:13 pm

Willis, thank you for writing about this. It has been a concern of mine, too. I’m curious – how would you address the matter? How would you try to demonstrate or disprove the idea that human-produced CO2 affects temperature?
Can you please explain the difference between tuning and flux adjustment? I’m a bit confused.
“The computer model has been optimized to hindcast the past temperature changes using both natural and anthropogenic forcings … so of course, when you pull a random group of forcings out of the inputs, it will perform more poorly.”
I still have much to learn about models and their testing, but it seems like this might be relevant:
“The question of whether the twentieth-century warming should be considered a target of model development or an emergent property is polarizing the climate modeling community, with 35% of modelers stating that twentieth-century warming was rated very important to decisive, whereas 30% would not consider it at all during development. Some view the temperature record as an independent evaluation dataset not to be used, while others view it as a valuable observational constraint on the model development. Likewise, opinions diverge as to which measures, either forcing or ECS, are legitimate means for improving the model match to observed warming. The question of developing toward the twentieth-century warming therefore is an area of vigorous debate within the community.”
The stats are from a poll of 22 modeling groups. I think it’s good to note that there is very active debate within the climate modeling community – a sign that all is not “groupthink” and there is a diversity of views. “The Art and Science of Climate Model Tuning” is an interesting paper.
https://journals.ametsoc.org/doi/full/10.1175/BAMS-D-15-00135.1
……………………………
Also, this model appears to be tested against pre-industrial climate, not that of the 20th C:
“We integrated the CLIMBER-2 model for 5,000 model years with a prescribed atmospheric CO2 concentration of 280 ppm to obtain a pre-industrial equilibrium climate (with no climate drift), from which the CO2 scenario simulations were started. Model parameters that were not fixed a priori were determined by tuning the atmospheric and oceanic components separately for present conditions before coupling (e.g., constants in the cloud parameterisation). No space-dependent parameters were tuned (so-called ‘hidden flux adjustments’), neither were any flux adjustments used in the coupling. The model has been validated against the climate of the Last Glacial Maximum at 21 kyr b.p. (Ganopolski et al., 1998b) and found to agree well with paleo-climate reconstructions, not only for surface temperatures but also for the simulated changes in thermohaline ocean circulation. Other time slices (Holocene optimum at 6 kyr b.p., Ganopolski et al., 1998a, Eemian interglacial at 125 kyr b.p., glacial inception at 115 kyr b.p.) as well as a transient Holocene experiment (9 kyr b.p. up to the present; manuscripts in preparation) have also been analysed and have not revealed any major discrepancies between model and paleo-data.”
http://www.pik-potsdam.de/~Stefan/Publications/Journals/rg99.pdf
Although some model parameters were fixed a priori, I would think that if it is done before the coupling there would not be the circularity you’re talking about. Is this not the case? (This isn’t a full GCM, but still…)
……………………………….
Also:
“However, practices differ significantly on some key aspects, in particular, in the use of initialized forecast analyses as a tool, the explicit use of the historical transient record, and the use of the present-day radiative imbalance vs. the implied balance in the preindustrial era as a target”
https://www.geosci-model-dev.net/10/3207/2017/gmd-10-3207-2017.pdf
Interestingly, the peer review exchanges are available on the journal site, too. I hadn’t seen that before. https://www.geosci-model-dev.net/10/3207/2017/gmd-10-3207-2017-discussion.html
……………………………..
And
“A common claim, however, is that they’re then tuned so as to either match the 20th century warming or to produce specific climate sensitivities. These, however, are not amongst the emergent constraints used for model tuning. As the paper says
“‘None of the models described here use the temperature trend over the historical period directly as a tuning target, nor are any of the models tuned to set climate sensitivity to some preexisting assumption.’”
https://andthentheresphysics.wordpress.com/2017/09/05/climate-model-tuning/ (Quote refers to paper just above.)
……………………………………….
Seems to me there is a diversity of model types, modeling strategies and testing techniques, and that it might be wiser to talk about individual models or techniques rather than generalizing to all models. Either that or show that all models are subject to the circularity you talk about. Or maybe I’m completely misunderstanding you?

Science or Fiction
Reply to  Willis Eschenbach
March 25, 2018 2:46 pm

Thanks a lot Willis for an excellent post with a brilliant example.
I think that the large range in energy fluxes between the CMIP5 models may serve as an indication of the amount of tuning that is involved to fit observed global energy accumulation. (CMIP5 = Coupled Model Intercomparison Project Phase 5, the models that the IPCC relied on.)
See: The energy balance over land and oceans: an assessment based on direct observations and CMIP5 climate models – Wild et al 2014
Here are some examples of the range of energy fluxes spanned by the models:
(See Table 2: Simulated energy balance components averaged over land, oceans and the entire globe from 43 CMIP5/IPCC AR5 models at the TOA, atmosphere, and surface)
Surface (All units: W/m2):

Solar down: 18.6
Solar up: 10.5
Solar net: 17.2
Thermal down: 18.5
Thermal up: 11.8
Thermal net: 15.7
Net radiation: 17.2
Latent heat: 13.9
Sensible heat: 13.1
(Averages are taken over the period 2000–2004)
—————-
a) Taking into account that the current energy accumulation on earth is estimated from observation of ocean warming to be around 0.6 W/m2 (ref.: IPCC;AR5;WGI;page 181; 2.3.1 Global Mean Radiation Budget: Ref.: “considering a global heat storage of 0.6 W m–2»),
b) Also taking into account that the models arrive at similar results despite a variation in energy fluxes (both up and down) between the models that seems to be tenfold the observed global energy accumulation.
I think it is fair to assume that the models would have been all over the place if not constrained by heavily tuning to fit various observations.
Anyhow it is pretty clear that the models are heavily tuned. Hence, I think it is also clear that the models per se cannot be regarded as a valid proof for anything.

RACookPE1978
Editor
Reply to  Germonio
March 24, 2018 11:26 pm

But those climate models, for all of their approximations of the billions of partial differential equations at assumed trillions of intersecting planes, do not work.

Nick Stokes
Reply to  Germonio
March 24, 2018 11:34 pm

” What a climate model does is take the forcing as the inputs to a set of nonlinear partial differential equations. “
It actually doesn’t do that. The forcings are diagnostics, trying to summarize the W/m2 effects of the inputs, but they are not input. It’s very hard to input a global average into a spatial pde.
With GHG’s, for example, the gas concentrations are input, or even, sometimes, the emissions. Then someone tries to work out what forcing effect they might have. This might be extracted from intermediate results in the GCM, or from the final results. It could also use the radiative transfer part of the code independently.

HAS
Reply to  Nick Stokes
March 25, 2018 12:21 am

Actually what happens is that scenarios are created that will generate particular forcings, and these are then used by the modellers to create simulations that produce the target forcing. If you trace back the process, the forcings are the input that drives the GCMs.

Nick Stokes
Reply to  Nick Stokes
March 25, 2018 1:05 am

“Actually what happens is that scenarios are created that will generate particular forcings”
I think that’s not really true, though it wouldn’t matter if it were. Consider the famous RCP8.5. RCP stands for Representative Concentration Pathway. It gives what GCMs need – gas concentration data, which are the actual inputs to the GCM. The 8.5 means that the forcing in 2100 is expected to be 8.5 W/m2. The R in RCP means that the RCPs chosen – 8.5, 6.0, 4.5 and 2.6 – span the range of things that might happen. It would be wasteful to spend the effort on scenarios that are too close while another range remains untested. So the characterisation by 2100 forcing gives a one-number measure of the spacing.
There is now a lot of experience with scenarios and results, and they may well have had an eye on the expected 2100 forcing when designing the scenario. That would make sense. But there is much more to a scenario than that single number. The key thing is the relation between a likely emissions evolution and the gas concentrations that are the actual input to the computation.

HAS
Reply to  Nick Stokes
March 25, 2018 1:48 am

Nick, you claim first that somehow the forcings are not the independent input into GCMs. You now say that if they are, it doesn’t matter – what happens is the emissions and concentrations are the key input. So you are dining out on a technical point of what goes into the model, emissions and concentrations selected for their forcing effect or the forcing itself.
“The scenario development process aims to develop a set of new scenarios that facilitate integrated analysis of climate change across the main scientific communities. The process comprises 3 main phases: 1) an initial phase, developing a set of pathways for emissions, concentrations and radiative forcing, 2) a parallel phase, comprising both the development of new socio-economic storylines and climate model projections, and 3) an integration phase, combining the information from the first phases into holistic mitigation, impacts and vulnerability assessments. The pathways developed in the first phase were called “Representative Concentration Pathways (RCPs)”. They play an important role in providing input for prospective climate model experiments, including both the decadal and long-term projections of climate change.” van Vuuren, D.P., Edmonds, J.A., Kainuma, M. et al. Climatic Change (2011) 109: 1. https://doi.org/10.1007/s10584-011-0157-y
Got it?

Nick Stokes
Reply to  Nick Stokes
March 25, 2018 2:15 am

“You now say that if they are, it doesn’t matter”
No. I say that if target forcings were used to design scenarios, it wouldn’t matter.
You simply can’t use forcings as input. A global average power has no place in the discretised pde. There is nowhere to put it. You have to do everything by cell.
As to van Vuuren:
“an initial phase, developing a set of pathways for emissions, concentrations and radiative forcing”
You can seek to do that. But the emissions have to be converted to concentrations by some appropriate model. And the radiative forcing could be a target. But what you actually need as input is a set of gas concentrations in each of the grid cells. Here is the description of the treatment of GHGs in CAM 3.0.
“We have chosen to specify globally uniform surface concentrations of the four gases, rather than their surface fluxes.”
And there is certainly nothing there about inputting radiative forcings.

HAS
Reply to  Nick Stokes
March 25, 2018 2:16 am

And just BTW if you think AR6 might be wandering off somewhere else:
“We use the baseline SSP scenarios as the starting point for a comprehensive mitigation analysis. To maximize the usefulness of our assessment for the community scenario process, we select the nominal RCP forcing levels of 2.6, 4.5, and 6.0 W/m2 in 2100 as the long-term climate targets for our mitigation scenarios.” “The Shared Socioeconomic Pathways and their energy, land use, and greenhouse gas emissions implications: An overview.” Riahi et al Global Environmental Change 42 (2017) 153–168

HAS
Reply to  Nick Stokes
March 25, 2018 2:21 am

So you are dining out on a technical point of what goes into the model.
“What a climate model does is take the forcing to develop the inputs to a set of nonlinear partial differential equations.”
There, fixed it.

Nick Stokes
Reply to  Nick Stokes
March 25, 2018 2:27 am

“There, fixed it.”
No. The climate model doesn’t develop the input to a set of pde’s. The climate model is the set of pdes. Someone else figures out the input, as concentration pathways.

John Bills
Reply to  Nick Stokes
March 25, 2018 9:33 am

And the radiative transfer code follows the Schwarzschild equation with an emissivity of 1 all the way up?

HAS
Reply to  Nick Stokes
March 25, 2018 12:11 pm

Nick, you are of course technically correct. Let’s just go back to what I said before you attempted to divert the conversation:
“Actually what happens is that scenarios are created that will generate particular forcings, and these are then used by the modellers to create simulations that produce the target forcing. If you trace back the process, the forcings are the input that drives the GCMs.”
It has taken you some time to get there, but you did in the end.

Science or Fiction
Reply to  Nick Stokes
March 25, 2018 2:17 pm

I find this section in the fifth assessment report from the IPCC kind of amusing:
“… Representative Concentration Pathways are referred to as pathways in order to emphasize that they are not definitive scenarios, but rather internally consistent sets of time-dependent forcing projections that could potentially be realized with more than one underlying socioeconomic scenario. … They are representative in that they are one of several different scenarios, sampling the full range of published scenarios (including mitigation scenarios) at the time they were defined, that have similar RF and emissions characteristics. … The primary objective of these scenarios is to provide all the input variables necessary to run comprehensive climate models in order to reach a target RF …”
ref.: IPCC; WGI; AR5; Page 1045
https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter12_FINAL.pdf
This is kind of like testing a modeling group’s ability to construct an acceptable model by asking the question:
Given that the expected radiative forcing in the year 2100 for this set of input variables, which is called RCP8.5, is 8.5 W/m2 – what is the radiative forcing calculated by your model in the year 2100 for this set of input variables?

Trebla
Reply to  Germonio
March 25, 2018 5:27 am

Germonio: what about the “tuneable” feedback of water vapour and cloud cover?

Alan Tomalty
Reply to  Trebla
March 25, 2018 1:38 pm

“Actually what happens is that scenarios are created that will generate particular forcings, and these are then used by the modellers to create simulations that produces the target forcing. If you trace back the process the forcings are the input that drives the GCMs.”
Sounds like circular reasoning to me.

Reply to  Germonio
March 25, 2018 6:02 am

Freeman Dyson recognized what few people understand. When you don’t start with a physical model and use parameters what you get is a fitted curve. As the sage said, when you do curve fitting you can fit anything. What you don’t get is a model that can PREDICT anything. A curve fit is useful only in making what appears to be a pattern in the data look cleaner and more impressive.
Curve fitting of any kind is never a substitute for a basic physical model. E = mc^2 is a model. It has made some very useful predictions, but is still lacking in the details, which people are working on. Einstein didn’t come up with a curve fitted to data; he came up with a physical conception, unthought of at the time, that worked to explain some of the data and make verifiable predictions.

Crispin in Waterloo but really in Ulaanbaatar
Reply to  logicalchemist
March 25, 2018 6:37 am

Logicalchemist
“Curve fitting of any kind is never a substitute for a basic physical model.”
Well, sort of, but ‘rules of thumb’ engineering has a long and productive history. Before the invention of the slide rule many, many engineering calculations were done with simple ‘close enough’ formulae requiring only pencil and paper.
I hold that anything that ‘works’ is useful. A lot in QM is like that: imperfect, poorly understood, but good enough for government work.

Reply to  logicalchemist
March 25, 2018 7:07 pm

Indeed, Crispin. I had a professor in undergraduate geophysics who would begin discussions of new ideas with back of the napkin estimates using dimensional analysis and order of magnitude guesstimates. His point every time was about how close you could get with a friend over a beer; zillions of $$$ in additional funding were just to add a couple more significant figures. It made a big impact on me.

Phoenix44
Reply to  Germonio
March 25, 2018 11:10 am

That’s just not right. You can tune all sorts of things to make it fit, including your starting assumptions. And even then the forcings can be tuned – that’s why there is absolutely no agreement about the sensitivity to CO2.
And what do the equations do exactly? Are they fundamental physical processes? No.

Terry
March 24, 2018 11:31 pm

One of the most sensible and concise posts on this topic of curve fitting that I have seen for a while. Well done Willis.

michael hart
March 24, 2018 11:49 pm

Yup. A modeller who cannot easily make their model produce a desired result at will, is only just learning their craft.

March 25, 2018 12:08 am

Hi Willis,
How do you know your input parameters are credible? For example, there are allegations that atmospheric aerosol data was fabricated to hindcast the global cooling period from ~1940 to ~1977.
HadCRUT (3 or 4?) is perhaps the least “adjusted” of the surface temperature data, but Tony Heller has pointed out serious problems with much of the ST data.
Can you provide data sources for your work?

Reply to  Willis Eschenbach
March 25, 2018 8:34 am

Thank you Willis. I understand that you have created a nonsense model to try to demonstrate your point.
I plotted the CMIP5 inputs to try to better understand these model inputs:
Forcings used by the GISS model in the Coupled Model Intercomparison Project Phase 5 (CMIP5):
Instantaneous Forcing (W/m^2)
[1,] “WMGHG” [Well Mixed Greenhouse Gases]
[2,] “Ozone”
[3,] “Solar”
[4,] “Land_Use”
[5,] “SnowAlb_BC” [Snow Albedo (Black Carbon)]
[6,] “Orbital” [Orbital variations involving the Earth’s orbit around the sun]
[7,] “TropAerDir” [Tropospheric Aerosol Direct]
[8,] “TropAerInd” [Tropospheric Aerosol Indirect]
[9,] “StratAer” [Stratospheric Aerosol = major volcanoes]
IF these forcings are credible (a big IF, temporarily made for the sake of argument):
.THEN.
Forcings 2 through 8 inclusive are small (typically within +/-0.6) and tend to cancel out each other.
.AND.
The only large forcings are 1 and 9:
[1,] “WMGHG” [Well Mixed Greenhouse Gases] increasing to +3.5
[9,] “StratAer” [Stratospheric Aerosol = major volcanoes], periodically dropping to -3.6 with major volcanoes.
The major problem I see with the current climate models is that they fail to replicate the global cooling period from ~1940 to ~1977, nor do they adequately replicate “the Pause” that has occurred since ~1996-1997 (or arguably even earlier, possibly since ~1980 – since most of the atmospheric warming after ~1980 is apparently due to the recovery of the atmosphere from 2-3 major volcanoes – El Chichon, Pinatubo and possibly St. Helens – this has been demonstrated elsewhere by the lack of commensurate warming of Nino34 SST’s).
The models also fail to replicate the major role that El Nino and La Nina play in driving global temperatures over the short term – typically ~3 year intervals, and longer term influences of the PDO and AMO.
NOW let’s critique these alleged forcings:
It is generally assumed by skeptics that “WMGHG” is too high. I suggest that the sensitivity of climate to increasing atmospheric CO2 is anywhere from ~3x to ~10x too high in current climate models used by the IPCC and its minions.
While the impact of natural variability including solar variability is poorly understood, it is apparent that low solar activity corresponds to periods during the Little Ice Age, such as the Maunder Minimum and possibly the Dalton Minimum.
My personal prejudice is that increasing atmospheric CO2 has very little impact on global temperatures, and that natural variability has a far greater influence than changes in GHG’s. This conclusion is supported by the global cooling period that occurred from ~1940 to ~1977 and will be further supported if Earth cools in the next decade or so during the lows of SC24 and SC25.
Regards, Allan

ScarletMacaw
Reply to  ALLAN MACRAE
March 25, 2018 6:19 am

And whether or not the input parameters are credible is immaterial to Willis’ point.

Reply to  ScarletMacaw
March 25, 2018 7:03 am

Macaw – obviously true – no argument here on that point.

Reply to  ALLAN MACRAE
March 27, 2018 6:33 am

ALLAN MACRAE:
How could you say aerosols were “fabricated”?
If you had been in Pasadena, California
for the Rose Parade in the 1970s,
there were aerosols all over the place.
You were obviously sleeping
when the history of climate change
was presented in your public school.
Here is modern climate “science”:
— The history of climate change
in five talking points
that you must remember,
because there will be a test:
(1) Aerosols showed up in 1940,
(2) Aerosols killed natural climate change,
which was 4.5 billion years old,
and in a nursing home at the time,
(3) Aerosols took over as the Big Boss of Climate,
causing cooling from 1940 to 1977,
(4) In 1977 all the aerosols fell out of the air,
then CO2 took over as the Big Boss of Climate,
(5) CO2 caused warming from 1977
to the early 2000’s,
and then ‘fell asleep’
from the early 2000s to 2015,
when Mr. Hiatus took over.

WXcycles
March 25, 2018 12:08 am

Thank you Willis, amalgams of cycles and trends always seemed arbitrary and baseless to me.
At least when directly checking physical Milankovitch cycles against original data it makes physical sense to look for correlation.
As opposed to inventing processing noise and calling it … whatever floats your boat.

March 25, 2018 12:19 am

Using GISS data is the first mistake….UAH seems to have proven to be better.

March 25, 2018 1:07 am

The 1939 temps seem way too low compared to historic national records.

Chris
March 25, 2018 1:13 am

And when you are doing curve fitting – you really don’t know anything outside of the domain that was fitted.

Dr. S. Jeevananda Reddy
March 25, 2018 1:23 am

In the figure, though observed and predicted matched, it is a false logic, similar to correlating CO2 with population growth.
Temperature anomalies follow the 60-year cycle (varying between -0.3 and +0.3 °C) — the 60-year moving average shows the linear trend, which is a function of several factors — anthropogenic, land use, etc. Natural part can not be part of this trend.
Dr. S. Jeevananda Reddy

sailboarder
Reply to  Dr. S. Jeevananda Reddy
March 25, 2018 2:43 am

“Natural part can not be part of this trend”.
Why? Because the “natural” part has yet to be found?

March 25, 2018 1:24 am

Science:
If you don’t make mistakes, you’re doing it wrong.
If you don’t correct those mistakes, you’re really doing it wrong.
If you can’t accept that you’re mistaken, you’re not doing it at all.
~Anon~

Crispin in Waterloo but really in Ulaanbaatar
Reply to  Roy Denio
March 25, 2018 6:40 am

Roy, similarly, if your card game is Bridge, and you are not going down 1/3 of the time, you are under-bidding.

Kristi Silber
Reply to  Roy Denio
March 25, 2018 2:01 pm

Bravo! Well said, Anon.
A veneer of knowledge hiding ignorance is far worse for the pursuit of truth than admitting, defining and exploring one’s ignorance and using it as an incentive and guide to seek knowledge.
………………………………………….
Certainty that one is right damages the capacity to absorb new, contradictory information, leading to permanent error.
(This does not prohibit one from believing there is a high probability of being right so that one can build on working hypotheses.)

Stevan Reddish
March 25, 2018 2:49 am

Willis,
As you seem to like analyzing data, perhaps you would be interested in a suggestion I have:
We know atmospheric moisture affects the rate of cooling at night. It is assumed increased levels of CO2 slow the rate of nighttime cooling. The big question is how much the rate of cooling is affected by atmospheric CO2, and how any such effect varies with humidity. Any effect would be most detectable on calm, cloudless, low-humidity nights.
Have hourly (or shorter) records of temperature, windspeed, and humidity, correlated with percent of cloud cover, been kept over, say, the last 50 years in a place with very low humidity, such as Antarctica? Could such a record be analyzed for evidence of lower rates of nighttime cooling due to increased CO2 levels?
IF atmospheric CO2 has any significant effect on cooling rates, there should be detectable evidence.
I would be interested in any analysis you make along these lines.
SR

Reply to  Stevan Reddish
March 25, 2018 12:08 pm

Got an idea?
Knock yourself out.
Do not dump the work and responsibility on someone else!
A) They are not in your employ.
B) If you do not know how to accomplish your idea, then you have zero validity to suggest someone else spend hours and effort.
C) If you do not know how to proceed, welcome to motivation and an excellent teaching moment. Learn what you need!

Stevan Reddish
Reply to  ATheoK
March 25, 2018 1:03 pm

ATheoK,
Perhaps I worded my post poorly. What you perceived as dumping work on someone, I intended as sharing an idea for research with someone who
A) Has expressed an interest in doing similar research and analysis.
B) Has previously requested people share ideas for research and analysis.
C)Has demonstrated skill at such research and analysis.
If I wanted to suggest an interesting book to an avid reader would you advise me to read it myself, as if I was assigning homework? That avid reader might be sorry you kept him from a good read.
SR

Reply to  ATheoK
March 26, 2018 9:32 pm

SR:
You dumped an assignment and responsibility.

“Stevan Reddish March 25, 2018 at 2:49 am
As you seem to like analyzing data, perhaps you would be interested in a suggestion I have:”

That is a very condescending statement, along with a direct request that Willis spend effort and time on your idea.

“Stevan Reddish March 25, 2018 at 2:49 am
We know”

Stated with the classic ‘Royal we’; a demeaning inclusive reference pretending subordinates are included.

“Stevan Reddish March 25, 2018 at 2:49 am
We know atmospheric moisture affects the rate of cooling at night.
1) It is assumed increased levels of CO2 slow the rate of nighttime cooling.
2) The big question is how much the rate of cooling is affected by atmospheric CO2,
3) and how any such effect varies with humidity.
4) Any effect would be most detectable on
4a) calm,
4b) cloudless,
4c) low humidity nights.
5) Have hourly (or shorter) records of
6) temperature,
7) windspeed,
8) humidity correlated with
9) percent of cloud cover been
10) kept over say, the last 50 years,
11) in a place with very low humidity,
such as Antarctica?
12) Could such a record be analyzed for evidence of lower rates of nightime cooling due to increased CO2 levels?
I would be interested in any analysis you make along these lines.”

A request for:
Specific location(s) detailed atmospheric CO2, high frequency measurements.
Specific location(s) detailed humidity high frequency measurements.
Preferably on calm cloudless low humidity nights.
Basically a request that requires laborious tasks within tasks within tasks
Hourly or shorter measurements for
Temperature
Windspeed
Humidity
all correlated with cloud cover percentages
over fifty years.
Your participation, SR?
An interest in any analysis…
While you expect Willis to locate sufficiently long, extremely detailed records for multiple datums. Datums, all collected in locations that expressly log cloud cover, humidity, wind speed, temperature at hourly or in shorter periods.
Plus that information should be over a fifty year time period; i.e. dating back to 1967…
If you believe it is that easy to accomplish, then run the analyses yourself. Should be easy.
Otherwise, you have no right to dump such loads of work on anybody you do not employ and pay very good salaries to.

Bloke down the pub
March 25, 2018 3:01 am

However bogus Willis’ two parameter model is, it’d be interesting to see what it predicts for the future. Doesn’t the law of averages mean that with all these models being churned out, sooner or later one of them will prove to be correct?

F. Leghorn
Reply to  Bloke down the pub
March 25, 2018 3:11 am

That is right up there with “an infinite number of monkeys”. Maybe.

Taylor Ponlman
Reply to  F. Leghorn
March 25, 2018 8:18 am

Absolutely a case of ‘infinite number of monkeys’. I remember fondly Bob Newhart’s old routine about the guy whose job it was to check their typewriters for output. One wrote: “To be, or not to be. That is the gerzornenplat…”. Still makes me smile!

mynaturaldiary
March 25, 2018 4:11 am

‘Results from fitting mechanistic models have sometimes been disappointing because not enough attention has been given to discovering what is an appropriate model form. It is easy to collect data that never ‘place the postulated model in jeopardy’ and so it is common (e.g. in chemical engineering) to find different research groups each advocating a different model for the same phenomenon and each proffering data that ‘prove’ their claim.’ (Box and Draper)
In terms of empirical models, instead of identifying a single model based on statistical significance, such as correlation, best subsets regression shows a number of different models, as well as some statistics to help us compare those models to guide the selection process. Best subsets regression should result in an empirical model that conforms to Occam’s razor; i.e. when presented with competing hypothetical answers to a problem, we should select the one that makes the fewest assumptions, with the least overall error.
Here’s an example of it applied to the Central England Temperature record.
https://mynaturaldiary.wordpress.com/2018/03/03/whither-the-weather-2/
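For anyone who wants to try it, here is a minimal sketch of best subsets regression in R using the leaps package; the data frame cet and its predictor columns are hypothetical stand-ins for the CET record and candidate explanatory series:

library(leaps)  # best subsets regression
# 'cet': hypothetical data frame with a 'temp' column and candidate
# predictors. regsubsets() finds the best model of each size, and
# the criteria below penalize extra parameters (Occam's razor).
subs <- regsubsets(temp ~ ., data = cet, nvmax = 5)
summary(subs)$adjr2   # adjusted R-squared by model size
summary(subs)$bic     # BIC: lower is better, parsimony rewarded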

March 25, 2018 4:22 am

The most powerful and most overlooked part of the scientific method is the second stage, the one after coming up with the concept. It’s the one that says “How are you going to achieve your goals? How are you going to measure that?”
For a meaningful result you need to make sure you can resolve variations at uncertainty levels less than the expected variations. For a hypothetical result you don’t really have to – you include this in your assumptions. The idea being that maybe sometime later you can reduce the uncertainty. For empirical work you do.
The problem is that academics who deal in theory never have to test this part of the scientific method. And this is the same with temperature measurements. The second problem is that as a society we are prone to believe hypotheticals and supposition rather than ‘ugly facts’.
The more you present temperature data and temperature anomalies the more you are convincing yourself that they are meaningful.
So anyone with a crazy curve fitting idea is on par with people who believe that they have a physical mechanism but have large uncertainty in the data.
You’re doing the same thing.

Reply to  mickyhcorbett75
March 25, 2018 6:12 am

I thought it was:
0. Observe
1. Pose an hypothesis, multiple hypotheses, or a general theory
2. Suggest tests for it/them
3. Make one or more tests and urge others to join in to do more of the tests, & devise additional tests, or repeat tests to make sure you didn’t fumble while juggling the test tubes, gauges, positrons…
4. If test or tests seem to refute hypothesis, report exactly how test was run and how it came out and revise hypothesis, or revise tests and repeat test
Repeat.

Reply to  mib8
March 25, 2018 7:37 am

If you want to put it like that, you can. The basic idea, and the one often used in laboratory exercises, is
Concept and Idea – Thinking
Methodology and Execution – Measuring
Analysis, Results and Conclusions
Or in food terms
Mise en place
Cooking
Eating

Kristi Silber
Reply to  mib8
March 25, 2018 2:47 pm

That’s more or less how standard experimental science is done. Usually experimental replication is done by other researchers in case there is a systematic bias in the first researcher’s methods. Often replication is in the form of a variation of the first experiment that not only tests the first, but provides new information or increases the robustness of the results.
There are other ways of hypothesis testing. For example, one can analyze data that has been gathered in the past for different purposes, or gather data for a population at multiple time intervals without any manipulation. These kinds of studies are commonly used in human populations, which can’t always be experimented on. Typically these are multivariate studies that tease out different factors and their interactions and how they affect some parameter(s), and need large sample sizes. The recent article about the fish is a good example of analyzing data already available. (Many said the research was rubbish, but I don’t think they understood the statistical methods.)
Modeling is also a perfectly legitimate way of testing a hypothesis. For instance, one can use equations to represent known interactions, then vary a parameter to see the behavior of another parameter.
Or one can develop a hypothesis and test it by doing a meta-analysis of other research.
…In reality there are a lot of ways of testing hypotheses. Perhaps the most important part (aside from using statistics appropriately) is identifying sources of bias – systematic bias in the data, bias in the experimental procedure, human bias in the interpretations. Good scientists are always cognizant of potential for bias. This is just one reason I have more trust in mainstream climate scientists than many around here. Not all scientists are “good,” but it helps that it’s a very competitive field.

Kristi Silber
Reply to  mickyhcorbett75
March 25, 2018 3:17 pm

“For a meaningful result you need to make sure you can resolve variations at uncertainty levels less than the expected variations.”
Is this not why they do so many iterations of a model? Also, the means of the predicted variables over all the models is a better indicator than any one. I have no idea how the uncertainty is mathematically handled in these cases, I’m just going by intuition, which is sometimes terribly misleading and sometimes a very handy tool.
I wish I knew the post, but there was recently a graphic posted here that shows the means and SD for a bunch of models predicting temp change or something, and it was quite remarkable how many showed overlap.
I’m aware of the uncertainties in modeling, and the modelers certainly are. Everyone knows that clouds and aerosols are sources of uncertainty, among others. Yet even though the groups doing these are apparently not in tight communication about the ways they create their models – not sharing their methods as they build, tune and test them – they have some areas of remarkable agreement. Different groups have different interests, and build their models accordingly.
I just find it very hard to imagine that all these independent groups have been corrupted so that they repeat the same errors. If they were, you’d think they’d do a better job so there wouldn’t be so much uncertainty.
(Tuning can decrease the uncertainty dramatically, but there’s a risk of overtuning and making the model unstable or unrealistic. Good paper if you want to learn about tuning:
https://journals.ametsoc.org/doi/full/10.1175/BAMS-D-15-00135.1)

Reply to  Kristi Silber
March 26, 2018 7:15 am

You are making the academic’s mistake of slipping into hypothesis. A real measurement requires appreciation of signal to noise. Uncertainty is not reduced by multiple samples, as they have to be i.i.d. for that to be true, and you won’t know if they are because your signal to noise is below the threshold required to decide that.
As Rutherford would say: if you have to use statistics you should have done a better experiment.
Design to meet expectations. If not, then by all means run models, but they have little relevance to real-world actions.

March 25, 2018 4:31 am

It is very evident that the ENSO has a temporary impact on global temperatures.
There is also a 60 year cycle of some kind in global temperatures.
I would rather try to understand what is happening with the climate and try to understand what is driving that or how it works, rather than throw up my hands and say “it can’t be done because of curve fitting”.
Humans have advanced because we have tried to develop an understanding of the environment around us. Sometimes it can’t be done and sometimes an incorrect understanding is developed, but much of the time, we figure it out.

arthur4563
Reply to  Bill Illis
March 25, 2018 6:19 am

Nobody is saying that – the point is that curve fitting of a single data series, without any theoretical rationale behind the tunable values, doesn’t lend plausibility to the model and cannot be considered proof of its validity. The model must cross-validate – be able to predict across a number of data sets.

Reply to  Bill Illis
March 25, 2018 7:09 am

Bill is spot on. Understanding how the climate varies naturally has to be accomplished before trying to explain why it has varied. Since the 1990’s climate “science” has operated in a bass-akwards manner… With the “why” coming before the “how.”

Kristi Silber
Reply to  David Middleton
March 25, 2018 4:03 pm

I’m curious – how do you know that climate scientists aren’t aware of the way climate varies naturally, at least enough to be able to create models? There are some things that are understood better than they can be modeled simply because of the resolution available with today’s computing capacity. No one denies that the projections are estimates, but when the estimates pretty much agree even at different values of the unknown parameters it seems that should improve your confidence.
Some natural processes can’t be explicitly included in the model because they follow no regular pattern – volcanoes, for instance. There has to be a stochastic variable included to accommodate these things, I would imagine.
It seems to me that people view models through the lens of their professions. Some professions call for a great deal of precision and accuracy, referring directly to material parameters (engineering). Some are highly theoretical, mathematical (physics). Some are familiar with complex, dynamic, interactive, stochastic systems (climate and ecosystems). Some are familiar with meteorology, which relies heavily on modeling, but on different scales (sometimes I wonder if this background makes it harder to imagine how anyone could predict climate decades from now, since weather models can’t predict more than 2 weeks into the future). There are models to reconstruct the geological past, the fossil record, genetic relationships, neural networks… there are thousands of ways models are used. So why is it so hard to imagine that climate models are a valid tool, when they are used responsibly and honestly, with every effort made to control for bias? There are ways to do so.
Over 20 years ago I spent three summers camping in the Adirondacks gathering data for a model of forest dynamics. The model was much simpler, but of the same general type (dynamic, predictive, stochastic) as those used for climate. Maybe this background makes me more inclined to believe climate models are valuable.

Reply to  Kristi Silber
March 25, 2018 7:10 pm

How is your reply even remotely related to my comment?

MarkW
Reply to  David Middleton
March 26, 2018 12:53 pm

Kristi, among other things, we know that the climate scientists don’t understand natural variation, because they have stated as much.
Truth is nobody knows why the earth warmed up for the Medieval Warm Period, or why it cooled down for the Little Ice Age. There are theories, but none have been proven.
Ditto, we don’t know why the earth warmed up during the 1930’s, nor why it cooled during the 1970’s.
We’ve been told that we don’t need to know why the previous warmings and coolings happened, because the models tell us that the current warming is due to CO2.
Our response has been that if you can’t prove that the causes of the previous warmings are not operating currently, you can’t claim to have proven that the current warming must be due to CO2.

Reply to  Bill Illis
March 25, 2018 8:45 am

Good comments, thank you Bill and David.
The climate models cited by the IPCC and its minions tend to “run hot”, and also fail to hindcast accurately (unless forced to do so by falsified inputs). In formal engineering terminology, these models are “crap”.

Chimp
Reply to  ALLAN MACRAE
March 25, 2018 9:29 am

CACA is a crock.

Phoenix44
Reply to  Bill Illis
March 25, 2018 11:13 am

No, we are saying it cannot be done WITH curve fitting.

Reply to  Phoenix44
March 26, 2018 3:42 am

The validity of curve fitting depends on the curve, and the logic behind it.
For example, here is a simple relationship between Nino34 SST’s and the volcanic aerosol index that predicts global temperatures 4 months in the future. [Others, including Bill Illis, have developed earlier and better relationships with a few more input parameters.]
https://www.facebook.com/photo.php?fbid=1618235531587336&set=a.1012901982120697.1073741826.100002027142240&type=3&theater
Note that the blue line reflects Nino34 SSTs, which show NO NET WARMING SINCE ~1982 (possibly earlier).
Developing a nonsense curve as Willis has done does not disprove all curve-fitting exercises, just as the crash of one deliberately-sabotaged car does not prove that all cars will crash.

Reply to  Phoenix44
March 26, 2018 6:33 pm

Hi Willis,
Here is some information on my aforementioned plot, and on the more detailed work by Bill Illis. I lagged UAH LT global temperature by 4 months to show coherence in my plot, whereas Bill lagged tropical temperature by 3 months in his plot.
The mechanism is that tropical Pacific SSTs increase tropical humidity and tropical atmospheric temperatures 3 months later, and global temperatures one month after that.
John Christy tells me he wrote something similar in his 1994 paper with Richard McNider. Thought I had something new but – no.
Actual LT temperatures are running about 0.2C higher than my prediction for Feb2018, but I think they should drop from +0.2C to about 0.0C soon.
Regards, Allan
https://wattsupwiththat.com/2017/09/20/from-the-the-stupid-it-burns-department-science-denial-not-limited-to-political-right/comment-page-1/#comment-2616345
Re data:
Nino34
http://www.cpc.ncep.noaa.gov/data/indices/sstoi.indices
UAH LT
http://www.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Sato Aerosol Optical Depth Volcanic Index:
https://data.giss.nasa.gov/modelforce/strataer/tau.line_2012.12.txt

March 25, 2018 5:35 am

If Society Can’t Trust Science, What Can They Trust? Climate Alarmist is Playing San Francisco Judge as a Complete Fool
Dr. Myles Allen must think that the San Francisco Judge is a complete fool. I just finished a post refuting many of his claims, but one example needed to be singled out. In his presentation, Dr. Myles Allen replaced the poster child Mt. Kilimanjaro, which was exposed as a fraud in the Climategate emails, with …
https://co2islife.wordpress.com/2018/03/25/climate-alarmist-is-playing-san-francisco-judge-as-a-complete-fool/

Reply to  co2islife
March 25, 2018 6:21 am

“Society” has no mind. It is a hand-wavy collection of individuals interacting, and only each individual has a mind. Those individuals can communicate (& miscommunicate), can agree in part & disagree.
Individuals, OTOH, are ignorant, and collections of individuals are ignorant. Information is expensive. It requires effort. The scientific method is a process for optimizing that effort, getting the most valid information at the least cost.
Because we are ignorant, we also use heuristics. When you mention a San Francisco judge, the heuristic that he or she is most likely either a fool or corrupt happens to work well over the last several decades…but we always look forward to seeing more exceptions.

Kristi Silber
Reply to  co2islife
March 25, 2018 4:28 pm

“The judge needs to ask Dr. Myles Allen how does a glacier melt due to man-made warming when there is no warming?”
Drought. Normal melting, with little precipitation to replace the summer melt. Also, glaciers can lose ice straight to water vapor (sublimation) when the air is dry.
To attack someone for fraud without considering the alternatives (such as the fact that you might be misunderstanding something or that he’s making a simple error) is a sign of loss of objectivity.
I started reading some of your rebuttal. It’s full of unsubstantiated assertions (and ones I believe to be erroneous, but that’s to be expected!). It also seems to not take into account that this is, after all, a document for the plaintiff, and cherry-picking is to be expected. It’s called a trial, not playing the judge for a fool; this isn’t a good measure of what society at large should expect from climate science.
Your attack goes a long way toward illuminating the reasons so many don’t trust science. It is biased and based on unjustified assumptions. Have you read the NOAA research into the effects of weather station siting that resulted from Anthony’s data? Is there some particular problem with it that makes it irrelevant?

Reply to  Kristi Silber
March 25, 2018 4:37 pm

Yes, but how do all those causes tie to CO2? Sublimation isn’t caused by CO2. It isn’t me who is jumping to conclusions; it is the one claiming man-made CO2 warming is the cause when there is no documented warming. That is either incompetence or fraud, and neither is acceptable.

Reply to  co2islife
March 25, 2018 5:45 pm

You are in gross error. There is in fact documented warming. Your comment is the one that is unacceptable.

Reply to  Anthony Watts
March 25, 2018 6:44 pm

Where is the documented warming in Glacier National Park? I provided the data I could find. Do you have some data showing warming in Glacier National Park that isn’t due to the Urban Heat Island Effect?

Reply to  Anthony Watts
March 25, 2018 7:07 pm

Mr. Watts, the comment was in regard to Dr. Allen claiming the Glacier National Park glacier was melting due to man-made CO2. The graph I provided from the USGS shows a gradual downtrend in temperatures since 1994. Do you have data demonstrating otherwise? The same issue arises with Mt. Kilimanjaro’s glacier. There has been no warming at the top of the mountain. The leaked Climategate emails demonstrate that the “experts” are aware of that fact, yet did nothing to dispel it, and even worked to promote it. If you have data demonstrating warming in Glacier National Park, or at the top of Mt. Kilimanjaro, I’ll gladly edit my post.

Reply to  co2islife
March 26, 2018 7:53 am

OK, in the interface we see comments for approval outside of upthread context, so it looked as if you were saying there was no observed warming on a global scale.

Reply to  Anthony Watts
March 26, 2018 4:32 pm

Sorry about that, I confused the issue, and certainly didn’t intend to disrupt the conversation. I love your site, and certainly didn’t intend to create confusion. The satellite data clearly shows slight warming tightly tied to ocean cycles. I’ve repeatedly stated that to understand the climate you must understand the oceans, and CO2 doesn’t warm the oceans. MODTRAN demonstrates that the CO2 signature isn’t even measurable until you reach 3 km in the atmosphere. We are in total agreement, and I apologize for the confusion.

TA
March 25, 2018 6:01 am

From the article: “So, you folks who are all on about how this particular pair of “solar cycles”, or this planetary cycle plus the spring tidal period of Jupiter, or this group of forcings miraculously emulates the historical temperature”
And considering that the historical temperature record is itself bogus . . .

TA
Reply to  TA
March 25, 2018 6:19 am

Fit this temperature profile: [linked image]

arthur4563
March 25, 2018 6:13 am

If the tunable parameters were determined on the basis of one’s theory, rather than from the data points, and then provided a good fit, then one might have something. Regardless, any model must be cross-validated against a different set of data. That is when the “excellent data fit” usually disappears.
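To make the cross-validation point concrete, here is a minimal sketch in R (the random-walk target and the 5-parameter polynomial are invented purely for illustration, not taken from any climate model):

set.seed(7)
d <- data.frame(t = 1:120, y = cumsum(rnorm(120)))   # a trend-free random walk as the "target"
fit <- lm(y ~ poly(t, 5), data = d[1:60, ])          # tune a 5-parameter curve on the first half
mean((fitted(fit) - d$y[1:60])^2)                    # in-sample error: small and flattering
mean((predict(fit, d[61:120, ]) - d$y[61:120])^2)    # held-out error: typically explodes

The in-sample fit looks splendid; the held-out test is where the splendor usually dies.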

Dr. Strangelove
March 25, 2018 6:19 am

“One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”
This is true for theoretical and experimental physics. Unfortunately it is not true for atmospheric physics. You can have both a physical picture and a mathematical formalism, but not a model that fits observations perfectly without free parameters. The culprit is chaos. Even a completely deterministic system can still be unpredictable in the long term.
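A minimal R sketch of that last sentence, using the logistic map as a stand-in for any chaotic system (the map and its parameter are illustrative assumptions, not a climate model):

r <- 3.9                     # a parameter value in the chaotic regime
x <- 0.5; y <- 0.5 + 1e-9    # two initial states differing by one part in a billion
for (i in 1:50) {
  x <- r * x * (1 - x)       # completely deterministic update rule
  y <- r * y * (1 - y)
}
c(x, y)                      # after ~50 steps the two trajectories bear no resemblance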

March 25, 2018 6:27 am

Here is one reason I do not believe GCMs. If climate is chaotic, then the more time elapses, the greater the chance that a computer projection will fail. Here is the kicker: this applies to both plus AND minus time. When I see a GCM match history regardless of how far back you go, it has obviously been tuned to give a false impression! If the projection into the past is false, then the projection into the future must also be considered false!

Kristi Silber
Reply to  Jim Gorman
March 25, 2018 4:46 pm

” When I see a GCM match history regardless of how far back you go, it has obviously been tuned to give a false impression! ”
Why is this obvious? How close is the match? I think it’s very risky to say it is “obvious” that someone has done something wrong without understanding why they did what they did. This is a very consistent pattern in climate “skepticism.”
(I’m by no means certain about the following; hopefully someone with expertise will come along and correct my errors, but this is how I understand it.)
There is a chaotic ingredient in weather, but it seems to me that climate modeling is a little different. It’s not trying to predict weather; it predicts trends in averages. So it’s not going to predict that a hurricane will happen in 2030, but it may predict that hurricanes will get more intense on average over time. This makes the chaos of weather less of a problem. The unknown, unpredictable factors like volcanoes can be represented by a stochastic parameter: one that behaves randomly, but follows a normal distribution.
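For what it’s worth, here is a toy R sketch of that idea (the trend, shock sizes and probabilities are invented numbers, chosen only to illustrate the point): a fixed warming trend plus random “volcano” shocks gives jagged, unpredictable individual runs, yet the underlying trend is still recoverable on average.

set.seed(3)
slope <- 0.01                                              # the "true" trend per step
runs <- replicate(100, {                                   # 100 realizations of 100 steps each
  volcano <- -rbinom(100, 1, 0.05) * runif(100, 0.1, 0.5)  # occasional random cooling spikes
  slope * (1:100) + volcano + rnorm(100, sd = 0.1)
})
slopes <- apply(runs, 2, function(y) coef(lm(y ~ seq_along(y)))[2])
mean(slopes)                                               # close to 0.01, despite any single jagged run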

Reply to  Kristi Silber
March 26, 2018 5:38 am

Because every time it is run, it ends up with the same historical trend. This would be impossible if the “program” were following a truly random and chaotic pattern. Consequently, the modelers are attempting to fool people into thinking their outputs are accurate. They are not. At best, they are projections attached to predetermined information.
The modelers have taken the criticism that their programs do not project backwards accurately and changed them so their output is predetermined. This may look good but it is also inaccurate. The real criticism is that our present knowledge of the atmosphere is inadequate and consequently we cannot design software that addresses the issue. They have unwittingly confirmed the criticism by developing a “fake” way of dealing with it. It also confirms the criticism that future projections are probably inaccurate also.

Kermit Johnson
March 25, 2018 6:28 am

I’m glad to see that there is some attention being paid to climate models and curve-fitting. I have always said that prior to any scientist being allowed to publish any climate model, they first should spend a few years making models of something like the price of corn – or wheat – or cattle. Along with making the models they should be required to actually bet their own money on the results of their models. This is what is so great about modeling markets – there is no “committee” that decides whether they are right or wrong. A quick look at their statements is all the feedback they need.
Isn’t this also why we are in such a mess economically? We now have academics making models – and expecting the markets to behave the way their models say they should.
I’m surprised here, however, that there isn’t more discussion about sensitivity factors in these models. Think about it – each model has its own sensitivity factor.

JRF in Pensacola
March 25, 2018 6:40 am

Willis, a very interesting article. Some clarification, please, for the Great Unwashed.
Are you saying that (some, many, most, all) climate models do not have good physical underpinnings (Fermi’s “clear physical picture of the process”) and are simply making associations rather than correlations?
Could you give an example(s), if any, of models that have a good physical foundation (even if their output is questionable perhaps because of an incorrect input variable or variables)?
I know that Joe Bastardi over at Weatherbell will comment about the physics of the GFS compared to the European. Is that in the same vein as your article?
Thanks.

Reply to  Willis Eschenbach
March 25, 2018 1:31 pm

JRF, see my guest post here some time ago, The Trouble with Models, for illustrations and details backing up Willis’s general (correct) response. See also, for a different and very explicit critique of CMIP5, the essays Models all the Way Down, Humidity is still Wet, and Cloudy Clouds in my ebook Blowing Smoke (foreword from Judith Curry). The latter two essays cover in more detail the two biggest parameterization problems (convection cells with rainout, and clouds) that are covered more generally in the first.

JRF in Pensacola
Reply to  Willis Eschenbach
March 25, 2018 2:42 pm

Willis, thank you for your reply; upon your mention, I have dug into the Navier-Stokes equations. Obviously not my field of expertise. And, Ristvan, thank you for your direction; I will have more reading to do.

Yogi Bear
March 25, 2018 6:43 am

“The problem is that they are using as an input something which is part of the target. The NAO and the PDO are each a part of what makes up the global temperature average. As a result, it is circular to use them as an input.”
It could be circular not to regard the NAO as an input, if it is affected by solar variability regardless of the global mean surface temperature.
I think that you could do a better post on Scafetta’s mathurbations, and list the components for each of his beat-period products, so it’s clearer to all how physically ridiculous they are. His root periods are the orbital period of Jupiter, and half of the synodic period of Jupiter and Saturn. The beat period of those two is ~61 yrs. He then takes the mean of those two root periods and, with that mean, makes more beat periods against the two root periods. Do the maths on that and you’ll see that his 115-yr beat should actually be 112 yrs.

Richard M
Reply to  Yogi Bear
March 25, 2018 7:20 am

In reality neither the PDO nor the NAO indices are temperatures. Willis is wrong. They are NOT “part of what makes up the global temperature average”. And, even if they were, it would NOT mean they were unimportant to what drives the global temperature.

Chimp
Reply to  Richard M
March 25, 2018 10:54 am

Willis,
Thanks to the very cold Humboldt Current, the central, populous coast of Chile is very windy; hence the frequent destructive fires in Valparaiso and in the forests inland.
The California Current is chilly, but the Humboldt comes straight from Antarctica. It’s colder than the Labrador Current. It carries penguins to the Galapagos Islands on the Equator. It makes the Atacama Desert of Chile and southern Peru the driest places on earth, rivaled only by Namibia, which endures another cold western boundary current from Antarctica.

Yogi Bear
Reply to  Richard M
March 26, 2018 7:50 am

“As a result, if you use the PDO or the NAO as inputs, you are using parts of the very thing that you are trying to model … and that’s not allowed.”
If there is a solar influence on the NAO, it would be a proxy for an input; for example, an increase in negative NAO during solar minima.

Don K
March 25, 2018 6:45 am

“Please note that I don’t even have to use data. I can fit the historical temperature record with nothing but sine waves”

Of course you can. That’s Fourier (right?), and you can get (almost) any curve you want with enough waveforms. There’s a somewhat comprehensible discussion of that in Chapter 50 of the Feynman Physics Lectures. Of course you need phenomena that are periodic (tides, sunspot cycles, …) and they need to be at least roughly sine-cosine wavish. And they need to actually be applicable. And you’ll always get an answer even if, as appears to often/usually be the case, the curve you’re looking at isn’t driven by tidy cycles. That’s likely why those who apply Fourier analysis to financial markets generally do not end up rich.
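As a hedged illustration of just how easy that is, here is a short R sketch: fit a handful of sine/cosine pairs to a pure random walk, a series with no cycles in it at all (the seed, length and number of harmonics are arbitrary choices):

set.seed(42)
n <- 120
t <- 1:n
walk <- cumsum(rnorm(n))                 # the "temperature record": cycle-free by construction
X <- do.call(cbind, lapply(1:5, function(k)
  cbind(sin(2 * pi * k * t / n), cos(2 * pi * k * t / n))))  # 5 sine/cosine pairs
summary(lm(walk ~ X))$r.squared          # usually an impressive-looking R^2, meaning nothing

The R^2 says only that sinusoids are flexible, not that any cycle drives the series.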

Don K
Reply to  Don K
March 25, 2018 9:08 am

Willis: Afterthought — I assume that your objection is to numerology, not to applying reasonable cyclic adjustments where they seem appropriate. — Seasonal corrections for example?

Juan Slayton
March 25, 2018 7:19 am

For any who, like me, will want to read the rest of Dyson’s remarks, here is the link:
http://lilith.fisica.ufmg.br/dsoares/fdyson.htm

Reply to  Juan Slayton
March 25, 2018 12:19 pm

Thank you, Juan!

phil salmon
March 25, 2018 7:29 am

“One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”
Willis, would you apply this condition to attempts to explain a biological phenomenon? Such as, for instance, why the European eel (Anguilla anguilla) persists in crossing the Atlantic to the Sargasso Sea to spawn each year?

Brett Keane
Reply to  Willis Eschenbach
March 26, 2018 3:33 am

Willis – Wegener? Brett

Reply to  phil salmon
March 25, 2018 1:13 pm

Exactly. Incommensurability – great word, thanks!
Dyson and Fermi can afford the “luxury” of requiring precise mathematical and physically mechanistic proof when looking inside the atom or at the big bang.
However, with eels and love, a level of complexity is reached where such strict requirements can’t be applied.
My point is that climate has such a level of complexity. Not only does it involve chaotic nonlinear pattern-formation processes, it is also significantly affected by living organisms. Ever since the great oxygenation event a little over 2 billion years ago, living organisms have had a massive effect on climate. Thus, for example, attempts to model CO2 effects on climate are flawed if they don’t include the greening effect of CO2, which enhances transpiration and the hydrological cycle in arid and marginal regions.
But I agree that curve fitting to astrophysical processes is invalidated, as you point out, by the ease of fitting data to even a limited “toolkit” of proposed oscillating forcers. This approach also makes another fundamental mistake in assuming the climate to be essentially passive, such that all its ups and downs are driven by some external astrophysical agent. This is the same error as alarmist CAGW, which requires a completely passive climate in which all warming or cooling is imposed by atmospheric, and always human-contributed, gases or particles. Both of these are wrong. The climate is active, not passive, and changes by itself, in and of itself. Optionally with a little help from outside.

Kristi Silber
Reply to  ptolemy2
March 25, 2018 5:12 pm

Ptolemy2
“Thus for example the attempts to model CO2 effects on climate are flawed if they don’t include the greening effect of CO2 causing an enhancement of transpiration and the hydrological cycle in arid and marginal regions.”
I’m pretty sure most new models do this. Some may just include a parameter for the vegetative sink for CO2, but I think there are also parameters representing vegetation and land use.
“This is the same error as alarmist CAGW which requires a completely passive climate and all warming or cooling is imposed by atmospheric and always human-contributed gasses or particles. Both these are wrong. The climate is active, not passive and changes by itself, in and of itself. Optionally with a little help from outside.”
I always wonder who people mean by “alarmist CAGW.” Is this the greenies? The media? Or do you mean climate scientists? At any rate, I doubt there are many people so stupid that they believe all variation in weather is due to human-induced causes. Why do you say such things? Do you actually believe that? Why? Do you realize that just by saying something or reading it again and again it will seem to be true, even if you weren’t sure to begin with? Obviously there aren’t a lot of people you have to convince around here. I’m honestly puzzled why people say the same things again and again – not just any things; the comments have to be DEROGATORY.

Reply to  ptolemy2
March 25, 2018 11:27 pm

Kristi
It was not my intention to be derogatory. I think “alarmist CAGW” is a reasonable description of the dominant body of opinion in the media, politics and academia: that recent warming is anthropogenic and a cause for alarm. There is nothing wrong with alarmism if there is a real threat on the horizon. Churchill was right to be alarmist about political and military developments in Germany in the 1930s. People are right to be alarmist about antibiotic resistance.
You are right that progress is not helped by derogatory language and labelling. The climate research community is moving toward acceptance, for instance, that climate is not “passive”: that oceanic circulation shifts can cause climate changes on 10-, 100- and 1000-year timescales without necessarily any outside forcing. This knowledge was of course always there in the oceanography literature; it’s just a question of connections between disciplines.
However, it has to be said that a lot of communications about climate research do appear to assume a passive climate, albeit not explicitly. Did climate warm? It has to be CO2, or maybe soot. Did it cool? It has to be particulate pollution or a volcano or two (at least the latter is not anthropogenic).
For instance, a Canadian academic a few years back published a paper asserting that 99% of recent warming was anthropogenic, and that “natural” processes could be restricted to no more than 1%. Such a statement had clear political implications, which was evidently intentional. However, it cannot have been made on the basis of an understanding of “natural”, ocean-driven, multidecadal climate variability.
But yes – cutting out inflammatory language is perhaps the single thing that would most advance the climate debate and the research process. All genuine attempts to advance understanding should be respected, from whichever direction they come.

MarkW
Reply to  ptolemy2
March 26, 2018 12:58 pm

Kristi, just saying “I’m pretty sure most new climate models do this” doesn’t cut it.
These models have a long history of excluding critical factors.

Reply to  phil salmon
March 25, 2018 1:14 pm

ptolemy2 is phil salmon btw. Forgot that I was still anonymous on this pc.

Brett Keane
Reply to  ptolemy2
March 26, 2018 3:44 am

Kristi, do not play the sympathy card. Climatism has been and is a practitioner of abuse and mendacity. For reasons not really concerned with climate, which is just a tool. CO2 and ridiculous CMIPs, now disavowed by IPCC even, but still used for meaningless scenarios and projections. Not science but politics, so leave off wasting our time please. Your steed has expired. Brett

Kristi Silber
Reply to  ptolemy2
March 28, 2018 12:33 am

Phil,
Thanks for the comment.
“I think ‘alarmist CAGW’ is a reasonable description of the dominant body of opinion in the media, politics and academia, that recent warming is anthropogenic and a cause for alarm.”
I think this is widely seen as derogatory. What is an alarmist, anyway? Someone who is concerned about the evidence that things are changing, and about the changes to come? I can see talking about some in the media as alarmist, saying the latest storm is a sign of the coming devastation, but I think it’s destructive when the term is applied broadly to the scientific community. Then there is the “catastrophic” part. What does this mean, exactly? It seems like it’s intentionally exaggerated. What about all those who are simply concerned by the potential for major disruption to human and biological systems? Much of my concern is based on the uncertainty of what will happen through destabilization of communities that have adapted together to their environment. This is what I know most about, so it’s what I think about, but if it’s ever addressed here it’s in a derisive way.
There is far too much knee-jerk dismissal of science that isn’t understood by those dismissing it. There is very little healthy skepticism among many who comment on WUWT; instead, denial is fostered. It has reached the point that research in fields that have nothing to do with climate modeling is dismissed just because it mentions a model, even if that model is just a multivariate regression.
The climate debate has become one of politics vs. science. I’d go so far as to say the “skeptic” movement is anti-science. It promotes more misunderstanding than understanding.
“For instance a Canadian academic a few years back published a paper asserting that 99% of recent warming was anthropogenic, “natural” processes could be restricted to no more than 1%.”
This is obviously foolish. I’m the last to argue there aren’t fools out there. Al Gore is one.
” The climate research community is moving toward acceptance for instance that climate is not “passive”,”
I don’t know what you mean by this.
Please don’t be offended by what I say here. These are my perceptions. It matters to me much less what our carbon policy is than that the scientific community has such widespread public distrust.
Regards,
Kristi

Kristi Silber
Reply to  ptolemy2
March 28, 2018 12:44 am

Brett – sympathy card? You think I want your sympathy? What an absurd idea!
Yep, science has been crushed under the weight of politics. Maybe if hard-core deniers like you were less influenced by politics, you might actually consider the science without bias.

paqyfelyc
Reply to  ptolemy2
March 28, 2018 2:49 am

@Kristi Silber
March 28, 2018 at 12:33 am
“what is an alarmist, anyway? Someone who is concerned about the evidence that things are changing, and the changes to come? ”
Not just that, but someone who denies that things were changing before and will keep changing anyway (implying that all changes are man’s doing and, hence, that what man did, he can stop doing and undo), AND that these changes are not just bad, but DOOM (implying that we cannot balance the good and the bad; we just must go backward to a previous era, before man CO2-sinned).
” Much of my concern is based on the uncertainty of what will happen through destabilization of communities that have adapted together to their environment.”
Then you are a creationist. Adaptation is not a state, it is a process. Living communities cannot be destabilized, because they are not stable in the first place. Most species exist only because change happens, and they themselves prompt the change that will destroy them (or at least displace them or put them into dormancy).
The poster story of nature-conservation failure is how man almost destroyed some redwoods (Sequoia sempervirens) by trying to protect them from fire. Trouble is, fire destroys their competitors more than it hurts the redwoods, so fire shouldn’t be suppressed. Likewise, man tried to protect rare marsh species through wetland conservation. Complete failure, as these species depended on an ecological succession of drier and wetter phases.
We don’t make this mistake anymore. You still do. Change not only happens, it is necessary for biodiversity.
“The climate debate has become one of politics vs. science. I’d go so far as to say, the “skeptic” movement is anti-science. It promotes more misunderstanding then understanding.”
Oh. Well, just look at this: [linked IPCC figure]
This is an OFFICIAL IPCC figure.
It presents “observations” vs. “model results” for natural forcing. There are NO observations of natural forcing, and no way to observe it {and the very notion of “natural forcing” is just … WTF??? Just think about it. Nature is forcing itself?}.
It likewise presents “observations” vs. “model results” for anthropogenic forcing. Again, there are NO observations of anthropogenic forcing and no way to observe it. Besides, there is just no reason for anthropogenic forcing to be so jerky; it should be a nice smooth curve, copy-pasted from the CO2 concentration at MLO.
Such is the state of “climate science”: it calls “observations” things it did not observe and has no way to observe.
And such is your state: you believe this is science, and call anti-science anyone who demands science, that is, proper data not made out of improper modeling.
Who promotes more misunderstanding than understanding? Rhetorical question. You, obviously.
“Please don’t be offended by what I say here.”
Your tone is very polite. Trouble is, such nonsensical belief, and calling pseudoscience “science”, is offensive all by itself to scientific minds like those of most denizens of WUWT.
“It matters to me much less what our carbon policy is than that the scientific community has such widespread public distrust.”
Well, I am pissed off that so many people believe in bullshit like organic food, astrology, electromagnetic hypersensitivity, homeopathy, and the dangerousness of GMOs, palm oil and vaccines, etc., despite scientific proof to the contrary (BTW, such anti-science beliefs are very well correlated with CAGW belief; does that surprise you? Not me). Now, I also understand why they do, and I recognize their right to act according to their beliefs. I just don’t recognize their right to have their beliefs turned into law. You see a pattern here?
Remember, Feynman said “Science is the belief in the ignorance of the experts.” You know that Newton was wrong, and that Einstein was wrong and didn’t trust himself (which made him SO scientific, after all).
I don’t trust any man or any theory, unless and insofar as it produces some actually working stuff: planes, engines, solar panels and the like. I trust the technicians who say they built this stuff using a theory, and if it works, well, there is truth enough in the theory. No such thing in “climate science”.
BUT. I believe in science, which is a process. I don’t trust the “scientific community”. Moreover, I cannot trust a community that didn’t kick out Michael Mann the way the medical community kicked out Jacques Benveniste (just another example of a man doing both very good science and very bad science; in contrast to M. Mann, who never did any good science).
The scientific community deserves such widespread public distrust. When it starts being trustworthy, then you can blame the public. Not before. Won’t happen, unfortunately.

March 25, 2018 7:36 am

I have watched people at WUWT convincingly show that using the thickness of tree rings of certain trees as a proxy for local atmospheric temperature is scientifically false.
They are correct in pointing out that for some of the trees used as temperature proxies, the thickness of their tree rings is not solely dependent upon atmospheric temperature. This conclusion is based on the common sense idea that annual growth rate of many trees can be influenced by factors such as soil moisture, cumulative hours of exposure to sunshine (which is affected by cloudiness), maximum or minimum daily temperatures (as opposed to mean daily temperatures), total rainfall etc.etc.
They are also correct in pointing out that some of the scientists using tree rings as long-term temperature proxies have used dubious methods to amalgamate and process their data (e.g. hide-the-decline Michael Mann).
However, these same people at WUWT have then made the sweeping statement that ALL use of tree rings as temperature proxies is suspect. Anyone with any idea of how tree-ring temperature proxies work knows that this last leap in logic is completely false. It is easy to show that there are some circumstances where the tree-ring widths of specifically selected species do in fact depend primarily upon nearby mean sea surface temperatures. That this is indeed the case can be shown by comparing modern instrumental temperature records to measured tree-ring widths.
Unfortunately, these “experts” have convinced the majority of the mob that the use of tree-ring widths as temperature proxies is scientific anathema. They have been so successful at doing this that it has now become virtually impossible to talk about this diagnostic in a sensible manner without being shouted down.
The same is now becoming true of using curve fitting as a valid diagnostic tool. Of course, there are many ways to use curve fitting that can fool the user into believing that they have found some magical window that allows them to clearly see the underlying physical principles of a natural phenomenon. This is particularly true when curve fitting is used as a diagnostic in climate science because of the inherently complex nature of the physics of the climate system. Many of the systems that are under study are inter-dependent upon other parts of the climate system and so it isn’t long before a hypothesis or model has so many free parameters that it could just about fit any physical system through a simple adjustment of the multitude of fitting parameters.
However, it is logically false to claim that, because these dangers exist, it is virtually pointless to use curve-fitting methods to try to understand the underlying climate physics.
For example, take the 9.1-year cycle that is clearly detected in world mean temperatures. Wavelet analysis shows that this 9.1-year cyclical pattern is present in the temperature record from 1870 to 1915, then disappears between 1915 and 1960, before reappearing after this date.
These observational facts allow us to speculate as to why this might be the case.
One hypothesis that has been put forward is that the effect of lunar tides upon the Earth’s climate system may be responsible for this cyclical signal. This is based on the simple mathematical fact that if you have two rates associated with the tidal forcings [in this case the 8.85-year lunar apsidal cycle (LAC) and the 9.3 (= 18.6/2)-year half lunar nodal cycle (LNC)], they will impact the climate system with a period that is equal to the harmonic mean, giving:
2* (8.85 x 9.3) / (8.85 + 9.3) = 9.069 years = 9.1 years.
This is just the old mathematical problem: If Bob takes 4 hours to dig a hole and Fred takes 2 hours
to dig a hole, how long does it take Bob and Fred working together to dig a hole?
Answer: It is the harmonic mean of their two rates for digging a hole i.e.
2 * (4 x 8) / (4 +8) = 5.33′ hours
Hence, it is not unreasonable to propose that the lunar tides may play a role in influencing the world’s mean temperature.
The question then becomes; “if this is the case, then how could the lunar tides accomplish this task?”
So here is a simple application of the curve-fitting technique that can validly be used to help a researcher further investigate the underlying physics.
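The lunar arithmetic above is easy to check; a two-line R snippet (nothing here beyond the harmonic-mean formula already given in the comment):

harm <- function(a, b) 2 * (a * b) / (a + b)  # harmonic mean of two periods
harm(8.85, 9.3)                               # = 9.069 years, i.e. the ~9.1-year signal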

Reply to  Willis Eschenbach
March 25, 2018 2:39 pm

My apologies. The comments that I made were not specifically directed at you or anyone else in particular. They are general comments which apply to the overall tone of the discussion on these issues.
All I am trying to say is that it is very easy to devalue a useful scientific technique or method by pointing out its flaws and inconsistencies. Many of the criticisms that are given are valid. However, I believe that it is illogical to then conclude that little of real value can be obtained by using these techniques. I am not specifically accusing you of saying that; however, I fear that some of those who are reading your post are erroneously drawing this false conclusion. I believe that you are too experienced a researcher to make such a silly mistake. However, I am left with the impression that some of the other commenters are not being as discerning as you.
If you read the last 1/3 of my post, you will see that I give a specific example of a case where curve fitting can actually be used to guide the direction of a research investigation. I believe that it can be useful to see what cyclical frequencies are present in the observational data, and that knowledge of these frequencies can be used to draw a limited inference about the underlying physics in some cases.
I will not respond to your personal attacks specifically directed at me, nor to your smearing of Nicola Scafetta, other than to say that it is a serious character flaw in an otherwise sterling researcher and scientist.

Kristi Silber
Reply to  Willis Eschenbach
March 25, 2018 5:23 pm

Willis, I think you overreact. I saw no indication of accusation of wrongdoing, simply discussion in a general way. Not a rant. You are too quick to take offense, and to give it. It must be hard addressing all these comments, but no one’s trying to tear you a new one (that I’ve noticed). Your efforts are appreciated.

Kristi Silber
Reply to  Willis Eschenbach
March 25, 2018 5:33 pm

” However, you have a nasty, ugly habit of smearing excrement on everything in range without making a single verifiable or falsifiable claim.”
Willis, you are freaking paranoid. This person is trying to be nice and not make any personal attacks, not insult anyone. He’s talking generally and may not even know who gave him his impressions. Can you not just have a conversation? Even after he apologized for giving you the wrong impression, you have to insult him:
“And I’m left with the impression that you are a craven coward who is using his anonymity to make ugly accusations without specifying who the hell you are talking about.”
It’s you who are making ugly accusations.

Kristi Silber
Reply to  Willis Eschenbach
March 25, 2018 7:54 pm

Willis,
Thank you for your reply. I was a bit worried about what I may have brought upon myself.
I see where you are coming from, and you made some good points. I had to go back and read the posts. Perhaps if you hadn’t stopped reading just when you did it might have made more sense.
It’s none of my business, in a sense – but on the other hand, exchanges like that might set the tone, make people afraid to comment. And the thing about flinging and sticking excrement is just too graphic. It would be great if that image weren’t around again. Please?
“Thanks, Kristi. Me, I think you under-react”
Huh. I try hard to keep my cool even though I have to wade through scores of comments I find offensive or nonsensical. I don’t care half as much about climate change as I do about the fact that scientists have had their raison d’être stolen. Without the public’s trust, science loses its value to society. I believe with 98.6% certainty that the distrust is not merited, and that makes me angry. It’s not easy being among the 1.4% minority around here. Not sure why I do it.
I have an article in mind, but it would be very unpopular. I don’t like being attacked, either. Thanks for being civil.
Regards,
Kristi

Ian Wilson
Reply to  Willis Eschenbach
March 26, 2018 8:47 am

My name is Ian Wilson. Because I use blogger.com to post here at WUWT, it automatically uses my blog site name on blogger.com, which is astroclimateconnection. I log in to the WUWT comments section using this method because it is convenient and because it allows my comments to be distinguished from another Ian Wilson who posts here from time to time.
I agree that Nicola Scafetta has been all over the place with his claimed solar-planetary cycles, but there is method in his madness. Most of the changes in the cycle lengths have come about because of his evolving formulation of what he perceives to be the most likely explanation for what he is observing in the data. I am sure that if I were to review your published research work here at WUWT, it would include ongoing changes to some of the claims that you make. These changes are to be expected in an ongoing investigation and show that the researcher is reformulating their beliefs and opinions as the evidence unfolds.
Again, I will ignore the personal attacks [fool, slimy amoral nature, astroboy, astroturf, astroslug, etc.] and try to appeal to your better nature.
It is impossible to give a specific example where someone has said that tree-ring widths are not trustworthy. However, it cannot be denied that if someone tries to discuss a scientific result on WUWT that relies upon using tree-ring widths as a proxy for atmospheric temperatures, there is usually a spray of comments that pooh-pooh the findings with the blanket statement that “tree rings can’t be trusted.” This is not your fault, nor is it Anthony’s fault that this is happening. However, it is hard to deny that these dismissive attitudes are present when this issue comes up.
All I am trying to do is express my fear that a similar pattern of events could inadvertently result from this particular post, even though you don’t intend it. You [and most of your readers] and I know that curve fitting and spectral analysis are scientifically valid techniques if they are done properly.
I think that on this point we can get some agreement.

Reply to  astroclimateconnection
March 25, 2018 3:24 pm

Sorry, my specific example should have read:
This is just the old mathematical problem: If you travel at 10 mph from town A to town B and 20 mph on the return trip, what is the average speed?
Answer: It is the harmonic mean of the two speeds i.e.
2 / ((1/10) + (1/20)) = 13.33 mph

Loren Wilson
Reply to  astroclimateconnection
March 25, 2018 3:44 pm

I think there is a math error in your digging example. If Fred takes two hours alone, Bob helping will decrease the time by at least a little, not increase it to more than twice Fred’s unassisted time.

MarkW
Reply to  astroclimateconnection
March 26, 2018 1:04 pm

As you point out, tree rings are affected by many things, not just temperature.
The list of other things is a lot longer than the list you give.
There are other problems.
Tree rings only form during the growing season, so you know nothing about the rest of the year.
Also trees have optimum temperatures. Because of this, both temperature increases as well as temperature decreases can cause decreases in ring growth.
Since it is impossible to filter out all of these other things, the only thing tree rings measure is the quality of the growing season.
It is not “unscientific” to proclaim that tree rings can NEVER be used as temperature proxies.

March 25, 2018 7:38 am

My comment seems to have gone to moderation for some GFR!

Paul Linsay
March 25, 2018 8:16 am

Your Freeman Dyson story reminds me of another story about Fermi told to me by a very senior member of our group when I was a young grad student. He’d made a very careful series of nuclear measurements and then fitted the latest theory to them. He took the data plot with error bars plus the fit plotted over the data and showed it to Fermi. Fermi laid the plot on his desk, pulled a ruler out of a drawer and drew a straight line through the data. “You will never convince me that the theory is any better than that.”

March 25, 2018 8:20 am

Nice post, Willis. The salient curve-fitting point applies to a LOT more than just Scafetta. Wadhams’ arctic ice and Amstrup’s polar bears come readily to mind. And your point can be broadened to a lot more modeling and statistical practices in ‘climate science’: homogenization, sea level rise (Nerem), parameter tuning, …
As Mark Twain said, “There are lies, damned lies, and statistics.” Or, to quote physicist Ernest Rutherford, “If you need statistics to make sense of your experiment, you should have done a better experiment.” Or, to more optimistically quote statistician George Box, “All models are wrong, but some are useful.” The climate problem with Box’s observation is: which?

Wim Röst
Reply to  ristvan
March 26, 2018 2:59 am

ristvan March 25, 2018 at 8:20 am: “All models are wrong, but some are useful.” The climate problem with Box’s observation is: which?
Willis Eschenbach: “Here’s the bar that you need to clear: ‘One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating.’”
WR: We need a clear physical picture of the processes that create our future weather and future climate. That is one. We also need to know all the interrelations between the factors that play a role, and we need to quantify everything. And to quantify well, we need appropriate data over long periods.
We have none of these.
Most ‘models’ are part of the created ‘virtual world’ in which ‘the science is settled’. But we need real-world data and we need to understand real-world processes. Plus a theory that makes sense.
Great post Willis!

March 25, 2018 9:02 am

Willis,
“That’s one natural and one anthropogenic forcing, ”
I have to quibble with this one. How can snow-related albedo be anthropogenic? Orbital variability represents a natural change in forcing, while snow-related albedo is the natural response to a change in forcing. Considering snow-related albedo anthropogenic is tacitly acknowledging that CO2 is the primary forcing influence on the climate, which it absolutely is not; in fact, it’s not even properly called forcing.
Only the Sun forces the climate system, while changing CO2 concentrations and changes to snow albedo are changes to the system. A change to the system can be said to be EQUIVALENT to a change in forcing with the system kept constant. This is what is meant when claims are made about CO2 ‘forcing’. Note as well that the pedantic model adds equivalent forcing from CO2 to a system already modified with increased CO2 concentrations, counting the effect twice.
This is just another of the many levels of misdirection, indirection and misrepresentation between the controversy (alarmism vs. sanity) and what’s actually controversial (the climate sensitivity factor).

Reply to  co2isnotevil
March 25, 2018 9:27 am

Snow and ice albedo rises substantially during the ice ages as a result of orbital and tilt variations, eventually resulting in -5.0C temperature changes.
If that explains the ice ages, why would that not work for today’s climate, albeit with much smaller changes?

Reply to  Bill Illis
March 25, 2018 10:21 am

Willis,
Yes, ice albedo has an effect, but it’s not feedback and not anthropogenic. It’s the system’s natural response to forcing, where forcing is exclusive to solar input. Changing CO2 concentrations also changes the system, but the size of the influence this has, which is at the root of the controversy, is far lower than claimed by the IPCC and has little effect on the average temperature or on where the average 0C isotherm sits in each hemisphere.
Note that in the ice ages, a far larger portion of the planet was covered in ice, and its melting had a proportionally far larger influence on the planet’s temperature. The magnitude of this influence decreases as the 0C isotherm moves towards the poles. If you were to consider all of the ice on the planet disappearing, the resulting increase in solar energy would only be about half of the emissions required to increase the surface temperature by 3C. This is because 2/3 of it is moot, as clouds are already reflecting energy. This effect is also evident in the seasonal response of the planet, where surface snow extends nearly as far as the ice age glaciers did.
The relevant effect of ice and snow is to change the role of clouds: when ice is present, clouds only trap heat at the surface; when the surface is ice free, clouds both trap heat at the surface and reflect away additional energy.

Stan Robertson
March 25, 2018 9:35 am

Thank you, Willis. Very well done!

Henri Masson
March 25, 2018 10:02 am

Willis,
The problem to solve actually consists in starting from different time series (which may be linked in some way yet to be discovered) and finding (by natural or artificial intelligence) the structure of a conceptual model that is able to reproduce the time series as well as possible. These time series are not linked to any parameter; they are (imperfect) indicators of the behaviour of some elements of the system. Once such a conceptual (hypothetical) model is defined, a causality analysis (above and beyond the Granger causality approach; see Judea Pearl’s book on “Causality”) can be undertaken, and the model can eventually be cleaned of some insignificant links. To the best of my understanding, the most comprehensive and likely HYPOTHETICAL model that could be built looks like this one: https://www.dropbox.com/preview/Climate/meta-model%20climate_20180115.pptx. Remember, such a model is highly non-linear, and a tiny fluctuation in one of the parameters could have a significant effect. Also, from “common wisdom”, it is understood that a “primary cause” must send one or many “causal arrows” but not receive any. An “effect” must exhibit the symmetrical characteristics. This is NOT the case for temperature AND CO2 (or other GHGs). They receive and emit many causal arrows in the model; they belong to the category of “relay variables”, embedded in several (in)direct feedback loops. In system analysis it is recommended NOT to try and modify such relay variables, as the effect is either damped out by a strong stabilizing feedback loop or, on the other hand, leads to an outcome that is highly unpredictable. Relay variables take part in several feedback loops, which can be stabilizing or not. For the temperature, paleoclimatic evidence shows that the climate system is in a chaotic mode, spinning around two strange attractors in the phase plane: the “moderate” and the “glacial” state:
https://www.dropbox.com/preview/Climate/Phase%20plan%20analysis%20of%20Vostok%20data.pptx All other fluctuations observed are actually nothing else than orbital fluctuations around those attractors. It seems obvious to me that the climate system is remarkably stable (the temperature feedback loops must be very effective) and that it simply switches between these two modes. Now, coming back to causality: if you take a look at the first figure linked, you will discover that in this (hypothetical) model the causes are at the top of the figure: cosmic rays, gravity and electromagnetic planetary fields, meteorites. And, I am afraid, these “causes” are not tunable by whatever carbon tax, energy transition or efficiency program. It is also possible that such a complex system generates endogenous fluctuations resulting simply from its structure. In a nutshell, such a “meta-model” leads to the conclusion that climate fluctuations are natural and of a chaotic nature, and thus not predictable at a longer time horizon (certainly not at a century time scale, as the IPCC is claiming to do with its projections).

Kristi Silber
Reply to  Henri Masson
March 25, 2018 7:16 pm

Henri,
Interesting post. Unfortunately, I don’t have a dropbox account and couldn’t see your figures, which sound interesting.
“In a nutshell, such a “meta-model” leads to the conclusion that climate fluctuations are natural and of a chaotic nature, and thus not predictable at a longer time horizon (certainly not at a century time scale, as the IPCC is claiming to do with its projections).”
I don’t think this is quite true. There are constraints to the behavior of climate: patterns, interactions, feedbacks, lag times and buffers that tend to keep things from getting unstable. Not everything is unpredictable or chaotic; some solar effects on climate are predictable, it’s just that they are sometimes swamped by other events or interactions. You could have a series of volcanoes swamp a change in W/m2, for example.
Predicting averages and trends seems to me very different from predicting individual weather events.
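A small R sketch of that last distinction, under the assumption that the chaotic logistic map can stand in for “weather” (purely illustrative, not a climate model): individual trajectories from nearly identical starts diverge quickly, yet their long-run averages agree closely.

r <- 3.9
run <- function(x0, n = 10000) {   # iterate the chaotic map from a given start
  x <- numeric(n); x[1] <- x0
  for (i in 2:n) x[i] <- r * x[i - 1] * (1 - x[i - 1])
  x
}
a <- run(0.5); b <- run(0.5 + 1e-9)
c(a[50], b[50])                    # the "weather": already completely different
c(mean(a), mean(b))                # the "climate": long-run averages nearly identical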

K. Kilty
March 25, 2018 10:54 am

This tendency to see reality in statistics is a problem throughout our society. I have arguments with the PC folks on campus who insist that there being only about 15-20% women and minorities in mechanical engineering is “proof” of some sort of discrimination which they currently explain as “chilly climate”. But they can never tell me anything specific about this chilly climate. They can’t point to a mechanism, or any method by which it works, nor who is involved, or when it occurs, or anything tangible. As nearly as I can see we do somersaults trying to recruit more women and minorities right up to giving them unrealistic assessments of their capabilities and expectations for future success.
Curve fitting climate outcomes is a likelihood sort of analysis: statistical evidence perhaps, but without a solid physical model I don’t find it all that persuasive. Back before the Voyager fly-by missions to Jupiter and Saturn, I had a short correspondence with some astronomers at Cornell who had found the radii of the moons of the giant planets through statistical measures of occultation light curves. While they used reasonable models of limb darkening, they had no way to handle background variations in the light curves (somewhat like a parameterization problem). I tried to illustrate by way of examples that their estimates of the radii could be greatly in error, and those estimates did indeed turn out to be quite wrong after the fly-bys. People just will not apply much skepticism to their favorite models.

Don K
Reply to  K. Kilty
March 25, 2018 2:06 pm

Curve fitting can work if the wind is fair and the force is with you. Kepler figured out that the planets move in elliptical orbits with the Sun at one of the foci by curve fitting. It was Newton who later (sort of) figured out why. (We still don’t seem to really understand squat about gravity, although we can characterize its effects very satisfactorily.) But I think Kepler’s work was a rare exception, where a single “easily” analyzed natural phenomenon almost completely controlled the situation.
I put “easily” in quotes because what Kepler did was anything but easy given the mathematical and theoretical tool kit he had to work with.
In general I think Willis is dead right. It’s reasonable to try curve fitting on the off chance that you might learn something. But you probably won’t. Then, if it fails to tell you anything useful, you should move on. Adding more variables to salvage your failed curve fit is likely to be a total waste of time.

Phoenix44
March 25, 2018 11:24 am

To play Devil’s Advocate slightly, though: the Fermi story is largely irrelevant. We are not trying to get to the sort of “truths” that Fermi and Dyson were, but to get to a point where we can say with some degree of reasonable certainty whether or not man-made CO2 is going to be a problem.
The problem is that climate science claims (i) a level of understanding of the climate and (ii) an ability to model that are obviously far beyond their actual capabilities. I am not looking for Fermi’s level of proof, because we are dealing with potentially serious real world problems.

March 25, 2018 11:42 am

Willis. I agree with you entirely on the uselessness of curve fitting in climate modelling
see http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html
Here is a quote from Section 1 of the link:
“Harrison and Stainforth 2009 say (7): “Reductionism argues that deterministic approaches to science and positivist views of causation are the appropriate methodologies for exploring complex, multivariate systems where the behavior of a complex system can be deduced from the fundamental reductionist understanding. Rather, large complex systems may be better understood, and perhaps only understood, in terms of observed, emergent behavior. The practical implication is that there exist system behaviors and structures that are not amenable to explanation or prediction by reductionist methodologies. The search for objective constraints with which to reduce the uncertainty in regional predictions has proven elusive. The problem of equifinality ……. that different model structures and different parameter sets of a model can produce similar observed behavior of the system under study – has rarely been addressed.” A new forecasting paradigm is required.
An exchange with Javier on a recent WUWT thread went:
“Javier
March 19, 2018 at 11:37 am
Norman, don’t you read my articles here at WUWT? I wrote an article last week about the millennial solar cycle and how it is identified both in solar activity proxies and climate proxies. You can look it up.
The problem is that the millennial cycle does not peak in 2004. It peaks ~ 2095, and definitely between 2050-2100. The article explains it.
Dr Norman Page
March 19, 2018 at 2:00 pm
Javier as you see I wrote -” Looks like we are on the same page” after seeing your 13th article Fig 5 and Fig 7 see also the spectral analysis in comment
https://wattsupwiththat.com/2018/03/13/do-it-yourself-the-solar-variability-effect-on-climate/#comment-2764127
Nowhere in the article do I see an explanation for ” It peaks ~ 2095, and definitely between 2050-2100. ”
Your 5:29 pm comment of the 13th shows a figure with a peak late in the 21st century. But this looks like a curve derived from some mathematical formula. Nature doesn’t do math – it creates fuzzy cycles. I pick my peak from the extant empirical temperature and neutron data. The 990–2004 cycle is not symmetrical – more like a sawtooth shape, with about a 650-year down leg and a 350-year up leg. Projections which ignore the 2004 apex or turning point are unlikely to be successful, in my opinion.
Here is my forecast to 2100, based on the observed millennial and 60-year cycles picked from the data in Figs 3 and 4 in the link: [linked image]
Fig. 12. Comparative Temperature Forecasts to 2100.
Fig. 12 compares the IPCC forecast with the Akasofu (31) forecast (red harmonic) and with the simple and most reasonable working hypothesis of this paper (green line): that the “Golden Spike” temperature peak at about 2003 is the most recent peak in the millennial cycle. Akasofu forecasts the further temperature increase to 2100 to be 0.5°C ± 0.2C, rather than the 4.0C +/- 2.0C predicted by the IPCC, but this interpretation ignores the Millennial inflexion point at 2004. Fig. 12 shows that the well-documented 60-year temperature cycle coincidentally also peaks at about 2003. Looking at the shorter 60+/- year wavelength modulation of the millennial trend, the most straightforward hypothesis is that the cooling trends from 2003 forward will simply be a mirror image of the recent rising trends. This is illustrated by the green curve in Fig. 12, which shows cooling until 2038, slight warming to 2073, and then cooling to the end of the century, by which time almost all of the 20th century warming will have been reversed. Easterbrook 2015 (32) based his 2100 forecasts on the warming/cooling, mainly PDO, cycles of the last century. These are similar to Akasofu’s because Easterbrook’s Fig 5 also fails to recognize the 2004 Millennial peak and inversion. Scafetta’s 2000-2100 projected warming forecast (18) ranged between 0.3C and 1.6C, which is significantly lower than the IPCC GCM ensemble mean projected warming of 1.1C to 4.1C. The difference between Scafetta’s paper and the current paper is that his Fig. 30B also ignores the Millennial temperature trend inversion here picked at 2003, and he allows for the possibility of a more significant anthropogenic CO2 warming contribution.

March 25, 2018 12:51 pm

“One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”
Hoping to predict the earth’s long-term temperature by having a clear physical picture of the solar system is a non-starter; consider the recent issues raised by the CERN CLOUD experiments. No one has that clear picture. The other way is probably a non-starter as well, with or without “self-consistent mathematical formalism,” whatever that means: the databases do not have sufficient coverage in time and space to yield useful results. A third requirement is that the results must be testable within a practical time frame, and that is not likely to be achievable within the lifetimes of most humans alive today.
Expectations for long-term climate studies should be defined before embarking on lifetime projects that run down rabbit holes and produce nothing of value. A practical goal is to successfully predict global mean temperatures, or whatever, within a range of values narrow enough to realistically guide public policy decisions. Until then, “what if” studies can be deferred for a few decades until the boundary conditions are known – that is, probability-weighted estimates, not hot-button “high” estimates or “low” estimates that are, by themselves, meaningless.

Frank
March 25, 2018 2:43 pm

Willis wrote: “After a bit of experimentation, I found that I could get a very good fit using only Snow Albedo and Orbital variations.”
When one performs a multiple linear regression, isn’t one first supposed to analyze the explanatory variables for covariance? When two variables are highly correlated, I believe one is supposed to eliminate one of them and admit that one can’t know which potential explanatory variable is responsible. However, you did explicitly say that you arrived at your equation by performing a multiple linear regression.
Many ENSO indices involve SST, which is a problem when one is trying to explain warming. Christy et al. used a cumulative MEI index, which would imply the 1982 El Niño still impacts today’s temperature. If one wants a temperature-independent ENSO index, an older version relied upon the difference in surface pressure between Tahiti and Darwin. Total atmospheric pressure is conserved.
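The check Frank describes takes only a few lines of R. A sketch with synthetic stand-ins for the forcing series (not the actual GISS data):

# Collinearity diagnostics before trusting individual regression coefficients.
set.seed(42)
orbital  <- cumsum(rnorm(112, 0, 0.01))           # hypothetical forcing series
snow_alb <- 0.8 * orbital + rnorm(112, 0, 0.005)  # deliberately correlated
temp     <- 2 * orbital - 0.5 * snow_alb + rnorm(112, 0, 0.1)

cor(orbital, snow_alb)   # high correlation is the first warning sign

# Variance inflation factor: regress each predictor on the others.
# Values above roughly 5-10 mean individual coefficients are unreliable.
vif <- 1 / (1 - summary(lm(orbital ~ snow_alb))$r.squared)
vif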

Frank
Reply to  Willis Eschenbach
March 26, 2018 2:13 am

If you are interested, Wikipedia (and many other places) has an article on the problem of multicollinearity among the predictors in a multiple linear regression.
“Multicollinearity does not reduce the predictive power or reliability of the model as a whole, at least within the sample data set; it only affects calculations regarding individual predictors. That is, a multivariate regression model with collinear predictors can indicate how well the entire bundle of predictors predicts the outcome variable, but it may not give valid results about any individual predictor, or about which predictors are redundant with respect to others.”
https://en.wikipedia.org/wiki/Multicollinearity
Pressure can’t change temperature globally and thereby contribute to global warming; temperature in the NINO regions can. So, if I’m trying to separate the natural variability signal from ENSO from the GHG signal, I’d prefer to model the effect of ENSO using the SOI. However, I must admit I am having a hard time turning my gut feeling into a robust rationale, so I’ll defer to your argument.
Thanks for your reply.
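For reference, a bare-bones pressure-only index of the kind Frank mentions can be written as follows – a simplification of the official SOI, which standardizes each station against its monthly climatology:

# Simplified Southern Oscillation Index: standardized Tahiti-minus-Darwin
# sea-level-pressure difference. p_tahiti and p_darwin are hypothetical
# monthly SLP vectors in hPa.
soi_simple <- function(p_tahiti, p_darwin) {
  d <- p_tahiti - p_darwin
  (d - mean(d)) / sd(d)   # no SST enters, so no circularity with temperature
}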

March 25, 2018 5:34 pm

Willis, you’ve got the trunk and the tail moving on the elephant with just two parameters.

Gary Pearse
March 25, 2018 9:06 pm

In an article above, it is reported that climate scientists now have a global warming explanation for their embarrassing adventures of getting stuck in the Arctic ice while surveying the inexorable meltdown. They studiously ignored what’s becoming a fleet of fools in Antarctica on a similar quest.
Now you are sending a lot of climate scientists away unhappy that a two-parameter model betters their $300 million supercomputer products, and it uses just two natural forcings (although I get it that the coefficients have been derived to navigate the temperature swings).
In both the ice and curve-fitting exercises, enormous hubris is on display. That they would then forecast worse-than-we-thought futures with these meaningless creations pretty well sums up the totality of their scientific research. Alas, where are today’s Enrico Fermis and Richard Feynmans to save researchers from their hubris?
Nicely done, Willis. The thought occurred to me, when you mentioned you can fashion a fit by tuning any inputs, that you could make the futility of the exercise even more obvious by using, say, the actual price of beef over the past century along with some other unrelated data set. It’s risen over time and with increased CO2. The USGS has annual mineral and metal prices since 1900 that could also be used.
Cheers, and enjoy the rain while it lasts.
Gary
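Gary’s beef-price test is easy to run in miniature with simulated series (an invented price curve here, not the USGS data – any upward-trending series behaves the same way):

# Toy demonstration: regress a stylized temperature record on a causally
# unrelated price series that merely rises over time.
set.seed(1)
years <- 1900:2011
temp  <- 0.008 * (years - 1900) +
         0.1 * sin(2 * pi * (years - 1900) / 60) +
         rnorm(length(years), 0, 0.08)
beef  <- exp(0.02 * (years - 1900)) * (1 + rnorm(length(years), 0, 0.05))

summary(lm(temp ~ log(beef)))$r.squared   # a large R^2, zero causal content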

Reply to  Gary Pearse
March 26, 2018 4:59 am

“Now you are sending a lot of climate scientists away unhappy that a two-parameter model betters their $300 million supercomputer products, and it uses just two natural forcings (although I get it that the coefficients have been derived to navigate the temperature swings).”
A common mistake. When you compare two models (a GCM and Willis’s), you can’t simply compare them on the lowest-dimension statistic. For example, if I build a model of an aircraft, say with six degrees of freedom and every last aero detail, it is designed to predict more than one thing. It can of course be used to predict the glide path on landing as well. The simple problem – predicting the glide path – can ALSO be done with a very simple model. In practice the simple model can even outperform the complex model. If you argue that the simple model is “better”, you really miss the point, because the simple model can only do one thing.
It can’t do takeoff, or rolls, or turns, or a Herbst maneuver, whereas the 6-DOF model can. The simple model can’t do stalls, or spins, or any of the other things the full model can.
A GCM has to do more than surface temps. It has to do precipitation, winds, temperature at all altitudes, etc.
For SOME uses a simple model may be better than a complex model, like calculating a glide path, but no one who works in the modelling business would argue or fret over cases where simple models outperform complex models.
WRT Willis’s model, there is a reason why Snow Albedo works so well. Any guess why? And it’s not a natural forcing… do you have a wild guess why?
too funny

Gary Pearse
Reply to  Steven Mosher
March 26, 2018 9:06 am

Thank you, Steven, for your thoughtful comments on the differences between models and the omnibus things they try to show. I think I was clear that I didn’t mean Willis had a useful climate model costing a few dollars by comparison to the science models. Two things:
a) For your aeronautical example, yes, they can build a model and test it in a short period, and the variables in the model come from physics, a century of successful flight, and experiment (wind tunnel etc.). At the beginning of a radical new design it wouldn’t hurt to have a model that gives some confidence the thing would simply fly – the number one question. They can, of course, build a small physical model too, aware that, technically, you have to go with materials and air that aren’t right for the downsized physical model.
That latter small assurance is really where we are with climate science. Complex interactions, poor quality and distribution of data sites, limited experimentation, and incomplete knowledge of what the variables are make it a different animal from aeronautics. Number one in climate science is temperature. It’s called global warming, for goodness sakes, notwithstanding the name changes. Do their models fly (in a forecast)? Perhaps one day, but for the moment they have only crashes and burns, and this is because tuning models and manipulating parameters with this basket of variables and unknowns, in the way they do it, isn’t even at the nailing-two-sticks-together-and-throwing-it stage, and is little different from Willis’s model.
b) Aeronautical engineers don’t change the “data” (out of frustration?) to make the electronic model “fit”. I grant you TOBs, station moves, equipment changes…, but I am perplexed that, as Mark Steyn noted at a Senate hearing, we claim to be so confident of what the temperature will be in a hundred years when we still don’t know what it will be in 1950! Now take this jambalaya and tune parameters to hindcast a model!
Steven, I believe you are an extraordinarily smart guy, but with a blind spot you didn’t used to have. I needn’t explain the poker term “tell”. Climate scientists do it all the time to aggrandize their craft: they invoke “the physics” when it has been a curve-fitting exercise after a sobering attempt at applying the physics. Sociology became social “science” after it was thoroughly corrupted by anti-capitalist ideology; and what about the Deutsche Demokratische Republik invocation?
You always invoke favorable comparisons between climate models that haven’t worked and sophisticated engineering models that work like a clock (I’m an engineer, and uncertainty is always our number one concern – it’s why most engineers are CAGW sceptics).
Another tell is that you now come in to do battle against sceptics on articles showing fairly poor science. I know many here are knee-jerk anti-global-warming types, no different from mindless proponents of it. But you seem to show contempt for scepticism in general these days, when you know it should be the default position until bona fides are at least half established.
Re albedo: I initially didn’t realize Willis had soot in mind, and thought he had erred in labelling albedo as anthropogenic instead of a natural forcing. I’m sure you want to tell me that the data come from a model. The larger albedo effect is measured by satellite, which I’m sure you would also point out is indirect and based on a model. “Model” is a word, not a certificate of worthiness. Good forecasts are the certificate. Plunge a good thermometer into boiling distilled water at sea level and I can predict what it will read.

Peter Lewis Hannan
March 25, 2018 9:49 pm

Thank you for the Fermi – Dyson conversation; I hadn’t seen that before.

March 26, 2018 5:17 am

I have a simple thermal balance model for the 0-2000m layer of the oceans in which the emissivity changes as a log function of CO2 concentration. It has a single factor that is determined by minimising the squared error between the NOAA measured temperature data and the modelled temperature. The measured and modelled temperature anomalies are aligned at year 2017, the era of the more thorough ARGO data. This chart shows the comparison:
https://1drv.ms/b/s!Aq1iAj8Yo7jNgnXLo5LnjuHhohGM
Limiting CO2 to 570 ppm results in an equilibrium rise of 0.083 K from the 1850 level; about 0.64 K from the current level.
Same model, but using the measured sunspot number with a 22-year delay to modify the emissivity rather than any sensitivity to CO2:
https://1drv.ms/b/s!Aq1iAj8Yo7jNgniSxAGfk6xFTfkM
For this simple model the CO2-dependent emissivity gives a better fit than the sunspot-dependent emissivity. Of course, both the CO2 and the temperature could be driven in the same way by another variable.
The 0-2000m thermal response is highly damped and is a better indication of thermal trends than any other temperature measurement, which all carry the noise created by the chaos of weather. The heat imbalance reaches 1.4 W/sq.m over the ocean surface, or 504 TW globally, which is well within the estimated 1000 to 1500 TW transport capacity of the thermohaline deep ocean circulation.
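The structure RickWill describes can be sketched in a few lines of R, with synthetic series and hypothetical names standing in for his model and the NOAA data:

# One free parameter k scales a log-CO2 term; k is chosen to minimize
# squared error against an observed anomaly series (synthetic here).
set.seed(3)
years <- 1955:2017
co2   <- 280 * exp(0.0045 * (years - 1850))   # stylized CO2 curve, ppm
obs   <- 0.1 * log(co2 / 280) / log(2) +
         rnorm(length(years), 0, 0.005)       # stand-in 0-2000m anomaly

model_anom <- function(k) k * log(co2 / 280) / log(2)
sse        <- function(k) sum((obs - model_anom(k))^2)

optimize(sse, interval = c(0, 1))$minimum     # the fitted single parameter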

Gary Pearse
Reply to  RickWill
March 26, 2018 9:24 am

Rick, I don’t get the connection between emissivity and either CO2 or sunspots. Emissivity is a function of the nature of the emitting surface alone. How does, say, a black ball at a temperature of 290 K know how much CO2 or how many sunspots are above it? Reread the Fermi quote.

Reply to  Gary Pearse
March 26, 2018 2:31 pm

It is not a black ball. It is water with a thin yet complex surface coating or layer that I have reduced to a single factor, which I have termed emissivity since it reduces the rate of heat loss from the surface. In one version of the model the emissivity changes by a small factor based on a log function of the CO2 content in the surface layer. In the other I adjust the emissivity by a small factor based on a linear relationship with sunspots.
My emissivity term is more aptly described as an effective emissivity, as it is based on the measured average conditions at the earth’s surface needed to achieve the initial thermal balance. Using the effective emissivity of the surface, where conditions can be measured, provides a better representation of Earth’s thermal balance than treating some non-surface layer as emitting like a black body with an implied temperature somewhere above the actual surface.

Reply to  Willis Eschenbach
March 26, 2018 3:04 pm

Read it as effective emissivity: a single factor based on the ratio of the emitting power of Earth’s oceans to space to what it would be if they were a black body.
The oceans are the dominant store of heat in the climate system, and the ocean surface has the highest temperature, meaning all heat flows from that surface – whether into the deep ocean by mixing through waves and currents, or to the atmosphere by various means and then into space.

Reply to  Willis Eschenbach
March 27, 2018 4:53 am

Possibly the best-known and most readily available model of how the atmospheric layer affects the emissive power of the Earth’s surface is MODTRAN:
http://climatemodels.uchicago.edu/modtran/
This enables the user to adjust the surface temperature as well as various atmospheric components and then determine the radiating power of the surface. With the preset values it produces a radiating power at the top of the atmosphere of 298.52 W/sq.m for a surface temperature of 299.7 K. A black-body surface at that temperature would emit 457.4 W/sq.m, so the effective emissivity in this example is 298.52 / 457.4 = 0.652.
If the CO2 value is set to 280 ppm, the radiating power rises to 300.22 W/sq.m, giving an effective emissivity of 0.656. MODTRAN readily demonstrates how CO2 alters the effective emissivity.
My model determines an effective emissivity of the ocean surface. In one case I use CO2 as the only factor affecting it; in the other I use sunspot number as the sole factor. It happens that when the effective emissivity is modified by a small factor dependent on the log of CO2, the model gives a good fit to the measured temperature anomaly. The sunspot-dependent emissivity gives a much poorer fit.
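The arithmetic is worth making explicit; in R, with the Stefan–Boltzmann law and the MODTRAN numbers quoted above:

# Effective emissivity = (top-of-atmosphere radiating power) / (black-body
# emission at the surface temperature), using the MODTRAN preset numbers.
sigma  <- 5.670374e-8        # Stefan-Boltzmann constant, W/sq.m/K^4
T_surf <- 299.7              # preset surface temperature, K
bb     <- sigma * T_surf^4   # black-body emission: ~457.4 W/sq.m

298.52 / bb   # preset CO2:  effective emissivity ~0.652
300.22 / bb   # 280 ppm CO2: effective emissivity ~0.656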

Reply to  Willis Eschenbach
March 27, 2018 9:07 pm

No part of Earth’s surface meets the strict definition of emissivity, which applies only to an isothermal surface. However, it is widely referenced as such.
The surface of the earth’s ocean is most often warmer than the air above and the water below, so all heat flows from this surface, whether up or down. That should be the emitting temperature. I could go through the complex process of analysing the emission and absorption of myriad layers of gases in the atmosphere, similar to MODTRAN, but the energy released from the surface all ends up radiated to space. I have lumped all that complexity into a single parameter. In my view it is best described as emissivity, since it reduces the radiation from the emitting surface, at its average temperature, compared with what a black surface with no absorbing layers would produce.
An observer on the moon would see a multicoloured sphere. Without prior knowledge of the atmosphere they would see a surface with varying emissive power due to changes in the emitting surface temperature and the pixel-level emissivity. The emissivity changes all over that surface. I reduce all those pixel-level values to a single average value that encompasses the very thin atmospheric layer.
I am willing to consider terms other than effective emissivity, but it is not absorption, as atmospheric absorption only affects a modest proportion of the energy ultimately emitted to space.

Reply to  Willis Eschenbach
March 27, 2018 9:36 pm

“Transmittance factor” may be a more applicable term, as it implies passing through the atmosphere and what is eventually released to space. The single term would lump together the average surface emissivity and the average transmittance of the atmospheric layer.

Reply to  Willis Eschenbach
March 28, 2018 8:02 pm

The purpose is to simplify Earth’s energy balance to a single dependent factor that lumps atmospheric, surface and external factors together, then use the model to test various theories on how measured changes in the atmosphere, or outside it (like sunspots), correlate with measured changes in the temperature anomaly.
I will do a write-up on the model that covers the key features and results and publish it. The reason I made the initial post here was to make the point that a single-parameter model, based on CO2 increasing, gives good correlation with the measured 0-2000m ocean temperature anomaly. This temperature is a good representation of the total energy in the climate system and has little noise. It is the most likely candidate for determining the actual climate trend. I do not need a myriad of tunable parameters to achieve good correlation; just one, a log function of CO2.
If the 0-2000m anomaly continues on its current trajectory until 2030, I would say CO2 is a dominant factor. If we see a turn down in the anomaly in the next couple of years, it indicates that CO2 is not a dominant player in the energy balance.

al in kansas
March 26, 2018 8:14 am

And the margin of error in the actual measurements is what? And the NIST-traceable calibration records are available for review where? Claiming a 1-sigma accuracy better than +/-0.5°C for any temperature record is optimistic fantasy at best; it would fail an ISO 9000 audit immediately in industry. This is why the CO2 sensitivity is unlikely to be high: we are still bouncing around in the same natural variability range we always have, in spite of nearly doubling the CO2. There is no statistically valid evidence of any unusual temperature variation at all.

Mark Fife
March 26, 2018 11:07 am

What is of most import to me is the methods used to construct an average global temperature going back in time. Focusing on the land-based data only for a moment, it is pretty clear there are a lot of gaps in the record. Take for example the GHCN daily maxima and minima data set. I pulled the data from this site:
http://berkeleyearth.org/source-files/
I have been concentrating on the data from 1900 forward. Like every other data set I have downloaded, only a small percentage of stations – less than 2% – actually cover the entire date range. If I were to use all the data available here and impute the missing values, then 70% of the data would be imputed. When what you are infilling is more than twice the amount of hard data, you are just guessing.
But when I pointed this problem out to a climate scientist on Twitter, she was unconcerned. Her only concern was whether or not I was using an area-weighted average to define a global average. Which is insane: there isn’t enough data to compute a global average.
This is a climate scientist, peer reviewed and published, and she doesn’t understand why missing 70% of the data is a problem.

Reply to  Mark Fife
March 27, 2018 6:58 am

Mark Fife:
I use the phrase “Over 50%” to describe the amount of wild-guess infilling for the grids. It’s over 40% for the US, which allegedly has the best weather station system in the world. You have discovered what “Over 50%” really means – a wild-guess percentage so high that few people believe it. That’s why I say “Over 50%”.
It’s even worse before 1900, with very few Southern Hemisphere measurements outside of Australia. And 1800s thermometers tend to read low, likely making the 1880 starting point too low and exaggerating global warming since 1880. In addition, “adjustments” to raw data may account for one third, or more, of the warming since 1880.
The surface average temperature compilations are data-free – they consist of wild-guess infilled data and “adjusted” raw data. Once raw data are “adjusted” you no longer have real data; you have a wild guess of what the real data would have been if accurately measured in the first place!
The claimed margins of error for surface temperatures of +/-0.1 degrees C. are complete nonsense: they are not based on the errors of the individual measurement instruments, and the infilled wild-guess numbers can never be verified or falsified. A conservative margin of error is +/-1 degree C., meaning the temperature change since 1880 is most likely to have been in the range of no change up to +2 degrees C. Due to measurement error, we may already have had +2 degrees of warming since 1880 without knowing it – meaning we would be past the so-called +2 degree C. “tipping point” (another leftist fairy tale).
The lack of real science, and the extremely rough, haphazard temperature “measurements” included in modern climate change “science”, is almost unbelievable. The good news is you’ve figured it out!
My climate change blog, for people with common sense (so leftists should stay away):
http://www.elOnionBloggle.Blogspot.com

Mark Fife
Reply to  Richard Greene
March 27, 2018 11:10 am

I completed a look at the temperature trends in the GHCN from 1900 to 2011 using the 493 complete station records. No global warming to be seen. Of course those records are mainly from the US, with some European and Australian records thrown in. There were 5 Australian stations. I was able to salvage 5 additional sets of station data from Australia and produce an Australian record from 1895 to 2011.
So I made up some data too – about 2% of the total. Most of the gaps were missing years, ranging from one to about six. I imputed the missing data by averaging the last 5 points before and the first 5 points after each gap. I imputed the last 10 years of two series with the average of the preceding 10 years, basically freezing them in place. I was very careful with this. I tested imputing data at the edge of a 90% confidence interval for the average: it changed the individual station graphs by a fraction of a degree, and didn’t change the graph of the 10-station average at all. Meaning the effect of misestimating the average by just under two standard deviations was less than the effect of rounding off the numbers. That, to me, is an acceptable and very reasonable amount of potential bias.
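That gap-filling rule is simple to express; here is a minimal R sketch (hypothetical function names, not Mark’s actual code):

# Impute a missing value as the mean of up to 5 values before and up to
# 5 values after the gap, per the description above.
impute_gap <- function(x, i, k = 5) {
  before <- if (i > 1) x[max(1, i - k):(i - 1)] else numeric(0)
  after  <- if (i < length(x)) x[(i + 1):min(length(x), i + k)] else numeric(0)
  mean(c(before, after), na.rm = TRUE)
}

# Fill every NA in a station series, one gap at a time.
fill_series <- function(x) {
  for (i in which(is.na(x))) x[i] <- impute_gap(x, i)
  x
}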
One thing I noticed from the Australia data, as I have noted before, is that every station outside a large urban area cooled over the last 100 years or so. All the rest cooled until about the 1940s and then started getting warmer. On average, the warming was about 0.2° from 1895 to 2011 – which, given the variability of the data, is essentially no change at all. They are, on average, back to the late 1800s.

1sky1
March 26, 2018 3:16 pm

There’s an even more serious issue here than that of plain “curve-fitting” the presumed “forcings” to match the “observations,” namely: HADCRUT4 is not a reliable, unbiased estimate of global surface temperatures in the multi-decadal and longer range of spectral density components. The whole modeling enterprise lacks serious grounding in proven physics and in solid empirical data.

March 26, 2018 10:21 pm

First Principles, also known as the Laws of Physics.
Do any of you know what they are, Bueller, anyone?
Calculate the effect on the so-called average temperature of the Earth’s surface of one more, ten more, one hundred more, 280 more, 400 more ppm of CO2 from first principles.
No one can, no one will, and no one knows why the so-called average temperature of the surface, or 2 meters above the surface, of our planet Earth is what it is.
Greens seek to destroy the industries known as coal mining, oil exploration, and gas extraction, because they want our planet to return to the Garden of Eden.
Bring it, Mosher with your English degree, Stokes with your computational fluid dynamics, or anyone else. No one will. I tried. The thing is, the CO2 concentration determines the altitude at which the Earth’s atmosphere freely radiates to space. This is the single datum that determines the average amount of energy contained in the Earth’s atmosphere – how much comes in, and how much goes out.
“Freely radiates to space.” The higher this altitude, the lower the temperature at which this radiation happens, and the less energy leaves the atmosphere.
Strangely enough, this is rarely if ever discussed here – endless debates about ECS, but it is all meaningless. How high is the altitude at which the atmosphere freely radiates to space?
That is the only actual question. Without this fact, the GCMs can calculate weather for the next week if they are lucky, but cannot even guess about the next month, much less the next 100 years.
Time to call a spade a spade.
The higher this altitude goes, the less energy leaves the atmosphere, and the hotter the surface gets. Yes, it is true: back radiation has nothing to do with this; it is the average energy contained in the atmosphere, and of course the lapse rate.
Time to talk about what is actually going on …
Goodness
Michael
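The textbook arithmetic behind Michael’s emission-height picture is short (standard values, not numbers from this thread):

# Effective emission temperature from global energy balance, then the
# mean emission altitude implied by a 6.5 K/km lapse rate.
S0    <- 1361          # solar constant, W/sq.m
alpha <- 0.30          # planetary albedo
sigma <- 5.670374e-8   # Stefan-Boltzmann constant

T_e <- ((S0 * (1 - alpha)) / (4 * sigma))^0.25  # ~255 K
z_e <- (288 - T_e) / 6.5                        # ~5 km with a 288 K surface
c(T_e = T_e, z_e = z_e)
# More CO2 raises z_e; the new, higher emission level is colder and radiates
# less, so the surface warms until balance is restored - Michael's argument.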

Toto
March 28, 2018 10:02 pm

Don K: “Kepler figured out that planets moved in elliptical orbits with the Sun at one of the foci by curve fitting. It was Newton who later (sort of) figured out why.”
And before that they fitted epicycles. “Everyone” knows that was wrong, but actually they worked well enough. Kepler’s math is better because it is easier and has a direct explanation due to Newton.
http://www.polaris.iastate.edu/EveningStar/Unit2/unit2_sub1.htm
The problem with curve fitting is that “everyone” assumes that if the shoe fits, it belongs to Cinderella. I don’t know what size feet Cinderella had, but I’m pretty sure that, whatever it was, there were lots of girls who wore that shoe size. If you find a curve fit that works, that does not mean it is the one and only correct one.
Willis: “After a bit of experimentation, I found that I could get a very good fit using only Snow Albedo and Orbital variations.”
I like it! (Which is not to say I believe it; I’m not daft.) But it’s as good as some others I’ve seen; it’s almost believable. One problem is that it is useless except in hindsight, unless someone knows how to predict snow albedo.
“If I add a few more variables and parameters, I can get an even better fit”
Another good point. Sometimes if the answers are too good, it’s a sign that something is fishy.
And getting a better fit sometimes means you are fitting the noise, not the science.
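Toto’s last point is easy to demonstrate in R: in-sample R² keeps climbing as parameters are added, even when the additions only chase noise:

# Fit ever-higher-order polynomials to a noisy linear signal. R^2 rises
# with degree, but the extra wiggles track the noise, not the signal.
set.seed(7)
x <- 1:50
y <- 0.02 * x + rnorm(50, 0, 0.5)
for (deg in c(1, 3, 6, 12)) {
  fit <- lm(y ~ poly(x, deg))
  cat("degree", deg, "R^2 =", round(summary(fit)$r.squared, 3), "\n")
}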

March 29, 2018 1:29 pm

Astroclimate, I think you make some good points.
The problem as I see it:
1) The computer models the IPCC built are ridiculous. The numerical error the models accumulate over time makes it impossible to believe they produce a reasonable result.
2) It is obvious there are many factors, and modeling everything would be tantamount to a GUT theory of physics.
3) Ultimately, we do have to have a model or formula, because it is the nature of science to create something that predicts. That’s the point.
The definition of science could be “producing a mathematical model that predicts consistently.” If someone says “I understand,” we should be skeptical until they can make a prediction and that prediction can be tested. Repeated predictions and correct answers gain trust, and eventually we say it is good enough to be “science.” This is true whether we are talking about psychology, physics, or sociology. If we can’t predict, we don’t have a science. If our predictions are not useful, or in error, then we are not a science. We call something a science when the predictions are reliable enough that we can build, plan, and offer advice with some certainty.
I think this is a good reason to say climate “science” is misnamed: until there are proven relationships that can be measured and depended on, we can’t call it a science. So far, to my knowledge, there is nothing they can actually predict at all. That is the reason to doubt this “science.” What are the things they think they can predict? Why? I don’t see it.
In my opinion, to become a science, climate people need to start with basics. They need measurable, predictable things that are repeatable. That is hard, but not impossible. I would focus on that instead of computer models, which are obviously bogus. Of course, since we all know this is politically motivated, that is not where they want to go. They are doing everything they can to justify their politics and almost NO real science. Real science would be:
1) Prove that CO2 is excited by radiation and how it acts in closed environments with similar gas mixtures at different pressures. Be able to show repeatable results.
2) Demonstrate precisely how clouds are formed, and simulate producing them in controlled environments.
3) Get a LOT more data about the ocean.
4) Study the Earth’s physical shape as modified by gravity, and how this affects volcanoes, earthquakes, and underwater sea vents.
5) Try to predict how different types of solar radiation affect the ocean and the land. Try to understand the sun a lot better.
6) Abandon surface thermometers and put up more satellites to measure everything. The idea of taking a small number of thermometers with incredibly limited coverage and trying to extrapolate worldwide is too problematic and inconsistent.
7) Instead of building giant supercomputers to run bogus, impossible models, spend the money on large – really large – environments to simulate experimental conditions.
You can see my blog: https://logiclogiclogic.wordpress.com/category/climate-change/