How Not To Model The Historical Temperature

Guest Post by Willis Eschenbach

Much has been made of the argument that natural forcings alone are not sufficient to explain the 20th Century temperature variations. Here’s the IPCC on the subject:

[Figure: IPCC comparison of climate model hindcasts using natural forcings only versus natural plus anthropogenic forcings]

I’m sure you can see the problems with this. The computer model has been optimized to hindcast the past temperature changes using both natural and anthropogenic forcings … so of course, when you pull a random group of forcings out of the inputs, it will perform more poorly.

Now, both Anthony and I often get sent the latest, greatest models that purport to explain the vagaries of the historical global average temperature record. The most recent one used a cumulative sum of the sunspot series, plus the Pacific Decadal Oscillation and the North Atlantic Oscillation, to model the temperature. I keep pointing out to the folks sending them that this is nothing but curve fitting … and in that most recent case, it was curve fitting plus another problem: it uses as an input something which is part of the target. The NAO and the PDO are each part of what makes up the global temperature average, so it is circular to use them as inputs.

But I digress. I started out to show how not to model the temperature. To do this, I wanted to find the simplest model I could which a) did not use greenhouse gases, and b) used only the forcings used by the GISS model in the Coupled Model Intercomparison Project Phase 5 (CMIP5). These were:

1. “WMGHG” [Well-Mixed Greenhouse Gases]

2. “Ozone”

3. “Solar”

4. “Land_Use”

5. “SnowAlb_BC” [Snow Albedo (Black Carbon)]

6. “Orbital” [variations in the Earth’s orbit around the sun]

7. “TropAerDir” [Tropospheric Aerosol Direct]

8. “TropAerInd” [Tropospheric Aerosol Indirect]

After a bit of experimentation, I found that I could get a very good fit using only Snow Albedo and Orbital variations. That’s one natural and one anthropogenic forcing, but no greenhouse gases. The model uses the formula

Temperature = 2012.7 * Orbital – 27.8 * Snow Albedo – 2.5

and the result looks like this:

[Figure: bogus model built from the Orbital and Snow Albedo forcings, compared with the Gaussian-smoothed HadCRUT surface temperature data]

The red line is the model, and dang, how about that fit? It matches up very well with the Gaussian smooth of the HadCRUT surface temperature data. Gosh, could it be that I’ve discovered the secret underpinnings of variations in the HadCRUT temperature data?

And here are the statistics of the fit:

Coefficients:
                              Estimate Std. Error t value Pr(>|t|)
(Intercept)                    -2.4519     0.1451 -16.894  < 2e-16 ***
hadbox[, c(9, 10)]SnowAlb_BC  -27.7521     3.2128  -8.638 5.36e-14 ***
hadbox[, c(9, 10)]Orbital    2012.7179   150.7834  13.348  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.105 on 109 degrees of freedom
Multiple R-squared:  0.8553,  Adjusted R-squared:  0.8526
F-statistic: 322.1 on 2 and 109 DF,  p-value: < 2.2e-16

I mean, an R^2 of 0.85 and a p-value less than 2.2E-16, that’s my awesome model in action …

So does this mean that the global average temperature really is a function of orbital variations and snow albedo?

Don’t be daft.

All that it means is that it is ridiculously easy to fit variables to a given target dataset. Heck, I’ve done it above using only two real-world variables and three tunable parameters. If I add a few more variables and parameters, I can get an even better fit … but it will be just as meaningless as my model shown above.
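To see just how little such a fit proves, here is a minimal sketch in Python (not the R code behind the numbers above; the series and their shapes are invented purely for illustration). It regresses a made-up “temperature” record — a trend plus a wiggle — on two arbitrary smooth series that merely share the upward drift, and still gets a high R^2 with three tuned parameters:

```python
import numpy as np

# A stand-in "temperature record": a warming trend plus a multidecadal wiggle.
years = np.arange(1900, 2012)
t = (years - years[0]) / (years[-1] - years[0])
temp = 0.8 * t + 0.1 * np.sin(2 * np.pi * 5.5 * t)

# Two arbitrary smooth "forcings" that merely share the upward drift.
x1 = t ** 1.2          # hypothetical "forcing A"
x2 = np.log1p(3 * t)   # hypothetical "forcing B"

# Ordinary least squares with an intercept: three tunable parameters in all.
X = np.column_stack([np.ones_like(t), x1, x2])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
fitted = X @ beta

ss_res = np.sum((temp - fitted) ** 2)
ss_tot = np.sum((temp - temp.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")  # high, though the "forcings" mean nothing physically
```

The three fitted numbers play the same role as the 2012.7, −27.8, and −2.5 above: least squares will soak up any shared trend, no matter what the inputs mean physically.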

Please note that I don’t even have to use data. I can fit the historical temperature record with nothing but sine waves … Nicola Scafetta keeps doing this over and over and claiming that he is making huge, significant scientific strides. In my post entitled “Congenital Cyclomania Redux”, I pointed out the following:

So far, in each of his previous three posts on WUWT, Dr. Scafetta has said that the Earth’s surface temperature is ruled by a different combination of cycles depending on the post:

First Post: 20 and 60-year cycles. These were supposed to be related to some astronomical cycles which were never made clear, albeit there was much mumbling about Jupiter and Saturn.

Second Post: 9.1, 10-11, 20 and 60-year cycles. Here are the claims made for these cycles:

9.1 years: this was justified as being sort of near to a calculation of (2X+Y)/4, where X and Y are lunar precession cycles,

10-11 years: he never said where he got this one, or why it’s so vague.

20 years: supposedly close to an average of the sun’s barycentric velocity period.

60 years: kinda like three times the synodic period of Jupiter/Saturn. Why three times? Why not?

Third Post:  9.98, 10.9, and 11.86-year cycles. These are claimed to be

9.98 years: slightly different from a long-term average of the spring tidal period of Jupiter and Saturn.

10.9 years: may be related to a quasi 11-year solar cycle … or not.

11.86 years: Jupiter’s sidereal period.

The latest post, however, is simply unbeatable. It has no less than six different cycles, with periods of 9.1, 10.2, 21, 61, 115, and 983 years. I haven’t dared inquire too closely as to the antecedents of those choices, although I do love the “3” in the 983-year cycle.
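To see how easy the sine-wave game is, here is a minimal sketch (in Python; the record and the periods are invented for illustration). Hand least squares a few “cycles” with periods tuned to the data — each contributing a sine and a cosine column, i.e. a free amplitude and a free phase — and the hindcast comes out nearly perfect:

```python
import numpy as np

# A stand-in temperature record: warming trend plus a ~20-year oscillation.
years = np.arange(1900, 2012)
temp = 0.007 * (years - 1900) + 0.1 * np.sin(2 * np.pi * years / 20.4)

# "Astronomical" periods, tuned by eye to the record (pure numerology here).
# Note the very long cycle: over a 112-year window it mimics the trend.
periods = [9.1, 20.4, 61.0, 983.0]

# Each period contributes a sine and a cosine column (free amplitude + phase).
cols = [np.ones_like(years, dtype=float)]
for P in periods:
    cols += [np.sin(2 * np.pi * years / P), np.cos(2 * np.pi * years / P)]
X = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
fitted = X @ coef
r2 = 1 - np.sum((temp - fitted) ** 2) / np.sum((temp - temp.mean()) ** 2)
print(f"R^2 = {r2:.3f} with {X.shape[1]} tuned parameters")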

I bring all of this up to do my best to discourage this kind of bogus curve fitting, whether it is using real-world forcings, “sunspot cycles”, or “astronomical cycles”. Why is it “bogus”? Because it uses tuned parameters, and as I showed above, when you use tuned parameters it is bozo simple to fit an arbitrary dataset using just about anything as input.

But heck, you don’t have to take my word for it. Here’s Freeman Dyson on the subject of the foolishness of using tunable parameters:

When I arrived in Fermi’s office, I handed the graphs to Fermi, but he hardly glanced at them. He invited me to sit down, and asked me in a friendly way about the health of my wife and our newborn baby son, now fifty years old. Then he delivered his verdict in a quiet, even voice. “There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.” I was slightly stunned, but ventured to ask him why he did not consider the pseudoscalar meson theory to be a self-consistent mathematical formalism.

He replied, “Quantum electrodynamics is a good theory because the forces are weak, and when the formalism is ambiguous we have a clear physical picture to guide us. With the pseudoscalar meson theory there is no physical picture, and the forces are so strong that nothing converges. To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics.”

In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, “How many arbitrary parameters did you use for your calculations?” I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” With that, the conversation was over. I thanked Fermi for his time and trouble, and sadly took the next bus back to Ithaca to tell the bad news to the students.

So, you folks who are all on about how this particular pair of “solar cycles”, or this planetary cycle plus the spring tidal period of Jupiter, or this group of forcings miraculously emulates the historical temperature with a high R^2, I implore you to take to heart Enrico Fermi’s advice before trying to sell your whiz-bang model in the crowded marketplace of scientific ideas. Here’s the bar that you need to clear:

“One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”

So … if you look at your model and indeed “You have neither”, please be as honest as Freeman Dyson and don’t bother sending your model to me. I can’t speak for Anthony, but these kinds of multi-parameter fitted models are not interesting to me in the slightest.

Finally, note that I’ve done this hindcasting of historical temperatures with a one-line equation and two forcings … so do we think it’s amazing that a hugely complex computer model using ten forcings can hindcast historical temperatures?

My regards to you all on a rainy, rainy night,

w.

The Usual Polite Request: Please quote the exact words that you are discussing. It prevents all kinds of misunderstandings. Only gonna ask once. That’s all.

 

HAS

Amen to that

Germonio

Willis,
There is, I think, a difference between modelling and fitting that you are ignoring. You have provided a two-parameter (or function) FIT to the temperature. What a climate model does is take the forcing as the inputs to a set of nonlinear partial differential equations. There is no reason to expect the output of such a simulation to match the historical record unless the underlying physics is correct. You have two free parameters in your fit – the amplitude of the orbital parameters and the snow albedo. A climate model would not have the option to change the amplitude of the forcing and so has fewer free parameters than your fit.

But those climate models, for all of their approximations of the billions of partial differential equations at assumed trillions of intersecting planes, do not work.

” What a climate model does is take the forcing as the inputs to a set of nonlinear partial differential equations. “
It actually doesn’t do that. The forcings are diagnostics, trying to summarize the W/m2 effects of the inputs, but they are not input. It’s very hard to input a global average into a spatial pde.
With GHG’s, for example, the gas concentrations are input, or even, sometimes, the emissions. Then someone tries to work out what forcing effect they might have. This might be extracted from intermediate results in the GCM, or from the final results. It could also use the radiative transfer part of the code independently.

HAS

Actually what happens is that scenarios are created that will generate particular forcings, and these are then used by the modellers to create simulations that produce the target forcing. If you trace back the process, the forcings are the input that drives the GCMs.

“Actually what happens is that scenarios are created that will generate particular forcings”
I think that’s not really true, though it wouldn’t matter if it were. Consider the famous RCP8.5. RCP stands for representative concentration pathway. It gives what GCM’s need – gas concentration data which are the actual inputs to the GCM. The 8.5 means that the forcing in 2100 is expected to be 8.5 W/m2. The R in RCP means that the RCP’s chosen, 8.5, 6.0, 4.5 and 2.6 span the range of things that might happen. It would be wasteful to spend the effort on scenarios that are too close, while another range remains untested. So the characterisation by 2100 forcing gives a one-number measure of the spacing.
There is now a lot of experience with scenarios and results, and they may well have had an eye on the expected 2100 forcing when designing the scenario. That would make sense. But there is much more to a scenario than that single number. The key thing is the relation between a likely emissions evolution and the gas concentrations that are the actual input to the computation.

HAS

Nick, you claim first that somehow the forcings are not the independent input into GCMs. You now say that if they are, it doesn’t matter – what happens is the emissions and concentrations are the key input. So you are dining out on a technical point of what goes into the model, emissions and concentrations selected for their forcing effect or the forcing itself.
“The scenario development process aims to develop a set of new scenarios that facilitate integrated analysis of climate change across the main scientific communities. The process comprises 3 main phases: 1) an initial phase, developing a set of pathways for emissions, concentrations and radiative forcing, 2) a parallel phase, comprising both the development of new socio-economic storylines and climate model projections, and 3) an integration phase, combining the information from the first phases into holistic mitigation, impacts and vulnerability assessments. The pathways developed in the first phase were called “Representative Concentration Pathways (RCPs)”. They play an important role in providing input for prospective climate model experiments, including both the decadal and long-term projections of climate change.” van Vuuren, D.P., Edmonds, J.A., Kainuma, M. et al. Climatic Change (2011) 109: 1. https://doi.org/10.1007/s10584-011-0157-y
Got it?

“You now say that if they are, it doesn’t matter”
No. I say that if target forcings were used to design scenarios, it wouldn’t matter.
You simply can’t use forcings as input. A global average power has no place in the discretised pde. There is nowhere to put it. You have to do everything by cell.
As to van Vuuren:
“an initial phase, developing a set of pathways for emissions, concentrations and radiative forcing”
You can seek to do that. But the emissions have to be converted to concentrations by some appropriate model. And the radiative forcing could be a target. But what you actually need as input is a set of gas concentrations in each of the grid cells. Here is the description of the treatment of GHGs in CAM 3.0:
“We have chosen to specify globally uniform surface concentrations of the four gases, rather than their surface fluxes.”
And there is certainly nothing there about inputting radiative forcings.

HAS

And just BTW if you think AR6 might be wandering off somewhere else:
“We use the baseline SSP scenarios as the starting point for a comprehensive mitigation analysis. To maximize the usefulness of our assessment for the community scenario process, we select the nominal RCP forcing levels of 2.6, 4.5, and 6.0 W/m2 in 2100 as the long-term climate targets for our mitigation scenarios.” “The Shared Socioeconomic Pathways and their energy, land use, and greenhouse gas emissions implications: An overview.” Riahi et al Global Environmental Change 42 (2017) 153–168

HAS

So you are dining out on a technical point of what goes into the model.
“What a climate model does is take the forcing to develop the inputs to a set of nonlinear partial differential equations.”
There, fixed it.

“There, fixed it.”
No. The climate model doesn’t develop the input to a set of pde’s. The climate model is the set of pdes. Someone else figures out the input, as concentration pathways.

John Bills

And the radiative transfer code follows the Schwarzschild equation with an emissivity of 1 all the way up?

HAS

Nick, you are of course technically correct. Let’s just go back to what I said before you attempted to divert the conversation:
“Actually what happens is that scenarios are created that will generate particular forcings, and these are then used by the modellers to create simulations that produces the target forcing. If you trace back the process the forcings are the input that drives the GCMs.”
It has taken you some time to get there, but you did in the end.

I find this section in the fifth assessment report from the IPCC kind of amusing:
“..Representative Concentration Pathways, are referred to as pathways in order to emphasize that they are not definitive scenarios, but rather internally consistent sets of time-dependent forcing projections that could potentially be realized with more than one underlying socioeconomic scenario. … They are representative in that they are one of several different scenarios, sampling the full range of published scenarios (including mitigation scenarios) at the time they were defined, that have similar RF and emissions characteristics. … The primary objective of these scenarios is to provide all the input variables necessary to run comprehensive climate models in order to reach a target RF …”
ref.: IPCC; WGI; AR5; Page 1045
https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter12_FINAL.pdf
This is kind of testing a modeling group’s ability to construct an acceptable model by asking the question:
Given that the expected radiative forcing in the year 2100 for this set of input variables, that is called RCP8.5, is 8.5 W/m2 – what is the radiative forcing calculated by your model in the year 2100 for this set of input variables?

Trebla

Geronimo: what about the “tuneable” feedback of water vapour and cloud cover?

Alan Tomalty

“Actually what happens is that scenarios are created that will generate particular forcings, and these are then used by the modellers to create simulations that produces the target forcing. If you trace back the process the forcings are the input that drives the GCMs.”
Sounds like circular reasoning to me.

Freeman Dyson recognized what few people understand. When you don’t start with a physical model and use parameters what you get is a fitted curve. As the sage said, when you do curve fitting you can fit anything. What you don’t get is a model that can PREDICT anything. A curve fit is useful only in making what appears to be a pattern in the data look cleaner and more impressive.
Curve fitting of any kind is never a substitute for a basic physical model. E=mc^2 is a model. It has made some very useful predictions, but is still lacking in the details, which people are working on. Einstein didn’t come up with a curve fitted to data; he came up with a physical conception, unthought of at the time, that worked to explain some of the data and make verifiable predictions.

Crispin in Waterloo but really in Ulaanbaatar

Logicalchemist
“Curve fitting of any kind is never a substitute for a basic physical model.”
Well, sort of, but ‘rules of thumb’ engineering has a long and productive history. Before the invention of the slide rule many, many engineering calculations were done with simple ‘close enough’ formulae requiring only pencil and paper.
I hold that anything that ‘works’ is useful. A lot in QM is like that: imperfect, poorly understood, but good enough for government work.

Indeed, Crispin. I had a professor in undergraduate geophysics who would begin discussions of new ideas with back of the napkin estimates using dimensional analysis and order of magnitude guesstimates. His point every time was about how close you could get with a friend over a beer; zillions of $$$ in additional funding were just to add a couple more significant figures. It made a big impact on me.

Phoenix44

That’s just not right. You can tune all sorts of things to make it fit, including your starting assumptions. And even then the forcings can be tuned – that’s why there is absolutely no agreement about the sensitivity to CO2.
And what do the equations do exactly? Are they fundamental physical processes? No.

Terry

One of the most sensible and concise posts on this topic of curve fitting I have seen for a while. Well done Willis.

michael hart

Yup. A modeller who cannot easily make their model produce a desired result at will, is only just learning their craft.

Hi Willis,
How do you know your input parameters are credible? For example, there are allegations that aerial aerosol data was fabricated to hindcast the global cooling period from ~1940 to ~1977.
Hadcrut (3 or 4?) is perhaps the least “adjusted” of the surface temperature data, but Tony Heller has pointed out serious problems with much of the ST data.
Can you provide data sources for your work?

ScarletMacaw

And whether or not the input parameters are credible is immaterial to Willis’ point.

Macaw – obviously true – no argument here on that point.

ALLAN MACRAE:
How could you say aerosols were “fabricated”?
If you had been in Pasadena, California
for the Rose Parade in the 1970s,
there were aerosols all over the place.
You were obviously sleeping
when the history of climate change
was presented in your public school.
Here is modern climate “science”:
— The history of climate change
in five talking points
that you must remember,
because there will be a test:
(1) Aerosols showed up in 1940,
(2) Aerosols killed natural climate change,
which was 4.5 billion years old,
and in a nursing home at the time,
(3) Aerosols took over as the Big Boss of Climate,
causing cooling from 1940 to 1977,
(4) In 1977 all the aerosols fell out of the air,
then CO2 took over as the Big Boss of Climate,
(5) CO2 caused warming from 1977
to the early 2000’s,
and then ‘fell asleep’
from the early 2000s to 2015,
when Mr. Hiatus took over.

WXcycles

Thank you Willis, amalgams of cycles and trends always seemed arbitrary and baseless to me.
At least when directly checking physical Milankovitch cycles against original data it makes physical sense to look for correlation.
As opposed to inventing processing noise and calling it … whatever floats your boat.

Macha

Using GISS data is the first mistake….UAH seems to have proven to be better.

The 1939 temps seem way too low compared with historic national records.

Chris

And when you are doing curve fitting – you really don’t know anything outside of the domain that was fitted.

Dr. S. Jeevananda Reddy

In the figure, though observed and predicted match, it is false logic, similar to correlating CO2 with population growth.
Temperature anomaly follows the 60-year cycle (varies between −0.3 and +0.3 °C) — the 60-year moving average shows the linear trend, which is a function of several factors — anthropogenic, land use, etc. Natural part can not be part of this trend.
Dr. S. Jeevananda Reddy

“Natural part can not be part of this trend”.
Why? Because the “natural” part has yet to be found?

Science:
If you don’t make mistakes, you’re doing it wrong.
If you don’t correct those mistakes, you’re really doing it wrong.
If you can’t accept that you’re mistaken, you’re not doing it at all.
~Anon~

Crispin in Waterloo but really in Ulaanbaatar

Roy, similarly, if your card game is Bridge, and you are not going down 1/3 of the time, you are under-bidding.

Kristi Silber

Bravo! Well said, Anon.
A veneer of knowledge hiding ignorance is far worse for the pursuit of truth than admitting, defining and exploring one’s ignorance and using it as an incentive and guide to seek knowledge.
………………………………………….
Certainty that one is right damages the capacity to absorb new, contradictory information, leading to permanent error.
(This does not prohibit one from believing there is a high probability of being right so that one can build on working hypotheses.)

Stevan Reddish

Willis,
As you seem to like analyzing data, perhaps you would be interested in a suggestion I have:
We know atmospheric moisture affects the rate of cooling at night. It is assumed that increased levels of CO2 slow the rate of nighttime cooling. The big question is how much the rate of cooling is affected by atmospheric CO2, and how any such effect varies with humidity. Any effect would be most detectable on calm, cloudless, low-humidity nights.
Have hourly (or shorter) records of temperature, windspeed, and humidity, correlated with percent of cloud cover, been kept over, say, the last 50 years in a place with very low humidity, such as Antarctica? Could such a record be analyzed for evidence of lower rates of nighttime cooling due to increased CO2 levels?
IF atmospheric CO2 has any significant effect on cooling rates, there should be detectable evidence.
I would be interested in any analysis you make along these lines.
SR

Got an idea?
Knock yourself out.
Do not dump the work and responsibility on someone else!
A) They are not in your employ.
B) If you do not know how to accomplish your idea, then you have zero validity to suggest someone else spend hours and effort.
C) If you do not know how to proceed, welcome to motivation and an excellent teaching moment. Learn what you need!

Stevan Reddish

ATheoK,
Perhaps I worded my post poorly. What you perceived as dumping work on someone, I intended as sharing an idea for research with someone who
A) Has expressed an interest in doing similar research and analysis.
B) Has previously requested people share ideas for research and analysis.
C)Has demonstrated skill at such research and analysis.
If I wanted to suggest an interesting book to an avid reader would you advise me to read it myself, as if I was assigning homework? That avid reader might be sorry you kept him from a good read.
SR

SR:
You dumped an assignment and responsibility.

“Stevan Reddish March 25, 2018 at 2:49 am
As you seem to like analyzing data, perhaps you would be interested in a suggestion I have:”

That is a very condescending statement, along with a direct request that Willis spend effort and time on your idea.

“Stevan Reddish March 25, 2018 at 2:49 am
We know”

Stated with the classic ‘Royal we’; a demeaning inclusive reference pretending subordinates are included.

“Stevan Reddish March 25, 2018 at 2:49 am
We know atmospheric moisture affects the rate of cooling at night.
1) It is assumed increased levels of CO2 slow the rate of nightime cooling
2) The big question is how much the rate of cooling is affected by atmospheric CO2,
3) and how any such effect varies with humidity.
4) Any effect would be most detectable on
4a) calm,
4b) cloudless,
4c) low humidity nights.
5) Have hourly (or shorter) records of
6) temperature,
7) windspeed,
8) humidity corollated with
9) percent of cloud cover been
10) kept over say, the last 50 years,
11) in a place with very low humidity,
such as Antarctica?
12) Could such a record be analyzed for evidence of lower rates of nightime cooling due to increased CO2 levels?
I would be interested in any analysis you make along these lines.”

A request for:
Specific location(s) detailed atmospheric CO2, high frequency measurements.
Specific location(s) detailed humidity high frequency measurements.
Preferably on calm cloudless low humidity nights.
Basically a request that requires laborious tasks within tasks within tasks
Hourly or shorter measurements for
Temperature
Windspeed
Humidity
all correlated with cloud cover percentages
over fifty years.
Your participation, SR?
An interest in any analysis…
While you expect Willis to locate sufficiently long, extremely detailed records for multiple datums. Datums, all collected in locations that expressly log cloud cover, humidity, wind speed, temperature at hourly or in shorter periods.
Plus that information should be over a fifty year time period; i.e. dating back to 1967…
If you believe it is that easy to accomplish, then run the analyses yourself. Should be easy.
Otherwise, you have no right to dump such loads of work on anybody you do not employ and pay very good salaries to.

Bloke down the pub

However bogus Willis’ two parameter model is, it’d be interesting to see what it predicts for the future. Doesn’t the law of averages mean that with all these models being churned out, sooner or later one of them will prove to be correct?

F. Leghorn

That is right up there with “an infinite number of monkeys”. Maybe.

Taylor Ponlman

Absolutely a case of ‘infinite number of monkeys’. I remember fondly Bob Newhart’s old routine about the guy whose job it was to check their typewriters for output. One wrote: “To be, or not to be. That is the gerzornenplat…”. Still makes me smile!

mynaturaldiary

‘Results from fitting mechanistic models have sometimes been disappointing because not enough attention has been given to discovering what is an appropriate model form. It is easy to collect data that never ‘place the postulated model in jeopardy’ and so it is common (e.g. in chemical engineering) to find different research groups each advocating a different model for the same phenomenon and each proffering data that ‘prove’ their claim.’ (Box and Draper)
In terms of empirical models, instead of identifying a single model based on statistical significance, such as correlation, best subsets regression shows a number of different models, as well as some statistics to help us compare those models to guide the selection process. Best subsets regression should result in an empirical model that conforms to Occam’s razor; i.e. when presented with competing hypothetical answers to a problem, we should select the one that makes the fewest assumptions, with the least overall error.
Here’s an example of it applied to the Central England Temperature record.
https://mynaturaldiary.wordpress.com/2018/03/03/whither-the-weather-2/
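For what it’s worth, best subsets regression is straightforward to sketch (a minimal illustration in Python with synthetic data; the variable names and coefficients are invented). It scores every subset of the candidate predictors and keeps the best model of each size, so you can apply Occam’s razor across them rather than stopping at the first fit with a high correlation:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y truly depends on two of five candidate predictors.
n = 200
Xall = rng.normal(size=(n, 5))
y = 1.5 * Xall[:, 0] - 2.0 * Xall[:, 3] + rng.normal(scale=0.5, size=n)

def adj_r2(X, y):
    """Adjusted R^2 of an OLS fit with intercept."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Best-subsets search: score every subset, keep the best of each size.
best = {}
for k in range(1, 6):
    for subset in itertools.combinations(range(5), k):
        score = adj_r2(Xall[:, subset], y)
        if k not in best or score > best[k][1]:
            best[k] = (subset, score)

for k, (subset, score) in sorted(best.items()):
    print(k, subset, round(score, 4))
```

In this synthetic case the two-predictor subset recovers the true pair, and adding the three spurious predictors barely moves the adjusted R^2 — exactly the comparison best subsets is meant to expose.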

The most powerful and most overlooked part of the scientific method is the second stage, the one after coming up with the concept. It’s the one that says “How are you going to achieve your goals? How are you going to measure that?”
For a meaningful result you need to make sure you can resolve variations at uncertainty levels less than the expected variations. For a hypothetical result you don’t really have to – you include this in your assumptions. The idea being that maybe sometime later you can reduce the uncertainty. For empirical work you do.
The problem is that academics who deal in theory never have to test this part of the scientific method. And this is the same with temperature measurements. The second problem is that as a society we are prone to believe hypotheticals and supposition rather than ‘ugly facts’.
The more you present temperature data and temperature anomalies the more you are convincing yourself that they are meaningful.
So anyone with a crazy curve fitting idea is on par with people who believe that they have a physical mechanism but have large uncertainty in the data.
You’re doing the same thing.

I thought it was:
0. Observe
1. Pose an hypothesis, multiple hypotheses, or a general theory
2. Suggest tests for it/them
3. Make one or more tests and urge others to join in to do more of the tests, & devise additional tests, or repeat tests to make sure you didn’t fumble while juggling the test tubes, gauges, positrons…
4. If test or tests seem to refute hypothesis, report exactly how test was run and how it came out and revise hypothesis, or revise tests and repeat test
Repeat.

If you want to put like that you can. The basic idea and the one often used in laboratory exercises is
Concept and Idea – Thinking
Methodology and Execution – Measuring
Analysis Results and Conclusions
Or in food terms
Mise en place
Cooking
Eating

Kristi Silber

That’s more or less how standard experimental science is done. Usually experimental replication is done by other researchers in case there is a systematic bias in the first researcher’s methods. Often replication is in the form of a variation of the first experiment that not only tests the first, but provides new information or increases the robustness of the results.
There are other ways of hypothesis testing. For example, one can analyze data that has been gathered in the past for different purposes, or gather data for a population at multiple time intervals without any manipulation. These kinds of studies are commonly used in human populations, which can’t always be experimented on. Typically these are multivariate studies that tease out different factors and their interactions and how they affect some parameter(s), and need large sample sizes. The recent article about the fish is a good example of analyzing data already available. (Many said the research was rubbish, but I don’t think they understood the statistical methods.)
Modeling is also a perfectly legitimate way of testing a hypothesis. For instance, one can use equations to represent known interactions, then vary a parameter to see the behavior of another parameter.
Or one can develop a hypothesis and test it by doing a meta-analysis of other research.
…In reality there are a lot of ways of testing hypotheses. Perhaps the most important part (aside from using statistics appropriately) is identifying sources of bias – systematic bias in the data, bias in the experimental procedure, human bias in the interpretations. Good scientists are always cognizant of potential for bias. This is just one reason I have more trust in mainstream climate scientists than many around here. Not all scientists are “good,” but it helps that it’s a very competitive field.
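Kristi’s modeling point above — represent known interactions with equations, then vary one parameter and watch the behavior of another — can be sketched in a few lines. What follows is a hypothetical toy, not any published model: a zero-dimensional energy-balance equation dT/dt = (F − λT)/C with purely illustrative parameter values.

```python
def equilibrium_warming(forcing, lam, heat_capacity=8.0, years=500, dt=0.1):
    """Integrate dT/dt = (forcing - lam*T) / heat_capacity with forward Euler."""
    t_anom = 0.0
    for _ in range(int(years / dt)):
        t_anom += dt * (forcing - lam * t_anom) / heat_capacity
    return t_anom

# Vary the feedback parameter lam; observe the equilibrium temperature response.
for lam in (0.8, 1.2, 1.6):   # W/m^2/K, illustrative values only
    t_eq = equilibrium_warming(forcing=3.7, lam=lam)
    print(f"lam = {lam:.1f} -> equilibrium warming ~ {t_eq:.2f} K")
```

The analytic equilibrium is forcing/lam, so the “test” of the hypothesis here is simply whether the varied parameter moves the output the way the theory predicts.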

Kristi Silber

“For a meaningful result you need to make sure you can resolve variations at uncertainty levels less than the expected variations.”
Is this not why they do so many iterations of a model? Also, the mean of the predicted variables over all the models is a better indicator than any single model. I have no idea how the uncertainty is mathematically handled in these cases; I’m just going by intuition, which is sometimes terribly misleading and sometimes a very handy tool.
I wish I knew the post, but there was recently a graphic posted here that shows the means and SD for a bunch of models predicting temp change or something, and it was quite remarkable how many showed overlap.
I’m aware of the uncertainties in modeling, and the modelers certainly are. Everyone knows that clouds and aerosols are sources of uncertainty, among others. Yet even though the groups doing these are apparently not in tight communication about the ways they create their models – not sharing their methods as they build, tune and test them – they have some areas of remarkable agreement. Different groups have different interests, and build their models accordingly.
I just find it very hard to imagine that all these independent groups have been corrupted so that they repeat the same errors. If they were, you’d think they’d do a better job so there wouldn’t be so much uncertainty.
(Tuning can decrease the uncertainty dramatically, but there’s a risk of overtuning and making the model unstable or unrealistic. Good paper if you want to learn about tuning:
https://journals.ametsoc.org/doi/full/10.1175/BAMS-D-15-00135.1)

You are making the academic’s mistake of slipping into hypothesis. A real measurement requires an appreciation of signal to noise. Uncertainty is not reduced by multiple samples unless they are i.i.d., and you won’t know whether they are because your signal to noise is below the threshold required to decide that.
As Rutherford would say: if you have to use statistics you should have done a better experiment.
Design to meet expectations. If not then by all means run models but they have little relevance to real world actions.
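The i.i.d. caveat in the comment above is easy to demonstrate numerically. This is a generic statistics sketch with illustrative numbers, not climate data: averaging 100 samples shrinks the spread of the mean only when the samples are independent; give them a strong shared (systematic) component and the extra samples buy almost nothing.

```python
import random
import statistics

random.seed(42)

def mean_of_samples(n, rho):
    """Mean of n unit-variance samples sharing a common component of weight rho."""
    common = random.gauss(0, 1)                    # shared (systematic) part
    draws = [rho * common + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
             for _ in range(n)]
    return statistics.fmean(draws)

def spread_of_mean(n, rho, trials=4000):
    """Empirical standard deviation of the n-sample mean over many trials."""
    return statistics.stdev(mean_of_samples(n, rho) for _ in range(trials))

for rho in (0.0, 0.9):
    s1, s100 = spread_of_mean(1, rho), spread_of_mean(100, rho)
    print(f"rho={rho}: sd(mean of 1) = {s1:.2f}, sd(mean of 100) = {s100:.2f}")
```

With independent samples (rho=0) the spread falls by roughly a factor of 10, as 1/sqrt(N) predicts; with rho=0.9 it barely moves, because the shared component never averages out.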

It is very evident that the ENSO has a temporary impact on global temperatures.
There is also a 60 year cycle of some kind in global temperatures.
I would rather try to understand what it is happening with the climate and try to understand what is driving that or how it works, rather than throw up my hands and say “it can’t be done because of curve fitting”.
Humans have advanced because we have tried to develop an understanding of the environment around us. Sometimes it can’t be done and sometimes an incorrect understanding is developed, but much of the time, we figure it out.

Nobody is saying that – the point is that curve fitting of a single data series, without any theoretical rationale behind the tunable values, doesn’t lend plausibility to the model and cannot be considered proof of its validity. The model must cross-validate – be able to predict across a number of data sets.
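A minimal illustration of the cross-validation point, using synthetic data (the sine-plus-noise “truth” is an arbitrary stand-in, not a climate series): a degree-9 polynomial fits the training series essentially perfectly, then the fit quality collapses on an independent realisation of the same process.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
truth = np.sin(2 * np.pi * x)
y_train = truth + rng.normal(0, 0.2, x.size)   # the series the model is tuned on
y_test = truth + rng.normal(0, 0.2, x.size)    # an independent realisation

results = {}
for degree in (3, 9):
    coeffs = np.polyfit(x, y_train, degree)
    fit = np.polyval(coeffs, x)
    rmse_in = float(np.sqrt(np.mean((fit - y_train) ** 2)))
    rmse_out = float(np.sqrt(np.mean((fit - y_test) ** 2)))
    results[degree] = (rmse_in, rmse_out)
    print(f"degree {degree}: in-sample RMSE {rmse_in:.3f}, "
          f"out-of-sample RMSE {rmse_out:.3f}")
```

The degree-9 fit interpolates the training noise exactly (near-zero in-sample error), which is precisely why its out-of-sample error is the worst on offer: the “excellent data fit” was fitting noise.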

Bill is spot on. Understanding how the climate varies naturally has to be accomplished before trying to explain why it has varied. Since the 1990’s climate “science” has operated in a bass-akwards manner… With the “why” coming before the “how.”

Kristi Silber

I’m curious – how do you know that climate scientists aren’t aware of the way climate varies naturally, at least enough to be able to create models? There are some things that are understood better than they can be modeled simply because of the resolution available with today’s computing capacity. No one denies that the projections are estimates, but when the estimates pretty much agree even at different values of the unknown parameters it seems that should improve your confidence.
Some natural processes can’t be explicitly included in the model because they follow no regular pattern – volcanoes, for instance. There has to be a stochastic variable included to accommodate these things, I would imagine.
It seems to me that people view models through the lens of their professions. Some professions call for a great deal of precision and accuracy, referring directly to material parameters (engineering). Some are highly theoretical, mathematical (physics). Some are familiar with complex, dynamic, interactive, stochastic systems (climate and ecosystems). Some are familiar with meteorology, which relies heavily on modeling but on different scales (sometimes I wonder if this background makes it harder to imagine how anyone could predict climate decades from now, since weather models can’t predict more than 2 weeks in the future). There are models to reconstruct the geological past, the fossil record, genetic relationships, neural networks…there are thousands of ways models are used. So why is it so hard to imagine climate models are a valid tool, when used responsibly and honestly, with everything one can do to control for bias? There are ways to do so.
Over 20 years ago I spent three summers camping in the Adirondacks gathering data for a model of forest dynamics. The model was much simpler, but of the same general type (dynamic, predictive, stochastic) as those used for climate. Maybe this background makes me more inclined to believe climate models are valuable.

How is your reply even remotely related to my comment?

MarkW

Kristi, among other things, we know that the climate scientists don’t understand natural variation, because they have stated as much.
Truth is nobody knows why the earth warmed up for the Medieval Warm Period, or why it cooled down for the Little Ice Age. There are theories, but none have been proven.
Ditto, we don’t know why the earth warmed up during the 1930’s, nor why it cooled during the 1970’s.
We’ve been told that we don’t need to know why the previous warmings and coolings happened, because the models tell us that the current warming is due to CO2.
Our response has been that if you can’t prove that the causes of the previous warmings are not operating currently, you can’t claim to have proven that the current warming must be due to CO2.

Good comments, thank you Bill and David.
The climate models cited by the IPCC and its minions tend to “run hot”, and also fail to hindcast accurately (unless forced to do so by falsified inputs). In formal engineering terminology, these models are “crap”.

Chimp

CACA is a crock.

Phoenix44

No, we are saying it cannot be done WITH curve fitting.

The validity of curve fitting depends on the curve, and the logic behind it.
For example, here is a simple relationship between Nino34 SST’s and the volcanic aerosol index that predicts global temperatures 4 months in the future. [Others including Bill Illis have developed earlier and better relationships with a few more input parameters.]
https://www.facebook.com/photo.php?fbid=1618235531587336&set=a.1012901982120697.1073741826.100002027142240&type=3&theater
Note the blue line reflects Nino34 SST’s, which show NO NET WARMING SINCE ~1982 (possibly earlier).
Developing a nonsense curve as Willis has done does not disprove all curve-fitting exercises, just as the crash of one deliberately-sabotaged car does not prove that all cars will crash.
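For readers who want to try this style of analysis, here is a schematic of the lagged-regression approach Allan describes — with purely synthetic series standing in for the Nino3.4, aerosol, and temperature data, and made-up coefficients (0.12 and −1.5) chosen only so the recovery can be checked. With real data you would load the NOAA, UAH, and Sato files linked in Allan’s follow-up comment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 480                                    # 40 years of monthly data (synthetic)
# Smoothed noise as a stand-in for a Nino3.4-like index:
nino34 = np.convolve(rng.normal(0, 1, n), np.ones(9) / 9, mode="same")
# A single decaying "eruption" as a stand-in for the aerosol index:
aerosol = np.zeros(n)
aerosol[120:160] = np.linspace(0.15, 0.0, 40)

LAG = 4
# Build the synthetic target from an assumed "true" relationship:
temp = 0.12 * np.roll(nino34, LAG) - 1.5 * aerosol + rng.normal(0, 0.05, n)

# Fit temp(t) ~ nino34(t - LAG) + aerosol(t) + const by ordinary least squares.
X = np.column_stack([np.roll(nino34, LAG)[LAG:], aerosol[LAG:],
                     np.ones(n - LAG)])
coef, *_ = np.linalg.lstsq(X, temp[LAG:], rcond=None)
print("recovered coefficients (nino34 lag-4, aerosol, const):", np.round(coef, 3))
```

The regression recovers the assumed coefficients because the data were built that way; whether a real lagged relationship holds is exactly the kind of claim that has to survive out-of-sample testing.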

Hi Willis,
Here is some information on my aforementioned plot, and on the more detailed work by Bill Illis. I lagged UAH LT global temperature by 4 months to show coherence in my plot, whereas Bill lagged tropical temperature by 3 months in his plot.
The mechanism is tropical Pacific SST’s increase tropical humidity and tropical atmospheric temperatures 3 months later, and global temperatures one month thereafter.
John Christy tells me he wrote something similar in his 1994 paper with Richard McNider. Thought I had something new but – no.
Actual LT temperatures are running about 0.2C higher than my prediction for Feb2018, but I think they should drop from +0.2C to about 0.0C soon.
Regards, Allan
https://wattsupwiththat.com/2017/09/20/from-the-the-stupid-it-burns-department-science-denial-not-limited-to-political-right/comment-page-1/#comment-2616345
Re data:
Nino34
http://www.cpc.ncep.noaa.gov/data/indices/sstoi.indices
UAH LT
http://www.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Sato Aerosol Optical Depth Volcanic Index:
https://data.giss.nasa.gov/modelforce/strataer/tau.line_2012.12.txt

co2islife

If Society Can’t Trust Science, What Can They Trust? Climate Alarmist is Playing San Francisco Judge as a Complete Fool
Dr. Myles Allen must think that the San Francisco Judge is a complete fool. I just finished a post refuting many of his claims, but one example needed to be singled out. In his presentation, Dr. Myles Allen replaced the poster child Mt. Kilimanjaro, which was exposed as a fraud in the Climategate emails, with …
https://co2islife.wordpress.com/2018/03/25/climate-alarmist-is-playing-san-francisco-judge-as-a-complete-fool/

“Society” has no mind. It is a hand-wavy collection of individuals interacting, and only each individual has a mind. Those individuals can communicate (& miscommunicate), can agree in part & disagree.
Individuals, OTOH, are ignorant, and collections of individuals are ignorant. Information is expensive. It requires effort. The scientific method is a process for optimizing that effort, getting the most valid information at the least cost.
Because we are ignorant, we also use heuristics. When you mention a San Francisco judge, the heuristic that he or she is most likely either a fool or corrupt happens to work well over the last several decades…but we always look forward to seeing more exceptions.

Kristi Silber

“The judge needs to ask Dr. Myles Allen how does a glacier melt due to man-made warming when there is no warming?”
Drought. Normal melting, little precipitation to replace the summer melt. Also glaciers can go straight to water vapor when the air is dry.
To attack someone for fraud without considering the alternatives (such as the fact that you might be misunderstanding something or that he’s making a simple error) is a sign of loss of objectivity.
I started reading some of your rebuttal. It’s full of unsubstantiated assertions (and ones I believe to be erroneous, but that’s to be expected!). It also seems to not take into account that this is, after all, a document for the plaintiff, and cherry-picking is to be expected. It’s called a trial, not playing the judge for a fool; this isn’t a good measure of what society at large should expect from climate science.
Your attack goes a long way toward illuminating the reasons so many don’t trust science. It is biased and based on unjustified assumptions. Have you read the NOAA research into the effects of weather station siting that resulted from Anthony’s data? Is there some particular problem with it that makes it irrelevant?

co2islife

Yes, but how do all those causes tie to CO2? Sublimation isn’t caused by CO2. It isn’t me who is jumping to conclusions; it is the one claiming man-made CO2 warming is the cause when there is no documented warming. That is either incompetence or fraud; neither is acceptable.

You are in gross error. There is in fact documented warming. Your comment is the one that is unacceptable.

co2islife

Where is the documented warming in Glacier National Park? I provided the data I could find. Do you have some data showing warming in Glacier National Park that isn’t due to the Urban Heat Island Effect?

co2islife

Mr. Watts, the comment was in regards to Dr. Allen claiming the Glacier National Park glacier was melting due to man-made CO2. The graph I provided from the USGS shows a gradual downtrend in temperatures since 1994. Do you have data demonstrating otherwise? The same issue exists with Mt. Kilimanjaro’s glacier. There has been no warming at the top of the mountain. The leaked Climategate emails demonstrate that the “experts” are aware of that fact, yet did nothing to dispel it, and even worked to promote it. If you have data demonstrating warming in Glacier National Park, or at the top of Mt. Kilimanjaro, I’ll gladly edit my post.

OK, in the interface we see comments for approval outside of upthread context, so it looked as if you were saying there was no observed warming on a global scale.

co2islife

Sorry about that, I confused the issue, and certainly didn’t intend to disrupt the conversation. I love your site, and certainly didn’t intend to create confusion. The satellite data clearly shows slight warming tightly tied to ocean cycles. I’ve repeatedly stated that to understand the climate you must understand the oceans, and CO2 doesn’t warm the oceans. MODTRAN demonstrates that the CO2 signature isn’t even measurable until you reach 3km in the atmosphere. We are in total agreement, and I apologize for the confusion.

TA

From the article: “So, you folks who are all on about how this particular pair of “solar cycles”, or this planetary cycle plus the spring tidal period of Jupiter, or this group of forcings miraculously emulates the historical temperature”
And considering that the historical temperature record is bogus itself. . .

TA

Fit this temperature profile: [image]

If the tunable parameters were determined on the basis of one’s theory, rather than data points, and then provided a good fit, then one might have something. Regardless, any model must be cross-validated against a different set of data. That is when the “excellent data fit” usually disappears.

“One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”
This is true for theoretical and experimental physics. Unfortunately it’s not for atmospheric physics. You can have both physical picture and mathematical formalism but not a model that fits observations perfectly without free parameters. The culprit is chaos. Even a completely deterministic system can still be unpredictable in the long term.
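The “deterministic yet unpredictable” point above can be seen in the textbook logistic map — a standard chaos demonstration, not an atmosphere model: two orbits started one part in a billion apart become completely decorrelated within a few dozen iterations, even though every step is exact arithmetic.

```python
def logistic_orbit(x0, r=3.9, steps=60):
    """Iterate the fully deterministic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two initial conditions differing by one part in a billion:
a = logistic_orbit(0.400000000)
b = logistic_orbit(0.400000001)
for n in (0, 20, 40, 60):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.6f}")
```

The tiny initial difference grows roughly exponentially, which is why a chaotic system can be perfectly deterministic and still hopeless to predict far ahead from imperfectly known initial conditions.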

Jim Gorman

Here is one reason I do not believe GCMs. If climate is chaotic, then the more time elapses, the greater the chance that a computer projection will fail. Here is the kicker, this applies to both plus AND minus time. When I see a GCM match history regardless of how far back you go, it has obviously been tuned to give a false impression! If the projection to the past is false, then the projection to the future must also be considered false!

Kristi Silber

” When I see a GCM match history regardless of how far back you go, it has obviously been tuned to give a false impression! ”
Why is this obvious? How close is the match? I think it’s very risky to say it is “obvious” that someone has done something wrong without understanding why they did what they did. This is a very consistent pattern in climate “skepticism.”
(I’m by no means certain about the following; hopefully someone with expertise will come along and correct my errors, but this is how I understand it.)
There is a chaotic ingredient in weather, but it seems to me that climate is a little different. A climate model is not trying to predict weather; it predicts trends in averages. So, it’s not going to predict that a hurricane will happen in 2030, but it may predict that hurricanes will get more intense on average over time. This makes the chaos of weather less of a problem. The unknown, unpredictable factors like volcanoes can be represented by a stochastic parameter: one that behaves randomly, but following a normal distribution.
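The weather-versus-climate distinction in the paragraph above can be sketched numerically (synthetic numbers, not model output): each individual run is dominated by noise, yet the underlying trend — an arbitrary 0.02 per year here — is recovered from the ensemble average.

```python
import random
import statistics

random.seed(7)

YEARS, RUNS, TREND = 50, 30, 0.02
# Each "run" is the same linear trend buried in independent noise:
ensemble = [[TREND * yr + random.gauss(0, 0.5) for yr in range(YEARS)]
            for _ in range(RUNS)]
# Average across runs at each year:
mean_series = [statistics.fmean(run[yr] for run in ensemble)
               for yr in range(YEARS)]

# Crude trend estimate: (last-decade mean - first-decade mean) / elapsed years.
est = (statistics.fmean(mean_series[-10:]) -
       statistics.fmean(mean_series[:10])) / (YEARS - 10)
print(f"recovered trend ~ {est:.3f} per year (true value {TREND})")
```

Whether real climate noise averages out this cleanly is of course the contested question; this only shows what the “trends in averages” argument asserts, under the assumption of independent noise around a fixed trend.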

Jim Gorman

Because every time it is run it ends up with the same historical trend. This would be impossible if the “program” was following a truly random and chaotic pattern. Consequently, the modelers are attempting to fool people into thinking their outputs are accurate. They are not. At best, they are projections attached to predetermined information.
The modelers have taken the criticism that their programs do not project backwards accurately and changed them so their output is predetermined. This may look good but it is also inaccurate. The real criticism is that our present knowledge of the atmosphere is inadequate and consequently we cannot design software that addresses the issue. They have unwittingly confirmed the criticism by developing a “fake” way of dealing with it. It also confirms the criticism that future projections are probably inaccurate also.

Kermit Johnson

I’m glad to see that there is some attention being paid to climate models and curve-fitting. I have always said that prior to any scientist being allowed to publish any climate model, they first should spend a few years making models of something like the price of corn – or wheat – or cattle. Along with making the models they should be required to actually bet their own money on the results of their models. This is what is so great about modeling markets – there is no “committee” that decides whether they are right or wrong. A quick look at their statements is all the feedback they need.
Isn’t this also why we are in such a mess economically? We now have academics making models – and expecting the markets to behave the way their models say they should.
I’m surprised here, however, that there isn’t more discussion about sensitivity factors in these models. Think about it – each model has its own sensitivity factor.

JRF in Pensacola

Willis, a very interesting article. Some clarification, please, for the Great Unwashed.
Are you saying that (some, many, most, all) climate models do not have good physical underpinnings (Fermi’s “clear physical picture of the process”) and are simply making associations rather than correlations?
Could you give an example(s), if any, of models that have a good physical foundation (even if their output is questionable perhaps because of an incorrect input variable or variables)?
I know that Joe Bastardi over at Weatherbell will comment about the physics of the GFS compared to the European. Is that in the same vein as your article?
Thanks.

Yogi Bear

“The problem is that they are using as an input something which is part of the target. The NAO and the PDO are each a part of what makes up the global temperature average. As a result, it is circular to use them as an input.”
It could be circular to not regard the NAO as input if it is affected by solar variability regardless of the global mean surface temperature.
I think that you could do a better post on Scafetta’s mathurbations, and list the components for each of his beat period products, so it’s clearer to all how physically ridiculous they are. His root periods are the orbital period of Jupiter, and half of the synodic period of Jupiter and Saturn. The beat period of those two is ~61 yrs. He then takes the mean of those two root periods, and with that mean, makes more beat periods against the two root periods. Do the maths on that and you’ll see that his 115 yr beat should actually be 112 yrs.

Richard M

In reality neither the PDO nor the NAO indices are temperatures. Willis is wrong. They are NOT “part of what makes up the global temperature average”. And, even if they were, it would NOT mean they were unimportant to what drives the global temperature.

Chimp

Willis,
Thanks to the very cold Humboldt Current, the central, populous coast of Chile is very windy. Hence the frequent destructive fires in Valparaiso and in forests inland.
The California Current is chilly, but the Humboldt comes straight from Antarctica. It’s colder than the Labrador Current. It carries penguins to the Galapagos Islands on the Equator. It makes the Atacama Desert and southern Peru the driest place on earth, rivaled by Namibia, which endures another cold western boundary current from Antarctica.

Yogi Bear

“As a result, if you use the PDO or the NAO as inputs, you are using parts of the very thing that you are trying to model … and that’s not allowed.”
If there is a solar influence on the NAO, it would be a proxy for an input. For example an increase in negative NAO during solar minima.

Don K

Please note that I don’t even have to use data. I can fit the historical temperature record with nothing but sine waves

Of course you can. That’s Fourier (right?), and you can get (almost) any curve you want with enough waveforms. There’s a somewhat comprehensible discussion of that in Chapter 50 of the Feynman Physics Lectures. Of course you need phenomena that are periodic (tides, sunspot cycles, …) and they need to be at least roughly sine-cosine wavish. And they need to actually be applicable. And you’ll always get an answer even if, as appears to often/usually be the case, the curve you’re looking at isn’t driven by tidy cycles. That’s likely why those who apply Fourier analysis to financial markets generally do not end up rich.
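Don’s point is easy to verify: least-squares fitting of enough sine/cosine terms will “explain” even a pure random walk, which by construction contains no cycles at all. A short sketch with arbitrary choices (128 points, 20 harmonics):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(128)
series = np.cumsum(rng.normal(0, 1, t.size))      # a random walk: no cycles at all

# Least-squares fit of K sine/cosine pairs (plus a constant) to the walk.
K = 20
cols = [np.ones_like(t, dtype=float)]
for k in range(1, K + 1):
    cols += [np.sin(2 * np.pi * k * t / t.size),
             np.cos(2 * np.pi * k * t / t.size)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, series, rcond=None)
fit = X @ coef
r2 = 1 - np.var(series - fit) / np.var(series)
print(f"{2 * K + 1} sine/cosine terms explain R^2 = {r2:.3f} of a random walk")
```

The high R² is in-sample only; the fitted “cycles” have zero predictive power for the next segment of the walk — which is the article’s objection to cycle-fitting in a nutshell.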

Don K

Willis: Afterthought — I assume that your objection is to numerology, not to applying reasonable cyclic adjustments where they seem appropriate. — Seasonal corrections for example?

Juan Slayton

For any who, like me, will want to read the rest of Dyson’s remarks, here is the link:
http://lilith.fisica.ufmg.br/dsoares/fdyson.htm

Thank you, Juan!

phil salmon

“One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”
Willis, would you apply this condition to attempts to explain a biological phenomenon? Such as, for instance, why the European eel (Anguilla anguilla) persists in crossing the Atlantic to the Sargasso Sea, in the western North Atlantic, to spawn?

Exactly. Incommensurability – great word, thanks!
Dyson and Fermi can afford the “luxury” of requiring precise mathematical and physically mechanistic proof when looking inside the atom or at the big bang.
However with eels and love, a level of complexity is reached where such strict requirements can’t be applied.
My point is that climate has such a level of complexity. Not only does it involve chaotic nonlinear pattern-formation processes, it also is significantly affected by living organisms. Ever since the great oxygenation event a little over 2 billion years ago, living organisms have had a massive effect on climate. Thus for example the attempts to model CO2 effects on climate are flawed if they don’t include the greening effect of CO2 causing an enhancement of transpiration and the hydrological cycle in arid and marginal regions.
But I agree that curve fitting to astrophysical processes is invalidated as you point out by the ease of fitting data to even a limited “toolkit” of proposed oscillating forcers. Also this approach makes another fundamental mistake in assuming the climate to be essentially passive such that all its ups and downs are driven by some external astrophysical agent. This is the same error as alarmist CAGW which requires a completely passive climate and all warming or cooling is imposed by atmospheric and always human-contributed gasses or particles. Both these are wrong. The climate is active, not passive and changes by itself, in and of itself. Optionally with a little help from outside.

Kristi Silber

Ptolemy2
“Thus for example the attempts to model CO2 effects on climate are flawed if they don’t include the greening effect of CO2 causing an enhancement of transpiration and the hydrological cycle in arid and marginal regions.”
I’m pretty sure most new models do this. Some may just include a parameter for the vegetative sink for CO2, but I think there are also parameters representing vegetation and land use.
“This is the same error as alarmist CAGW which requires a completely passive climate and all warming or cooling is imposed by atmospheric and always human-contributed gasses or particles. Both these are wrong. The climate is active, not passive and changes by itself, in and of itself. Optionally with a little help from outside.”
I always wonder who people mean by “alarmist CAGW.” Is this the greenies? The media? Or do you mean climate scientists? At any rate, I doubt there are many people so stupid that they believe all variation in weather is due to human-induced causes. Why do you say such things? Do you actually believe that? Why? Do you realize that just by saying something or reading it again and again it will seem to be true, even if you weren’t sure to begin with? Obviously there aren’t a lot of people you have to convince around here. I’m honestly puzzled why people say the same things again and again – not just any things; the comments have to be DEROGATORY.

Kristi
It was not my intention to be derogatory. I think “alarmist CAGW” is a reasonable description of the dominant body of opinion in the media, politics and academia, that recent warming is anthropogenic and a cause for alarm. There is nothing wrong with alarmism if there is a real threat on the horizon. Churchill was right to be alarmist about political and military developments in Germany in the 1930’s. People are right to be alarmist about antibiotic resistance.
You are right that progress is not helped by derogatory language and labelling. The climate research community is moving toward acceptance for instance that climate is not “passive”, that oceanic circulation shifts can cause 10, 100 – 1000 year timescale climate changes without necessarily outside forcing. This knowledge was of course always there in the oceanography literature, it’s just a question of connections between disciplines.
However it has to be said that a lot of communications about climate research do appear to assume a passive climate, albeit not explicitly. Did climate warm? – has to be CO2 or maybe soot; did it cool? – has to be particulate pollution or a volcano or two (at least the latter is not anthropogenic).
For instance a Canadian academic a few years back published a paper asserting that 99% of recent warming was anthropogenic, “natural” processes could be restricted to no more than 1%. Such a statement had clear political implications, which was evidently intentional. However it cannot have been made on the basis of understanding of “natural” ocean driven multidecadal climate variability.
But yes – cutting out inflammatory language is perhaps the single thing that would most advance the climate debate and the research process. All genuine attempts to advance understanding should be respected, from whichever direction they come.

MarkW

Kristi, just saying “I’m pretty sure most new climate models do this”, doesn’t cut it.
These models have a long history of excluding critical factors.

ptolemy2 is phil salmon btw. Forgot that I was still anonymous on this pc.

Brett Keane

Kristi, do not play the sympathy card. Climatism has been and is a practitioner of abuse and mendacity. For reasons not really concerned with climate, which is just a tool. CO2 and ridiculous CMIPs, now disavowed by IPCC even, but still used for meaningless scenarios and projections. Not science but politics, so leave off wasting our time please. Your steed has expired. Brett

Kristi Silber

Phil,
Thanks for the comment.
” I think “alarmist CAGW” is a reasonable description of the dominant body of opinion in the media, politics and academia, that recent warming is anthropogenic and a cause for alarm.
I think this is widely seen as derogatory. What is an alarmist, anyway? Someone who is concerned about the evidence that things are changing, and the changes to come? I can see talking about some in the media as alarmist, saying the latest storm is a sign of the coming devastation, but I think it’s destructive when the term is applied broadly to the scientific community. Then there is the “catastrophic” part. What does this mean, exactly? Seems like it’s intentionally exaggerated. What about all those who are simply concerned by the potential for major disruption to human and biological systems? Much of my concern is based on the uncertainty of what will happen through destabilization of communities that have adapted together to their environment. This is what I know most about, so it’s what I think about, but if it’s ever addressed here it’s in a derisive way.
There is far too much knee-jerk dismissal of science that isn’t understood by those dismissing it. There is very little healthy skepticism among many who comment on WUWT; instead, denial is fostered. It’s reached the point that research in other fields that have nothing to do with climate modeling is dismissed just because it talks about a model, even if it’s just a multivariate regression.
The climate debate has become one of politics vs. science. I’d go so far as to say, the “skeptic” movement is anti-science. It promotes more misunderstanding than understanding.
“For instance a Canadian academic a few years back published a paper asserting that 99% of recent warming was anthropogenic, “natural” processes could be restricted to no more than 1%.”
This is obviously foolish. I’m the last to argue there aren’t fools out there. Al Gore is one.
” The climate research community is moving toward acceptance for instance that climate is not “passive”,”
I don’t know what you mean by this.
Please don’t be offended by what I say here. These are my perceptions. It matters to me much less what our carbon policy is than that the scientific community has such widespread public distrust.
Regards,
Kristi

Kristi Silber

Brett – sympathy card? You think I want your sympathy? What an absurd idea!
Yep, science has been crushed under the weight of politics. Maybe if hard-core deniers like yourself were less influenced by politics they might actually consider the science without bias.

paqyfelyc

@Kristi Silber
March 28, 2018 at 12:33 am
“what is an alarmist, anyway? Someone who is concerned about the evidence that things are changing, and the changes to come? ”
Not just that, but someone who denies that things were changing before and will keep changing anyway (implying that all changes are man’s doing, and, hence, what man did, he can stop doing and undo), AND that these changes are not just bad, but DOOM (implying that we cannot balance the good and the bad, we just must go backward to the previous era, before man CO2-sinned).
” Much of my concern is based on the uncertainty of what will happen through destabilization of communities that have adapted together to their environment.”
Then you are a creationist. Adaptation is not a state; it is a process. Living communities cannot be destabilized, because they are not stable in the first place. Most species exist only because change happens, and they themselves prompt change that will destroy (or at least displace or put in dormancy) them.
The poster story of nature conservation failure is how man almost destroyed some redwoods (Sequoia sempervirens) by trying to protect them from fire. Trouble is, fire destroys their competitors more than it hurts redwoods, so fire shouldn’t be suppressed. Likewise, man tried to protect rare marsh species by wetland conservation. Complete failure, as these species depended on an ecological succession of drier and wetter phases.
We don’t make this mistake anymore. You still do. Change not only happens; it is necessary for biodiversity.
“The climate debate has become one of politics vs. science. I’d go so far as to say, the “skeptic” movement is anti-science. It promotes more misunderstanding then understanding.”
Oh. Well, just look at this: [image]
This is OFFICIAL IPCC figure.
It presents “observations” Vs “model results” natural forcing. There are NO observations of natural forcing, and no way to observe this, {and the very notion of “natural forcing” is just … WTF???… Just think about it. Nature is forcing itself? } .
It presents “observations” Vs “model results” anthropogenic forcing. Likewise, there are NO observations of anthropogenic forcing and no way to observe it. Besides, there is just no reason for anthropogenic forcing to be so jerky, it should be a nice smooth curve, copy-pasted from CO2 concentration at MLO.
So is the state of “climate science”: calls “observation” things it didn’t observes and have no way to observe.
So is your state: You believe this is science, and call anti-science anyone who demands science, that is, proper data not made out of improper modeling.
Who promotes more misunderstanding then understanding? Rhetorical question. You, obviously.
“Please don’t be offended by what I say here.”
Your tone is very polite. Trouble is, such nonsensical belief, and calling pseudoscience “science”, is offensive all by itself to scientific minds like those of most denizens of WUWT.
“It matters to me much less what our carbon policy is than that the scientific community has such widespread public distrust.”
Well, I am pissed off that so many people believe in bullshit like organic food, astrology, electromagnetic hypersensitivity, homeopathy, and the supposed dangers of GMOs, palm oil and vaccines, etc., despite the scientific evidence (BTW, such anti-science beliefs correlate very well with CAGW belief; does that surprise you? Not me). Now, I also understand why they do, and I recognize their right to act according to their beliefs. I just don’t recognize their right to have their beliefs turned into law. You see a pattern here?
Remember, Feynman said, “Science is the belief in the ignorance of experts.” You know that Newton was wrong, and that Einstein was wrong and didn’t trust himself (which made him SO scientific, after all).
I don’t trust any man nor any theory, unless and insofar as it produces some actually working stuff: planes, engines, solar panels and the like. I trust the technicians who say they made this stuff by using a theory, and if it works, well, there is truth enough in the theory. No such thing in “climate science”.
BUT: I believe in science, which is a process. I don’t trust the “scientific community”. Moreover, I cannot trust a community that didn’t kick out Michael Mann the way the medical community kicked out Jacques Benveniste (just another example of a man doing both very good science and very bad, in contrast to M. Mann, who never did any good science).
The scientific community deserves such widespread public distrust. When it starts being trustworthy, then you can blame the public. Not before. Won’t happen, unfortunately.

astroclimateconnection

I have watched people at WUWT convincingly show that using the thickness of tree rings of certain trees as a proxy for local atmospheric temperature is scientifically false.
They are correct in pointing out that for some of the trees used as temperature proxies, the thickness of their tree rings is not solely dependent upon atmospheric temperature. This conclusion is based on the common-sense idea that the annual growth rate of many trees can be influenced by factors such as soil moisture, cumulative hours of exposure to sunshine (which is affected by cloudiness), maximum or minimum daily temperatures (as opposed to mean daily temperatures), total rainfall, etc.
They are also correct in pointing out that some of the scientists using tree rings as long-term temperature proxies have used dubious methods to amalgamate and process their data (e.g. hide-the-decline Michael Mann).
However, these same people at WUWT have then made the sweeping statement that ALL use of tree rings as temperature proxies is suspect. Anyone with any idea of how tree-ring temperature proxies work knows that this last leap in logic is completely false. It is easy to show that there are some circumstances where the tree-ring widths of specifically selected species do in fact primarily depend upon nearby mean sea surface temperatures. That this is indeed the case can be shown by comparing modern instrumental temperature records to measured tree-ring widths.
Unfortunately, these “experts” have convinced the majority of the mob that the use of tree-ring widths as temperature proxies is scientific anathema. They have been so successful at doing this that it has now become virtually impossible to talk about this diagnostic in a sensible manner without being shouted down.
The same is now becoming true of using curve fitting as a valid diagnostic tool. Of course, there are many ways to use curve fitting that can fool the user into believing that they have found some magical window that allows them to clearly see the underlying physical principles of a natural phenomenon. This is particularly true when curve fitting is used as a diagnostic in climate science because of the inherently complex nature of the physics of the climate system. Many of the systems that are under study are inter-dependent upon other parts of the climate system and so it isn’t long before a hypothesis or model has so many free parameters that it could just about fit any physical system through a simple adjustment of the multitude of fitting parameters.
However, it is logically false to claim that, because these dangers exist, it is virtually pointless to use curve-fitting methods to try and understand the underlying climate physics.
For example, take the 9.1-year cycle that is clearly detected in the world mean temperatures. Wavelet analysis shows that this 9.1-year cyclical pattern is present in the temperature record from 1870 to 1915, disappears between 1915 and 1960, and then reappears after that date.
These observational facts allow us to speculate as to why this might be the case.
One hypothesis that has been put forward is that the effect of lunar tides upon the Earth’s climate system may be responsible for this cyclical signal. This is based on the simple mathematical fact that if you have two rates associated with the tidal forcings [in this case the 8.85-year lunar apsidal cycle (LAC) and the 9.3 (= 18.6/2)-year half lunar nodal cycle (LNC)], they will impact the climate system with a period that is equal to the harmonic mean, giving:
2* (8.85 x 9.3) / (8.85 + 9.3) = 9.069 years = 9.1 years.
This is just the old mathematical problem: If Bob takes 4 hours to dig a hole and Fred takes 2 hours
to dig a hole, how long does it take Bob and Fred working together to dig a hole?
Answer: It is the harmonic mean of their two rates for digging a hole i.e.
2 * (4 x 8) / (4 +8) = 5.33′ hours
Hence, it is not unreasonable to propose that the lunar tides may play a role in influencing the world’s mean temperature.
The question then becomes; “if this is the case, then how could the lunar tides accomplish this task?”
So here is a simple application of a curve-fitting technique that can validly be used to help a researcher further investigate the underlying physics.

Sorry, my specific example should have read:
This is just the old mathematical problem: If you travel at 10 mph from town A to town B and 20 mph on the return trip, what is the average speed?
Answer: It is the harmonic mean of the two speeds i.e.
2 / ( (1/ 10) + (1/20)) = 13.333′ mph
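[For readers who want to check the harmonic-mean arithmetic in the corrected example above, here is a minimal sketch in Python; the lunar periods are the ones quoted in the comment, and the function name is mine.]

```python
# Harmonic mean of two periods/rates: 2ab / (a + b),
# which is equivalent to 2 / (1/a + 1/b).

def harmonic_mean(a, b):
    return 2.0 * a * b / (a + b)

# Lunar apsidal cycle (8.85 yr) and half lunar nodal cycle (9.3 yr)
# combine, per the comment's hypothesis, at their harmonic mean:
lunar = harmonic_mean(8.85, 9.3)
print(round(lunar, 3))  # 9.069 years, i.e. the ~9.1-year cycle

# Average speed for a round trip at 10 mph out and 20 mph back:
speed = harmonic_mean(10, 20)
print(round(speed, 3))  # 13.333 mph
```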

Loren Wilson

I think there is a math error in your digging example. If Fred takes two hours alone, Bob helping will decrease the time by at least a little, not increase it to more than twice Fred’s unassisted time.

MarkW

As you point out, tree rings are affected by many things, not just temperature.
The list of other things is a lot longer than the list you give.
There are other problems.
Tree rings only form during the growing season, so you know nothing about the rest of the year.
Also trees have optimum temperatures. Because of this, both temperature increases as well as temperature decreases can cause decreases in ring growth.
Since it is impossible to filter out all of these other things, the only thing tree rings measure is the quality of the growing season.
It is not “unscientific” to proclaim that tree rings can NEVER be used as temperature proxies.

My comment seems to have gone to moderation for some GFR!

Paul Linsay

Your Freeman Dyson story reminds me of another story about Fermi told to me by a very senior member of our group when I was a young grad student. He’d made a very careful series of nuclear measurements and then fitted the latest theory to them. He took the data plot with error bars plus the fit plotted over the data and showed it to Fermi. Fermi laid the plot on his desk, pulled a ruler out of a drawer and drew a straight line through the data. “You will never convince me that the theory is any better than that.”

Nice post, Willis. The salient curve-fitting point applies to a LOT more than just Scafetta. Wadhams’ Arctic ice and Amstrup’s polar bears come readily to mind. And your point can be broadened to a lot more modeling and statistical practices in ‘climate science’: homogenization, sea level rise (Nerem), parameter tuning, …
As Mark Twain said, “There are lies, damned lies, and statistics.” Or, to quote physicist Ernest Rutherford, “If you need statistics to make sense of your experiment, you should have done a better experiment.” Or, to more optimistically quote statistician George Box, “All models are wrong, but some are useful.” The climate problem with Box’s observation is: which?

Wim Röst

ristvan March 25, 2018 at 8:20 am: “All models are wrong, but some are useful.” The climate problem with Box’s observation is: which?
Willis Eschenbach: “Here’s the bar that you need to clear: ‘One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating.’”
WR: We need a clear physical picture of the processes that create our future weather and future climate. That is one. We also need to know all the interrelations between the factors that play a role, and we need to quantify everything. And to quantify well, we need appropriate data over long periods.
We have none of these.
Most ‘models’ are part of the created ‘virtual world’ in which ‘the science is settled’. But we need real-world data and we need to understand real-world processes. Plus a theory that makes sense.
Great post Willis!

Willis,
“That’s one natural and one anthropogenic forcing, ”
I have to quibble with this one. How can snow-related albedo be anthropogenic? Orbital variability represents a natural change in forcing, while snow-related albedo is the natural response to a change in forcing. Considering snow-related albedo anthropogenic is tacitly acknowledging that CO2 is the primary forcing influence on the climate, which it absolutely is not; in fact, it’s not even properly called a forcing.
Only the Sun forces the climate system, while changing CO2 concentrations and changes to snow albedo are changes to the system. A change to the system can be said to be EQUIVALENT to a change in forcing keeping the system constant. This is what is meant when claims are made about CO2 ‘forcing’. Note as well that the pedantic model adds equivalent forcing from CO2 to a system already modified with increased CO2 concentrations, counting the effect twice.
This is just another of the many levels of misdirection, indirection and misrepresentation between the controversy (alarmism vs. sanity) and what’s actually controversial (the climate sensitivity factor).

Snow and ice albedo rises substantially during the ice ages as a result of orbital and tilt variations, eventually resulting in −5.0C temperature changes.
If that explains the ice ages, why would that not work for today’s climate, albeit with much smaller changes?

Willis,
Yes, ice albedo has an effect, but it’s not a feedback and not anthropogenic. It’s the system’s natural response to forcing, where forcing is exclusive to solar input. Changing CO2 concentrations also changes the system, but the size of the influence this has, which is at the root of the controversy, is far lower than claimed by the IPCC and has little effect on the average temperature or on where the average 0C isotherm lies per hemisphere.
Note that in the ice ages a far larger portion of the planet was covered in ice, and its melting had a proportionally far larger influence on the planet’s temperature. The magnitude of its influence decreases as the 0C isotherm moves towards the poles. If all of the ice on the planet were to disappear, the resulting increase in absorbed solar energy would be only about half of what would be required to increase the surface temperature by 3C. This is because 2/3 of it is moot, as clouds are already reflecting that energy. This effect is also evident in the seasonal response of the planet, where surface snow extends nearly as far as ice-age glaciers.
The relevant effect of ice and snow is to change the effects of clouds from only trapping heat at the surface when ice is present to both trapping heat at the surface and reflecting away additional energy when the surface is ice free.

Stan Robertson

Thank you, Willis. Very well done!

Henri Masson

Willis,
The problem to solve actually consists in, starting from different time series (whose links are to be discovered), finding (by natural or artificial intelligence) the structure of a conceptual model that is able to reproduce the time series as well as possible. These time series are not linked to any parameter; they are (imperfect) indicators of the behaviour of some elements of the system. Once such a conceptual (hypothetical) model is defined, a causality analysis (above and beyond the Granger causality approach; see Judea Pearl’s book “Causality”) can be undertaken, and the model can eventually be cleaned of some insignificant links. To the best of my understanding, the most comprehensive and likely HYPOTHETICAL model that could be built looks like this one: https://www.dropbox.com/preview/Climate/meta-model%20climate_20180115.pptx. Remember that such a model is highly non-linear, and a tiny fluctuation of one of the parameters could have a significant effect.
Also, from “common wisdom”, it is understood that a “primary cause” must send one or many “causal arrows” but not receive any. An “effect” must exhibit symmetrical characteristics. This is NOT the case for temperature AND CO2 (or other GHGs). They receive and emit many causal arrows in the model; they belong to the category of “relay variables”, embedded in several (in)direct feedback loops. In system analysis it is recommended NOT to try to modify such relay variables, as the effect either is damped out by a strong stabilizing feedback loop or, on the other hand, leads to an outcome that is highly unpredictable. Relay variables are part of several feedback loops, which can be stabilizing or not.
For the temperature, paleoclimatic evidence shows that the climate system is in a chaotic mode, spinning around two strange attractors in the phase plane, the “moderate” and the “glacial” state: https://www.dropbox.com/preview/Climate/Phase%20plan%20analysis%20of%20Vostok%20data.pptx. All other fluctuations observed are actually nothing else than orbital fluctuations around those attractors. It seems obvious to me that the climate system is remarkably stable (the temperature feedback loops must be very effective) and that it simply switches between these two modes.
Now, coming back to causality: if you take a look at the first figure linked, you will discover that in this (hypothetical) model the causes are at the top of the figure: cosmic rays, gravity and electromagnetic planetary fields, meteorites. And, I am afraid, these “causes” are not tunable by whatever carbon tax, energy transition or efficiency program. It is also possible that such a complex system generates endogenous fluctuations, resulting simply from its structure. In a nutshell, such a “meta-model” leads to the conclusion that climate fluctuations are natural and of a chaotic nature, and thus not predictable at a longer time horizon (certainly not at a century time scale, as the IPCC claims to do with its projections).

Kristi Silber

Henri,
Interesting post. Unfortunately, I don’t have a dropbox account and couldn’t see your figures, which sound interesting.
“In a nutshell, such a ‘meta-model’ leads to the conclusion that climate fluctuations are natural and of a chaotic nature, and thus not predictable at a longer time horizon (certainly not at a century time scale, as the IPCC claims to do with its projections).”
I don’t think this is quite true. There are constraints to the behavior of climate: patterns, interactions, feedbacks, lag times and buffers that tend to keep things from getting unstable. Not everything is unpredictable or chaotic; some solar effects on climate are predictable, it’s just that they are sometimes swamped by other events or interactions. You could have a series of volcanoes swamp a change in W/m2, for example.
Predicting averages and trends seems to me very different from predicting individual weather events.

K. Kilty

This tendency to see reality in statistics is a problem throughout our society. I have arguments with the PC folks on campus who insist that there being only about 15-20% women and minorities in mechanical engineering is “proof” of some sort of discrimination which they currently explain as “chilly climate”. But they can never tell me anything specific about this chilly climate. They can’t point to a mechanism, or any method by which it works, nor who is involved, or when it occurs, or anything tangible. As nearly as I can see we do somersaults trying to recruit more women and minorities right up to giving them unrealistic assessments of their capabilities and expectations for future success.
Curve fitting climate outcomes is a likelihood sort of analysis: statistical evidence perhaps, but without a solid physical model I don’t find it all that persuasive. Back before the Voyager fly-by missions to Jupiter and Saturn, I had a short-run correspondence with some astronomers at Cornell who had found radii of the moons of the giant planets through statistical measures of occultation light curves. While they used reasonable models of limb darkening, they had no way to handle background variations of the light curves (somewhat like a parameterization problem). Their estimates of radii could be greatly in error, as I tried to illustrate by way of examples; the estimates did indeed turn out to be quite wrong after the fly-bys. People just will not apply much skepticism to their favorite models.

Don K

Curve fitting can work if the wind is fair and the force is with you. Kepler figured out that planets moved in elliptical orbits with the Sun at one of the foci by curve fitting. It was Newton who later (sort of) figured out why. (We still don’t seem to really understand squat about gravity although we can characterize its effects very satisfactorily) But I think Kepler’s work was a rare exception where a single “easily” analyzed natural phenomenon almost completely controlled the situation.
I put “easily” in quotes because what Kepler did was anything but easy given the mathematical and theoretical tool kit he had to work with.
In general I think Willis is dead right. It’s reasonable to try curve fitting on the off chance that you might learn something. But you probably won’t. Then, if it fails to tell you anything useful, you should move on. Adding more variables to salvage your failed curve fit is likely to be a total waste of time.

Phoenix44

To play Devil’s Advocate slightly, though: the Fermi story is largely irrelevant. We are not trying to get to the sort of “truths” that Fermi and Dyson were, but to get to a point where we can say with some degree of reasonable certainty whether or not man-made CO2 is going to be a problem.
The problem is that climate science claims (i) a level of understanding of the climate and (ii) an ability to model it that are obviously far beyond its actual capabilities. I am not looking for Fermi’s level of proof, because we are dealing with potentially serious real-world problems.

Willis, I agree with you entirely on the uselessness of curve fitting in climate modelling.
see http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html
Here is a quote from the link Section 1
“Harrison and Stainforth 2009 say (7): “Reductionism argues that deterministic approaches to science and positivist views of causation are the appropriate methodologies for exploring complex, multivariate systems where the behavior of a complex system can be deduced from the fundamental reductionist understanding. Rather, large complex systems may be better understood, and perhaps only understood, in terms of observed, emergent behavior. The practical implication is that there exist system behaviors and structures that are not amenable to explanation or prediction by reductionist methodologies. The search for objective constraints with which to reduce the uncertainty in regional predictions has proven elusive. The problem of equifinality ……. that different model structures and different parameter sets of a model can produce similar observed behavior of the system under study – has rarely been addressed.” A new forecasting paradigm is required.
An exchange with Javier on a recent WUWT thread went :
“Javier
March 19, 2018 at 11:37 am
Norman, don’t you read my articles here at WUWT? I wrote an article last week about the millennial solar cycle and how it is identified both in solar activity proxies and climate proxies. You can look it up.
The problem is that the millennial cycle does not peak in 2004. It peaks ~ 2095, and definitely between 2050-2100. The article explains it.
Dr Norman Page
March 19, 2018 at 2:00 pm
Javier as you see I wrote -” Looks like we are on the same page” after seeing your 13th article Fig 5 and Fig 7 see also the spectral analysis in comment
https://wattsupwiththat.com/2018/03/13/do-it-yourself-the-solar-variability-effect-on-climate/#comment-2764127
Nowhere in the article do I see an explanation for ” It peaks ~ 2095, and definitely between 2050-2100. ”
Your 5:29 pm comment of the 13th shows a Figure with a peak late in the 21st century. But this looks like a curve derived from some mathematical formula. Nature doesn’t do math – it creates fuzzy cycles. I pick my peak from the extant empirical temperature and neutron data. The 990 – 2004 cycle is not symmetrical – more like a sawtooth shape with about a 650 year down leg and 350 year up leg. Projections which ignore the 2004 apex or turning point are unlikely to be successful in my opinion.”
Here is my forecast to 2100, based on the observed millennial and 60-year cycles picked from the data in Figs 3 and 4 in the link.
Fig. 12. Comparative Temperature Forecasts to 2100.
Fig. 12 compares the IPCC forecast with the Akasofu (31) forecast (red harmonic) and with the simple and most reasonable working hypothesis of this paper (green line): that the “Golden Spike” temperature peak at about 2003 is the most recent peak in the millennial cycle. Akasofu forecasts a further temperature increase to 2100 of 0.5°C ± 0.2°C, rather than the 4.0°C ± 2.0°C predicted by the IPCC, but this interpretation ignores the millennial inflexion point at 2004. Fig. 12 shows that the well-documented 60-year temperature cycle coincidentally also peaks at about 2003.
Looking at the shorter 60±-year wavelength modulation of the millennial trend, the most straightforward hypothesis is that the cooling trends from 2003 forward will simply be a mirror image of the recent rising trends. This is illustrated by the green curve in Fig. 12, which shows cooling until 2038, slight warming to 2073 and then cooling to the end of the century, by which time almost all of the 20th century warming will have been reversed.
Easterbrook 2015 (32) based his 2100 forecasts on the warming/cooling, mainly PDO, cycles of the last century. These are similar to Akasofu’s because Easterbrook’s Fig. 5 also fails to recognize the 2004 millennial peak and inversion. Scafetta’s 2000–2100 projected warming forecast (18) ranged between 0.3°C and 1.6°C, which is significantly lower than the IPCC GCM ensemble mean projected warming of 1.1°C to 4.1°C. The difference between Scafetta’s paper and the current paper is that his Fig. 30B also ignores the millennial temperature trend inversion, here picked at 2003, and he allows for the possibility of a more significant anthropogenic CO2 warming contribution.

“One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”
Hoping to predict the earth’s long-term temperature by having a clear physical picture of the solar system is a non-starter. Consider the recent issues raised by the CERN CLOUD experiments. No one has that clear picture. The other way is probably a non-starter as well, with or without “self-consistent mathematical formalism,” whatever that means. The data bases do not have sufficient coverage in time and space to yield useful results. A third requirement is that the results must be testable within a practical time frame. Now, this is not likely to be achievable within the lifetimes of most humans alive today.
Expectations of long-term climate studies should be defined before embarking on lifetime projects running down rabbit holes and producing nothing of value. A practical goal is to successfully predict global mean temperatures, or whatever, within a range of values narrow enough to realistically guide public policy decisions. Until then, “What if” studies can be deferred for a few decades until the boundary conditions are known, that is, probability weighted estimates, not hot button “high” estimates or “low” estimates that are, by themselves, meaningless.

Frank

Willis wrote: “After a bit of experimentation, I found that I could get a very good fit using only Snow Albedo and Orbital variations.”
When one performs a multiple linear regression, isn’t one first supposed to analyze the explanatory variables for covariance? When two variables are highly correlated, I believe one is supposed to eliminate one of such variables and admit that one can’t know which potential explanatory variable is responsible. However, you did explicitly say that you arrived at your equation by performing a multiple linear regression.
Many ENSO indices involve SST, which is a problem when one is trying to use them to explain warming. Christie et al used a cumulative MEI index, which would imply the 1982 El Niño still impacts today’s temperature. If one wants a temperature-independent ENSO index, an older version relied upon the difference in surface pressure between Tahiti and Darwin. Total atmospheric pressure is conserved.
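[Frank’s collinearity check can be sketched in a few lines of Python. Everything below is illustrative: the two “forcing” series are synthetic inventions, not the GISS inputs, and the names are mine. The point is only that a pairwise correlation, or its variance inflation factor, computed before the regression flags pairs of regressors that should not both be used.]

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient of two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two hypothetical "explanatory" series: the second is largely a
# rescaled copy of the first plus a small deterministic wiggle --
# exactly the situation the comment warns about.
years = range(50)
forcing_a = [0.02 * t for t in years]
forcing_b = [0.5 + 0.018 * t + 0.01 * math.sin(t) for t in years]

r = pearson_r(forcing_a, forcing_b)
vif = 1.0 / (1.0 - r * r)  # variance inflation factor for two regressors
print(round(r, 3), round(vif, 1))
# r is close to 1, so the VIF explodes: keep only one of the pair.
```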

Willis, you’ve got the trunk and the tail moving on the elephant with just two parameters.

Gary Pearse

In an article above, it is reported that climate scientists now have a global-warming explanation for their embarrassing adventures of getting stuck in the Arctic ice while surveying the inexorable meltdown. They studiously ignored what’s becoming a Fleet of Fools in Antarctica on a similar quest.
Now you are sending a lot of climate scientists away unhappy that a two parameter model betters their $300 million supercomputer products and it uses just two natural forcings (although I get it that the coefficients have been derived to navigate the temperature swings).
In both the ice and curve-fitting exercises, enormous hubris is on display. That they would then forecast worse-than-we-thought futures with these meaningless creations pretty well sums up the totality of their scientific research. Alas, where are today’s Enrico Fermis and Richard Feynmans to save researchers from their hubris?
Nicely done, Willis. The thought occurred to me, when you mentioned you can fashion a fit by tuning any inputs, that you could make the futility of the exercise even more obvious by using, say, the actual price of beef over the past century, or some other unrelated data set. It’s risen over time and with increased CO2. The USGS has annual mineral and metal prices since 1900 that could also be used.
Cheers, and enjoy the rain while it lasts.
Gary
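[Gary’s beef-price test is easy to mock up. The sketch below uses synthetic stand-ins invented for the demonstration, not actual temperature or USGS price data; it shows only that any two series that mostly trend upward will “fit” each other with a high R², regardless of any physical connection.]

```python
import math

# Synthetic stand-ins: a "temperature anomaly" and a "beef price",
# causally unrelated but both trending upward over a century.
years = range(1900, 2000)
temp = [0.008 * (y - 1900) + 0.05 * math.sin((y - 1900) / 8.0) for y in years]
beef = [1.0 + 0.04 * (y - 1900) + 0.3 * math.cos((y - 1900) / 11.0) for y in years]

# Ordinary least squares of temp on beef: temp ~ a + b * beef
n = len(temp)
mb, mt = sum(beef) / n, sum(temp) / n
b = sum((x - mb) * (y - mt) for x, y in zip(beef, temp)) / \
    sum((x - mb) ** 2 for x in beef)
a = mt - b * mb

# R^2 of the "beef price model" of temperature
ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(beef, temp))
ss_tot = sum((y - mt) ** 2 for y in temp)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))  # a high R^2 despite zero physical connection
```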

“Now you are sending a lot of climate scientists away unhappy that a two parameter model betters their $300 million supercomputer products and it uses just two natural forcings (although I get it that the coefficients have been derived to navigate the temperature swings).”
A common mistake: when you compare two models (a GCM and Willis’s) you can’t simply compare them on the lowest-dimension statistic. For example, if I build a model of an aircraft, say with 6 degrees of freedom and every last aero detail, it is designed to predict more than one thing. It can of course be used to predict the glide path on landing as well. The simple problem — predicting the glide path — can ALSO be done with a very simple model; in practice the simple model can even outperform the complex model. If you argue that the simple model is “better”, you really miss the point, because the simple model can only do one thing.
It can’t do takeoffs, or rolls, or turns, or a Herbst maneuver, whereas the 6-DOF model can. The simple model can’t do stalls, or spins, or any of the things that the real model can.
A GCM has to do more than surface temps. It has to do precipitation, winds, temperature at all altitudes, etc.
For SOME uses a simple model may be better than a complex model, like calculating a glide path, but no one who works in the modelling business would argue or fret over cases where simple models outperformed complex models.
WRT Willis’s model, there is a reason why snow albedo works so well. Any guess why? And it’s not a natural forcing. Do you have a wild guess why?
Too funny.

Gary Pearse

Thank you Steven for your thoughtful comments on the differences between models and the omnibus things they try to show. I think I was clear I didn’t mean that Willis had a useful climate model costing a few dollars by comparison to the science models. Two things:
a) For your aeronautical example, yes they can make a model and test it in a short period and the variables in the model come from physics, a century of successful flight and experiment (wind tunnel etc.). It wouldn’t hurt at the beginning of a radical new design idea to have one that gives some confidence that it would simply first fly – the number one question. They can, of course, make a small physical model, too, aware that, technically, you have to go with materials and air that aren’t right for the downsized physical model.
The latter small assurance is really where we are with climate science. Complex interactions, poor quality and distribution of data sites, limited experimentation, and incomplete knowledge of what the variables are make it a different animal than aeronautics. Number one in climate science is temperature; it’s called global warming, for goodness sakes, notwithstanding the name changes. Do their models fly (in a forecast)? Perhaps one day, but for the moment they have only crashes and burns, and this is because tuning models and parametric manipulations with this basket of variables and unknowns, in the way they do it, isn’t even at the nailing-two-sticks-together-and-throwing-it stage, and is little different from Willis’s model.
b) Aeronautical engineers don’t change the “data” (out of frustration?) to make the electronic model “fit”. I grant you TOBs and station moves, equipment changes…, but am perplexed that, as Mark Steyn noted at a Senate hearing, how can we be so confident of what the temperature will be in a hundred years when we still don’t know what it will be in 1950! Now take this jambalaya and tune parameters to hindcast a model!
Steven, I believe you are an extraordinarily smart guy, but with a blind spot you didn’t used to have. I needn’t explain what the poker term “tell” is. Like climate scientists do all the time to aggrandize their craft, they invoke “the physics” when it has been a curve-fitting exercise after the sobering attempt at application of physics. Sociology became social “science” after it was thoroughly corrupted by anti-capitalist ideology; and what about the Deutsche Demokratische Republik invocation?
You always invoke favorable comparisons between climate models that haven’t worked and sophisticated engineering models that work like a clock (I’m an engineer, and uncertainty is always our number one concern; it’s why most engineers are CAGW sceptics).
Another tell is that you now come in to do battle against sceptics on articles showing fairly poor science. I know many here are knee-jerk anti-global-warming types no different than mindless proponents of it. But you seem to show contempt for scepticism in general these days, when you know it should be the default position until bona fides are at least half established.
Re albedo, I initially didn’t realize Willis had soot in mind and thought he had erred in labelling albedo as anthropogenic instead of a natural forcing. I’m sure you want to tell me that the data comes from a model. The larger albedo effect is measured by satellite which I’m sure you would also point out is indirect and based on a model. Model is a word not a certificate of worthiness. Good forecasts are the certificate. Plunge a good thermometer into boiling distilled water at sea level and I can predict what it will read.

Peter Lewis Hannan

Thank you for the Fermi – Dyson conversation; I hadn’t seen that before.

RickWill

I have a simple thermal balance model for the 0–2000 m layer of the oceans, in which the emissivity changes as a log function of CO2 concentration. It has a single factor that is determined by minimising the squared error between the NOAA measured temperature data and the modelled temperature. The measured and modelled temperature anomalies are aligned at year 2017, the time of the more thorough ARGO data. This chart shows the comparison:
https://1drv.ms/b/s!Aq1iAj8Yo7jNgnXLo5LnjuHhohGM
Limiting CO2 to 570 ppm results in an equilibrium rise of 0.083 K from the 1850 level; about 0.64 K from the current level.
Same model, but using the measured sunspot number with a 22-year delay to modify the emissivity rather than any sensitivity to CO2:
https://1drv.ms/b/s!Aq1iAj8Yo7jNgniSxAGfk6xFTfkM
For this simple model the CO2-dependent emissivity gives a better fit than the sunspot-dependent emissivity. Of course, both the CO2 and the temperature could be driven in the same way by another variable.
The 0–2000 m thermal response is highly damped and is a better indication of thermal trends than any other temperature measurement, which all carry the noise created by the chaos of weather. The heat imbalance reaches 1.4 W/sq.m, or 504 TW globally, which is well within the estimated 1000 to 1500 TW transport capacity of the thermohaline deep ocean circulation.
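For readers who want to see the shape of such a fit, here is a minimal sketch of the single-factor procedure described above: one free parameter scales a log(CO2) term, chosen by least squares against a measured temperature series. The toy CO2 path, the stand-in anomaly series, and the linearised model form are all assumptions for illustration, not RickWill's actual implementation or the NOAA data.

```python
import numpy as np

years = np.arange(1950, 2021)
co2 = 310.0 * np.exp(0.0045 * (years - 1950))                  # assumed CO2 path, ppm
observed = 0.01 * (years - 1950) + 0.05 * np.sin(years / 4.0)  # stand-in anomaly, K

# Model: anomaly ≈ k * ln(CO2 / CO2_ref), anchored at the first year,
# i.e. a one-parameter linear least-squares problem in k.
x = np.log(co2 / co2[0])
k = np.dot(x, observed) / np.dot(x, x)          # closed-form least-squares solution
modelled = k * x
rmse = np.sqrt(np.mean((modelled - observed) ** 2))
print(f"fitted k = {k:.4f}, RMSE = {rmse:.4f} K")
```

With only one free parameter, a low RMSE here says the fitted curve tracks the target, but, as the comment notes, it cannot distinguish CO2 from any other variable that rises in the same way.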

Gary Pearse

Rick, I don’t get the connection between emissivity and either CO2 or sunspots. Emissivity is a function of the nature of the emitting surface alone. How does, say, a black ball at a temperature of 290 K know how much CO2 or how many sunspots are above it? Reread the Fermi quote.

RickWill

It is not a black ball. It is water with a thin yet complex surface coating or layer, which I have reduced to a single factor that I have termed emissivity, since it reduces the rate of heat loss from the surface. In one version of the model the emissivity changes by a small factor based on a log function of the CO2 content in the surface layer. In the other, I adjust the emissivity by a small factor based on a linear relationship with sunspots.
My emissivity term is more aptly described as an effective emissivity, as it is based on the measured average conditions at the earth’s surface needed to achieve the initial thermal balance. Using an effective emissivity of the surface, where conditions can be measured, provides a better representation of Earth’s thermal balance than treating some non-surface layer as a black body emitting at an implied temperature somewhere above the actual surface.

al in kansas

And the margin of error in the actual measurement is what? And the NIST-traceable calibration records are available for review where? Claiming better-than-1-sigma accuracy of +/-0.5°C for any temperature record is optimistic fantasy at best. This would fail an ISO 9000 audit immediately in industry. This is why the CO2 sensitivity is unlikely to be high. We are still bouncing around in the same natural variability range we always have, in spite of nearly doubling the CO2. There is no statistically valid evidence of any unusual temperature variation at all.

Mark Fife

What is of the most import to me is the methods used to create an Average Global Temperature going back in time. Focusing on the land-based data only for a moment, it is pretty clear there are a lot of gaps in the record. Take, for example, the GHCN daily maxima and minima data set. I pulled the data from this site.
http://berkeleyearth.org/source-files/
I have been concentrating on the data from 1900 forward. Like every other data set I have downloaded, only a small percentage of stations actually cover the entire date range – less than 2%. If I were to try to use all the data available here and impute the missing data, then 70% of the data would be imputed. When what you are infilling is more than twice the amount of hard data, you are just guessing.
But when I pointed this problem out to a climate scientist on Twitter, she was unconcerned. Her only concern was whether or not I was using an area-weighted average to define a global average. Which is insane. There isn’t enough data to compute a global average.
This is a climate scientist. Peer-reviewed and published. And she doesn’t understand why missing 70% of the data is a problem.

Mark Fife:
I use the phrase “Over 50%” to define the amount of wild-guess infilling for the grids. It’s over 40% for the US, which allegedly has the best weather station system in the world. You have discovered what “Over 50%” really means – a wild-guess percentage so high few people believe it; that’s why I say “Over 50%”. It’s even worse before 1900, with very few Southern Hemisphere measurements outside of Australia. And 1800s thermometers tend to read low, likely making the 1880 starting point too low and exaggerating global warming since 1880. In addition, “adjustments” to raw data may account for one third, or more, of the warming since 1880.
The surface average temperature compilations are data-free – they consist of wild-guess infilled data and “adjusted” raw data. Once raw data are “adjusted”, you no longer have real data; you have a wild guess of what the real data would have been if accurately measured in the first place!
The claimed margins of error for surface temperatures of +/- 0.1 degree C. are complete nonsense: they are not based on the errors of individual measurement instruments, and the infilled wild-guess numbers can never be verified or falsified. A conservative margin of error is +/- 1 degree C., meaning the temperature change since 1880 is most likely to have been in the range of no change up to +2 degrees C. Due to measurement error, we may have already had +2 degrees of warming since 1880 without knowing it – meaning we would be past the so-called +2 degree C. “tipping point” (another leftist fairy tale).
The lack of real science, and the extremely rough, haphazard temperature “measurements”, included in modern climate change “science” is almost unbelievable. The good news is you’ve figured it out!
My climate change blog for people with common sense, so leftists should stay away:
http://www.elOnionBloggle.Blogspot.com

Mark Fife

I completed a look at the temperature trends from the GHCN from 1900 to 2011 with 493 complete station records. No global warming to be seen. Of course, those records are mainly from the US, with some European and Australian records thrown in. There were 5 Australian stations. I was able to salvage 5 additional sets of station data from Australia and produce an Australian record from 1895 to 2011.
So I made up some data too, about 2% of the total. Most gaps were just missing years, ranging from one to about six. I imputed the missing data by averaging the 5 points before and the 5 points after each gap. I imputed the last 10 years of two series with an average of the preceding 10 years, basically freezing them in place. I was very careful with this. I tested imputing data at the edge of a 90% confidence interval for the average. It changed the individual station graphs by a fraction of a degree. It didn’t change the graph of the 10-station average at all. Meaning the effect of misestimating the average by just under two standard deviations was less than the effect of rounding off the numbers. That, to me, is an acceptable and very reasonable amount of potential bias.
One thing I noticed from the Australian data, as I have noted before, is that any station outside a large urban area cooled off over the last 100 years or so. All the rest cooled off until about the 1940s, then started getting warmer. On average, the warming was about 0.2° from 1895 to 2011, which, given the variability of the data, is essentially no change at all. They are on average back to late-1800s levels.
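The gap-filling rule described above is simple enough to show concretely. Here is a minimal sketch of it: a short run of missing years is filled with the mean of the 5 values before and the 5 values after the gap. The series and the gap position are made up for illustration; this is a reading of the rule as stated, not Mark Fife's actual code.

```python
import numpy as np

def impute_gap(series, start, stop):
    """Fill series[start:stop] with the mean of the 5 points on each
    side of the gap; returns a filled copy of the array."""
    out = series.copy()
    before = out[max(0, start - 5):start]
    after = out[stop:stop + 5]
    out[start:stop] = np.mean(np.concatenate([before, after]))
    return out

temps = np.array([10.1, 10.3, 9.9, 10.2, 10.0,
                  np.nan, np.nan,                 # two missing years
                  10.4, 10.1, 10.3, 10.2, 10.5])
filled = impute_gap(temps, 5, 7)
print(filled[5:7])   # both gap years get the 10-point mean, here 10.2
```

Note that this kind of local averaging is only defensible when the gaps are a small fraction of the record, which is exactly the point of the 2%-versus-70% contrast in the comment.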

1sky1

There’s an even more serious issue here than that of plain “curve-fitting” the presumed “forcings” to match the “observations”, namely: HADCRUT4 is not a reliable, unbiased estimate of global surface temperatures in the multi-decadal and longer range of spectral density components. The whole modeling enterprise lacks serious grounding in proven physics and in solid empirical data.

First Principles, also known as the Laws of Physics.
Do any of you know what they are, Bueller, anyone?
Calculate the effect on the so-called Average Temperature of the Earth’s Surface of one more, ten more, one hundred more, 280 more, 400 more ppm of CO2 from First Principles.
No one can, no one will, and no one knows why the So-Called Average Temperature of the Surface, or 2 meters above the Surface, of our Planet Earth is what it is.
Greens seek to destroy the industries known as Coal Mining, Oil Exploration, and Gas Mining, because they seek to have our planet return to the Garden of Eden.
Bring it, Mosher with your English degree, Stokes with your Computational Fluid Dynamics, or anyone else. No one will; I tried. The thing is, the CO2 ppm determines the altitude at which the Earth’s atmosphere freely radiates to Space. This is the only datum that determines the average amount of Energy contained in the Earth’s Atmosphere – how much comes in, how much goes out.
“Freely Radiates to Space.” The higher this altitude is, the lower the temperature at which this happens is, and the less energy leaves the Atmosphere.
Strangely enough, this is rarely if ever discussed here – endless debates about ECS, but they are all meaningless. How high is the altitude at which the Atmosphere freely radiates to Space?
That is the only actual question. Without this fact, the GCMs can calculate weather for the next week if they are lucky, but cannot even guess about the next month, much less the next 100 years.
Time to call a spade a spade.
The higher this Altitude goes, the less Energy leaves the Atmosphere, and the hotter the surface gets. Yes, it is true: Back Radiation has nothing to do with this; it is the Average Energy contained in the Atmosphere, and of course the Lapse Rate.
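The emission-altitude argument in the comment above can be put in rough numbers: if the atmosphere radiates to space from an altitude where the temperature follows the surface value minus the lapse rate, then raising that altitude lowers the emission temperature and, by Stefan–Boltzmann, reduces the outgoing flux. The figures below are textbook round numbers (288 K surface, 6.5 K/km lapse rate), assumed for illustration, not a calculation from the comment.

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
T_SURFACE = 288.0    # K, nominal mean surface temperature
LAPSE = 6.5e-3       # K/m, standard mean lapse rate

def outgoing_flux(emission_altitude_m):
    """Outgoing longwave flux if the atmosphere radiated as a black body
    from the given altitude, with temperature set by the lapse rate."""
    t_emit = T_SURFACE - LAPSE * emission_altitude_m
    return SIGMA * t_emit ** 4

low = outgoing_flux(5000.0)    # ~5 km emission altitude
high = outgoing_flux(5500.0)   # the same altitude raised by 500 m
print(f"{low:.1f} W/m^2 -> {high:.1f} W/m^2")  # higher altitude emits less
```

This is of course a one-line caricature of radiative transfer, but it shows the sign of the effect the comment is pointing at: a higher effective emission altitude means a colder emitting level and less energy leaving.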
Time to talk about what is actually going on…………………….
Goodness
Michael

Toto

Don K: “Kepler figured out that planets moved in elliptical orbits with the Sun at one of the foci by curve fitting. It was Newton who later (sort of) figured out why.”
And before that they fitted epicycles. “Everyone” knows that was wrong, but actually they worked well enough. Kepler’s math is better because it is simpler and has a direct explanation, thanks to Newton.
http://www.polaris.iastate.edu/EveningStar/Unit2/unit2_sub1.htm
The problem with curve fitting is that “everyone” assumes that if the shoe fits, it belongs to Cinderella. I don’t know what size feet Cinderella had, but I’m pretty sure that whatever it was, there were lots of girls who wore that shoe size. If you find a curve-fit that works, that does not mean it is the one and only correct one.
Willis: “After a bit of experimentation, I found that I could get a very good fit using only Snow Albedo and Orbital variations.”
I like it! (Which is not to say I believe it; I’m not daft.) But it’s as good as some others I’ve seen; it’s almost believable. One problem is that it is useless except in hindsight, unless someone knows how to predict Snow Albedo.
“I add a few more variables and parameters, I can get an even better fit”
Another good point. Sometimes if the answers are too good, it’s a sign that something is fishy.
And getting a better fit sometimes means you are fitting the noise, not the science.
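The fitting-the-noise point is easy to demonstrate. In the sketch below, the "data" is a straight line plus random jitter, and raising the polynomial degree always improves the in-sample fit, even though the extra terms are only chasing the noise. The series and degrees are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + rng.normal(scale=0.3, size=x.size)  # true signal is linear

def insample_rmse(degree):
    """Least-squares polynomial fit, evaluated on the same points it was fit to."""
    coeffs = np.polyfit(x, y, degree)
    residual = y - np.polyval(coeffs, x)
    return np.sqrt(np.mean(residual ** 2))

for d in (1, 3, 9):
    print(f"degree {d}: in-sample RMSE = {insample_rmse(d):.4f}")
# In-sample RMSE can only shrink as degree rises, because each lower-degree
# model is nested inside the higher-degree one - but beyond degree 1 the
# improvement is fitting noise, not signal.
```

This is the same trap as the temperature models discussed in the post: adding forcings and parameters always tightens the hindcast, and says nothing about whether the model has captured the physics.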