September 11th, 2019 by Roy W. Spencer, Ph. D.
I’ve been asked for my opinion by several people about this new published paper by Stanford researcher Dr. Patrick Frank.
I’ve spent a couple of days reading the paper and programming his Eq. 1 (a simple “emulation model” of climate model output), including his error propagation term (Eq. 6), to make sure I understand his calculations.
Frank has provided the numerous peer reviewers’ comments online, which I have purposely not read in order to provide an independent review. But I mostly agree with his criticism of the peer review process in his recent WUWT post where he describes the paper in simple terms. In my experience, “climate consensus” reviewers sometimes give the most inane and irrelevant objections to a paper if they see that the paper’s conclusion in any way might diminish the Climate Crisis™.
Some reviewers don’t even read the paper, they just look at the conclusions, see who the authors are, and make a decision based upon their preconceptions.
Readers here know I am critical of climate models in the sense they are being used to produce biased results for energy policy and financial reasons, and their fundamental uncertainties have been swept under the rug. What follows is not meant to defend current climate model projections of future global warming; it is meant to show that — as far as I can tell — Dr. Frank’s methodology cannot be used to demonstrate what he thinks he has demonstrated about the errors inherent in climate model projection of future global temperatures.
A Very Brief Summary of What Causes a Global-Average Temperature Change
Before we go any further, you must understand one of the most basic concepts underpinning temperature calculations: With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.
So, if energy loss is less than energy gain, warming will occur. In the case of the climate system, the warming in turn results in an increased loss of infrared radiation to outer space. The warming stops once the temperature has risen to the point that the increased loss of infrared (IR) radiation to outer space (quantified through the Stefan-Boltzmann [S-B] equation) once again achieves global energy balance with absorbed solar energy.
While the specific mechanisms might differ, these energy gain and loss concepts apply similarly to the temperature of a pot of water warming on a stove. Under a constant low flame, the water temperature stabilizes once the rate of energy loss from the water and pot equals the rate of energy gain from the stove.
The climate stabilizing effect from the S-B equation (the so-called “Planck effect”) applies to Earth’s climate system, Mars, Venus, and computerized climate models’ simulations. Just for reference, the average flows of energy into and out of the Earth’s climate system are estimated to be around 235-245 W/m2, but we don’t really know for sure.
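As a back-of-the-envelope illustration of that balance (my own sketch, using an assumed round number of 240 W/m2 of absorbed solar energy, within the range quoted above):

```python
# Back-of-the-envelope check of the Stefan-Boltzmann balance described above.
# The absorbed solar flux is an assumed round number within the 235-245 W/m2
# range quoted in the text; this is an illustration, not model output.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
absorbed_solar = 240.0   # W/m^2, assumed global-average absorbed solar energy

# Effective emitting temperature at which outgoing IR balances absorbed solar:
T_eff = (absorbed_solar / SIGMA) ** 0.25
print(f"Effective emitting temperature: {T_eff:.0f} K")   # roughly 255 K
```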
What Frank’s Paper Claims
Frank’s paper takes a known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) as an example, and assumes that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation, propagating the error forward in time during his emulation model’s integration. The result is a huge amount (as much as 20 deg. C or more) of spurious model warming (or cooling) in future global average surface air temperature (GASAT).
He claims (I am paraphrasing) that this is evidence that the models are essentially worthless for projecting future temperatures, as long as such large model errors exist. This sounds reasonable to many people. But, as I will explain below, the methodology of using known climate model errors in this fashion is not valid.
First, though, a few comments. On the positive side, the paper is well-written, with extensive examples, and is well-referenced. I wish all “skeptics” papers submitted for publication were as professionally prepared.
He has provided more than enough evidence that the output of the average climate model for GASAT at any given time can be approximated as just an empirical constant times a measure of the accumulated radiative forcing at that time (his Eq. 1). He calls this his “emulation model”, and his result is unsurprising, and even expected. Since global warming in response to increasing CO2 is the result of an imposed energy imbalance (radiative forcing), it makes sense you could approximate the amount of warming a climate model produces as just being proportional to the total radiative forcing over time.
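A minimal sketch of that kind of emulator (my paraphrase of the idea, not Dr. Frank’s exact Eq. 1; the sensitivity constant and forcing series are made-up numbers):

```python
# Toy emulator in the spirit described above: projected warming is taken to be
# proportional to the accumulated radiative forcing. The sensitivity constant and
# the forcing series are illustrative assumptions, not Dr. Frank's fitted values.
def emulate_warming(annual_forcing_increments_wm2, k_per_wm2=0.4):
    """Return the warming (deg. C) after each year, given yearly forcing increments."""
    warming = []
    accumulated_forcing = 0.0
    for df in annual_forcing_increments_wm2:
        accumulated_forcing += df
        warming.append(k_per_wm2 * accumulated_forcing)
    return warming

# Example: a steady 0.035 W/m^2 of additional forcing per year for 100 years.
print(round(emulate_warming([0.035] * 100)[-1], 2))   # ~1.4 deg. C with these assumed numbers
```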
Frank then goes through many published examples of the known bias errors climate models have, particularly for clouds, when compared to satellite measurements. The modelers are well aware of these biases, which can be positive or negative depending upon the model. The errors show that (for example) we do not understand clouds and all of the processes controlling their formation and dissipation from basic first physical principles, otherwise all models would get very nearly the same cloud amounts.
But there are two fundamental problems with Dr. Frank’s methodology.
Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux
If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.
Why?
Because each of these models is already energy-balanced before it is run with increasing greenhouse gases (GHGs), it has no inherent bias error to propagate.
For example, the following figure shows 100 year runs of 10 CMIP5 climate models in their pre-industrial control runs. These control runs are made by modelers to make sure that there are no long-term biases in the TOA energy balance that would cause spurious warming or cooling.
Figure 1. Output of Dr. Frank’s emulation model of global average surface air temperature change (his Eq. 1) with a +/- 2 W/m2 global radiative imbalance propagated forward in time (using his Eq. 6) (blue lines), versus the yearly temperature variations in the first 100 years of integration of the first 10 models archived at
https://climexp.knmi.nl/selectfield_cmip5.cgi?id=someone@somewhere .
If what Dr. Frank is claiming were true, the 10 climate model runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling. But they don’t. You can barely see the yearly temperature deviations, which average about +/-0.11 deg. C across the ten models.
Why don’t the climate models show such behavior?
The reason is that the +/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance. It doesn’t matter how correlated or uncorrelated those various errors are with each other: they still sum to nearly zero, which is why the climate model trends in Fig. 1 are only +/-0.10 deg. C/century… not +/-20 deg. C/century. That’s a factor of 200 difference.
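A trivial numeric illustration of that cancellation (the component values below are made up, chosen only so that a +4 W/m2 LWCF bias is offset by other biases):

```python
# Made-up flux-component biases (W/m^2) whose individual magnitudes are large,
# but whose sum, and hence the net top-of-atmosphere imbalance, is near zero
# because the model has been tuned to global energy balance before the run.
component_biases_wm2 = {
    "longwave cloud forcing": +4.0,   # the LWCF bias discussed in the paper
    "shortwave cloud forcing": -2.6,  # offsetting values, assumed for illustration
    "clear-sky fluxes": -1.3,
    "other components": -0.1,
}
print(round(sum(component_biases_wm2.values()), 6))   # ~0 W/m^2 net TOA bias
```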
This (first) problem with the paper’s methodology is, by itself, enough to conclude the paper’s methodology and resulting conclusions are not valid.
The Error Propagation Model is Not Appropriate for Climate Models
The new (and generally unfamiliar) part of his emulation model is the inclusion of an “error propagation” term (his Eq. 6). After introducing Eq. 6 he states,
“Equation 6 shows that projection uncertainty must increase in every simulation (time) step, as is expected from the impact of a systematic error in the deployed theory”.
While this error propagation model might apply to some issues, there is no way that it applies to a climate model integration over time. If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time. It doesn’t somehow accumulate (as the blue curves indicate in Fig. 1) as the square root of the summed squares of the error over time (his Eq. 6).
Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step. Dr. Frank has chosen 1 year as the time step (with a +/-4 W/m2 assumed energy flux error), which will cause a certain amount of error accumulation over 100 years. But if he had chosen a 1 month time step, there would be 12x as many error accumulations and a much larger deduced model error in projected temperature. This should not happen, as the final error should be largely independent of the model time step chosen. Furthermore, the assumed error with a 1 month time step would be even larger than +/-4 W/m2, which would have magnified the final error after a 100-year integration even more. This makes no physical sense.
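Here is the time-step dependence in toy form (my own sketch; the per-step temperature uncertainty is a stand-in number, not a value from the paper):

```python
import math

# Root-sum-square accumulation of identical, independent per-step uncertainties
# grows as sqrt(number of steps), so the result depends on the chosen time step.
def accumulated_uncertainty(u_per_step_c, n_steps):
    return math.sqrt(n_steps * u_per_step_c ** 2)   # equals u_per_step_c * sqrt(n_steps)

u_step = 0.2   # deg. C of assumed uncertainty added per step (illustrative only)
print(round(accumulated_uncertainty(u_step, 100), 1))    # 100 annual steps   -> ~2.0 deg. C
print(round(accumulated_uncertainty(u_step, 1200), 1))   # 1200 monthly steps -> ~6.9 deg. C, sqrt(12)x larger
```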
I’m sure Dr. Frank is much more expert in the error propagation model than I am. But I am quite sure that Eq. 6 does not represent how a specific bias in a climate model’s energy flux component would change over time. It is one thing to invoke an equation that might well be accurate and appropriate for certain purposes, but that equation is the result of a variety of assumptions, and I am quite sure one or more of those assumptions are not valid in the case of climate model integrations. I hope that a statistician such as Dr. Ross McKitrick will examine this paper, too.
Concluding Comments
There are other, minor, issues I have with the paper. Here I have outlined the two most glaring ones.
Again, I am not defending the current CMIP5 climate model projections of future global temperatures. I believe they produce about twice as much global warming of the atmosphere-ocean system as they should. Furthermore, I don’t believe that they can yet simulate known low-frequency oscillations in the climate system (natural climate change).
But in the context of global warming theory, I believe the largest model errors are the result of a lack of knowledge of the temperature-dependent changes in clouds and precipitation efficiency (thus free-tropospheric vapor, thus water vapor “feedback”) that actually occur in response to a long-term forcing of the system from increasing carbon dioxide. I do not believe it is because the fundamental climate modeling framework is not applicable to the climate change issue. Having multiple modeling centers around the world, each performing multiple experiments with its climate model under different assumptions, is still the best strategy to get a handle on how much future climate change there *could* be.
My main complaint is that modelers are either deceptive about, or unaware of, the uncertainties in the myriad assumptions — both explicit and implicit — that have gone into those models.
There are many ways that climate models can be faulted. I don’t believe that the current paper represents one of them.
I’d be glad to be proved wrong.
This begs the question.
If the climate is dependent on the previous state, then there is no reason to say the previous imbalance cannot affect the future imbalances. The rate of response to the previous state would then determine the propagation (as the article points out). But why say that year-on-year is inappropriate just because month-on-month is not appropriate?
“Dr. Frank has chosen 1 year as the time step (with a +/-4 W/m2 assumed energy flux error), which will cause a certain amount of error accumulation over 100 years. But if he had chosen a 1 month time step, there would be 12x as many error accumulations and a much larger deduced model error in projected temperature. This should not happen, as the final error should be largely independent of the model time step chosen. Furthermore, the assumed error with a 1 month time step would be even larger than +/-4 W/m2, which would have magnified the final error after a 100 year integrations even more. This makes no physical sense.”
–
It makes perfect mathematical sense.
I believe it is called compound interest.
People usually choose a year to make an annual change, it makes sense.
Unlike Nick Stokes and Mosher, to use the figure you do have to give it a time unit
and the +/-4 W/m2 assumed energy flux error is an annual rate.
You seem to apply it to each month as 1/12th of the +/-4 W/m2 assumed energy flux error for your assumption of a much larger deduced model error.
Of course it would increase the amount of error.
This should happen, as the final calculation is dependent on the model time step chosen, with an unchanging 1/12th of the annual rate applied in 12 steps.
It does reach a limit if done continuously.
Or you could do a Nick Stokes and apply the 4 W/m2 monthly to give a mammoth answer.
He does not believe it has a time component.
That is why, if you are doing a simple annual calculation you use a simple annual rate of error.
If anyone were to do monthly calculations properly, they would have to include monthly changes which by definition would be smaller than 1/12 of 4 W/m2, so as to add up in the year to the annual +/-4 W/m2 assumed energy flux error.
I think your statement should be rewritten to acknowledge that using different time steps if done properly should give a similar answer in all cases.
The comment on choosing a 1 month time step (“there would be 12x as many error accumulations and a much larger deduced model error in projected temperature”) is being spread by Nick Stokes using misdirection: if you change the time frame you have to change the input; per second is different to per month and different to 4 W/m2 annually, which they all have to add up to.
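For what it is worth, the quadrature bookkeeping behind this exchange can be written down directly (a sketch only, assuming twelve independent monthly errors; it says nothing about what the Lauer and Hamilton statistic actually represents):

```python
import math

# If twelve independent monthly flux errors were required to combine in quadrature
# to an annual +/-4 W/m^2, each monthly error would be 4/sqrt(12) W/m^2,
# not 4/12 W/m^2 and not 4 W/m^2. Independence is an assumption; illustration only.
annual_error = 4.0                                      # W/m^2
monthly_if_independent = annual_error / math.sqrt(12)   # ~1.15 W/m^2
print(round(monthly_if_independent, 2))
print(round(math.sqrt(12 * monthly_if_independent ** 2), 2))   # recovers 4.0 W/m^2 over the year
```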
“and the +/-4 W/m2 assumed energy flux error is an annual rate”
There is no evidence of that, and it isn’t true. Lauer and Hamilton said:
“These give the standard deviation and linear correlation with satellite observations of the total spatial variability calculated from 20-yr annual means.” and gave the result as 4 W/m2, not 4 W/m2/year.
In fact, they didn’t even do that at first. They said
“For CMIP5, the correlation of the multimodel mean LCF is 0.93 (rmse = 4 W m−2)”
The primary calculation is of the correlation (from which the 4 W/m2 follows). How are you going to turn that into a rate? What would it be for 2 years? 1.86?
Nick,
The 4 W/m2 value is “calculated from 20-yr *annual* means.” So it is annual.
Nick,
I believe that the +/-4 W/m2 energy flux error is a list of determined errors from observations (year by year) minus the model predictions. So some are on the high side, some on the low side.
So that is 20 single-value measurements of error. Is the standard deviation of these errors the value 4, or is 4 the average error amongst the set? Can you post the actual errors? Year to year, these errors have their own rates.
As the model is run at a specific time step, is the yearly (average?) error for that year used each time step?
What is the time step period when the model is run?
The time step is about 30 minutes. That is the only period that could possibly make sense for accumulation, and would soon give an uncertainty of many hundreds of degrees. There is no reason why a GCM should take into account how Lauer binned his data for averaging.
Nick, “There is no reason why a GCM…”
But there is every reason to take a yearly mean error into account when estimating a projection uncertainty.
You admitted some long time ago the poverty of your position, Nick, when, in a careless moment, you admitted that climate models are engineering models rather than physical models.
Engineering models have little to no predictive value outside their parameter calibration bounds. They certainly have zero predictive value over extended limits past their calibration bounds.
You know that. But you cover it up.
It’s OK, Thomas, et al. Nick plain does not know the meaning of “20-yr annual means.”
For Nick, a per-year average taken over 20 years is not an average per year.
Nick’s is the level of thinking one achieves, apparently, after a career in numerical modeling.
Even Ben Santer says that models suck
– examples of systematic errors include a dry Amazon bias, a warm bias in the eastern parts of tropical ocean basins, differences in the magnitude and frequency of El Nino and La Nina events, biases in sea surface temperatures (SSTs) in the Southern Ocean, a warm and dry bias of land surfaces during summer, and differences in the position of the Southern Hemisphere atmospheric jet –
https://www.nature.com/articles/s41558-018-0355-y
Nick Stokes
“and the a +/-4 W/m2 assumed energy flux error is an annual rate”
There is no evidence of that, and it isn’t true.
Wrong.
You know it.
OK, produce the evidence.
Nick
A 2019 CMIP5 paper shows that the fluxes under discussion are yearly calculations, also known as global annual mean sky budgets. Below is the shortwave component, but I fully expect the same is also used for longwave TOA and TCF, QED
Shortwave clear-sky fluxes
Global budgets
Figure 2 shows the global annual mean clear-sky budgets as simulated by 38 CMIP5 GCMs at the surface (bottom panel), within the atmosphere (middle panel) and at the TOA (upper panel). The budgets at the TOA that govern the total amount of clear-sky absorption in the climate system, are to some extent tuned to match the CERES reference value, given at 287 Wm−2 for the global mean TOA shortwave clear-sky absorption. Accordingly, the corresponding quantity in the CMIP5 multi-model mean, at 288.6 Wm−2, closely matches the CERES reference (Table 2). Between the individual models, this quantity varies in a range of 10 Wm−2, with a standard deviation of 2.1 Wm−2, and with a maximum deviation of 5 Wm−2 from the CERES reference value (Table 2; Fig. 2 upper panel).
How is that evidence? Where is there a statement about rate? Scientists are careful about units. Lauer and Hamilton gave their rmse as 4 W/m2. No /year rate. Same here.
But it is also very clear that it isn’t an annual rate, and the argument that annual mean implies rate is just nonsense. The base quantity they are talking about is clear sky absorption at 287 W/m2. That may be an annual average, but it doesn’t mean that if you averaged over 2 years the absorption would be 574 W/m2. It is still 287 W/m2. And the sd of 2.1 W/m2 will be the same whether averaged over 1 year or 2 or 5. It isn’t a rate.
Nick, “ the argument that annual mean implies rate is just nonsense.”
No one makes that argument except you, Nick. And you’re right, it’s nonsense.
So, do stop.
they have no inherent bias error to propogate –> they have no inherent bias error to propagate.
A practitioner of any discipline must accept *some* tenets of that discipline. A physicist who rejects all physical laws won’t be considered a physicist by other physicists, and won’t be finding work as a physicist. Similarly, Dr Spencer must accept certain practices of his climate science peers, if only to have a basis for peer discussion and to be considered qualified for his climate work. Dr Frank doesn’t have that limitation in his overview of climate science — he is able to reject the whole climate change portfolio in a way which Dr Spencer can’t. This is the elephant in the room.
NZ Willy,
I was thinking the same thing. Dr. Spencer has skin in this game. It’s a shame that he plays along.
Andrew
It seems that modellers spend effort neutralising, cancelling out or suppressing the range of factors that may cause energy imbalance resulting in warming or cooling. Such factors may not be well understood, difficult to model and detract from the main purpose which is to predict the effect of increasing atmospheric carbon dioxide over a long period of time.
As a consequence, the models do not show the range of outcomes that would otherwise be expected and mainly show a relatively narrow range of CO2 induced warming as intended. Such models clearly do not simulate reality but are a complex and expensive way of carrying out a simple calculation.
A model containing all the factors that may introduce imbalance together with estimates of the unknowns such as magnitudes and consequences would produce a much less predictable outcome together with a higher probability that the model would fail to simulate credible reality. The predictive ability would certainly decrease sharply with each iteration of the model run.
It can be claimed that models effectively bypass these problems by reducing such imbalances to net zero before introducing the CO2. It is easy to remove them from the model, but more difficult in the real world.
Dr. Spencer formulated the same thing by other terms. I formulated the same thing with very simple language and I think that you realized the same thing. Climate models do not contain any terms that modelers do not know.
Great. Thanks!
I have to admit that I’m confused. I’ve been frequenting this site lately looking for information that, I would like to use to debunk AGW theory. I thought Dr. Frank’s post would go a long way to finally driving a stake through the AGW monster that won’t seem to die.
Unlike one AGW supporter on this site, I remember very well the AGC scam that was going on back in the 70s until they said (in my best Emily Litella voice) “nevermind”. Frankly I don’t believe that the minuscule amount of CO2 in the atmosphere affects our temperature even slightly, let alone the even MORE minuscule amount of it that is put there by human activity. I have often questioned why the “science community” ignores that small nuclear fusion reactor a mere 93 million miles away as having any impact on our temperatures. I read a quote from one contributor on WUWT that Mars has a higher percentage of CO2 than we have and yet is much colder, which further supported my view that CO2 is, at best, a very weak GHG and overall insignificant.
I know that during the time when the AGW crowd was saying the temperatures were changing the most here on earth, the ice caps on Mars were shrinking. Further evidencing that the sun is more responsible for our climate and temperatures than man could ever be.
I don’t have enough of the background in the areas that Dr. Spencer or Dr. Frank or many of those on this site have to intelligently discuss the supporting statistics or math so I look to WUWT to provide cogent rational clear-enough-to-understand explanations to debunk AGW theory.
I met Dr. Spencer once long ago and respect him and I’ve been looking on this post for Dr. Frank’s rebuttal to see his explanation or comments to support his analysis that I was putting so much stock in. But haven’t seen it … yet.
Here is what I believe: CO2 has virtually no effect on our temperatures. Man’s contribution to the atmospheric concentration of CO2 in comparison to what nature puts there (3% I believe) is further minuscule and I propose irrelevant. If atmospheric CO2 does increase (likely it will) it will be beneficial for life. Atmospheric CO2 does not have an effect on severe weather events. Changes to CO2 are not responsible for temperature changes nor sea level rise – again, look to the sun.
I’d like to know if I’m wrong. (Yeah, I know that Loydo and Nick Stokes will tell me I’m wrong, but I’ve heard that before from warmunists. Those two present better supporting arguments, but being inundated by links using up the bandwidth of the internet is not something I find compelling; plus, on the backside, I just no longer believe it nor the hysterics that go with it.)
Sam,
Carbon dioxide is known to be a minor player. Most of Earth’s greenhouse effect is due to water vapor. Models depend upon CO2 changes being amplified through feedback of increased water vapor.
Clouds are formed through a process that combines evaporative effects, convection, the lapse rate, etc, etc, etc. Don’t we know that tropospheric heat transfer is mostly through convection? Isn’t the failure to model clouds accurately due to both a grid scale too large and the inability to model convection? The focus on radiative forcing to model tropospheric temperatures seems misplaced to me. As I said above, climate models have no ability to explain the temperature evolution that we have observed through the Holocene. This is the reason we observe such statements as “we have to get rid of the medieval warm period.” The most recent CMIP5 spaghetti graphs that show a monotonic rise through time driven by CO2 concentrations are the climate community placing a bet that ocean currents, solar effects, changes in the earth’s magnetic field, etc. are second or third order effects compared to CO2 concentration levels. In my view, the entire climate community’s intellectual reputation is on the line. The good news is that it won’t take more than 5-10 years to see who is right. To be frank, I don’t see how anyone that has a familiarity with Holocene temperatures can believe that the output from climate models actually captures reality.
All,
Here is my rational explanation regarding traditional temperature prediction programs “tracking high”.
I work for an RE organization and am in charge of the temperature prediction computer program that helps keep a steady RE funding flow going.
I could be working at Dartmouth or UVM or MIT, or any entity dependent on an RE funding flow.
The flows likely would be from entities MAKING BIG BUCKS BY PLAYING THE RE ANGLE, like Warren Buffett.
My job security bias would be to adjust early data to produce low temperatures and later data to produce high temperatures.
I would use clever dampers and other tricks to make the program “behave”, with suitable squiggles to account for el ninos, etc.
Also I do not want to stick out by being too low or too high.
Everyone in the organization would know me as one of them, a team player.
Hence, about 60 or so temperature prediction programs behave the same way, if plotted on the same graph.
Comes along the graph, based on 40-y satellite data, which requires no adjustments at the low end or the high end.
It has plenty of ACCURATE data; no need to fill in any blanks or make adjustments.
Its temperature prediction SLOPE is about 50% of ALL THE OTHERS.
If I were a scientist, that alone would give me a HUGE pause.
However, I am merely an employee, good with numbers and a family to support.
Make waves? Not me.
If the above is not rational enough, here is another.
At Dartmouth the higher ups have decided burning trees is good.
Well, better than burning fossil fuels any way, which is not saying much.
By now all Dartmouth employees mouth in unison “burning trees is good, burning fossil fuels is bad”.
Job security is guaranteed for all.
But what about those pesky ground source heat pumps OTHER universities are using for their ENTIRE campus.
Oh, we looked at that and THEY are MUCH too expensive.
For now, ONLY THIRTY-FIVE YEARS, Dartmouth will burn trees.
Dartmouth, with $BILLIONS in endowments, could not possibly afford those heat pumps.
And so it goes, said Kurt Vonnegut, RIP.
… “The errors show that (for example) we do not understand clouds” …
https://youtu.be/8L1UngfqojI?t=50
Too cheesy? Sorry, couldn’t help it…
In response to comments from Dr. Frank and many others, I have posted a more precise explanation of my main objection wherein I have quoted Pat’s main conclusion verbatim and explained why it is wrong. I would agree with him completely if climate models were periodically energy-balanced throughout their runs, but that’s not how they operate.
http://www.drroyspencer.com/2019/09/additional-comments-on-the-frank-2019-propagation-of-error-paper/
As for climate models, I am a lay person considering all the technicalities, formulas and esoteric reasoning about this uncertainty beast. I try not to get lost in the mist of the experts’ arguments about details. So I ask myself: “What is basic in this discussion?” I would say: the use of parameters in models as an argument to undo or side-step Dr. Frank’s reasoning.
Isn’t working with parameters like creating a magical black box? “Hey, turning these parameter knobs, it starts working! I don’t learn from it how the system it tries to emulate does work, but who cares! Magic!”
The way I see it, there is already this huge uncertainty problem, and now the parameter problem is added to it. Parameterization doesn’t enhance the models; it does exactly the opposite, it makes them even worse. Like turning a pretender into a conman.
Maybe my analysis is wrong, again, the details are beyond me. But I feel this is the essence of the discussion.
As I see it, Dr. Spencer and Dr. Frank are talking past each other. Heck, what Dr. Frank talked about was drilled into me in my analytical chemistry class; so I got his point immediately. Later, I did spectroscopy of various kinds. One of the main issues, to me, is the equivocation inherent in human language; so yes, semantics *do* matter, which got drilled into me from a debate class.
Rethinking all the arguments, I’m afraid Dr. Roy Spencer’s critique is right:
The reason is this: the climate models hardly propagate anything. They don’t take the last climate state and calculate from this the next climate state. There is only one climate state: the unperturbed state of the control run, as shown here in Figure 1. From this, there is just one influence that can really change the climate state: a change in the CO2 forcing.
The only thing that is propagated, if you will, is the amount of the CO2 forcing. All the other subsystems, ocean heat uptake, cloud fraction, water vapor, etc. are coupled via time lags to the development of the CO2 forcing.
In a way the climate models don’t run along the time axis as one might think, they run perpendicular to it. That’s exactly why it is so easy to emulate their behavior with Dr. Frank’s emulation equation 1. If the climate models truly propagated all the different states of their many variables, they would run out of control in a very short time and occupy the whole uncertainty range.
Maybe Dr. Spencer has misunderstood some of Dr. Frank’s arguments. Maybe he has sometimes used the wrong terminology to voice his critique, but his main point seems to be valid:
The propagation of errors is not a big issue with climate models. They are much more in error than that:
One big issue with climate models is the assumption that there is no internal variability, that the control run is valid when it looks like Figure 1.
The second big issue is the assumption that the CO2 forcing, minus aerosol cooling, minus changes in albedo, minus changes in forcings of all the subsystems of the climate system, must always be positive. (Kiehl, 2007)
In other words: the problem with the climate models is that Dr. Frank’s equation 1 (also shown here before by Willis Eschenbach) is such a good emulation of the climate models in the first place.
BP, “They don’t take the last climate state and calculate from this the next climate state.”
Yes, they do.
Working with Dr. Spencer’s climate model (.xls) from his website, you can’t make the temperature decrease year over year no matter what you set the parameters to.
The glaring elephant in the room is that climate models assume that there’s a “greenhouse roof” capping emissions to space… and NASA’s own SABER data shows this has never been the case. The atmosphere expands and “breathes” in response to solar input variation. This is old news, and unfortunately, completely ignored by “climate scientists”.
It was SABER which taught us that there was no “hidden heat” in the atmosphere, which set off the search for “hidden heat” in the oceans. Good luck with that.
https://spaceweatherarchive.com/2018/09/27/the-chill-of-solar-minimum/
There seems to be a misunderstanding afoot in the interpretation of the description of uncertainty in iterative climate models. I offer the following examples in the hopes that they clear up some of the mistaken notions apparently driving these erroneous interpretations.
Uncertainty: Describing uncertainty for human understanding is fraught with difficulties, evidence being the lavish casinos that persuade a significant fraction of the population that you can get something from nothing. There are many other examples, some clearer than others, but one successful description of uncertainty is that of the forecast of rain. We know that a 40% chance of rain does not mean it will rain everywhere 40% of the time, nor does it mean that it will rain all of the time in 40% of the places. We however intuitively understand the consequences of comparison of such a forecast with a 10% or a 90% chance of rain.
Iterative Models: Let’s assume we have a collection of historical daily high temperature data for a single location, and we wish to develop a model to predict the daily high temperature at that location on some date in the future. One of the simplest, yet effective, models that one can use to predict tomorrow’s high temperature is to use today’s high temperature. This is the simplest of models, but adequate for our discussion of model uncertainty. Note that at no time will we consider instrument issues such as accuracy, precision and resolution. For our purposes, those issues do not confound the discussion below.
We begin by predicting the high temperatures from the historical data from the day before. (The model is, after all, merely a single-day offset.) We then measure model uncertainty, beginning by calculating each deviation, or residual (observed minus predicted). From these residuals, we can calculate model adequacy statistics, and estimate the average historical uncertainty that exists in this model. Then, we can use that statistic to estimate the uncertainty in a single-day forward prediction.
Now, in order to predict tomorrow’s high temperature, we apply the model to today’s high temperature. From this, we have an “exact” predicted value (today’s high temperature). However, we know from applying our model to historical data that, while this prediction is numerically exact, the actual measured high temperature tomorrow will be a value that contains both deterministic and random components of climate. The above calculated model (in)adequacy statistic will be used to create an uncertainty range around this prediction of the future. So we have a range of ignorance around the prediction of tomorrow’s high temperature. At no time is this range an actual statement of the expected temperature. This range is similar to % chance of rain. It is a method to convey how well our model predicts based on historical data.
Now, in order to predict out two days, we use the “predicted” value for tomorrow (which we know is the same numerical value as today, but now containing uncertainty) and apply our model to the uncertain predicted value for tomorrow. The uncertainty in the input for the second iteration of the model cannot be ‘canceled out’ before the number is used as input to the second application of the model. We are, therefore, somewhat ignorant of what the actual input temperature will be for the second round. And that second application of the model adds its ignorance factor to the uncertainty of the predicted value for two days out, lessening the utility of the prediction as an estimate of day-after-tomorrow’s high temperature. This repeats so that for predictions for several days out, our model is useless in predicting what the high temperature actually will be.
This goes on for each step, ever increasing the ignorance and lessening the utility of each successive prediction as an estimate of that day’s high temperature, due to the growing uncertainty.
This is an unfortunate consequence of the iterative nature of such models. The uncertainties accumulate. They are not biases, which are signal offsets. We do not know what the random error will be until we collect the actual data for that step, so we are uncertain of the value to use in that step when predicting.
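A small numerical sketch of the persistence model described above (the synthetic “historical” record and its noise level are assumptions, purely for illustration):

```python
import math
import random

# Toy version of the persistence model described above: tomorrow's predicted high
# equals today's high. The synthetic history is a stand-in for real station data.
random.seed(1)
history = [20.0]
for _ in range(999):
    # assumed behaviour: mean-reverting toward 20 C with day-to-day noise
    history.append(0.8 * history[-1] + 0.2 * 20.0 + random.gauss(0.0, 2.0))

# Residuals of the persistence forecast on the historical record (observed - predicted)
residuals = [history[i] - history[i - 1] for i in range(1, len(history))]
sigma_1day = math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Propagate that one-step uncertainty k days ahead by root-sum-square:
for k in (1, 2, 5, 10):
    print(f"{k}-day-ahead uncertainty: +/-{sigma_1day * math.sqrt(k):.1f} deg. C")
```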
Maybe Models aren’t Models at all, and should be recognized as what they truly are: ‘Creations’, ‘Informed Imaginings’, Frankensteinian attempts at recreating Nature.
Bill Haag’s example is very clever, and rings true.
However, let’s think about the same model a little differently.
Let’s say our dataset of thousands of days shows the hottest ever day was 34 degrees C and the lowest 5 degrees C. The mean is 20 degrees C, with a standard deviation of +/- 6 degrees C. It’s a nice place to live.
Let’s say today is 20 degrees C. Tomorrow is very unlikely to be colder than 5 or hotter than 34 C; it’s likely closer to 20 than 34. The standard deviation tells us that 19 out of 20 times, tomorrow’s temperature will range between 14 and 26 degrees.
But is this the correct statistic to predict tomorrow’s temperature, given today’s?
Actually, that statistic is a little different. A better statistic would be the uncertainty of the change in temperature from one day to the next.
So let’s say we go back to the dataset and find that 19 out of 20 days are likely to be within +/- 5 degrees C of the day before.
Is this a more helpful statistic? When today’s temperature is in the middle of the range, +/- 5 degrees C sounds fair and reasonable. But what if today’s temperature was 33 degrees C, does +/- 5 degrees C still sound fair and reasonable – given that it’s never exceeded 34 degrees C ever? Is there really a 1 in 20 chance of reaching 38 degrees C? No, that’s not a good estimate of that chance.
It’s clear that the true uncertainty distribution for hot days is that the next day is more likely to be cooler than warmer. The uncertainty distribution of future temperatures after a very hot day is not symmetrical.
Let’s now try compounding uncertainties in the light of this dataset. Let’s say that we know our uncertainty is +/- 5 degrees, on average, starting at 20 degrees C. We want to go out two days. Is the uncertainty range now +/- 10 degrees? If we went out 10 days, could the uncertainties add up to +/- 50 degrees Centigrade? Plainly not. We can’t just keep adding uncertainties like that, because should a day actually get hot two days in a row, it has great difficulty getting a lot hotter, and becomes more likely to get cooler.
Statistically, random unconstrained uncertainties add in quadrature, so the total grows as the square root of the number of steps. Let’s see how that might work out in practice. After four days, our uncertainty would double to 10 degrees C, and after 16 days, double again to 20 degrees C. This would proceed ad infinitum. After 64 days, the extrapolated uncertainty range becomes an impossible +/- 40 degrees C. Since such a range is truly impossible, there must be something wrong with our uncertainty calculation… and there is.
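A quick simulation makes the contrast in this comment concrete (a sketch with made-up numbers: a bounded, mean-reverting toy process versus the naive root-sum-square extrapolation):

```python
import math
import random

# The actual spread of a bounded (mean-reverting) toy process stops growing,
# while a naive root-sum-square of the one-step uncertainty keeps growing as
# sqrt(N). All parameters here are illustrative assumptions.
random.seed(0)

def spread_after(n_days, n_trials=2000):
    finals = []
    for _ in range(n_trials):
        t = 20.0
        for _ in range(n_days):
            t = 0.8 * t + 0.2 * 20.0 + random.gauss(0.0, 5.0)   # assumed +/-5 C daily noise
        finals.append(t)
    mean = sum(finals) / n_trials
    return math.sqrt(sum((x - mean) ** 2 for x in finals) / n_trials)

for n in (1, 4, 16, 64):
    print(n, round(spread_after(n), 1), round(5.0 * math.sqrt(n), 1))   # actual spread vs naive RSS
```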
We have to recognise that the uncertainty range for a successive day depends greatly on the temperature of the present day, and that the absolute uncertainty range for any given starting temperature cannot exceed the uncertainty of all possible actual temperatures. In other words, the uncertainty range for any given day cannot exceed +/- 6 degrees C, the standard deviation of the dataset, no matter how far out we push our projection.
An analysis of this kind shows us that measures of uncertainty cannot be compounded infinitely – at least, in systems of limited absolute uncertainty, like the example given by Bill Haag.
The same is true for the application of uncertainty extrapolations as performed by Dr. Frank. Their uncertainty bounds do not increase as he predicts. They may well have wide uncertainty bounds, but not for the reasons Dr. Frank proposes. His methodology is fundamentally flawed in that regard. A careful consideration of Bill’s model shows us why.
Your discussion is wrong Chris Thompson, because you’re assigning physical meaning to an uncertainty.
Your mistake becomes very clear when you write that, “could the uncertainties add up to +/- 50 degrees Centigrade? Plainly not. We can’t just keep adding uncertainties like that, because should a day actually get hot two days in a row, it has great difficulty getting a lot hotter, and becomes more likely to get cooler.”
That (+/-)50 C says nothing about what the temperature could actually be. It’s an estimate of what you actually know about the temperature 10 days hence. Namely, nothing.
That’s all it means. It’s a statement of your ignorance, not of temperature likelihood.
Thanks, Pat
I posted this in another thread as a reply to this comment from Chris,
———
Chris,
Thank you for the kind words in the first sentence.
However you are not “thinking about the same model a little differently”, you are changing the model. So everything after is not relevant to my points. Perhaps to other points, but not to my example of the projection of uncertainty, which was my point.
Once again, the model was to use the prior day’s high temperature to predict each day’s high temperature. The total range of the data over however many days of data you have is irrelevant for this model. From the historical data, a set of residuals is calculated for each observed-minus-predicted pair. These residuals are the ‘error’ in each historical prediction. The residuals are then used to calculate a historical model-goodness statistic (unspecified here to avoid other disagreements posted on the specifics of such calculations).
This model is then used going forward. See the earlier post for details, but it is the uncertainty not the error that is propagated. The model estimate for the second day out from today is forced to use the uncertain estimated value from the model of the first day out, while contributing its own uncertainty to its prediction. And so it goes forward.
You also are confusing uncertainty with error. The uncertainty is a quantity that describes the ignorance of a predicted value. Like the 40% chance of rain, it is not a description of physical reality, or physical future. It doesn’t rain 40% of the time everywhere, nor does it rain all the time in 40% of the places. But the uncertainty of rainfall is communicated without our believing that one of the two physical realities is being predicted.
Bill
For the benefit of all, I’ve put together an extensive post that provides quotes, citations, and URLs for a variety of papers — mostly from engineering journals, but I do encourage everyone to closely examine Vasquez and Whiting — that discuss error analysis, the meaning of uncertainty, uncertainty analysis, and the mathematics of uncertainty propagation.
These papers utterly support the error analysis in “Propagation of Error and the Reliability of Global Air Temperature Projections.”
Summarizing: Uncertainty is a measure of ignorance. It is derived from calibration experiments.
Multiple uncertainties propagate as root sum square. Root-sum-square has positive and negative roots (+/-). Never anything else, unless one wants to consider the uncertainty absolute value.
Uncertainty is an ignorance width. It is not an energy. It does not affect energy balance. It has no influence on TOA energy or any other magnitude in a simulation, or any part of a simulation, period.
Uncertainty does not imply that models should vary from run to run, Nor does it imply inter-model variation. Nor does it necessitate lack of TOA balance in a climate model.
For those who are scientists and who insist that uncertainty is an energy and influences model behavior (none of you will be engineers), or that a (+/-)uncertainty is a constant offset, I wish you a lot of good luck because you’ll not get anywhere.
For the deep-thinking numerical modelers who think rmse = constant offset or is a correlation: you’re wrong.
The literature follows:
Moffat RJ. Contributions to the Theory of Single-Sample Uncertainty Analysis. Journal of Fluids Engineering. 1982;104(2):250-8.
“Uncertainty Analysis is the prediction of the uncertainty interval which should be associated with an experimental result, based on observations of the scatter in the raw data used in calculating the result.
Real processes are affected by more variables than the experimenters wish to acknowledge. A general representation is given in equation (1), which shows a result, R, as a function of a long list of real variables. Some of these are under the direct control of the experimenter, some are under indirect control, some are observed but not controlled, and some are not even observed.
R=R(x_1,x_2,x_3,x_4,x_5,x_6, . . . ,x_N)
It should be apparent by now that the uncertainty in a measurement has no single value which is appropriate for all uses. The uncertainty in a measured result can take on many different values, depending on what terms are included. Each different value corresponds to a different replication level, and each would be appropriate for describing the uncertainty associated with some particular measurement sequence.
The Basic Mathematical Forms
The uncertainty estimates, dx_i or dx_i/x_i in this presentation, are based, not upon the present single-sample data set, but upon a previous series of observations (perhaps as many as 30 independent readings) … In a wide-ranging experiment, these uncertainties must be examined over the whole range, to guard against singular behavior at some points.
Absolute Uncertainty
x_i = (x_i)_avg (+/-)dx_i
Relative Uncertainty
x_i = (x_i)_avg (+/-)dx_i/x_i
Uncertainty intervals throughout are calculated as (+/-)sqrt[sum over (error)^2].
The uncertainty analysis allows the researcher to anticipate the scatter in the experiment, at different replication levels, based on present understanding of the system.
The calculated value dR_0 represents the minimum uncertainty in R which could be obtained. If the process were entirely steady, the results of repeated trials would lie within (+/-)dR_0 of their mean …”
Nth Order Uncertainty
The calculated value of dR_N, the Nth order uncertainty, estimates the scatter in R which could be expected with the apparatus at hand if, for each observation, every instrument were exchanged for another unit of the same type. This estimates the effect upon R of the (unknown) calibration of each instrument, in addition to the first-order component. The Nth order calculations allow studies from one experiment to be compared with those from another ostensibly similar one, or with “true” values.”
Here replace, “instrument” with ‘climate model.’ The relevance is immediately obvious. An Nth order GCM calibration experiment averages the expected uncertainty from N models and allows comparison of the results of one model run with another in the sense that the reliability of their predictions can be evaluated against the general dR_N.
Continuing: “The Nth order uncertainty calculation must be used wherever the absolute accuracy of the experiment is to be discussed. First order will suffice to describe scatter on repeated trials, and will help in developing an experiment, but Nth order must be invoked whenever one experiment is to be compared with another, with computation, analysis, or with the “truth.”
Nth order uncertainty, “
*Includes instrument calibration uncertainty, as well as unsteadiness and interpolation.
*Useful for reporting results and assessing the significance of differences between results from different experiment and between computation and experiment.
The basic combinatorial equation is the Root-Sum-Square:
dR = sqrt[sum over((dR/dx_i)*dx_i)^2]”
https://doi.org/10.1115/1.3241818
Moffat RJ. Describing the uncertainties in experimental results. Experimental Thermal and Fluid Science. 1988;1(1):3-17.
“The error in a measurement is usually defined as the difference between its true value and the measured value. … The term “uncertainty” is used to refer to “a possible value that an error may have.” … The term “uncertainty analysis” refers to the process of estimating how great an effect the uncertainties in the individual measurements have on the calculated result.
THE BASIC MATHEMATICS
This section introduces the root-sum-square (RSS) combination (my bold), the basic form used for combining uncertainty contributions in both single-sample and multiple-sample analyses. In this section, the term dX_i refers to the uncertainty in X_i in a general and nonspecific way: whatever is being dealt with at the moment (for example, fixed errors, random errors, or uncertainties).
Describing One Variable
Consider a variable X_i, which has a known uncertainty dX_i. The form for representing this variable and its uncertainty is
X=X_i(measured) (+/-)dX_i (20:1)
This statement should be interpreted to mean the following:
* The best estimate of X, is X_i (measured)
* There is an uncertainty in X_i that may be as large as (+/-)dX_i
* The odds are 20 to 1 against the uncertainty of X_i being larger than (+/-)dX_i.
The value of dX_i represents 2-sigma for a single-sample analysis, where sigma is the standard deviation of the population of possible measurements from which the single sample X_i was taken.
The uncertainty (+/-)dX_i Moffat described, exactly represents the (+/-)4W/m^2 LWCF calibration error statistic derived from the combined individual model errors in the test simulations of 27 CMIP5 climate models.
For multiple-sample experiments, dX_i can have three meanings. It may represent tS_(N)/(sqrtN) for random error components, where S_(N) is the standard deviation of the set of N observations used to calculate the mean value (X_i)_bar and t is the Student’s t-statistic appropriate for the number of samples N and the confidence level desired. It may represent the bias limit for fixed errors (this interpretation implicitly requires that the bias limit be estimated at 20:1 odds). Finally, dX_i may represent U_95, the overall uncertainty in X_i.
From the “basic mathematics” section above, the over-all uncertainty U = root-sum-square = sqrt[sum over((+/-)dX_i)^2] = the root-sum-square of errors (rmse). That is, U = sqrt[sum over((+/-)dX_i)^2] = (+/-)rmse.
The result R of the experiment is assumed to be calculated from a set of measurements using a data interpretation program (by hand or by computer) represented by
R = R(X_1,X_2,X_3,…, X_N)
The objective is to express the uncertainty in the calculated result at the same odds as were used in estimating the uncertainties in the measurements.
The effect of the uncertainty in a single measurement on the calculated result, if only that one measurement were in error would be
dR_x_i = (dR/dX_i)*dX_i
When several independent variables are used in the function R, the individual terms are combined by a root-sum-square method.
dR = sqrt[sum over((dR/dX_i)*dX_i)^2]
This is the basic equation of uncertainty analysis. Each term represents the contribution made by the uncertainty in one variable, dX_i, to the overall uncertainty in the result, dR.
http://www.sciencedirect.com/science/article/pii/089417778890043X
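For readers who want Moffat’s combination rule in executable form, here is a minimal sketch (the example sensitivities and uncertainties are made-up numbers):

```python
import math

# Root-sum-square propagation rule quoted above:
#   dR = sqrt( sum_i ( (dR/dX_i) * dX_i )^2 )
def propagate_uncertainty(sensitivities, uncertainties):
    """sensitivities: the partial derivatives dR/dX_i; uncertainties: the +/- dX_i."""
    return math.sqrt(sum((s * u) ** 2 for s, u in zip(sensitivities, uncertainties)))

# Illustrative example with made-up numbers: R depends on three measurands.
print(round(propagate_uncertainty([2.0, -0.5, 1.0], [0.1, 0.4, 0.05]), 3))   # ~0.287
```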
Vasquez VR, Whiting WB. Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods. Risk Analysis. 2006;25(6):1669-81.
[S]ystematic errors are associated with calibration bias in the methods and equipment used to obtain the properties. Experimentalists have paid significant attention to the effect of random errors on uncertainty propagation in chemical and physical property estimation. However, even though the concept of systematic error is clear, there is a surprising paucity of methodologies to deal with the propagation analysis of systematic errors. The effect of the latter can be more significant than usually expected.
Usually, it is assumed that the scientist has reduced the systematic error to a minimum, but there are always irreducible residual systematic errors. On the other hand, there is a psychological perception that reporting estimates of systematic errors decreases the quality and credibility of the experimental measurements, which explains why bias error estimates are hardly ever found in literature data sources.
Of particular interest are the effects of possible calibration errors in experimental measurements. The results are analyzed through the use of cumulative probability distributions (cdf) for the output variables of the model.”
A good general definition of systematic uncertainty is the difference between the observed mean and the true value.”
Also, when dealing with systematic errors we found from experimental evidence that in most of the cases it is not practical to define constant bias backgrounds. As noted by Vasquez and Whiting (1998) in the analysis of thermodynamic data, the systematic errors detected are not constant and tend to be a function of the magnitude of the variables measured.”
Additionally, random errors can cause other types of bias effects on output variables of computer models. For example, Faber et al. (1995a, 1995b) pointed out that random errors produce skewed distributions of estimated quantities in nonlinear models. Only for linear transformation of the data will the random errors cancel out.”
Although the mean of the cdf for the random errors is a good estimate for the unknown true value of the output variable from the probabilistic standpoint, this is not the case for the cdf obtained for the systematic effects, where any value on that distribution can be the unknown true. The knowledge of the cdf width in the case of systematic errors becomes very important for decision making (even more so than for the case of random error effects) because of the difficulty in estimating which is the unknown true output value. (emphasis in original)”
It is important to note that when dealing with nonlinear models, equations such as Equation (2) will not estimate appropriately the effect of combined errors because of the nonlinear transformations performed by the model.
Equation (2) is the standard uncertainty propagation sqrt[sum over(±sys error statistic)^2].
In principle, under well-designed experiments, with appropriate measurement techniques, one can expect that the mean reported for a given experimental condition corresponds truly to the physical mean of such condition, but unfortunately this is not the case under the presence of unaccounted systematic errors.
When several sources of systematic errors are identified, beta is suggested to be calculated as a mean of bias limits or additive correction factors as follows:
beta ~ sqrt[sum over(theta_S_i)^2], where i defines the sources of bias errors and theta_S is the bias range within the error source i. Similarly, the same approach is used to define a total random error based on individual standard deviation estimates,
e_k = sqrt[sum over(sigma_R_i)^2]
A similar approach for including both random and bias errors in one term is presented by Deitrich (1991), with minor variations from a conceptual standpoint, from the one presented by ANSI/ASME (1998).
http://dx.doi.org/10.1111/j.1539-6924.2005.00704.x
Kline SJ. The Purposes of Uncertainty Analysis. Journal of Fluids Engineering. 1985;107(2):153-60.
The Concept of Uncertainty
Since no measurement is perfectly accurate, means for describing inaccuracies are needed. It is now generally agreed that the appropriate concept for expressing inaccuracies is an “uncertainty” and that the value should be provided by an “uncertainty analysis.”
An uncertainty is not the same as an error. An error in measurement is the difference between the true value and the recorded value; an error is a fixed number and cannot be a statistical variable. An uncertainty is a possible value that the error might take on in a given measurement. Since the uncertainty can take on various values over a range, it is inherently a statistical variable.
The term “calibration experiment” is used in this paper to denote an experiment which: (i) calibrates an instrument or a thermophysical property against established standards; (ii) measures the desired output directly as a measurand so that propagation of uncertainty is unnecessary.
The information transmitted from calibration experiments into a complete engineering experiment on engineering systems or a record experiment on engineering research needs to be in a form that can be used in appropriate propagation processes (my bold). … Uncertainty analysis is the sine qua non for record experiments and for systematic reduction of errors in experimental work.
Uncertainty analysis is … an additional powerful cross-check and procedure for ensuring that requisite accuracy is actually obtained with minimum cost and time.
Propagation of Uncertainties Into Results
In calibration experiments, one measures the desired result directly. No problem of propagation of uncertainty then arises; we have the desired results in hand once we complete measurements. In nearly all other experiments, it is necessary to compute the uncertainty in the results from the estimates of uncertainty in the measurands. This computation process is called “propagation of uncertainty.”
Let R be a result computed from n measurands x_1, … x_n„ and W denotes an uncertainty with the subscript indicating the variable. Then, in dimensional form, we obtain: (W_R = sqrt[sum over(error_i)^2]).”
https://doi.org/10.1115/1.3242449
Henrion M, Fischhoff B. Assessing uncertainty in physical constants. American Journal of Physics. 1986;54(9):791-8.
“Error” is the actual difference between a measurement and the value of the quantity it is intended to measure, and is generally unknown at the time of measurement. “Uncertainty” is a scientist’s assessment of the probable magnitude of that error.
https://aapt.scitation.org/doi/abs/10.1119/1.14447
This illustration might clarify the meaning of (+/-)4 W/m^2 of uncertainty in annual average LWCF.
The question to be addressed is what accuracy is necessary in simulated cloud fraction to resolve the annual impact of CO2 forcing?
We know from Lauer and Hamilton that the average CMIP5 (+/-)12.1% annual cloud fraction (CF) error produces an annual average (+/-)4 W/m^2 error in long wave cloud forcing (LWCF).
We also know that the annual average increase in CO2 forcing is about 0.035 W/m^2.
Assuming a linear relationship between cloud fraction error and LWCF error, the (+/-)12.1% CF error is proportionately responsible for (+/-)4 W/m^2 annual average LWCF error.
Then one can estimate the level of resolution necessary to reveal the annual average cloud fraction response to CO2 forcing as, (0.035 W/m^2/(+/-)4 W/m^2)*(+/-)12.1% cloud fraction = 0.11% change in cloud fraction.
This indicates that a climate model needs to be able to accurately simulate a 0.11% feedback response in cloud fraction to resolve the annual impact of CO2 emissions on the climate.
That is, the cloud feedback to a 0.035 W/m^2 annual CO2 forcing needs to be known, and able to be simulated, to a resolution of 0.11% in CF in order to know how clouds respond to annual CO2 forcing.
Alternatively, we know the total tropospheric cloud feedback effect is about -25 W/m^2. This is the cumulative influence of 67% global cloud fraction.
The annual tropospheric CO2 forcing is, again, about 0.035 W/m^2. The CF equivalent that produces this feedback energy flux is again linearly estimated as (0.035 W/m^2/25 W/m^2)*67% = 0.094%.
Assuming the linear relations are reasonable, both methods indicate that the model resolution needed to accurately simulate the annual cloud feedback response of the climate, to an annual 0.035 W/m^2 of CO2 forcing, is about 0.1% CF.
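The arithmetic in the two estimates above is easy to check (numbers taken directly from this comment):

```python
# Checking the linear estimates given above.
annual_co2_forcing = 0.035   # W/m^2 per year
lwcf_error = 4.0             # W/m^2, annual average LWCF calibration error
cf_error = 12.1              # %, annual cloud fraction error
cloud_feedback = 25.0        # W/m^2, magnitude of total tropospheric cloud feedback
global_cf = 67.0             # %, global cloud fraction

print(round((annual_co2_forcing / lwcf_error) * cf_error, 2))       # ~0.11 % cloud fraction
print(round((annual_co2_forcing / cloud_feedback) * global_cf, 3))  # ~0.094 % cloud fraction
print(round(lwcf_error / annual_co2_forcing))                       # ~114x: LWCF uncertainty vs annual CO2 forcing
```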
To achieve that level of resolution, the model must accurately simulate cloud type, cloud distribution and cloud height, as well as precipitation and tropical thunderstorms.
This analysis illustrates the meaning of the (+/-)4 W/m^2 LWCF error. That error indicates the overall level of ignorance concerning cloud response and feedback.
The CF ignorance is such that tropospheric thermal energy flux is never known to better than (+/-)4 W/m^2. This is true whether forcing from CO2 emissions is present or not.
GCMs cannot simulate cloud response to 0.1% accuracy. It is not possible to simulate how clouds will respond to CO2 forcing.
It is therefore not possible to simulate the effect of CO2 emissions, if any, on air temperature.
As the model steps through the projection, our knowledge of the consequent global CF steadily diminishes because a GCM cannot simulate the global cloud response to CO2 forcing, and thus cloud feedback, at all for any step.
It is true in every step of a simulation. And it means that projection uncertainty compounds because every erroneous intermediate climate state is subjected to further simulation error.
This is why the uncertainty in projected air temperature increases so dramatically. The model is step-by-step walking away from initial value knowledge further and further into ignorance.
On an annual average basis, the uncertainty in CF feedback is (+/-)114 times larger than the perturbation to be resolved.
The CF response is so poorly known, that even the first simulation step enters terra incognita.