Propagation of Error and the Reliability of Global Air Temperature Projections
Guest essay by Pat Frank
Regular readers at Anthony’s Watts Up With That will know that for several years, since July 2013 in fact, I have been trying to publish an analysis of climate model error.
The analysis propagates a lower limit calibration error of climate models through their air temperature projections. Anyone reading here can predict the result. Climate models are utterly unreliable. For a more extended discussion see my prior WUWT post on this topic (thank-you Anthony).
The bottom line is that when it comes to a CO2 effect on global climate, no one knows what they’re talking about.
Before continuing, I would like to extend a profoundly grateful thank-you! to Anthony for providing an uncensored voice to climate skeptics, over against those who would see them silenced. By “climate skeptics” I mean science-minded people who have assessed the case for anthropogenic global warming and have retained their critical integrity.
In any case, I recently received my sixth rejection; this time from Earth and Space Science, an AGU journal. The rejection followed the usual two rounds of uniformly negative but scientifically meritless reviews (more on that later).
After six tries over more than four years, I now despair of ever publishing the article in a climate journal. The stakes are just too great. It’s not the trillions of dollars that would be lost to sustainability troughers.
Nope. It’s that if the analysis were published, the career of every single climate modeler would go down the tubes, starting with James Hansen. Their competence comes into question. Grants disappear. Universities lose enormous income.
Given all that conflict of interest, what consensus climate scientist could possibly provide a dispassionate review? They will feel justifiably threatened. Why wouldn’t they look for some reason, any reason, to reject the paper?
Somehow climate science journal editors have seemed blind to this obvious conflict of interest as they chose their reviewers.
With the near hopelessness of publication, I have decided to make the manuscript widely available as samizdat literature.
The manuscript with its Supporting Information document is available without restriction here (13.4 MB pdf).
Please go ahead and download it, examine it, comment on it, and send it on to whomever you like. For myself, I have no doubt the analysis is correct.
Here’s the analytical core of it all:
Climate model air temperature projections are just linear extrapolations of greenhouse gas forcing. Therefore, they are subject to linear propagation of error.
Complicated, isn’t it. I have yet to encounter a consensus climate scientist able to grasp that concept.
Willis Eschenbach demonstrated that climate models are just linearity machines back in 2011, by the way, as did I in my 2008 Skeptic paper and at CA in 2006.
The manuscript shows that this linear equation,

ΔT(K) = fCO2 × 33 K × [(F0 + ΣΔFi)/F0] + a,

will emulate the air temperature projection of any climate model; fCO2 reflects climate sensitivity and "a" is an offset. Both coefficients vary with the model. F0 is the total greenhouse forcing at the start of the projection and the ΔFi are the annual forcing increments, so the parenthetical term is just the fractional change in forcing. The air temperature projections of even the most advanced climate models are hardly more than y = mx + b.
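In code, the emulation amounts to a few lines. Here is a minimal sketch; the coefficient values (fCO2 = 0.42, F0 = 34 W/m²) are illustrative stand-ins, not any particular model's fitted values:

```python
def emulate_dT(forcing_increments, f_co2=0.42, a=0.0, F0=34.0):
    """Linear emulation of a GCM air temperature projection.

    forcing_increments: annual GHG forcing increments (W/m^2).
    f_co2, a: per-model fitted coefficients (values here are illustrative).
    F0: total greenhouse forcing at the projection start (W/m^2).
    """
    dT, total_dF = [], 0.0
    for dF in forcing_increments:
        total_dF += dF
        # dT = f_CO2 * 33 K * (fractional change in forcing) + offset
        dT.append(f_co2 * 33.0 * (F0 + total_dF) / F0 + a)
    return dT

# Example: emulate a projection under a steady 0.04 W/m^2 annual increase.
projection = emulate_dT([0.04] * 95)
```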
The manuscript demonstrates dozens of successful emulations, such as these:
Legend: points are CMIP5 RCP4.5 and RCP8.5 projections. Panel ‘a’ is the GISS GCM Model-E2-H-p1. Panel ‘b’ is the Beijing Climate Center Climate System GCM Model 1-1 (BCC-CSM1-1). The PWM lines are emulations from the linear equation.
CMIP5 models display an inherent calibration error of ±4 W/m² in their simulations of long-wave cloud forcing (LWCF). This is a systematic error that arises from incorrect physical theory. It propagates into every single iterative step of a climate simulation. A full discussion can be found in the manuscript.
The next figure shows what happens when this error is propagated through CMIP5 air temperature projections (starting at 2005).
Legend: Panel ‘a’ points are the CMIP5 multi-model mean anomaly projections of the 5AR RCP4.5 and RCP8.5 scenarios. The PWM lines are the linear emulations. In panel ‘b’, the colored lines are the same two RCP projections. The uncertainty envelopes are from propagated model LWCF calibration error.
For RCP4.5, the emulation departs from the mean near projection year 2050 because the GHG forcing has become constant.
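Mechanically, the propagation is equally simple. A minimal sketch, using the same illustrative coefficients as above: the ±4 W/m² calibration error converts to a per-year temperature uncertainty that compounds in quadrature through the iterated projection.

```python
import math

def uncertainty_envelope(n_years, f_co2=0.42, F0=34.0, lwcf_error=4.0):
    """Root-sum-square propagation of a constant per-step calibration
    uncertainty (illustrative coefficients, as in the sketch above)."""
    u_step = f_co2 * 33.0 * (lwcf_error / F0)   # per-year uncertainty, K
    # Sequential step uncertainties add in quadrature: sigma_n = u * sqrt(n)
    return [u_step * math.sqrt(n) for n in range(1, n_years + 1)]

env = uncertainty_envelope(95)                  # 2005 through 2100
print(f"+/-{env[0]:.1f} K after year 1, +/-{env[-1]:.1f} K by year 95")
```

The growing ±K envelope this produces is a statement of ignorance about the projection, not a predicted temperature excursion.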
As a monument to the extraordinary incompetence that reigns in the field of consensus climate science, I have made the 29 reviews and my responses for all six submissions available here for public examination (44.6 MB zip file, checked with Norton Antivirus).
When I say incompetence, here’s what I mean and here’s what you’ll find.
Consensus climate scientists:
1. Think that precision is accuracy
2. Think that a root-mean-square error is an energetic perturbation on the model
3. Think that climate models can be used to validate climate models
4. Do not understand calibration at all
5. Do not know that calibration error propagates into subsequent calculations
6. Do not know the difference between statistical uncertainty and physical error
7. Think that “±” uncertainty means positive error offset
8. Think that fortuitously cancelling errors remove physical uncertainty
9. Think that projection anomalies are physically accurate (never demonstrated)
10. Think that projection variance about a mean is identical to propagated error
11. Think that a “±K” uncertainty is a physically real temperature
12. Think that a “±K” uncertainty bar means the climate model itself is oscillating violently between ice-house and hot-house climate states
Item 12 is especially indicative of the general incompetence of consensus climate scientists.
Not one of the PhDs making that supposition noticed that a “±” uncertainty bar passes through, and cuts vertically across, every single simulated temperature point. Not one of them figured out that their “±” vertical oscillations meant that the model must occupy the ice-house and hot-house climate states simultaneously!
If you download them, you will find these mistakes repeated and ramified throughout the reviews.
Nevertheless, my manuscript editors apparently accepted these obvious mistakes as valid criticisms. Several have the training to know the manuscript analysis is correct.
For that reason, I have decided their editorial acuity merits them our applause.
Here they are:
- Steven Ghan, Journal of Geophysical Research-Atmospheres
- Radan Huth, International Journal of Climatology
- Timothy Li, Earth Science Reviews
- Timothy DelSole, Journal of Climate
- Jorge E. Gonzalez-Cruz, Advances in Meteorology
- Jonathan Jiang, Earth and Space Science
Please don’t contact or bother any of these gentlemen. On the other hand, one can hope some publicity leads them to blush in shame.
After submitting my responses showing the reviews were scientifically meritless, I asked several of these editors to have the courage of a scientist and publish over the meritless objections. After all, in science, analytical demonstrations are bulletproof against criticism. However, none of them rose to the challenge.
If any journal editor or publisher out there wants to step up to the scientific plate after examining my manuscript, I’d be very grateful.
The above journals agreed to send the manuscript out for review. Determined readers might enjoy the few peculiar stories of non-review rejections in the appendix at the bottom.
Really weird: several reviewers inadvertently validated the manuscript while rejecting it.
For example, the third reviewer in JGR round 2 (JGR-A R2#3) wrote that,
“[emulation] is only successful in situations where the forcing is basically linear …” and “[emulations] only work with scenarios that have roughly linearly increasing forcings. Any stabilization or addition of large transients (such as volcanoes) will cause the mismatch between this emulator and the underlying GCM to be obvious.”
The manuscript directly demonstrated that every single climate model projection was linear in forcing. The reviewer’s admission of linearity is tantamount to a validation.
But the reviewer also set a criterion by which the analysis could be verified — emulate a projection with non-linear forcings. He apparently didn’t check his claim before making it (big oh, oh!) even though he had the emulation equation.
My response included this figure:
Legend: The points are Jim Hansen’s 1988 scenario A, B, and C. All three scenarios include volcanic forcings. The lines are the linear emulations.
The volcanic forcings are non-linear, but climate models extrapolate them linearly. The linear equation will successfully emulate linear extrapolations of non-linear forcings. Simple. The emulations of Jim Hansen’s GISS Model II simulations are as good as those of any climate model.
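The point is easy to check with the sketch above: feed the linear emulator a forcing series containing a transient spike (the numbers below are illustrative, not Hansen's):

```python
# Steady GHG forcing growth with a two-year volcanic excursion (illustrative).
forcings = [0.04] * 50
forcings[20] -= 2.5   # eruption: abrupt negative forcing in year 21
forcings[21] += 2.5   # aerosols clear: forcing recovers in year 22

emulated = emulate_dT(forcings)   # emulate_dT from the sketch above; the
                                  # emulation dips and recovers with the
                                  # forcing, just as the GCM projection does
```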
The editor was clearly unimpressed with the demonstration, and that the reviewer inadvertently validated the manuscript analysis.
The same incongruity of inadvertent validations occurred in five of the six submissions: AM R1#1 and R2#1; IJC R1#1 and R2#1; JoC, #2; ESS R1#6 and R2#2 and R2#5.
In his review, JGR R2 reviewer 3 immediately referenced information found only in the debate I had (and won) with Gavin Schmidt at Realclimate. He also used very Gavin-like language. So, I strongly suspect this JGR reviewer was indeed Gavin Schmidt. That’s just my opinion, though. I can’t be completely sure because the review was anonymous.
So, let’s call him Gavinoid Schmidt-like. Three of the editors recruited this reviewer. One expects they called in the big gun to dispose of the upstart.
The Gavinoid responded with three mostly identical reviews. They were among the most incompetent of the 29. Every one of the three included mistake #12.
Here’s Gavinoid’s deep thinking:
“For instance, even after forcings have stabilized, this analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states.”
And there it is. Gavinoid thinks the increasingly large “±K” projection uncertainty bars mean the climate model itself is oscillating increasingly wildly between ice-house and hot-house climate states. He thinks a statistic is a physically real temperature.
A naïve freshman mistake, and the Gavinoid is undoubtedly a PhD-level climate modeler.
Most of the Gavinoid's analytical mistakes are list items 2, 5, 6, 10, and 11. If you download the paper and Supporting Information, section 10.3 of the SI includes a discussion of the total hash the Gavinoid made of a Stefan-Boltzmann analysis.
And if you’d like to see an extraordinarily bad review, check out ESS round 2 review #2. It apparently passed editorial muster.
I can't finish without mentioning Dr. Patrick Brown's video criticizing the YouTube presentation of the manuscript analysis. This was my 2016 talk for the Doctors for Disaster Preparedness. Dr. Brown's presentation was also cross-posted at "andthentheresphysics" (named with no appreciation of the irony) and on YouTube.
Dr. Brown is a climate modeler and post-doctoral scholar working with Prof. Kenneth Caldeira at the Carnegie Institution, Stanford University. He kindly notified me after posting his critique. Our conversation about it is in the comments section below his video.
Dr. Brown’s objections were classic climate modeler, making list mistakes 2, 4, 5, 6, 7, and 11.
He also made the nearly unique mistake of confusing a root-sum-square average of calibration error statistics with an average of physical magnitudes; nearly unique because one of the ESS reviewers made the same mistake.
Mr. andthentheresphysics weighed in with his own mistaken views, both at Patrick Brown’s site and at his own. His blog commentators expressed fatuous insubstantialities and his moderator was tediously censorious.
That’s about it. Readers moved to mount analytical criticisms are urged to first consult the list and then the reviews. You’re likely to find your objections critically addressed there.
I made the reviews easy to appraise by starting them with a summary list of reviewer mistakes. That didn't seem to help the editors, though.
Thanks for indulging me by reading this.
I felt a true need to go public, rather than submitting in silence to what I see as reflexive intellectual rejectionism and indeed a noxious betrayal of science by the very people charged with its protection.
Appendix of Also-Ran Journals with Editorial ABM* Responses
Risk Analysis. L. Anthony (Tony) Cox, chief editor; James Lambert, manuscript editor.
This was my first submission. I expected a positive result because they had no dog in the climate fight, their website boasts competence in mathematical modeling, and they had published papers on error analysis of numerical models. What could go wrong?
Reason for declining review: “the approach is quite narrow and there is little promise of interest and lessons that transfer across the several disciplines that are the audience of the RA journal.”
Chief editor Tony Cox agreed with that judgment.
A risk analysis audience not interested in discovering that there is no knowable risk from CO2 emissions.
Right.
Asia-Pacific Journal of Atmospheric Sciences. Songyou Hong, chief editor; Sukyoung Lee, manuscript editor. Dr. Lee is a professor of atmospheric meteorology at Penn State, a colleague of Michael Mann, and altogether a wonderful prospect for unbiased judgment.
Reason for declining review: “model-simulated atmospheric states are far from being in a radiative convective equilibrium as in Manabe and Wetherald (1967), which your analysis is based upon.” and because the climate is complex and nonlinear.
Chief editor Songyou Hong supported that judgment.
The manuscript is about error analysis, not about climate. It uses data from Manabe and Wetherald but is very obviously not based upon it.
Dr. Lee’s rejection follows either a shallow analysis or a convenient pretext.
I hope she was rewarded with Mike’s appreciation, anyway.
Science Bulletin. Xiaoya Chen, chief editor, unsigned email communication from “zhixin.”
Reason for declining review: “We have given [the manuscript] serious attention and read it carefully. The criteria for Science Bulletin to evaluate manuscripts are the novelty and significance of the research, and whether it is interesting for a broad scientific audience. Unfortunately, your manuscript does not reach a priority sufficient for a full review in our journal. We regret to inform you that we will not consider it further for publication.”
An analysis that invalidates every single climate model study of the past 30 years, demonstrates that a global climate impact of CO2 emissions, if any, is presently unknowable, and indisputably proves the scientific vacuity of the IPCC, does not reach a priority sufficient for a full review in Science Bulletin.
Right.
Science Bulletin then courageously went on to immediately block my email account.
*ABM = anyone but me; a syndrome widely apparent among journal editors.
In the computer age it should not matter which time scales you calculate; even minute-by-minute data would be available. It depends on whether one sees CO2 and H2O in every temperature change of the past and then projects it onto the future, as the CO2 believers obviously do. I'm not saying the model is fault-free; no model is. Yet many models made it through review that are supposedly flawless. Even the average of the models during an El Niño uptick was recently presented by Gavin as evidence of the models' longer-term accuracy.
However, in the longer term it looks as follows:
And extending this lower level extrapolation backward into the LIA indicates no attribution to CO2. It is a fabricated non-catastrophe! Utterly!
The LIA wasn't caused by CO2. Its end was due to three main factors: a decline in volcanic aerosols together with a decline in ice-albedo feedback; some solar influence in the first half of the 1900s; and GHGs (CO2's forcing changes fastest in the beginning).
crackers345
Name the volcanoes and their eruption dates that were "high" during the Medieval Warming Period (you know, that hot period around the world that cooled off into the LIA) that caused the LIA to "end". See, when volcanic activity is "very high", the extra soot and ash and gases in the sky COOL the average global temperatures. So, the lack of atmospheric contaminants cooled the world after they caused it to warm? And you need to name the volcanoes and the eruption dates that cooled things between the Minoan Warm Period and the Roman Warming Period, and then again between the Roman era and the MWP.
The LIA was at its deepest (lowest average global temperature) in 1650, and temperatures have been gradually rising ever since, with a characteristic 60-70 year short-term cycle. So, what exactly do "solar" changes in "the first half of the 1900's" have to do with a temperature change from 1650 to 2000? See, there was no substantial CO2 change globally between 1650 and the 1930's and 40's.
And, in fact, most of the CO2 change has occurred DURING the period when the earth's global average temperatures have changed the least!
crackers.. can you give some data showing the rise in volcanic aerosols which created the LIA in the first place, say 1300 to 1700?
There seem to have been quite a few equatorial volcanic eruptions during the period of the LIA – starting with the 1257 eruption of Mt Samalas in Indonesia….
https://www.nature.com/articles/srep34868
http://notrickszone.com/wp-content/uploads/2016/04/Volcanoes_4.jpg
RACook – the volcanic eruptions were 1250-1275 AD. They're shown in this paper:
Miller, G. H., Geirsdottir, A., Zhong, Y., Larsen, D., Otto-Bliesner, B. L., Holland, M. M., Bailey, D. A., Refsnider, K. A., Lehman, S. J., Southon, J. R., Anderson, C., Bjornsson, H., Thordarson, T., 2012: Abrupt onset of the Little Ice Age triggered by volcanism and sustained by sea-ice/ocean feedbacks. Geophysical Research Letters, 39: L02708. DOI: 10.1029/2011GL050168
It may not be the money and the fame that keep such papers out; it may be faith in a belief that results in defending it no matter what. Ask what would convince a true believer that he/she is wrong. My guess is, there is nothing that would. Meaning it's not about science. So not allowing the publication of the material is merely the expected reaction to a challenge to that which the editor cannot fathom ever to be false. Extreme faith in the belief means protecting it at all costs.
Until a few years ago (until the climate hype), there was a publicly accepted (even in Hollywood) law in the whole of science: every outsider opinion was at first picked apart mercilessly, even if it proved right on closer examination. This was publicly known and accepted, so the public never perceived any science as "final and all-knowing." It was only with the advent of climate science as a media darling and a vehicle for re-education and social change that this changed. Now dissenting views are no longer noticed or given voice. One lives in a bubble of self-reinforcing reflection. But every evil has its good: now people are turning away from the faith. Not because they think the science is not mature (they are already being re-educated), but because they have to solve other, more urgent problems. Thus the 97-percent mentality has developed into a poison for the more advanced goals of the vehicle called climate science, and for those who are behind it.
I suggest you all see Patrick Frank, PhD presentation at the 34th Annual DDP meeting, July 10, 2016, Omaha, Nebraska.
And I suggest you watch ….
https://www.youtube.com/watch?time_continue=2&v=rmTuPumcYkI
Toneb,
Dr. Brown is very sloppy in his attention to significant figures in his calculations. Since he has demonstrated that he is not the sort of person who pays attention to detail, I would be inclined to check all of his calculations.
In his last example, where he revisits the problem of how far the person has walked in order to address the issue of base error, if the question is reframed as where the person ends up, then the calculation used by Frank is appropriate. And, in the context of this blog, it really is more important to consider what the final future temperature will be (and its uncertainty) than how much the temperature will increase over a period of time.
You are kind to Mr Brown, Clyde Spencer.
He says that ±4 W/m² is time-invariant, while it is NOT. A ±4 W/m² over 20 years means ±1 W/m² over 5 years, or ±0.2 W/m² over 1 year, or ±20 W/m² per century.
And this way things add up finely: whether you use a 20-step model with ±0.2 W/m² uncertainty per step, or a 1-step model with ±4 W/m², in both cases you have a ±4 W/m² uncertainty.
Provided, however, that you can reduce the uncertainty when you reduce the step size.
Trouble is, for climate modelers, you cannot. The uncertainty is certainly no less than ±0.1 W/m² on a single day, and that is already too big a step for GCMs to work.
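Written out, under that linear-accumulation assumption:

$$
\frac{\pm 4\ \mathrm{W/m^2}}{20\ \mathrm{yr}} = \pm 0.2\ \mathrm{W/m^2\,yr^{-1}},
\qquad
20 \times \left(\pm 0.2\ \mathrm{W/m^2}\right) = \pm 4\ \mathrm{W/m^2}.
$$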
Toneb, you pointed to Dr. Brown's YouTube version.
Try looking here, the original video at his personal site, and the discussion we had below it.
His argument does not withstand critical scrutiny.
Who knew that plant food appears to be the boogeyman of the century in that it merits such attention? Like saying bunnies are horrendous and we must study each soft fiber in their fur.
Good discussion.
Great work Pat.
Good luck! Never before has so much policy been driven by mere curve-fitting.
The computer games are obviously a joke…..and yet, people not only defend them but the science behind them…..and they get away with it!
Pat ….. just giving 2 cents worth. It seems your assertion of propagated linear error is connected to the article published here about how the temperature is just the result of a random walk. As such, the climate scientists can't acknowledge your paper, as in their mind CO2 is actually forcing the climate. I tend to agree with the random-walk theory for the small changes in temperature noted over short time periods, but not for the changes that clearly appear as cycles, such as ice ages. But again, CO2 is not relevant to the long-term discussion, nor is modeling. This whole model BS exercise is just part of an agenda against fossil fuels. It doesn't need to be accurate or precise, nor does it have to apply to reality.
Good luck.
I don't know whether it was all this. I suspect a few scientists adopted an incorrect view of the climate and thought they were saving the world, in the best Saturday-afternoon-B-flick fashion. But I know the FF haters jumped on this as soon as they figured out the scientists didn't mind being used for propaganda. Which they didn't, because they thought they were trying to save the world.
It is nothing more than virtue signalling of the highest order. The entire community is too far down the rabbit hole to even consider any other viewpoint. This nonsense has been and is being taught to pre-teen children in schools; what hope have the next generation of climate scientists of being objective at the outset?
Yes, it is a campaign, driven by billionaires with an interest in new "renewable" energy. Follow the money. CNN International yesterday, from Tom Steyer: "In the past there are presidents impeached for far less." Guys, that's it. Pres. Trump is dangerous to business at this level, and now an impeachment is to be created for intimidation. It was not said, however, what the "far less" was, or even what the reason should be. Does this not remind you of Pat's peer reviewers?
Thanks, Dr. Dean – though my error analysis concerns how climate models project temperature. It has no bearing on what the physically real temperature does.
Of course, every author thinks his/her article is 100% correct and vital. Why don't you publish the full reviews so we can see what the reviewers wrote in detail?
Gee, Crackers and Mark, Nature published an error-filled Dr. Mann "hockey stick" paper way back in 1998. It is now known as one of the most critically smashed papers in climate research history.
Carry on with your hypocrisy.
It's the exact opposite: many studies have replicated a HS.
Indeed, crackers345, "many studies have replicated a HS" out of data known to have no trend at all, just by applying the faulty statistical innovation of Gavin. That's the point that destroyed the HS even for the IPCC.
He published the full review. It is in the zip file, link in the article.
To me it seems the reviewers brought up arguments without thinking about it and without checking the sources, like the one discussed above with the Cloud Forcing uncertainty. In all reviews I cross checked I found the same thing. Weak arguments in the reviews, easily and clearly debunked by Pat Frank. It looks like the result of the reviews was clear from the beginning on. The procedure just needed to be followed.
In my eyes, that conspiracy is even more fascinating than the uncertainty monster, which everyone assumed anyway.
Pat there are a lot of boutique journals that will publish your work for a fee. Just pony up and PRESTO! It will solve all your problems with rejection.
So it doesn’t matter that these journals give bogus reasons for rejecting a paper, because there are boutique journals.
Your assertion that the reasons are bogus is “hand waving.”
Does it take a hand-waver to know one, I wonder?
Mark S. Johnson, take a look at the incompetent reviews, link provided in the text, before making facile dismissals of bogosity.
@ATTP. Your quote “So, if someone wants to argue that the range of possible temperature is 30K (as appears to be suggested by Pat Frank’s error analysis) then one should try to explain how these states all satisfy the condition that they should be in approximate energy balance (or tending towards it).”. No one is remotely arguing that. The point is that the error propagating through the modelling process results in a range of possible temperatures that is very large because of error propagation, not because the actual physical range of possible temperatures is that large.
The point to remember about climate models is that they cannot be used as proof of the CAGW hypothesis because the assumptions of the CAGW hypothesis are programmed into the models. They are merely another representation of the theory, they are not proof of anything.
dbakerber,
You said, “They are merely another representation of the theory, they are not proof of anything.” To expand on that, they are actually very complex working hypotheses that have been formalized by coding the assumed mathematical relationships. In an ideal world, where the Scientific Method was followed, the models should be subject to comparison and validation by empirical evidence. If necessary, the models should be adjusted (revised working hypothesis) to achieve better agreement with reality. Instead, we are basically told that the “science is settled,” and that the models are reliable. Yet, even today, Einstein’s theories are still being tested by replication.
Einstein’s theories are still being tested by replication.
===
To date not a single prediction of GR has proven to be wrong.
In contrast, climate science doesn't make predictions; it makes projections. Thus the need to add "science" to the name, like political science and Christian Science.
Science Bulletin rejected your paper for insufficient “novelty and significance”. It is clearly significant.
One can only conclude they thought it was not novel.
Absolutely!! His findings are not novel to them because they understand that their models are not correct, that is why they need perpetual funding to keep fixing them. What they should have just written in their rejection is “You’re not helping.”
It’s wrong, mostly, for reasons explained over and over and over again.
Frank’s model for the error propagation would mean that the temperatures should go below 0K in a few centuries. Amongst the much more detailed criticisms provided by others, this provides a basic sanity check showing that his understanding is incorrect.
What happens in the real world (or in the models) when the cloud forcing is 4 W/m2 lower? The temperature drops a little, then the outgoing radiation decreases, and the temperature stabilizes. It doesn't drop to 0K. The error has a static effect on temperatures.
Pat Frank basically treats the static uncertainty of 4 W/m2 as an expanding uncertainty, as if the cloud forcing could keep changing each year by another 4 W/m2 until the Earth froze over or boiled off. This is not realistic, neither in the models nor in reality. Basically: he's treating the uncertainty as W/m2/year, rather than W/m2.
And that’s why his paper was rejected. Because it’s wrong.
Forrest, I’m explaining how a static uncertainty in the cloud forcing does not propagate through the models, nor through the real world. And I’m explaining how Pat Frank made his mathematical mistake; essentially by changing the units from W/m2 to W/m2/year.
Stop and think about it. Do you think that a static uncertainty in the cloud forcing can mean that the temperature could be anything? Does that make sense in reality, or in the models? What about the restorative forcings?
OMG, Mr. Winchester. "Frank's model … would mean that temperatures should go below 0K…." No, Frank's "model" (I don't think it's a model at all) has nothing to do with predicting what temps will do. If I understand correctly, it is the flawed model's temps, MODEL TEMPS, that would go to 0 K. IF there were proper error bars showing uncertainty (as distinguished from known errors) with these models, the ERROR BARS (THUS THE MODELS) would go into the impossible realm of heat/cold, not actual temperatures. If the model does that, then it is flawed and cannot be corrected by "adjusting parameters" (newspeak for "tuning"). I'm not a scientist or mathematician, but it's clear to me. Are you really as thick as ATTP?
Treating it annually is correct. It is uncertainty per year. How far it drags the temperature off in one direction or the other after x many years depends, mathematically, on the model (the variables in the function) and on the values of the parameters. I think Pat Frank is doing error propagation for functions.
If, after doing this, the range of temperature projections goes 'off the charts' into historically unprecedented territory, a real scientist concludes that there is something about their model that is probably REALLY EFFING WRONG!!!
Since Pat Frank's model emulates the temp outputs of climate models, and the climate models do not properly propagate the error, the direct implication is that the climate models, when properly propagating error, would show equally unacceptable error bars, requiring one to conclude that the climate models are also really effing wrong.
If it were me, I'd actually use the historically ridiculous outcomes of my model to help guide the selection of parameters and their values. Again, error propagation depends on the function itself.
Doing a proper error propagation could actually be used to help fine-tune the climate model. It could turn out that the fine tuning (which results in non-off-the-chart error bars) coincides with models that are far less sensitive to changes in GHGs.
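For reference, the textbook propagation formula for a function of uncertain inputs (standard form, not quoted from the manuscript) is:

$$
\sigma_f = \sqrt{\sum_i \left(\frac{\partial f}{\partial x_i}\right)^2 \sigma_{x_i}^2},
$$

so for an iterated calculation in which every annual step carries the same uncertainty $u$, the propagated uncertainty after $n$ steps is $\sigma_n = u\sqrt{n}$.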
Benjamin Winchester, “Frank’s model for the error propagation would mean that the temperatures should go below 0K in a few centuries.”
It means nothing of the kind. You’re supposing an uncertainty statistic is a physically real temperature. Fatal mistake.
BW “as if the cloud forcing could keep changing each year by another 4 W/m2 until the Earth froze over or boiled off.”
Wrong again. You’re treating a calibration error statistic as though it were an energetic perturbation. Fatal mistake #2.
Take comfort, though, because many PhD-level climate modeler reviewers did the same thing. You’re qualified to be a consensus climatologist, Benjamin, but not to be a scientist.
I always enjoyed the dialogue between RPS and Gavin.
https://pielkeclimatesci.wordpress.com/?s=Gavin+Schmidt&submit=Search
“I always enjoyed dialogue between RPS Gavin.”
There will be less of it now.
https://twitter.com/ClimateOfGavin/status/921894302283894784
And how did the dialog go? Why was he blocked? And why would that lead to the dishonesty of not citing them? On that single statement alone, he destroyed an “ocean” of trust.
Gavin uses the sneaky approach, he uses the “mute” feature on his Twitter account. At one point, Gavin had blocked me, but now I’ve determined he’s used “mute”.
BTW Gavin is not without his own set of sins, as this video shows his cowardice on full display, when asked to appear on stage with Dr. Roy Spencer.
Anthony Watts, October 23, 2017 at 4:14 pm
A bit too much to be forced and subjected to the "science" of a NASA scientist that requires, indisputably, the acceptance that science and "total" truth are compatible in expression and method.
But then that is Gavin and his "science" of total truth, and at the same time the contemplation of some uncertainty still entertained by Gav!!!!
Gavin the sciencey drama queen, maybe……
volcanic models suffer from the pneumonoultramicroscopicsilicovolcanoconiosis
Vuk,
Well isn’t that just supercalifragilisticexpialidocious!?
On this discussion of error propagation through iterative numerical analysis I am not qualified to critique or assess.
But I do know without a doubt this:
the implementation of all of the GCMs with tunable parameters (for real physical processes that are smaller than their grid scale, like convection, precipitation, cloud formation, you know, those pesky trivial things) makes them nothing more than tuned to expectation. They are simply confirmation bias by the modellers, who tune them at every run. If they get a wild, bad run (too hot, too cold), they reinitialize, tweak, and start again. All of that is by their own admission regarding the tuned parameterizations.
And then they combine them in an “ensemble mean” to give them some false patina of validity and confirmation.
GCM tuning is junk science and their GroupThink completely blinds the model community to this reality that surrounds them like a fetid swamp.
Dear Anthony,
Try a Chinese journal, they are openly amenable to climate skepticism.
And before anyone says 'Chinese…!!', as if we are talking about Amazonian nomads, please remember that it is the Chinese who manufacture all of your computer gadgetry. In addition, to allay Western prejudice, they always have two Western reviewers in addition to their own.
Ralph
It is not Anthony, it is Pat. And he need not try a Chinese journal; instead, he has laid bare the poor peer-review process. "Pfui Deibel," as we say in German.
I try to keep this as simple as possible but not simpler than that.
The IPCC's climate model is the same as Eq. (1) when its parameters are based on the choices made by the IPCC:
dTs = λ · RF (1)
where λ = 0.5 K/(W/m²) and RF is as specified by Myhre et al. This is a linear dependency.
Eq. (1) gives a climate sensitivity of 1.85 °C; the IPCC's official value is 1.9 °C ± 0.15 °C. In AR5, the TCR mean of 30 GCMs is tabulated as 1.8 °C (±0.6 °C). The high variation in the TCR values of the GCMs is due to the various λ values applied in the models, which in turn are due to the various feedbacks applied.
The most complicated computer models and the simplest model give the same results from a CO2 concentration of 280 to 560 ppm. This means that Pat Frank is right: GCMs are built on a linear relationship between the GH gas forcings and the surface temperature.
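As a worked check, using the simplified CO2 forcing expression of Myhre et al., $RF = 5.35 \ln(C/C_0)$:

$$
RF = 5.35 \ln\!\left(\frac{560}{280}\right) \approx 3.7\ \mathrm{W/m^2},
\qquad
\Delta T_s = \lambda \cdot RF = 0.5 \times 3.7 \approx 1.85\ \mathrm{K}.
$$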
Dr. Antero Ollila
P.S. When I sent my manuscript on reproducing the radiative forcing of CO2 to Science and Nature, the editors rejected it immediately, saying it was of no interest to their readers. Actually, it concerns one of the cornerstones of the present global warming theory.
Link: https://wattsupwiththat.com/2017/03/17/on-the-reproducibility-of-the-ipccs-climate-sensitivity/
It may be a cornerstone, but can you claim that your result was both new and important?
There are thousands of papers published on climate change that are neither new nor important. As long as they repeat the mantra of death to the earth, they get published.
"New and important" is just the excuse, rather than being honest and saying "not conforming to the narrative."
The errors associated with “fudge factor assumptions” are far greater than any propagated statistical errors.
That is what I thought, but really, really can’t decide based on this.
I would agree with the poor understanding of reviewers.
Two of my reviewers stated that CO2 (i.e., atmospheric pressure) does not reduce by 40% at 4,000 m. And one stated that "the concentration of CO2 at altitude is the same as at sea level, and so plants cannot be starved of CO2 at altitude".
Both have misunderstood the difference between concentration and partial pressure, even though the figures were clearly marked in micro-bars. I did ask if they would be short of oxygen at the top of Mt Everest, the same as plants would be short of CO2 there, but got no reply.
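The back-of-envelope arithmetic, assuming a nominal atmospheric scale height of about 8 km:

$$
p(z) = p_0\, e^{-z/H}, \qquad p(4000\ \mathrm{m}) \approx p_0\, e^{-4000/8000} \approx 0.61\, p_0,
$$

about a 40% drop in CO2 partial pressure at 4,000 m, even though the mixing ratio in ppmv is nearly unchanged.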
The review process is certainly broken, because there is no discussion and so reviewers can remain in their bubble of misunderstanding. At least a blog review will throw up plenty of discussion, from many views, and deliver a better understanding.
Ralph
The internet has the real prospect of destroying the scientific journals' grip on peer review, and thus their business model, just as it has already done to the mainstream newspapers and news magazines, with precipitous declines in circulation. They cannot control the information flow anymore. Authoritarian regimes (of the Left and Right politically) are desperate to control the internet (as the governments of China, Russia, and Turkey are doing).
Pat Frank’s manuscript will now receive a wider readership (outside of the GroupThink-controlled GCM community) here at WUWT and on other reposted blogs than if the GCM community had simply let it be published in a small, low impact, low circulation journal.
arXiv.org is a place where many physicists place their initial manuscripts for critical review and author replies before submission to a journal.
https://confluence.cornell.edu/display/arxivpub/arXiv+Governance+Model
Maybe Pat Frank could get it to arXiv.org to force an open review by his anonymous reviewers?
Joel O’Bryan, PhD
Only to a point, and only if people will accept it. They just say, "Well, it's not been reviewed"; there is no attempt to review the physics to see if there is a flaw in the logic, just a rebuff.
I know https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/
This explains the surface-temperature-regulating mechanism of water vapor, and how that is what's controlling surface temps. CO2 has little to no role.
“Pat Frank’s manuscript will now receive a wider readership “
There is no evidence from the discussion here that anyone (except attp and me) has actually read it, let alone critically. Just the usual knee-jerk, all-purpose stuff.
Nick,
There are tonnes of peer-reviewed papers over the years that have turned out to be crapola. No malfeasance, nefarious intent, nor bad ethics needed. Peer review of science papers is not the gold standard the naive public thinks it is.
What is paramount to the Climate Alarmist establishment, though, is the suppression of ideas and analyses that run counter to, or interfere with, the public's acceptance of the economic control goals embodied in the UNFCCC COP agreements (Kyoto, Rio, and now Paris).
The GCMs declaring an inflated CO2 sensitivity are the central pillar of the Alarmist’s Big Socialism tent. Without that everything they’ve attempted for the last 30 years toward economic control via energy resources unravels.
…and Pat’s paper is blasphemy. Modellers hate it because it shows them to be mathematical fools, whereby even if his error propagation idea is wrong, the realization by a wider audience that supercomputer-run GCMs are simply massive, expensive Rube-Goldberg implementations of y=mx+b cannot be avoided.
“There is no evidence from the discussion here that anyone (except attp and me) has actually read it, let alone critically”
There is NO evidence that you actually understood it.
Your comments say… nope!
You make all the same mistakes that Pat has listed.
Nick,
I read the paper and the large zip with reviewer comments and the PF replies to them.
I have not posted here yet because that reading takes a lot of time.
Thus, your assertion that only 2 people have read the paper is wrong. Absolutely, completely, demonstrably wrong.
What do you do when you are found to be wrong?
As you know from blogs over the years, when I am wrong I acknowledge and correct. Geoff.
Geoff,
I said there was no evidence from the discussion here that anyone had read it. You had not contributed to the discussion. Would you like to? Maybe explain how that cloud cover acquired a year^-1 dimension which seems to be the basis for annual growth?
***Geoff,
I said there was no evidence from the discussion here that anyone had read it. You had not contributed to the discussion. Would you like to? Maybe explain how that cloud cover acquired a year^-1 dimension which seems to be the basis for annual growth?**
AS ALWAYS, NICK DOES NOT ADMIT HE WAS WRONG, BUT CHANGES THE SUBJECT, AS HE HAS BEEN DOING IN HIS SO-CALLED "DISCUSSION" ON THIS POST.
I tried ArXiv, Joel. They rejected the upload.
Mixing up pressure and concentration is an elementary mistake indeed. And when your reviewer or reader makes such an error, you really need to think about what led to that mistake.
BTW, always add a citation when referring to something reviewed; Ellis 2017 (in preparation) would do if it is not out yet…
Well, I did give both ‘micro-bar’ and ‘equivalent surface ppm concentration’, just to try and make things clear. But whatever the notation, I still cannot see how any scientist can think that plants (or indeed animals) would not be starved of CO2 (or oxygen) with altitude. How could anyone think that?
The reference is:
Modulation of Ice Ages via Dust and Albedo. Ellis and Palmer 2016
http://www.sciencedirect.com/science/article/pii/S1674987116300305
Ralph
Two of the comments were…
Quote:
Finally the authors calculate that CO2 drops by 40% at 4000m altitude relative to sea-level. However, this is simply incorrect physics. This means that the calculations of elevation-CO2 deserts are also incorrect.
I strongly suspect that the calculations showing a 120ppm drop in CO2 at 4000m in the tropics and 65ppmb drop at 2000m in the extra-tropics are completely wrong. Observations of CO2 with altitude show tiny variations (less that 5ppmv, e.g. Olsen & Randerson, 2004, figure 3). So this calculation in table 4 is very likely wrong. This means that the emergence of deserts at sub-190ppmv is not realistic because it is based on flawed calculations. This undermines much of the rest of the manuscript.
Endquote.
With pressures given as both ‘micro-bar’ and ‘surface concentration ppm equivalent’, for ease of clarification.
The emergence of CO2-deprivation deserts during the ice age was primarily at high altitude in already arid locations, like Patagonia and the Gobi region. And we know that new deserts emerged in northern China, because of the huge increase in dust deposition upon the Loess Plateau, which records deposits from nearly all the ice ages. Indeed, the Loess Plateau gives an ice age record that is as valuable as the ice core records from Antarctica.
But the paper was rejected because of that ‘mistake’.
R
And to cap it all, all references to the paper have been deleted from Wiki because I am apparently a ‘climate denier’ – whatever that means…
R
Ralph, is Connolley the clown still at his wiki capers?
in Manabe and Wetherald (1967), which your analysis is based upon.” and because the climate is complex and nonlinear.
==========
Are climate models “complex and nonlinear”? Is this error analysis looking at climate or climate models?
The climate models oscillate between hot and cold on different runs. The modellers consider this to be prediction error from the true result but it is not, because the future does not exist. It is an imaginary concept.
As a result, the future cannot be calculated beyond a probability. It most certainly does not exist as an average of that probability, as climate science would have us believe.
The difference between the probability envelope and the average is uncertainty that comes from mistaking the future for something that exists, ignoring that the future is not yet written.
As such this is separate from and independent of the error measurement.
"The future does not exist"
Agree. It fascinates me that science, in collecting data, is always looking to the past, at traces and leftovers of things gone by. The future doesn't exist by definition, and a confirmation of a prediction can only be done with data from past phenomena. All the phenomena happening in the "now" have continuous and complicated, if not chaotic, interactions: a dynamic process that may leave some traces which, on longer time scales, have some stability ("data"). But the overwhelming majority of the effects of these interactions are lost on us, as they are not stable and static enough to be registered.
So it seems science is always a prisoner of leftovers in the past. Is that maybe the reason why would-be scientists and charlatans often try to escape to a perceived future and pretend to know about it?
Errm, you do realise that Hansen retired some time ago, right?
I think careers can go rectum even post mortem. I certainly hope so.
"Reputation" would have been a better word choice. With reputation goes career, for those still practicing the profession of science.
yet his modeling was a parody of science and maths
ATTP and Nick cannot comprehend the difference between model parameterization uncertainty and real-world physical uncertainty.
The uncertainty is in the models; it has nothing to do with the real world.
All of that uncertainty has no bearing on the real world.
X amount of the output is hindcast tuning, not model physics.
It is absolutely absurd that Nick and ATTP cannot tell the difference between a model and the physical world.
It is even more alarming given how much we do not understand at all, including the interaction of ocean, surface, and atmosphere.
Throw in lack of resolution, and a heap of parameterizations…
My work requires accuracy. If I performed my function as climate modelers do, I would be fired.
ATTP, you wouldn't last 5 minutes in my world, and neither would Nick.
I'd like to see these models run for 5,000 years to see what they produce. I bet the outcome is quite unbelievable 😀
See the work of David Archer, U Chicago. He's done great work on how CO2 will vary over the next 100,000 years from our current emission pulse. Some of the CO2 pulse stays around basically forever.
Oh yeah. In 100,000 years we will know he was right. Or not. But, for sure, anything that happens in the next 100,000 years (a supervolcano exploding, a large-enough object hitting Earth, a greening of the Sahara, …) can be used as an excuse, so we basically never know. "Great work," you said?
Oh, also: any output that is a range is not a prediction. A prediction is a unique guess, a specific value.
A range is a cast net: not uncertainty, not variability, not prediction, not probability.
Who taught these clowns maths?
You might consider that your error analysis is of the model mean of the individual models. Not of the models themselves.
What I would like to see is the actual individual runs for each model. Do they all converge in the future or do they diverge?
I suspect we never see these individual runs for individual models because it would quickly reveal that the models themselves show that an infinite number of different climates can result from just 1 set of forcings.
In other words. The individual model runs will show that the future may be hot or it may be cold regardless of what we do with CO2.
And for that reason only the model means are published. Because they make it appear that we can dictate future climate by controlling CO2.
If you do 1000 model runs and they provide 1000 different outcomes, and one of them is tracking temps, that means you have no idea what you are doing and got lucky. That is a fact, because if you did know what you were doing, the mean and extremes would cluster around the observed values.
Because the model runs do not even come close to clustering around observations, neither the mean nor the extremes, it is clear the models are useless.
Unless you are Zeke Hausfather and you tilt the model runs down towards observations, which in my opinion is epic self-delusion or outright intentional misleading.
There is zero accuracy in climate modeling. They cannot even hindcast; they are tuned to reproduce past values.
Climate modelers would not make it in any other field of science; they would be eaten alive.
This isn’t correct.
These models are deterministic: give them the exact same inputs and they will generate the same outputs. It's changing either parameters or initial conditions that changes the output.
What you're describing are different but equal initial conditions.
For instance, you should be able to build an initial set of data for 6 months ago, and one for now. Run the same model parameters, one starting 6 months ago and one starting now, and simulate out 50 years to the same point; one would expect they would have the same results. I strongly suspect they would be different.
"This isn't correct.
These models are deterministic: give them the exact same inputs and they will generate the same outputs. It's changing either parameters or initial conditions that changes the output.
What you're describing are different but equal initial conditions.
For instance, you should be able to build an initial set of data for 6 months ago, and one for now. Run the same model parameters, one starting 6 months ago and one starting now, and simulate out 50 years to the same point; one would expect they would have the same results. I strongly suspect they would be different."
I don’t think I agree with that.
I wasn't referring to initial starting conditions; I was referring to the instability in "weather" 🙂 generation algorithms: they will not produce exactly the same output on different runs.
To produce exactly the same outcome, everything coded into that model, and everything tuned, would have to be precise and built on correction (it is difficult to get the exact same output using physics-theory mathematics). I don't think the models are built like that, but I can't say for sure.
Do you have an example of models producing identical output on different runs?
Cheers
The "weather generation algorithms" just shuffle up the initialization inputs, or the parameter settings, or both: a Monte Carlo analysis. Computers are numerical calculators. When they do the same equation and get different results a second time, it's broken.
Mark says: "For instance, you should be able to build an initial set of data for 6 months ago, and now. Run the same model parameters, one starting 6 months ago, and one starting now."
This can't be done, because two important sources of data are missing: gridded deep-ocean heat content, and aerosol loading (it's needed as a function of latitude).
Climate models are about solving a boundary value problem. Weather models solve an initial value problem.
Depends what you mean. Can we initialize a realistic model to represent reality? No. But that isn't what a GCM is. It's a big fricking circuit, where there's a bunch of nodes that share data every time step, and between time steps they recalculate the new input data.
Every one of those nodes can be set during initialization, and the numeric solver will generate the same output with the same input.
Diff-eq solvers are deterministic.
Now, they may give you something you didn't expect, which you then have to figure out. But if you do the same thing, you get the same output, or it's broken.
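A toy illustration of that determinism, a plain fixed-step integrator, nothing like a real GCM:

```python
def euler_run(x0, steps, dt=0.01):
    """Fixed-step Euler integration of a simple relaxation equation."""
    x, out = x0, []
    for _ in range(steps):
        x += dt * (-0.5 * x + 1.0)   # dx/dt = -0.5*x + 1
        out.append(x)
    return out

# Identical inputs and parameters give bit-identical output, every time.
assert euler_run(1.0, 10_000) == euler_run(1.0, 10_000)

# Perturb the initial condition and the trajectories are no longer identical
# (here they stay close, since this toy system is stable, not chaotic).
print(euler_run(1.0, 10_000) == euler_run(1.0 + 1e-9, 10_000))   # False
```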
The use of the Ensemble Mean ensures loyalty to the Group. Any group that wants to be included (think club membership) must adhere to the rules of the Group, set by the gatekeepers. Violate the rules and not only will your group not be included in the Ensemble, but your group's manuscripts (and your post-docs' and your grad students' manuscripts) will get harsh anonymous peer reviews, forcing the editors' rejection. There goes your publication; there go your grants.
This is how a few gatekeepers of the GroupThink enforce conformity on the members, and dissent is silenced.