Guest post by Pat Frank
Readers of Watts Up With That will know from Mark I of this post that for six years I have been trying to publish a manuscript bearing the title above. Well, it has passed peer review and is now published at Frontiers in Earth Science: Atmospheric Science. The paper demonstrates that climate models have no predictive value.
Before going further, my deep thanks to Anthony Watts for giving a voice to independent thought. So many have sought to suppress it (freedom denialists?). His gift to us (and to America) is beyond calculation. And to Charles the moderator, my eternal gratitude for making it happen.
Onward: the paper is open access. It can be found and downloaded here; the Supporting Information (SI) is here (7.4 MB pdf).
I would like to publicly honor my manuscript editor Dr. Jing-Jia Luo, who displayed the courage of a scientist and a level of professional integrity found lacking among so many during my six-year journey.
Dr. Luo chose four reviewers, three of whom were apparently not conflicted by investment in the AGW status-quo. They produced critically constructive reviews that helped improve the manuscript. To these reviewers I am very grateful. They provided the dispassionate professionalism and integrity that had been in very rare evidence within my prior submissions.
So, all honor to the editors and reviewers of Frontiers in Earth Science. They rose above the partisan and hewed to the principled standards of science when so many did not, and do not.
A digression into the state of practice: Anyone wishing a deep dive can download the entire corpus of reviews and responses for all 13 prior submissions, here (60 MB zip file, Webroot scanned virus-free). Choose “free download” to avoid advertising blandishment.
Climate modelers produced about 25 of the prior 30 reviews. You’ll find repeated editorial rejections of the manuscript on the grounds of objectively incompetent negative reviews. I have written about that extraordinary reality at WUWT here and here. In 30 years of publishing in Chemistry, I never once experienced such a travesty of process. For example, this paper overturned a prediction from Molecular Dynamics and so received a very negative review, but the editor published it anyway after our response.
In my prior experience, climate modelers:
· did not know how to distinguish between accuracy and precision.
· did not understand that, for example, a ±15 C temperature uncertainty is not a physical temperature.
· did not realize that deriving a ±15 C uncertainty to condition a projected temperature does *not* mean the model itself is oscillating rapidly between icehouse and greenhouse climate predictions (an actual reviewer objection).
· confronted standard error propagation as a foreign concept.
· did not understand the significance or impact of a calibration experiment.
· did not understand the concept of instrumental or model resolution, or that it has empirical limits.
· did not understand physical error analysis at all.
· did not realize that ‘±n’ is not ‘+n.’
Some of these traits consistently show up in their papers. I’ve not seen one that deals properly with physical error, with model calibration, or with the impact of model physical error on the reliability of a projected climate.
More thorough-going analyses have been posted at WUWT, here, here, and here, for example.
In climate model papers the typical uncertainty analyses are about precision, not about accuracy. They are appropriate to engineering models that reproduce observables within their calibration (tuning) bounds. They are not appropriate to physical models that predict future or unknown observables.
Climate modelers are evidently not trained in the scientific method. They are not trained to be scientists. They are not scientists. They are apparently not trained to evaluate the physical or predictive reliability of their own models. They do not manifest the attention to physical reasoning demanded by good scientific practice. In my prior experience they are actively hostile to any demonstration of that diagnosis.
In their hands, climate modeling has become a kind of subjectivist narrative, in the manner of the critical theory pseudo-scholarship that has so disfigured the academic Humanities and Sociology Departments, and that has actively promoted so much social strife. Call it Critical Global Warming Theory. Subjectivist narratives assume what should be proved (CO₂ emissions equate directly to sensible heat), their assumptions have the weight of evidence (CO₂ and temperature, see?), and every study is confirmatory (it’s worse than we thought).
Subjectivist narratives and academic critical theories are prejudicial constructs. They are in opposition to science and reason. Over the last 31 years, climate modeling has attained that state, with its descent into unquestioned assumptions and circular self-confirmations.
A summary of results: The paper shows that advanced climate models project air temperature merely as a linear extrapolation of greenhouse gas (GHG) forcing. That fact is multiply demonstrated, with the bulk of the demonstrations in the SI. A simple equation, linear in forcing, successfully emulates the air temperature projections of virtually any climate model. Willis Eschenbach also discovered that independently, a while back.
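To make “linear in forcing” concrete, here is a minimal sketch (in Python, with fabricated numbers; this is not the paper’s equation 1, and the coefficients and forcing series below are placeholders invented for illustration) of fitting a straight line in cumulative GHG forcing to a GCM-style temperature projection:

```python
# Toy linear emulation: fit T_anomaly ~ a + b * F, where F is the cumulative
# GHG forcing driving a projection. The "gcm_anomaly" series is fabricated
# here purely for illustration; in the paper the fit is made to real GCM output.
import numpy as np

years = np.arange(2006, 2101)
forcing = 0.035 * (years - 2006)                      # hypothetical cumulative forcing, W/m^2
rng = np.random.default_rng(0)
gcm_anomaly = 0.9 * forcing + rng.normal(0.0, 0.05, years.size)  # stand-in "GCM" projection, K

b, a = np.polyfit(forcing, gcm_anomaly, 1)            # least-squares slope and intercept
emulated = a + b * forcing
rmse = np.sqrt(np.mean((emulated - gcm_anomaly) ** 2))
print(f"slope = {b:.3f} K per W/m^2, intercept = {a:.3f} K, RMSE = {rmse:.3f} K")
```

If a two-parameter straight line in forcing reproduces a model’s projection to within a few hundredths of a degree, the projection carries no more information than the line does, which is the point the emulation exercise makes.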
After showing its efficacy in emulating GCM air temperature projections, the linear equation is used to propagate the root-mean-square annual average long-wave cloud forcing systematic error of climate models through their air temperature projections.
The uncertainty in projected temperature is ±1.8 C after 1 year for a 0.6 C projection anomaly and ±18 C after 100 years for a 3.7 C projection anomaly. The predictive content in the projections is zero.
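The jump from ±1.8 C at one year to ±18 C at 100 years is just root-sum-square growth of a constant per-year uncertainty; a quick check of the arithmetic (a sketch only, not a re-derivation of the paper’s propagation equations):

```python
# Root-sum-square propagation of a constant per-step uncertainty:
# sigma_n = sqrt(u^2 + u^2 + ... + u^2) = u * sqrt(n)
import math

u_per_year = 1.8                      # +/- K per projection year, as quoted above
for n in (1, 10, 50, 100):
    print(f"after {n:3d} years: +/-{u_per_year * math.sqrt(n):5.1f} K")
# prints +/-1.8 K at 1 year and +/-18.0 K at 100 years
```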
In short, climate models cannot predict future global air temperatures; not for one year and not for 100 years. Climate model air temperature projections are physically meaningless. They say nothing at all about the impact of CO₂ emissions, if any, on global air temperatures.
Here’s an example of how that plays out.
Panel a: blue points, GISS model E2-H-p1 RCP8.5 global air temperature projection anomalies. Red line, the linear emulation. Panel b: the same except with a green envelope showing the physical uncertainty bounds in the GISS projection due to the ±4 Wm⁻² annual average model long wave cloud forcing error. The uncertainty bounds were calculated starting at 2006.
Were the uncertainty to be calculated from the first projection year, 1850 (not shown in the Figure), the uncertainty bounds would be very much wider, even though the known 20th century temperatures are well reproduced. The reason is that the underlying physics within the model is not correct. Therefore, there’s no physical information about the climate in the projected 20th century temperatures, even though they are statistically close to observations (due to model tuning).
Physical uncertainty bounds represent the state of physical knowledge, not of statistical conformance. The projection is physically meaningless.
The uncertainty due to annual average model long wave cloud forcing error alone (±4 Wm⁻²) is about 114 times larger than the annual average increase in CO₂ forcing (about 0.035 Wm⁻²). A complete inventory of model error would produce enormously greater uncertainty. Climate models are completely unable to resolve the effects of the small forcing perturbation from GHG emissions.
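A quick check of that ratio, using the figures quoted above:

```python
# Ratio of the LWCF calibration error statistic to the annual increase in CO2 forcing
lwcf_error = 4.0               # +/- W/m^2, annual average LWCF error
annual_co2_increase = 0.035    # W/m^2 per year
print(round(lwcf_error / annual_co2_increase))   # ~114
```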
The unavoidable conclusion is that whatever impact CO₂ emissions may have on the climate cannot have been detected in the past and cannot be detected now.
It seems Exxon didn’t know, after all. Exxon couldn’t have known. Nor could anyone else.
Every single model air temperature projection since 1988 (and before) is physically meaningless. Every single detection-and-attribution study since then is physically meaningless. When it comes to CO₂ emissions and climate, no one knows what they’ve been talking about: not the IPCC, not Al Gore (we knew that), not even the most prominent of climate modelers, and certainly no political poser.
There is no valid physical theory of climate able to predict what CO₂ emissions will do to the climate, if anything. That theory does not yet exist.
The Stefan-Boltzmann equation is not a valid theory of climate, although people who should know better evidently think otherwise, including the NAS and every US scientific society. Their behavior in this is the most amazing abandonment of critical thinking in the history of science.
Absent any physically valid causal deduction, and noting that the climate has multiple rapid response channels to changes in energy flux, and noting further that the climate is exhibiting nothing untoward, one is left with no bearing at all on how much warming, if any, additional CO₂ has produced or will produce.
From the perspective of physical science, it is very reasonable to conclude that any effect of CO₂ emissions is beyond present resolution, and even reasonable to suppose that any possible effect may be so small as to be undetectable within natural variation. Nothing among the present climate observables is in any way unusual.
The analysis upsets the entire IPCC applecart. It eviscerates the EPA’s endangerment finding, and removes climate alarm from the US 2020 election. There is no evidence whatever that CO₂ emissions have increased, are increasing, will increase, or even can increase, global average surface air temperature.
The analysis is straightforward. It could have been done, and should have been done, 30 years ago. But it was not.
All the dark significance attached to whatever is the Greenland ice-melt, or to glaciers retreating from their LIA high-stand, or to changes in Arctic winter ice, or to Bangladeshi deltaic floods, or to Kiribati, or to polar bears, is removed. None of it can be rationally or physically blamed on humans or on CO₂ emissions.
Although I am quite sure this study is definitive, those invested in the reigning consensus of alarm will almost certainly not stand down. The debate is unlikely to stop here.
Raising the eyes, finally, to regard the extended damage: I’d like to finish by turning to the ethical consequence of the global warming frenzy. After some study, one discovers that climate models cannot model the climate. This fact was made clear all the way back in 2001, with the publication of W. Soon, S. Baliunas, S. B. Idso, K. Y. Kondratyev, and E. S. Posmentier, Modeling climatic effects of anthropogenic carbon dioxide emissions: unknowns and uncertainties, Climate Res. 18(3), 259-275, available here. The paper remains relevant.
In a well-functioning scientific environment, that paper would have put an end to the alarm about CO₂ emissions. But it didn’t.
Instead the paper was disparaged and then nearly universally ignored (Reading it in 2003 is what set me off. It was immediately obvious that climate modelers could not possibly know what they claimed to know). There will likely be attempts to do the same to my paper: derision followed by burial.
But we now know this for a certainty: all the frenzy about CO₂ and climate was for nothing.
All the anguished adults; all the despairing young people; all the grammar school children frightened to tears and recriminations by lessons about coming doom, and death, and destruction; all the social strife and dislocation. All the blaming, all the character assassinations, all the damaged careers, all the excess winter fuel-poverty deaths, all the men, women, and children continuing to live with indoor smoke, all the enormous sums diverted, all the blighted landscapes, all the chopped and burned birds and the disrupted bats, all the huge monies transferred from the middle class to rich subsidy-farmers.
All for nothing.
There’s plenty of blame to go around, but the betrayal of science garners the most. Those offenses would not have happened had every single scientific society not neglected its duty of diligence.
From the American Physical Society right through to the American Meteorological Society, they all abandoned their professional integrity, and with it their responsibility to defend and practice hard-minded science. Willful neglect? Who knows. Betrayal of science? Absolutely for sure.
Had the American Physical Society been as critical of claims about CO₂ and climate as they were of claims about palladium, deuterium, and cold fusion, none of this would have happened. But they were not.
The institutional betrayal could not be worse; worse than Lysenkoism because there was no Stalin to hold a gun to their heads. They all volunteered.
These outrages: the deaths, the injuries, the anguish, the strife, the misused resources, the ecological offenses, were in their hands to prevent and so are on their heads to account for.
In my opinion, the management of every single US scientific society should resign in disgrace. Every single one of them. Starting with Marcia McNutt at the National Academy.
The IPCC should be defunded and shuttered forever.
And the EPA? Who exactly is it that should have rigorously engaged, but did not? In light of apparently studied incompetence at the center, shouldn’t all authority be returned to the states, where it belongs?
And, in a smaller but nevertheless real tragedy, who’s going to tell the so cynically abused Greta? My imagination shies away from that picture.
An Addendum to complete the diagnosis: It’s not just climate models.
Those who compile the global air temperature record do not even know to account for the resolution limits of the historical instruments; see here or here.
They have utterly ignored the systematic measurement error that riddles the air temperature record and renders it unfit for concluding anything about the historical climate; see here, here, and here.
These problems are in addition to bad siting and UHI effects.
The proxy paleo-temperature reconstructions, the third leg of alarmism, have no distinct relationship at all to physical temperature, here and here.
The whole AGW claim is built upon climate models that do not model the climate, upon climatologically useless air temperature measurements, and upon proxy paleo-temperature reconstructions that are not known to reconstruct temperature.
It all lives on false precision, a state of affairs fully described here, peer-reviewed and all.
Climate alarmism is artful pseudo-science all the way down; made to look like science, but which is not.
Pseudo-science not called out by any of the science organizations whose sole reason for existence is the integrity of science.

Everyone in my profession knows (or should know)
you can be precise without being accurate -and- you can be accurate without being precise.
the best result is being both accurate and precise, but sometimes the result is neither.
Land surveyor here. Anyone wanting anything measured to 0.01′ will get 3 different results from 3 different surveyors, all depending on how they did their work. Even if all did it the same way, different instruments are… different, and all people are different. Is it all plumb? Are the traverse points EXACTLY centered with the total station? It is called human error.
Granted, with new tech the precision of my work has increased dramatically, yet the accuracy of my work is basically the same. My surveys 30 years ago were, in general (adjusted traverse data), one foot in 15 thousand feet to about 1 foot in 30 thousand feet.
Now, with the newer instruments, something is wrong if the RAW traverse data is not in excess of 1 in 15,000.
I started doing land surveying with total stations that turned angles to the nearest 15 seconds (359 degrees 59 minutes 60 seconds in a circle); over time that went to the nearest 10 seconds, then to the nearest 5 seconds, then to the nearest 3 seconds, and the total station we have now records angles to the nearest second.
Most of our work is coming in, raw, at 1 in 30,000 to 1 in 50,000, which is well over the legal standards for land surveying data. At that point precision gets tossed and we get into accuracy. I don’t bother adjusting the data because at that point we cannot STAKE out the points in the field that precisely; we would be adjusting the angles and distances to where the point data moves in the thousandths-of-a-foot territory, and at MOST we can stake to the nearest hundredth of a foot.
Unfortunately most plans we use for research are worse than my current field traverse work.
Len
You said, “you can be precise without being accurate -and- you can be accurate without being precise.”
However, with low precision, one cannot be as confident about the accuracy as one can be with high precision. And, implicit in the low precision is that, basically, the accuracy of individual measurements will vary widely. The best that one can claim is that the average of the measurements may be accurate.
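For readers following the surveying exchange, here is a small numerical picture of the distinction (a sketch with made-up numbers, not survey data): precision is the spread of repeated measurements, accuracy is how far their center sits from the true value, and averaging helps only with the spread, never with an offset.

```python
# Simulated measurements of a quantity whose true value is 100.0:
# one instrument is tightly repeatable but offset (precise, inaccurate),
# the other is centered on the truth but noisy (accurate, imprecise).
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0

precise_biased = true_value + 0.5 + rng.normal(0.0, 0.01, 1000)   # small spread, 0.5 offset
accurate_noisy = true_value + rng.normal(0.0, 0.50, 1000)         # large spread, no offset

for name, x in (("precise but inaccurate", precise_biased),
                ("accurate but imprecise", accurate_noisy)):
    print(f"{name}: mean error = {x.mean() - true_value:+.3f}, spread (1 sigma) = {x.std():.3f}")
```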
I think this needs to be repeated; from the Conclusions, the last sentence says:
Congratulations Pat. It’s sure been a long hard row to hoe.
Can I ask a dumb question? How is it that climate modelers have not been asked to put error bars on their forecasts in the past?
Thank you for the essay, and the links to the paper, SI, and prior reviews (I don’t think I’ll read many of those.)
I am glad that you referred to the independent work of Willis Eschenbach.
Thanks, Pat.
Some of us are also interested in the halide solvation paper.
Thanks, Michael. I hope you like that paper.
It was difficult working with the data, because the range was short for one of the methods (EXAFS).
But we got it done. I really enjoyed it, and the result with chloride was totally unexpected.
It was classic science, too, in that theory made the prediction (MD) and experiment put it to the test.
I need to ask why do we care about the continued nit-picking of NS / ATTP / etc… They will not learn, because they cannot withstand the consequences of such enlightenment. They will continue with their distractions, their returning to the ‘tempest-in-teapot’ arguments ad infinitum. They cannot do otherwise.
I think what we are dealing with here was described by Upton Sinclair in one of his books: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
So how to shut off the irritations… We change the subject back to the fundamental issue that models have never predicted an actual outcome that has come to pass. That is the central theme. These models don’t work, and Frank has explained why. We are not discussing whether he is correct; he has demonstrated that quite definitively over the totality of his published works.
Yes. For the same reasons given by Pat Frank and others, Lorenz, in a valedictory address, cautioned his peers to only tackle tractable problems. But he has been roundly ignored. They still think they can run, when theory says they may never even be able to walk.
What climate modelers should be worrying about is getting a model which spontaneously produces El Niños and Indian monsoons in a credible fashion. What we get is a tropospheric hotspot that isn’t observed.
Douglas Adams’ Hitchhiker’s number strikes again:
The Manabe and Wetherald derived f(CO2) = 0.42 result.
From:
https://www.independent.co.uk/life-style/history/42-the-answer-to-life-the-universe-and-everything-2205734.html
Hitchhiker’s fans to this day persist in trying to decipher what they imagine was Adams’ secret motivations. Here are 42 things to fuel their fascination with the number 42.
1. Queen Victoria’s husband Prince Albert died aged 42; they had 42 grandchildren and their great-grandson, Edward VIII, abdicated at the age of 42.
2. The world’s first book printed with movable type is the Gutenberg Bible which has 42 lines per page.
3. On page 42 of Harry Potter and the Philosopher’s Stone, Harry discovers he’s a wizard.
4. The first time Douglas Adams essayed the number 42 was in a sketch called “The Hole in the Wall Club”. In it, comedian Griff Rhys Jones mentions the 42nd meeting of the Crawley and District Paranoid Society.
5. Lord Lucan’s last known location was outside 42 Norman Road, Newhaven, East Sussex.
6. The Doctor Who episode entitled “42” lasts for 42 minutes.
7. Titanic was travelling at a speed equivalent to 42km/hour when it collided with an iceberg.
8. The marine battalion 42 Commando insists that it be known as “Four two, Sir!”
9. In east Asia, including parts of China, tall buildings often avoid having a 42nd floor because of tetraphobia – fear of the number four because the words “four” and “death” sound the same (si or sei). Likewise 4, 14, 24, etc.
10. Elvis Presley died at the age of 42.
11. BBC Radio 4’s Desert Island Discs was created in 1942. There are 42 guests per year.
12. Toy Story character Buzz Lightyear’s spaceship is named 42.
13. Fox Mulder’s apartment in the US TV series The X Files was number 42.
14. The youngest president of the United States,Theodore Roosevelt, was 42 when he was elected.
15. The office of Google’s chief executive Eric Schmidt is called Building 42 of the firm’s San Francisco complex.
16. The Bell-X1 rocket plane Glamorous Glennis piloted by Chuck Yeager, first broke the sound barrier at 42,000 feet.
17. The atomic bomb that devastated Nagasaki, Japan, contained the destructive power of 42 million sticks of dynamite.
18. A single Big Mac contains 42 per cent of the recommended daily intake of salt.
19. Cricket has 42 laws.
20. On page 42 of Bram Stoker’s Dracula, Jonathan Harker discovers he is a prisoner of the vampire. And on the same page of Frankenstein, Victor Frankenstein reveals he is able to create life.
21. In Shakespeare’s Romeo and Juliet, Friar Laurence gives Juliet a potion that allows for her to be in a death-like coma for “two and forty hours”.
22. The three best-selling music albums – Michael Jackson’s Thriller, AC/DC’s Back in Black and Pink Floyd’s The Dark Side of the Moon – last 42 minutes.
23. The result of the most famous game in English football – the world cup final of 1966 – was 4-2.
24. The type 42 vacuum tube was one of the most popular audio output amplifiers of the 1930s.
25. A marathon course is 42km and 195m.
26. Samuel Johnson compiled the Dictionary of the English Language, regarded as one of the greatest works of scholarship. In a nine-year period he defined a total of 42,777 words.
27. 42,000 balls were used at Wimbledon last year.
28. The wonder horse Nijinsky was 42 months old in 1970 when he became the last horse to win the English Triple Crown: the Derby; the 2000 Guineas and the St Leger.
29. The element molybdenum has the atomic number 42 and is also the 42nd most common element in the universe.
30. Dodi Fayed was 42 when he was killed alongside Princess Diana.
31. Cell 42 on Alcatraz Island was once home to Robert Stroud who was transferred to The Rock in 1942. After murdering a guard he spent 42 years in solitary confinement in different prisons.
32. In the Book of Revelation, it is prophesised that the beast will hold dominion over the earth for 42 months.
33. The Moorgate Tube disaster of 1975 killed 42 passengers.
34. When the growing numbers of Large Hadron Collider scientists acquired more office space recently, they named their new complex Building 42.
35. Lewis Carroll’s Alice’s Adventures in Wonderland has 42 illustrations.
36. 42 is the favourite number of Dr House, the American television doctor played by Hugh Laurie.
37. There are 42 US gallons in a barrel of oil.
38. In an episode of The Simpsons, police chief Wiggum wakes up to a question aimed at him and replies “42”.
39. Best Western is the world’s largest hotel chain with more than 4,200 hotels in 80 countries.
40. There are 42 principles of Ma’at, the ancient Egyptian goddess – and concept – of physical and moral law, order and truth.
41. Mungo Jerry’s 1970 hit “In the Summertime”, written by Ray Dorset, has a tempo of 42 beats per minute.
42. The band Level 42 chose their name in recognition of The Hitchhiker’s Guide to the Galaxy and not – as is often repeated – after the world’s tallest car park.”
Now see how many references you can find for the numbers 43 or 44 or 45.
With a bit of effort you will find just as many.
Just a bit of afternoon coffee-breakroom levity is all my comment was intended for.
The fractional number 0.42 is a bit away from the integer 42 anyway.
I was once given a “book of interesting numbers”. It listed similar facts about numbers up to 42 and beyond. When they got to 39, they said
“39 is the first uninteresting number”.
Teddy was 42 when he assumed the presidency upon McKinley’s assassination. He had just turned 46 when elected in 1904.
Excellent paper Pat, congratulations on getting it published. I followed the link to the paper and looked in the references for “Bevington”. I am no expert in the science of global temperature projections but I still have my copy of “Data Reduction and Error Analysis for the Physical Sciences”, not dog-eared but dog chewed on the book’s spine. Philip Bevington graduated from Duke University in 1960 and I graduated from UNC Chapel Hill in 1979. While a grad student, I worked at TUNL, a shared nuclear research facility on Duke’s campus where Bevington was a legend. No one would attempt experimental nuclear physics without understanding error propagation.
Thanks, Jim. I very much appreciate your knowledgeable comment.
Hey Mike,
As a non-scientist, I have trouble visualizing this. How can an object lose (emit) and gain (absorb) energy at the same time? What is the mechanism? (in simple terms)
I imagine it as follows: an atom in the warmer body releases two photons from its outer shell as two electrons collapse to a lower orbit. Meanwhile another photon arrives, emitted from the cooler body. It is immediately absorbed by one of the electrons in the lower shell (which emitted a photon just a while ago), which in turn causes that electron to jump into the higher orbit, or higher energy state. So the warmer body emitted two photons of energy and gained one, pretty much at the same time. The cooler body emitted one photon and gained two. The cooler body is warming up and the warmer body is cooling down, but, with an interacting cooler body, more slowly.
I’m sure this is childish quantum chemistry, but it makes sense to me.
I am gonna go lie in a hot bath with an ice pack on my head and think about this.
What I can read from Pat’s article so far: an assumed ±12% model uncertainty in cloud coverage translates to an annual ±4 Wm^-2 uncertainty in the energy balance (the LWCF error). This uncertainty is itself over 100 times greater than the energy flux from greenhouse effects, or more exactly the CO2 influence, which renders predictions of how air temperature changes useless: any potential change will be well within the uncertainty envelope due to cloud coverage. The question I’ve got is: why does this uncertainty accumulate over time? Can’t we just assume a normal distribution, so that energy fluxes due to LWCF cancel out in the longer run?
This illustrates more mechanistically what happens (this is based on implementing the uncertainty propagation directly into the GCMs).
https://wattsupwiththat.com/2019/09/07/propagation-of-error-and-the-reliability-of-global-air-temperature-projections-mark-ii/#comment-2790375
They accumulate because the GCMs calculate all their results for each time step (say 1 hour), then use that result as the starting point for the next time step. So if there is a systematic error, like an offset, it is present in step 1. Then that offset is added again in step 2, and so on forward in time. Even a very small error in the beginning soon overwhelms the final result.
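A minimal sketch of that mechanism (a toy iteration, not a GCM): when each step’s output is the next step’s input, a per-step systematic offset is re-applied every step and compounds.

```python
# Toy time-stepping loop: the state carried into each step is the previous step's
# output, so a small systematic per-step offset accumulates over the whole run.
def march(n_steps, per_step_change, systematic_offset):
    state = 0.0
    for _ in range(n_steps):
        state += per_step_change + systematic_offset   # offset enters at every step
    return state

reference = march(1000, 0.01, 0.0)      # what the "correct" physics would give
offset_run = march(1000, 0.01, 0.001)   # same model with a tiny per-step offset
print(round(reference, 3), round(offset_run, 3))   # 10.0 vs 11.0: a 10% drift
```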
Yes, huge estimation variances render the model useless!
Hey Paul,
I’ve got it now – Pat explains in his article the error profiles associated with TCF and why we know those errors are systematic. And yes, in that case they will propagate in each iteration and accumulate in the model output.
Great! I found his talk on YouTube very helpful too.
When I look at Figure 2 of your report:
What is conveyed to me is that their models’ air temperatures (as plotted from 2000) have tracked air temperatures from 2000 to 2015 to within 0.5 degree or so of error. Does this mean that the models are really that good at predicting the temperature in the past using data from the past, and that therefore predicting forward in time should really be about as accurate? Is that what the general reader of the NAS reports (among others) is supposed to conclude?
If so, then the errors the IPCC talks about really only address the model differences from the different organizations (countries). From their presentation, those are all the errors that exist in the discussion, so they are the only ones dealt with. Are their models actually that good for past temperatures using the ppm for CO2 from the past?
I think I’m missing something about this past performance, but I can’t locate any explanation for how well they seemed to be tracking the actuals from 2000 to 2015.
I would ALSO (if I were them) be excited to then plot these matters into the future 10, 20, 30 years as they are doing, based on their predicted growth in CO2, which may be easier to predict than other things that affect climate.
I know I’m missing something here though. Greater minds than mine are commenting on this report.
Let’s see if I can get this right: The models are built with equations that assume CO2 contributes to the temperature. They are “trained” with backwards calculations using real-world numbers to adjust the critical parameters. Then they try multiple runs forward from the calibration date using different combinations of starting values for the parameters and the temperature.
Once that is done with the different models, they are all started at the same date with the same temperature numbers. The numbers will start to scatter because computer calculations have a limited number of “decimal” places. The model has to have code to detect this and damp it down so that the projected temperature does not go out of limits. If the temperature in Timbuktu started to go off toward 50°C after X number of iterations, that would not be good.
As it goes on, the model eventually reaches 2100 CE or whichever end date is set. It generally traces a more or less linear path because that is how the parameters were set initially. There have been other papers also showing that the models don’t do realistic projections of temperature. It’s also been shown that, the way the computer code is constructed, what should be minor rounding errors in the calculation quickly accumulate until the actual cumulative error in the projected temperature is many times larger than the changes in the projected temperature, even though the projection might not change drastically.
Charlie, Figure 2 doesn’t show simulation of actual terrestrial air temperatures.
The points are model air temperature projections for various hypothetical scenarios of CO2 emission.
The lines were produced using eqn. 1, the linear emulation equation.
So, there’s nothing in Figure 2 about climate models accurately predicting air temperature.
As I said in a comment way up this thread, WUWT has now replaced the “Journals” as a place for peer-reviewed science publication. Going through the comments one can see trash, ignorant, intelligent, insightful, and brilliant “reviews” of the paper, with the author’s immediate responses. This is a new paradigm in scientific publication. It’s a whole lot quicker than the “traditional”, paper-based domain. It still has some rough edges, but with a little work from the “community” some rules/guidelines could be formalized, so this could really be revolutionary.
Why should climate models be predictive??
First, the climate is a chaotic system that is dominated by chaos!
(I know that statement is redundant but it bears repeating considering the enthusiasm by which “climate scientists” with access to way too much “big iron” fail to appreciate the implications!)
And second, the estimation variance of the GCM makes a mockery of any predictive parameter obtained by the very “climate scientists” that ignored my first point!
Together, these facts are compounded by $Billions wasted on a pipe dream that could be settled just as well with a dartboard! Ok, if it makes you happy, blindfold the thrower!
Ha!
“Look how tight my grouping is!!! I MUST have hit the target” – Kudos, Pat
Drive-by, self-absorbed, too-clever-by-half comments are starting to annoy me.
Please, if you think you have a valid statistical criticism to make, do so. That’s science. Snarky, baseless drive-bys, not so much. As in, not at all.
I welcome genuine criticism of Pat’s six years of labor of love, as does he, as a true scientist. Please do us the favor of precisely and accurately quantifying and qualifying whatever is your objection.
We’re all ears and eyes.
Thanks!
Really, John?
I got the impression that rocker71 was giving the author a compliment!
If it takes a mathematical equation to do that, maybe a sextuple integral is the ticket! (Yes, there is such a thing!)
There are other ways to communicate rather than using “genuine criticism”, you know!
And if my suggestion annoys you, well, good!
It was indeed intended to be a compliment. It was admittedly a drive-by comment. But not intended as snark. Pat’s paper provides a rigorous basis for my remark, which was merely intended as layman characterization of the set of fallacies it seems the paper exposes. My expression of kudos was sincere. This paper made my day.
Thanks, rocker. I got it the first time around. 🙂
And thanks for the high praise, John Tillman. 🙂
John Tillman: Please do us the favor of precisely and accurately quantifying and qualifying whatever is your objection.
It is an indirect reference to a common distinction between accuracy and reproducibility/reliability. You can have a weapon that reliably shoots to the left of a target by the same amount.
Yes. And sighting it in requires adjusting the sights until the weapon doesn’t do that any more, unless there’s wind from the right.
John Tillman: And sighting it in requires adjusting the sights until the weapon doesn’t do that any more,
So you understood the analogy all along?
My apologies for not properly understanding your comment.
Obviously, I also interpreted rocker71’s “drive-by” comment as a compliment to Pat .
The quotation marks indicated that he was satirizing the faulty confidence of climate modelers, which Pat revealed, and then he complimented Pat for uncovering the basis of this faulty confidence, and that’s what my little Dropbox pic attempts to visualize — the poster child of false scientific confidence in front of a target that shows precision that is way off target (i.e., inaccurate) from reality.
Humor can be a tough gig for overly logical minds, like Mr. Spock, Data, and others.
I am the real Don Klipstein, and I first started reading this WUWT post at or a little before 10 PM EDT Monday 9/8/2019, 2 days after it was posted. All posts and attempted posts by someone using my name earlier than my submission at 7:48 PDT this day are by someone other than me.
(Thanks for the tip, cleaning up the mess) SUNMOD
Donald,
I am another long-time WUWT blogger and occasional thread writer whose name has been taken in vain over the last couple of months. I have done no more than mention the theft to CTM who has been exceptionally busy doing a very good job with Anthony. Geoff S
If supposedly non-linear computer model results can be replicated with a linear model, then doesn’t that unto itself debunk the computer models? I thought the whole point of the computer models was to take the non-linear effects into account (after all, it was supposed to be the feedback effects that caused all the problems.)
Am I missing something here?
Pat,
My comments are based on your presentation as I have not had a chance to review your paper.
1. The linear equation is straightforward.
2. The derivation of model error is the heart of your paper.
3. The error propagation is straightforward.
The model error of 4 W/m^2 is the critical aspect of your paper. The lagged correlation of the difference of the averages would appear to be a statistically valid method to separate random error from model error.
Whether the difference of the averages is a reliable estimate for the magnitude of the error is something that would need to be confirmed by replication/derivation in other works. I’m somewhat concerned that the variance might also play a role, but it need not.
I’m assuming at this point that the derivation of model error is standard methodology in physics and chemistry, and that my concerns are groundless. Given that you have provided the background and references for this derivation then the paper would appear sound.
It seems to me a very big deal to have this 4 W/m^2 model error figure published. Regardless of the conclusions surrounding error propagation, this is the first time I’ve seen anyone provide a measure for the climate model error.
I agree completely that the SD of the climate model outputs themselves is not a measure of model error. All that does is compare the models against themselves. If they rely on common data or common theory, and the data or theory is wrong, the SD of the climate models cannot detect this.
I am also late to this party, and very interested in the implications of +/-4W/m^2 from cloud forcing. I have yet to read the paper, but until I do I don’t understand why this uncertainty, which is worth about one doubling of CO2 so equivalent to say 2K, can then propagate into the future to be +/-18K. Given the past stability of Earth’s temperature, it is very unlikely that several of these 4W/m^2 are going to combine to make a very large error.
On the other hand, the mere existence of +/-4W/m^2 makes it ludicrous to ask a la Paris that global T be kept below 1.5K above 1850-1900, unless we luck out and get the negative sign instead of the positive.
Pat, I will look at the paper, but any early information on propagation of the 4W would be appreciated here.
Rich.
The average annual ±4 W/m^2 is a calibration error statistic deriving from theory error within the GCMs, See-owe.
It shows up in every step of a simulation, though almost certainly of smaller magnitude in a 20 minute time step.
It means the simulation continually wanders away from the correct climate phase-space trajectory by some unknown amount with every step.
As we don’t know the actual error in a simulated future climate, we derive a reliability of the projection by propagating the recurring simulation uncertainty.
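One way to picture what Pat describes (a toy numerical illustration only: the paper propagates a calibration uncertainty statistic, it does not simulate error trajectories, and the random draws below are merely a device for generating many possible histories): if every step carries an error of unknown size and sign, no single simulation’s distance from the true trajectory is knowable, but the envelope of where it could be widens with every step, roughly as the root-sum-square of the per-step uncertainty.

```python
# Many hypothetical trajectories, each accumulating an unknown per-step error drawn
# from a band of width +/- u. The spread of possible end states grows with the
# number of steps, tracking the root-sum-square of the per-step uncertainty.
import numpy as np

rng = np.random.default_rng(42)
u = 1.0                                   # per-step uncertainty, arbitrary units
n_steps, n_runs = 100, 20000

step_errors = rng.uniform(-u, u, size=(n_runs, n_steps))
end_state_error = step_errors.sum(axis=1)            # each run's (unknowable) final error

print(f"spread of possible end states: {end_state_error.std():.2f}")
print(f"root-sum-square estimate:      {(u / np.sqrt(3)) * np.sqrt(n_steps):.2f}")
```

The envelope says nothing about which trajectory a real simulation takes; it only measures how little the projection constrains the answer, which is the sense in which the uncertainty bounds are a statement of reliability rather than of physical error.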
Thanks for pointing out, in solid scientific fashion, that “the Emperor Has no Clothes” in the conventional climate-modelling field. This has been pretty obvious for many years now, if for no other reason than “scientists” who hide their work, refuse to release data for replication checks by others (notably McIntyre & McKitrick), and carry out vicious character-assassination attacks on anyone with the temerity to question Received Truth in the Holy Church of Climatology. Both Pielkes, McI and McK, and many others have already experienced this. Now will come your time in the barrel.
On a more positive note, it’s becoming more & more obvious that Standard Climate Modelling as practiced in the Climatology Industry just isn’t working. So science will self-correct — perhaps sooner than we think!
Nolo Permittere Illegitimi Carborundum = “Don’t Let the Bastards grind you down!”
Having studied a variety of subjects, I’m becoming less and less convinced that things do just “self correct”. As one of the simplest examples, not a single ancient writer even hints that there were Celts in Britain, and many make it clear that the Celts only lived on the continent. But despite this historic fact being pointed out for several decades, I’ve not seen the slightest movement.
Of course, most of the time these delusions exist and we simply are not aware of them, because they become so oft repeated that unless someone actually looks at the evidence entirely from scratch and tries to work out how we got where we are, it wouldn’t be obvious anything is wrong.
Another classic and more science-like example was the Piltdown Man. Questions had been raised very soon after the discovery, but it took around 30 years for this relatively simply proved deception to be finally accepted as a lie. However, that is an example where, except for the original fraudsters, there was very little financial interest in keeping the fraud going. In contrast, the climate deception is a very lucrative money-machine.
Indeed, I strongly suspect that these deceptions continue to exist UNLESS there is a strong financial incentive in academia to do away with them.
That there were Celts in North America, with many examples of Consaine Ogham found there, still gives the archaeological establishment conniptions. And the vicious ad-hominem attacks Dr. Barry Fell endured are another example from Academia.
The “climate” climate is identical – forbidding that ancient sea-peoples mastered open ocean navigation, which no animal can do, is identical to forcing an “ecology” end-of-the-world mass brainwashing on youngsters.
Today the cure for all this is Artemis – look to the stars (as in fact the Celts and Phoenicians did then),
and the mastery of fire – fusion. There are no limits to growth!
Those that refuse will indeed end up as the menu on Satanasia (Columbus’ cannibal island).
According to Caesar (who ought to know), the Celts were a subgroup of the Gauls of France and lived in NW France which is roughly “Normandy”. So, the first recorded Celtic invasion was in 1066.
Ogham is a script found in Ireland and related to Irish (which is not a Celtic language … because the Irish were not Celts). Again not one ancient writer even suggests such nonsense. That myth was created in 1707 by a Welshman.
I saw a program not long ago about the introduction of stone tools from Europe to the US. Like the Celtic myth, the myth has developed amongst US academics that the peopling of the US must have occurred from the west. There is however compelling evidence for some peopling from Europe. But despite the lack of any credible argument to refute it, this assertion is strongly denied. The reason? Probably the same as the Celtic myth, the same as global warming, the same reason we have daft ideas in Physics that remain unchallenged: once academics buy into a particular view, they do not accept change.
However, that doesn’t mean everyone who challenges academia is right. On the other hand, it doesn’t mean daft ideas are wrong. There is a founding myth of Britain that it was Brutus of Troy. Recently it’s been discovered that early settlers had DNA from Anatolia.
Another “myth” is King Arthur. I recently discovered that in Strathclyde there were similar sounding names for their Kings and even a person who shared many characteristics with Merlin. Indeed, one of King Arthur’s famous battles was the battle of Badon Hill, which sounds very like Biadon Hill a likely old name of Roman Subdobiadon or Dumbarton Hill.
I’ve even found what looks like the Roman Fort at Dumbarton which supports this …. but met with a wall of silence when I showed local archaeologists the evidence showing what appears to be part of a fort.
According to Caesar, Gallia Celtica covered most of modern France and much of Switzerland:
But all Gauls and many other groups from Anatolia to the British Isles belonged to Celtic culture and spoke Celtic languages. The differences between the Celtic languages of Britain (Brythonic), ancestral to Welsh and Breton, and those (Goidelic) preceding today’s Gaelic of Ireland and Highland Scotland, have been attributed to immigration into Ireland by Celts from Galicia in NW Spain.
“Celt” comes from the Greek “Keltoi”.
John Tillman.
In the very first lines of Caesar’s Gallic War he tells us: “All Gaul is divided into three parts, one of which the Belgae inhabit, the Aquitani another, those who in their own language are called Celts, in ours Gauls, the third. All these differ from each other in language.”
“Celtic culture” is in fact nothing of the sort. Instead it is the combination of the Hallstatt culture from Austria and the La Tène culture from the German-speaking part of Switzerland. Far from being typical of the area of the Celts, as would be required to be “Celtic” culture, artefacts falsely referred to as “Celtic” are largely missing from the area of the Celts, so whatever it was, it wasn’t Celtic.
The reason this myth has become so widespread is because there is a law in France prohibiting the study of different nationalities in France.
Let the Stones Speak. Ogham writing, with no vowels, is found from Spain to Oklahoma.
Look at the Roman Alphabet, numerous languages can be written with it. Same with Ogham.
Also be careful to read in the right direction.
It is when well known names crop up that one identifies either a legend or religion.
Caesar, a relatively late blow-in with vowels, did gather information on his planned conquest.
Mike Haseler, I have it on repeated testimony that “The Gallo-Roman Empire” is widely taught in France.
Pat Frank, I’m a little late to this discussion, but I was wondering if you have been able to, or might be able to, present the specifics of your paper to those who have actually been behind the production of GCMs, such as those at NOAA/GFDL. I went to school with a modeler who was there (who shall remain nameless), and I have to believe that he (or she) would be responsive to the thrust of your argument, without the (figurative, if not literal) stomping of feet or the calling of names, and who might at least entertain your thoughts. I wonder (not just rhetorically) if this could in any way be productive if it could come about? Frankly (no pun intended, really!), I would love to see you present your paper at one of their brown bag lunch seminars. Thank you for your work, Dr. Frank.
Thanks, 4caster. Maybe at some future date, after all this has played out.
“Dr. Luo chose four reviewers, three of whom” will never be taken seriously again.
Loydo, you are a troll; crawl back into your hole.
Back to your cave, you are so naive.
Drive-by comments like that are plain dumb.
Pat, I’ve now read the paper and have removed my earlier objection that was based on your presentation. I find the paper to be a simple, elegant solution to placing bounds on model accuracy. The strength of the paper lies in the simplicity of the methodology.
I did not realize that 4 W/m^2 was a lower bound. When this is treated as a lower bound, then it is reasonable to use the difference of the averages. The variance would then contribute to the upper bound. Additionally, I was not aware that the 4 W/m^2 built on earlier work and was consistent with other attempts to bound the accuracy of climate models in cloud forecasting.
It would be interesting to see this approach applied to historical data for financial models, as further validation of the methodology. Financial models are not nearly as controversial politically, which should allow for a less biased review of the methods employed.
Thanks, Ferd.
You’re right, it’s a very simple and straightforward analysis.
Propagate model calibration error. Standard in the physical sciences.
It’s been striking to me the number of educated folks who don’t get it.
I’ve gotten several emails, now, from physical scientists who agree with the analysis, but who don’t feel confident about speaking out in the vicious political climate the alarm-mongering folks have produced.
Pat, I have now read the relevant bits of your paper, but unlike Ferd I am not led to total admiration. I do admire the way that it has been put together, but I believe there to be a fatal flaw in the analysis, which I shall describe below.
Figure 4, and ensuing correlation analysis, show that errors in TCF (Total Cloud Fraction) are correlated with latitude. However, since we are interested in global temperature, is that important? Isn’t it the mean TCF weighted by its effect on forcing which is important? Still, let that pass. Also, since the temperature time series is of most importance, why isn’t there any analysis of inter-year TCF correlations?
You derive, using work of others, a TCF error per year of 4Wm^-2.
Your Equation (4) gives the n-year error as RMS of sums of variances and covariances of lag 1 year spread over the n years. You ignore larger lags, but note by the way that multi-year lags are likely because of ENSO and the solar cycle. My paper “On the influence of solar cycle lengths and carbon dioxide on global temperatures” https://github.com/rjbooth88/hello-climate/files/1835197/s-co2-paper-correct.docx shows in Appendix A that the significance of lags goes in the order 1, 18, 13, 3 years on HadCRUT4 data. Still, lag-1 is the most important.
But then, 18 lines below Equation (4) is the unnumbered equation
u_c = sqrt(u_a^2+…+u_z^2) (*)
Now this equation has dropped even the lag 1 year covariances!
The result from that is that after n years the error will be 4sqrt(n) Wm^-2. So for example after 81 years that would give 36 Wm^-2, which is about 10 CO2 doublings, which using a sensitivity of about 2K (see my paper again) gives 20K.
So I do now see how your resulting large error bounds are obtained.
So one flaw is to ignore autocorrelation in the TCF time series, but the greater flaw as I see it is that global warming is not wholly a cumulative process. If we had 10 years of +2 units from TCF, followed by 1 year of -8 units, the temperature after 11 years would not reflect 10*2-8 = +12 units. Air and ground temperatures respond very quickly to radiation, whereas the oceans can store some of it but when put into Kelvin units that tends to be tiny. (Also, in my paper I estimate the half-life for stored upper ocean warming to be 20 years.) So the post year 11 temperature would be derived from somewhere between +12 and -8 TCF units, and I wouldn’t want to guess the exact value. Your model of error propagation does not reflect that reality, but the models themselves can, and hence suffer less error than you predict.
Apparently your peer reviewers didn’t spot that point.
By the way moderators, I agree with Janice Moore’s comment upstream about all comments being put into moderation. There really ought to be a white list for people like her – and me 🙂
See – Owe to Rich, first, you appear to be confusing physical error with uncertainty.
Second, I do not “show that errors in TCF (Total Cloud Fraction) are correlated with latitude”; rather, I show that TCF errors are pair-wise correlated between GCMs.
Third and fourth, I do not “derive, using work of others, a TCF error per year of 4Wm^-2”
The ±4W/m^2 is average annual long wave cloud forcing (LWCF) calibration error statistic. It comes from TCF error, but is not the TCF error metric.
The LWCF error statistic comes from Lauer and Hamilton. I did not derive it.
Fifth, eqn. 4 does not involve any lag-1 component. It just gives the standard equation for error propagation.
As the ±4W/m^2 is an average calibration statistic derived from the cloud simulation error of 27 GCMs, it’s not clear at all that there is a covariance term to propagate. It represents the average uncertainty produced by model theory-error. How does a 27-GCM average of theory-error co-vary?
Sixth, your “n years the error will be 4sqrt(n) Wm^-2”: the unnumbered equation again lays out the general rule for error propagation in more usable terms. The u_a … in that equation are general, and do not represent W/m^2. You show a very fundamental mistaken understanding there.
Seventh, your “after 81 years that would give 36 Wm^-2”: nowhere do I propagate W/m^2. Equations 5 and 6 should have made it clear to you that I propagate the derived uncertainty in temperature.
Eighth, your “So I do now see how your resulting large error bounds are obtained”: it’s very clear that you do not. You’ll notice, for example, that nowhere does a sensitivity of 2K or of anything else enter anywhere into my uncertainty analysis.
Your need to convert W/m^2 to temperature using a sensitivity number fully separates my work from your understanding of it.
Ninth, your subsequent analysis in terms of error (“So one flaw … greater flaw as I see it is that global warming is not wholly a cumulative process….”) shows that you’ve completely missed the point that the propagation is in terms of uncertainty. The actual physical error is completely unknown in a futures projection.
Tenth, your, “hence suffer less error than you predict.” I don’t predict error. You’ve failed to understand anything of the analysis.
Eleventh “Apparently your peer reviewers didn’t spot that point.” because it’s not there. It’s wrong, it exists in your head, and it’s nowhere else.
The article is written with accuracy and entertaining with precision. Well done in defining where modelers can improve.
I will obviously let the author answer, but you say, “but the greater flaw as I see it is that global warming is not wholly a cumulative process. If we had 10 years of +2 units from TCF, followed by 1 year of -8 units, the temperature after 11 years would not reflect 10*2-8 = +12 units.”
This could be true. That could be one realization of a theoretical climate state within the bounds of the uncertainty. There are a transfinite number of such states possible. But he is not using the error bars to predict the temperature, rather the uncertainty in the temperature (I think in other places the author has used the term “ignorance band”). The fact that the uncertainty swamps the signal indicates that the model is not capable of predicting anything due to CO2 forcing because of the uncertainty in TCF.
You also say, “u_c = sqrt(u_a^2+…+u_z^2) (*)
Now this equation has dropped even the lag 1 year covariances!”
If you read the paper, the author states, “The linearity that completely describes air temperature projections justifies the linear propagation of error.” You could explain why you disagree with that.
JQP, the linearity that exists is from radiative forcing at the epoch (time) when the measurement is taken, and hence linear in any error at that time. It may even be linear in an error arising 30 years earlier, and indeed I take that view in my paper (which incidentally is in the Journal of Atmospheric and Solar-Terrestrial Physics Volume 173), but the coefficient of linearity is not unity. That is to say there is a decay process for the effect of departures from a model, so an error 30 years ago has much less effect than one last year. If this were not the case, the climate itself as well as Pat Frank’s imputation about the models would be wildly unconstrained, as a billion butterflies flapping their wings in 1066 would have a great influence on today’s climate.
If those butterflies could have chaotically pushed us over a tipping point, then perhaps that might be true, but it wouldn’t then be a linear effect anyway.
Hope this helps to clarify.
You still don’t get it. Pat isn’t saying anything about the actual climate; he is saying that the current theory is so incomplete that the known uncertainties arising from that make it impossible to know anything about the future climate. The uncertainty envelope (what people are erroneously calling error bars) in his graphs show this clearly. That doesn’t mean the models are useless – they can still be used to study various weather and climatic processes in order to improve the theory, but they have no predictive value at this time. In science it is critical to admit what you don’t know, and can’t conclude about the things you are studying.
The author refers to them as “uncertainty bars.” What’s the difference?
Paul, exactly correct! Surely, everyone agrees that uncertainty in anything increases with time going forward, or backward, once one leaves actual measurements. To argue otherwise defies logic.
The modelers had to start somewhere and they have had to “tune” based on historical data because climate is chaotic and full of unknowns (and it was the only place they could go to make any sense of the models relative to actual climate data). Science simply does not have all of the parts in place to accurately predict something this complex. Therefore, logic (and common sense) should tell all of us that any prediction becomes more uncertain the farther we look into the future (or the past when actual measurements are unavailable).
I was taught that a regression line has no predictive value. Why? Because the line is built from actual measurements, and only those measurements are its world. Do we extend it anyway? Sometimes, but only at our peril.
Pat Frank’s paper satisfies basic logic and common sense. Predicting the future is “iffy” and it gets “iffier” the farther into the future you look.
Exactly right, Paul, thanks 🙂
BtK, error bars are about actual physical error.
Uncertainty bounds (bars) represent an ignorance width — how reliable a prediction is.
Error bars are not available for a predicted future magnitude.
Oh, okay, that nails it very clearly for me. Of course there are no error bars for a predicted future magnitude, because there are no real-world instrumental measurements for the future — the future hasn’t arrived yet, and so no actual measurements have been made yet.