Guest post by Pat Frank
Readers of Watts Up With That will know from Mark I that for six years I have been trying to publish a manuscript bearing this post's title. Well, it has passed peer review and is now published at Frontiers in Earth Science: Atmospheric Science. The paper demonstrates that climate models have no predictive value.
Before going further, my deep thanks to Anthony Watts for giving a voice to independent thought. So many have sought to suppress it (freedom denialists?). His gift to us (and to America) is beyond calculation. And to Charles the moderator, my eternal gratitude for making it happen.
Onward: the paper is open access. It can be found here, where it can be downloaded; the Supporting Information (SI) is here (7.4 MB pdf).
I would like to publicly honor my manuscript editor Dr. Jing-Jia Luo, who displayed the courage of a scientist and a level of professional integrity found lacking among so many during my 6-year journey.
Dr. Luo chose four reviewers, three of whom were apparently not conflicted by investment in the AGW status quo. They produced critically constructive reviews that helped improve the manuscript. To these reviewers I am very grateful. They provided the dispassionate professionalism and integrity that had been rarely in evidence during my prior submissions.
So, all honor to the editors and reviewers of Frontiers in Earth Science. They rose above the partisan and hewed to the principled standards of science when so many did not, and do not.
A digression into the state of practice: Anyone wishing a deep dive can download the entire corpus of reviews and responses for all 13 prior submissions, here (60 MB zip file, Webroot scanned virus-free). Choose “free download” to avoid advertising blandishment.
Climate modelers produced about 25 of the prior 30 reviews. You’ll find repeated editorial rejections of the manuscript on the grounds of objectively incompetent negative reviews. I have written about that extraordinary reality at WUWT here and here. In 30 years of publishing in Chemistry, I never once experienced such a travesty of process. For example, this paper overturned a prediction from Molecular Dynamics and so had a very negative review, but the editor published anyway after our response.
In my prior experience, climate modelers:
· did not know to distinguish between accuracy and precision.
· did not understand that, for example, a ±15 C temperature uncertainty is not a physical temperature.
· did not realize that deriving a ±15 C uncertainty to condition a projected temperature does *not* mean the model itself is oscillating rapidly between icehouse and greenhouse climate predictions (an actual reviewer objection).
· confronted standard error propagation as a foreign concept.
· did not understand the significance or impact of a calibration experiment.
· did not understand the concept of instrumental or model resolution or that it has empirical limits.
· did not understand physical error analysis at all.
· did not realize that ‘±n’ is not ‘+n.’
Some of these traits consistently show up in their papers. I’ve not seen one that deals properly with physical error, with model calibration, or with the impact of model physical error on the reliability of a projected climate.
More thoroughgoing analyses have been posted at WUWT, here, here, and here, for example.
In climate model papers the typical uncertainty analyses are about precision, not about accuracy. They are appropriate to engineering models that reproduce observables within their calibration (tuning) bounds. They are not appropriate to physical models that predict future or unknown observables.
Climate modelers are evidently not trained in the scientific method. They are not trained to be scientists. They are not scientists. They are apparently not trained to evaluate the physical or predictive reliability of their own models. They do not manifest the attention to physical reasoning demanded by good scientific practice. In my prior experience they are actively hostile to any demonstration of that diagnosis.
In their hands, climate modeling has become a kind of subjectivist narrative, in the manner of the critical theory pseudo-scholarship that has so disfigured the academic Humanities and Sociology Departments, and that has actively promoted so much social strife. Call it Critical Global Warming Theory. Subjectivist narratives assume what should be proved (CO₂ emissions equate directly to sensible heat), their assumptions have the weight of evidence (CO₂ and temperature, see?), and every study is confirmatory (it’s worse than we thought).
Subjectivist narratives and academic critical theories are prejudicial constructs. They are in opposition to science and reason. Over the last 31 years, climate modeling has attained that state, with its descent into unquestioned assumptions and circular self-confirmations.
A summary of results: The paper shows that advanced climate models project air temperature merely as a linear extrapolation of greenhouse gas (GHG) forcing. That fact is multiply demonstrated, with the bulk of the demonstrations in the SI. A simple equation, linear in forcing, successfully emulates the air temperature projections of virtually any climate model. Willis Eschenbach also discovered that independently, a while back.
After showing its efficacy in emulating GCM air temperature projections, the linear equation is used to propagate the root-mean-square annual average long-wave cloud forcing systematic error of climate models, through their air temperature projections.
The uncertainty in projected temperature is ±1.8 C after 1 year for a 0.6 C projection anomaly and ±18 C after 100 years for a 3.7 C projection anomaly. The predictive content in the projections is zero.
In short, climate models cannot predict future global air temperatures; not for one year and not for 100 years. Climate model air temperature projections are physically meaningless. They say nothing at all about the impact of CO₂ emissions, if any, on global air temperatures.
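For readers who want to see the arithmetic behind those two numbers, here is a minimal sketch in Python of the root-sum-square growth the paper describes. It assumes a constant per-step uncertainty and statistically independent annual steps; both are simplifications of the paper's full treatment, and the ±1.8 C figure is taken from this post, not computed here.

```python
import math

# Minimal sketch of root-sum-square uncertainty growth, assuming a
# constant per-step uncertainty and independent annual steps (both
# simplifications of the paper's full treatment).
U_STEP = 1.8  # +/- C, the single-year uncertainty quoted in this post

def projection_uncertainty(years):
    """u_n = sqrt(n * u_step^2) = u_step * sqrt(n)."""
    return math.sqrt(years) * U_STEP

print(projection_uncertainty(1))    # +/- 1.8 C after 1 year
print(projection_uncertainty(100))  # +/- 18.0 C after 100 years
```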
Here’s an example of how that plays out.
Figure: Panel a: blue points, GISS model E2-H-p1 RCP8.5 global air temperature projection anomalies. Red line, the linear emulation. Panel b: the same, except with a green envelope showing the physical uncertainty bounds in the GISS projection due to the ±4 Wm⁻² annual average model long wave cloud forcing error. The uncertainty bounds were calculated starting at 2006.
Were the uncertainty to be calculated from the first projection year, 1850 (not shown in the Figure), the uncertainty bounds would be very much wider, even though the known 20th-century temperatures are well reproduced. The reason is that the underlying physics within the model is not correct. Therefore, there's no physical information about the climate in the projected 20th-century temperatures, even though they are statistically close to observations (due to model tuning).
Physical uncertainty bounds represent the state of physical knowledge, not of statistical conformance. The projection is physically meaningless.
The uncertainty due to annual average model long wave cloud forcing error alone (±4 Wm⁻²) is about 114 times larger than the annual average increase in CO₂ forcing (about 0.035 Wm⁻²). A complete inventory of model error would produce enormously greater uncertainty. Climate models are completely unable to resolve the effects of the small forcing perturbation from GHG emissions.
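The "about 114 times" figure is simple division of the two forcing numbers just quoted; a one-line check:

```python
lwcf_error = 4.0    # W/m^2, annual average long-wave cloud forcing error
co2_step   = 0.035  # W/m^2, annual average increase in CO2 forcing
print(lwcf_error / co2_step)  # ~114: the calibration error dwarfs the perturbation
```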
The unavoidable conclusion is that whatever impact CO₂ emissions may have on the climate cannot have been detected in the past and cannot be detected now.
It seems Exxon didn’t know, after all. Exxon couldn’t have known. Nor could anyone else.
Every single model air temperature projection since 1988 (and before) is physically meaningless. Every single detection-and-attribution study since then is physically meaningless. When it comes to CO₂ emissions and climate, no one knows what they’ve been talking about: not the IPCC, not Al Gore (we knew that), not even the most prominent of climate modelers, and certainly no political poser.
There is no valid physical theory of climate able to predict what CO₂ emissions will do to the climate, if anything. That theory does not yet exist.
The Stefan-Boltzmann equation is not a valid theory of climate, although people who should know better evidently think otherwise, including the NAS and every US scientific society. Their behavior in this is the most amazing abandonment of critical thinking in the history of science.
Absent any physically valid causal deduction, and noting that the climate has multiple rapid response channels to changes in energy flux, and noting further that the climate is exhibiting nothing untoward, one is left with no bearing at all on how much warming, if any, additional CO₂ has produced or will produce.
From the perspective of physical science, it is very reasonable to conclude that any effect of CO₂ emissions is beyond present resolution, and even reasonable to suppose that any possible effect may be so small as to be undetectable within natural variation. Nothing among the present climate observables is in any way unusual.
The analysis upsets the entire IPCC applecart. It eviscerates the EPA’s endangerment finding, and removes climate alarm from the US 2020 election. There is no evidence whatever that CO₂ emissions have increased, are increasing, will increase, or even can increase, global average surface air temperature.
The analysis is straightforward. It could have been done, and should have been done, 30 years ago. But it was not.
All the dark significance attached to whatever the Greenland ice-melt may be, or to glaciers retreating from their LIA high-stand, or to changes in Arctic winter ice, or to Bangladeshi deltaic floods, or to Kiribati, or to polar bears, is removed. None of it can be rationally or physically blamed on humans or on CO₂ emissions.
Although I am quite sure this study is definitive, those invested in the reigning consensus of alarm will almost certainly not stand down. The debate is unlikely to stop here.
Raising the eyes, finally, to regard the extended damage: I’d like to finish by turning to the ethical consequence of the global warming frenzy. After some study, one discovers that climate models cannot model the climate. This fact was made clear all the way back in 2001, with the publication of W. Soon, S. Baliunas, S. B. Idso, K. Y. Kondratyev, and E. S. Posmentier, “Modeling climatic effects of anthropogenic carbon dioxide emissions: unknowns and uncertainties,” Climate Res. 18(3), 259-275, available here. The paper remains relevant.
In a well-functioning scientific environment, that paper would have put an end to the alarm about CO₂ emissions. But it didn’t.
Instead the paper was disparaged and then nearly universally ignored (Reading it in 2003 is what set me off. It was immediately obvious that climate modelers could not possibly know what they claimed to know). There will likely be attempts to do the same to my paper: derision followed by burial.
But we now know this for a certainty: all the frenzy about CO₂ and climate was for nothing.
All the anguished adults; all the despairing young people; all the grammar school children frightened to tears and recriminations by lessons about coming doom, and death, and destruction; all the social strife and dislocation. All the blaming, all the character assassinations, all the damaged careers, all the excess winter fuel-poverty deaths, all the men, women, and children continuing to live with indoor smoke, all the enormous sums diverted, all the blighted landscapes, all the chopped and burned birds and the disrupted bats, all the huge monies transferred from the middle class to rich subsidy-farmers.
All for nothing.
There’s plenty of blame to go around, but the betrayal of science garners the most. Those offenses would not have happened had not every single scientific society neglected its duty to diligence.
From the American Physical Society right through to the American Meteorological Society, they all abandoned their professional integrity, and with it their responsibility to defend and practice hard-minded science. Willful neglect? Who knows. Betrayal of science? Absolutely for sure.
Had the American Physical Society been as critical of claims about CO₂ and climate as they were of claims about palladium, deuterium, and cold fusion, none of this would have happened. But they were not.
The institutional betrayal could not be worse; worse than Lysenkoism because there was no Stalin to hold a gun to their heads. They all volunteered.
These outrages: the deaths, the injuries, the anguish, the strife, the misused resources, the ecological offenses, were in their hands to prevent and so are on their heads for account.
In my opinion, the management of every single US scientific society should resign in disgrace. Every single one of them. Starting with Marcia McNutt at the National Academy.
The IPCC should be defunded and shuttered forever.
And the EPA? Who exactly is it that should have rigorously engaged, but did not? In light of apparently studied incompetence at the center, shouldn’t all authority be returned to the states, where it belongs?
And, in a smaller but nevertheless real tragedy, who’s going to tell the so cynically abused Greta? My imagination shies away from that picture.
An Addendum to complete the diagnosis: It’s not just climate models.
Those who compile the global air temperature record do not even know to account for the resolution limits of the historical instruments; see here or here.
They have utterly ignored the systematic measurement error that riddles the air temperature record and renders it unfit for concluding anything about the historical climate, here, here and here.
These problems are in addition to bad siting and UHI effects.
The proxy paleo-temperature reconstructions, the third leg of alarmism, have no distinct relationship at all to physical temperature, here and here.
The whole AGW claim is built upon climate models that do not model the climate, upon climatologically useless air temperature measurements, and upon proxy paleo-temperature reconstructions that are not known to reconstruct temperature.
It all lives on false precision; a state of affairs fully described here, peer-reviewed and all.
Climate alarmism is artful pseudo-science all the way down; made to look like science, but which is not.
Pseudo-science not called out by any of the science organizations whose sole reason for existence is the integrity of science.
Wow! What a blockbuster of an article, Anthony. Knowledgeable people in the scientific and political communities need to read this article and the accompanying paper and SI and take this to heart. Especially the Republican Party. But of course, we know what those apocalyptic types will do. We must not allow that, especially in any attempt by the Trump administration to overturn the endangerment finding made by the previous (incompetent) administration.
Politicians are mostly too thick to understand it. That applies to any politician, any party, any country.
Politicians are following the money, as they always have done; that is, money for their own pockets.
And the politicians (e.g. Trudeau, Gore) continue to fly a lot, continue to drive around in big SUVs, and continue to live in multiple, very large houses.
Often very close to those rising oceans……
Adam Gallon
100% correct, and the very reason why the climate alarmists stole a march on sceptics. They framed their argument politically, and everyone, irrespective of education, has the right to a political opinion. Sceptics chose the scientific route, and less than 10% of the world is scientifically educated.
When it comes time to vote, guess who gets the most, and the cheapest votes for their $/£?
And, so it seems, so are the scientists “mostly too thick”!
Not too thick, only unsceptical. The more intelligent you are, the more you can utilise your intelligence to justify your need to believe something, and the more prone you will be to confirmation bias.
Cherry picking evidence that suits you and finding justifications to hand wave away that which doesn’t requires a good brain, but good brains are still at the mercy of their owners’ emotional defences, including pride, self interest, misplaced fear and stubbornness.
100 percent spot on. This is done in all areas of our society. Government, non-government, business, media. People wordsmithing their agendas, justifying their actions. Corruption years ago was someone taking money under the table. These days it is taken over the table, people are just smarter in justifying their course of action. Sadly most of them believe their own rhetoric.
Someone needs to get this Paper to Trump…..
Have HIM go public with it….
And have him ask for rebuttal from the Climate Science world….. Since the clown media exacts their *freedom denialist* treatment on him whenever they can…..
Don’t sit on this…….
Trump wouldn’t read it. He’d ask for the short version in 25 words or less.
The CACA crock is built upon climate models that don’t model climate, climatologically useless air temperature measurements, and proxy paleo-temperature reconstructions which don’t reconstruct temperature.
Edited down to 25 words.
Because error propagation.
@DanM: No, please please please don’t give it to Trump. His credibility is low — and falls further with every tweet. Trump’s daily dribble of dubious pronouncements is easily dismissed as ignorant, self-serving prattle.
We “deniers” need to stay focused on science vs. non-science, as the article’s author suggests. “Climate science” presents a non-falsifiable theory as inevitable outcome — as Richard Feynman once said, that is not science.
If we are to convince the “more educated” segment of society of the perniciousness of “climate science”, we must disentangle the science from the politics. The two are antithetical: The former is, very generally speaking, about parsing signal from noise; the latter is, very generally speaking, the exact opposite.
The “more educated” don’t get that yet, don’t get that their religious belief in CO2-induced End Times is based on corrupted scriptures. When they do, enlightenment will follow.
Richie, the skeptic community has been riven with dislike or distrust for too long. The spat between Anthony Watts and Tony Heller is a good example.
You may dislike Trump’s tweeting, but he is uniquely willing and able to take climastrology head on, and his tweets probably reach a group of people that your preferred approach never would.
If Trump picks this up and tweets it around, good for him, good for everyone. If you want to engage your community in a scientific debate, good for you.
There are plenty of alarmists out there pushing out nonsense, we don’t need to criticize each other for doing what we can, where we can, to push it back.
Richie, Trump is the only one in 30 years in politics to call BS on these climate terrorists. The only one to call BS on China trade practices. The only one willing to rescind a deal to give nukes to Iran. The only one to even mention that the USA cannot just have everyone in the world move here. The only one to suggest NATO pay their own way. You need to listen to people that did not spout “Russian collusion” for 2+ years knowing it was a bald-faced lie. You better get on board, as this guy is the ONLY one with credibility.
Trump will not read this paper, nor should he; he is not a scientist and has never pretended to be! Contrary to popular belief, I am sure, neither Trump nor any ex-president makes ALL decisions such as this alone. Not one single person has the amount of knowledge or education required to “run” a country. Trump relies on his advisers, I am sure, which is the right thing to do.
Actually, as a grad of a good B-school, Trump must have taken statistics courses. He could read and understand, or at least get the gist of, this paper, but his attention span is short and digesting the whole thing would be a waste of time for any president.
The abstract and conclusions, with a graph or two, in his daily summary would be the most for which we could or should hope.
Politicians don’t WANT to understand it (except Trump). AGW is a HUGE gravy train for them…
Not sure if I completely agree, Adam.
I agree that there are many who are too thick to understand, but there are also many who don’t want to understand and others that don’t have time to understand.
The ‘don’t want to understand’ people don’t care. They are either already hard-core Warm Cult or have been informed by their spin merchants that Climate Change is what their voters want. These are either the ‘Science is Settled… and if it isn’t, then it should be’ types, or would support the reintroduction of blood sports if their internal polling said it would win them another term.
Then there are the ones who don’t have time to care. Politicians are busy people. All that sunshine isn’t going to get blown up people’s…. ummm… egos by itself you know. They don’t have time to sit down and read reports, they have Important Meetings to attend. Hence they surround themselves with staffers who – nominally – do all the reading for them and feed them the 10-word summary. Now that all sounds fine and dandy, and Your Country May Vary, but here in Oz most staffers are the 24-year-olds who have successfully backstabbed and grovelled their way through the ‘Young’ branch of their party and the associated faction politics. Since very few of these people have anything remotely resembling a STEM background, they are, for all intents and purposes, masculine bovine mammaries.
Like they say, Sausages and Laws. 🙁
Great article.
In layman’s terms:
If the climate modelers were financial advisors the world would be living under one gigantic bridge.
If climate modelers were engineers, there wouldn’t be any bridges to live under.
Climate modelling is Cargo Cult Science!
Was it Freeman Dyson or Richard Feynman who stated this years ago?
Climate models are not real models.
Real models make right predictions.
Climate models make wrong predictions.
The so called “climate models”, and government bureaucrat “scientists” who programmed them, are merely props for the faith based claim that a climate crisis is in progress.
If people who joined conventional religions believed that, they would point to a bible as “proof”.
In the unconventional “religion” of climate change, their “bible” is the IPCC report, and their “priests” are government bureaucrat and university “scientists”.
Scientists and computer models are used as props, to support an “appeal to authority” about a “coming” climate crisis, coming for over 30 years, that never shows up !
In the conventional religions, the non-scientist “priests” and their bibles say: ‘You must do as we say, or you will go to hell’.
In the climate change “religion”, the scientist “priests” say: ‘You must do as we say, or the Earth will turn into hell for your children’.
” … the whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, most of them imaginary.”
— From H. L. Mencken’s In Defense of Women (1918).
My climate science blog:
http://www.elOnionBloggle.Blogspot.com
Concerning the Green New Deal:
“Politics is the art of looking for trouble, finding it everywhere, diagnosing it incorrectly, and applying the wrong remedies.”
— Groucho Marx
It was Feynman. A commencement speech in the 70’s.
[excerpt from this excellent article]
“In their hands, climate modeling has become a kind of subjectivist narrative, in the manner of the critical theory pseudo-scholarship that has so disfigured the academic Humanities and Sociology Departments, and that has actively promoted so much social strife. Call it Critical Global Warming Theory. Subjectivist narratives assume what should be proved (CO₂ emissions equate directly to sensible heat), their assumptions have the weight of evidence (CO₂ and temperature, see?), and every study is confirmatory (it’s worse than we thought).
Subjectivist narratives and academic critical theories are prejudicial constructs. They are in opposition to science and reason. Over the last 31 years, climate modeling has attained that state, with its descent into unquestioned assumptions and circular self-confirmations.”
…
Raising the eyes, finally, to regard the extended damage: I’d like to finish by turning to the ethical consequence of the global warming frenzy. After some study, one discovers that climate models cannot model the climate. This fact was made clear all the way back in 2001, with the publication of W. Soon, S. Baliunas, S. B. Idso, K. Y. Kondratyev, and E. S. Posmentier, “Modeling climatic effects of anthropogenic carbon dioxide emissions: unknowns and uncertainties,” Climate Res. 18(3), 259-275, available here. The paper remains relevant.
In a well-functioning scientific environment, that paper would have put an end to the alarm about CO₂ emissions. But it didn’t.
Instead the paper was disparaged and then nearly universally ignored (Reading it in 2003 is what set me off. It was immediately obvious that climate modelers could not possibly know what they claimed to know). There will likely be attempts to do the same to my paper: derision followed by burial.
But we now know this for a certainty: all the frenzy about CO₂ and climate was for nothing.
[end of excerpt]
So the false narrative of global warming alarmism has once again been exposed, even though this paper was REPRESSED for SIX YEARS!
It is absolutely clear, based on the evidence, that global warming and climate change alarmism was not only false, but fraudulent. Its senior proponents have cost society tens of trillions of dollars and many millions of lives – an entire global population has been traumatized by global warming alarmism, the greatest scientific fraud in history – these are crimes against humanity and their proponents belong in jail – for life!
Oh yes indeed. But who will do it?
Too many people are making too much money off of this ridiculous hoax. Too many politicians are acquiring too much power off of this insanity. The media spin their narrative continually because it plays into the leftist desire to smash capitalism.
We need somebody to break this thing once and for all. Trump has tried but he is so controversial in so many ways that the message is lost. So we will all just continue in our own little way trying to change the opinion of those close to us and hope that our own prophet will appear and throw the money lenders out of the temple of pseudo-science once and for all.
Is jail for life sufficient punishment for the theft of trillions in treasure and loss of life for tens of millions?
Pat’s math nonsense has been completely dismantled here:
https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/
here:
https://andthentheresphysics.wordpress.com/2017/01/26/guest-post-do-propagation-of-error-calculations-invalidate-climate-model-projections/
and years ago here:
https://tamino.wordpress.com/2011/06/02/frankly-not/
…and by Nick Stokes here:
https://moyhu.blogspot.com/2019/09/another-round-of-pat-franks-propagation.html
and two years ago here:
https://moyhu.blogspot.com/2017/11/pat-frank-and-error-propagation-in-gcms.html
Unlike other zombie myths, let us hope this one is finally laid to rest.
Hahahaha — tamino. Fortunately haven’t heard anything about that abomination for years…
Calling tamino names is not a scientific argument. We are claiming to be scientists. The standards that we impose upon ourselves must be rigorous.
Where is an analysis of the tamino paper that refutes tamino?
Apparently you didn’t bother to read Pat Frank’s responses.
It’s not surprising that a young computer gamer would object to Pat’s work. Young Dr. Brown would need to find a new career should Pat’s conclusions be confirmed.
There are 5 references put forward to refute Mr. Frank’s paper.
Calling people names is not a scientific argument.
You refer to Mr. Frank’s responses. Where are the references to his responses that allow a review of the arguments? Who are the people with sufficient credibility who stand behind Mr. Frank’s work and refute the arguments (pseudo-arguments) put forth in these five references?
Lord Monckton has made a different argument that claims to demolish the alarmists. But the alarmists have put forward a criticism of Monckton that I have not seen addressed.
Rigorous argument is the hallmark of science. There is no shortcut.
No name calling. GIGO GCMs are science-free games. They are worse than worthless wastes of taxpayer dollars, except to show how deeply unphysical is the CACA scam.
Patrick Brown’s arguments did not withstand the test of debate, carried out beneath his video.
ATTP thinks (+/-) means constant offset. And Tamino ran away from the debate — which was about a different analysis anyway.
Nick’s moyhu posts are just safe-space reiterations of the arguments he failed to establish in open debate.
Skeptic t-shirt.
https://www.amazon.com/dp/B07XNNBBXZ
It’s brilliant, and the list looks pretty complete. 🙂
Missing at least one relevant point:
Uncertainty Propagation
I’ll have my wife add it on there after mine arrives. She’s good at that sort of thing and has produced a number of fun items for me to wear.
I never, ever buy t-shirts. I just bought this one. I could not resist. Thanks for the link.
How do you see the back? There is no simple link!!!
Click on the small image of the back, then hover your mouse/cursor over the resultant view to see a magnified view.
OK, thanks for that! I couldn’t find the image of the back of the shirt…
It was way over to the left side on my screen, and I couldn’t seem to locate it until I knew what to look for.
The list is pretty damn complete, thx!
JPP
I have been waiting for years hoping that someone would come up with an A+B proof that definitively buries the non-scientific proceedings of the “climate religion”. Pat Frank’s publication hits that nail with a beautiful hammer! Every student writing a report about a practical physics experiment has to calculate the error margins. That these so-called scientists (some are even at ETH Zurich) don’t even seem to understand what an error margin means was a real shock to me. Just recently I’ve been reading something about the UN urging for haste and mentioning that scientific arguments are not relevant anymore and should be ignored… Do you see something coming?
“…for giving a voice to independent thought.”
Although it’s been a struggle for some of us.
Add to the list of what people don’t know.
Most people don’t understand that at this distance from the sun objects get hot (394 K), not cold (−430 F).
The atmosphere/0.3 albedo cools the earth compared to no atmosphere.
And because of a contiguous participating media, i.e. atmospheric molecules, ideal BB LWIR upwelling from the surface/oceans is not possible.
396 W/m^2 upwelling is not possible.
333 W/m^2 downwelling/”back” LWIR 100% perpetual loop does not exist.
RGHE theory goes into the rubbish bin of previous failed consensual theories.
Nick, you’ve got it wrong, it’s not a 333 feedback loop… 396 – 333 = 63 watts per sq. m radiated from the ground to the sky on average. At the basic physics of it all, the negative term in the Stefan-Boltzmann two-surface equation, which is referred to as “back radiation”, 333 watts in this case, is how much the energy content of the wave function of the hotter body is negated by a cooler body’s wave function. But only high-level physicists think of it in those terms. Most just use the back radiation concept. So do climatologists. Engineers prefer to just use SB to calculate heat transfer from hot to cold directly, to be sure they don’t inadvertently get dreaded temperature crosses in their heat exchangers.
DMac
The 396 W/m^2 is a theoretical “what if” calculation for the ideal LWIR from a surface at 16 C, 289 K. It does not, in actual fact, exist.
The only way a surface radiates BB is into a vacuum where there are no other heat transfer processes occurring.
As demonstrated in the classical fashion, by actual experiment:
https://principia-scientific.org/debunking-the-greenhouse-gas-theory-with-a-boiling-water-pot/
No 396, no 333, no RGHE, no GHG warming.
“…how much the energy content of the wave function of the hotter body is negated by a cooler body’s wave function…”
Classical handwavium nonsense. If a cold body “negated” a hot body there would be refrigerators without power cords. I don’t know of any. You?
I think you both have it wrong. Two objects near each other send radiation back and forth continuously and the outgoing flux can be calculated using the temperature and albedo of each. The fact that an IR Thermometer works at all proves this to be true.
In the case of the Earth’s surface and a “half-silvered” atmosphere, there is a continuous escaping to space of some of the radiation from the surface (directly) and from the atmosphere (directly and indirectly) according to the GHG concentration.
I am weary of arguments that there is no “circuit” between the atmosphere and the surface. Of course there is – there is a thermal energy “circuit” between all objects that have line-of-sight of each other, including between me and the Sun. There is nothing mysterious about this. That is how radiation works.
A simple demonstration of this is to build a fire using one stick. Observe it. Make a sustainable fire as small as possible. Now split the stick in two and make another fire, placing the two sticks in parallel about 10 mm apart. The fire can be smaller than the previous one because the thermal radiation back and forth between the two is conserved. There is no net energy gain doing this for either stick, but there is net benefit (if the object is to make the smallest possible fire).
Radiation continues regardless of whether there is anything “on the receiving end” and always will.
“Two objects near each other send radiation back and forth continuously and the outgoing flux can be calculated using the temperature and albedo of each. The fact that an IR Thermometer works at all proves this to be true. ”
Is this what you have in mind: Q = sigma * A * (T1^4 - T2^4)?
Where are the other 5 terms? 2 Qs, 2 epsilon, second area?
This is not “net” energy, it’s the work required to maintain the different temperatures.
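For readers following this sub-thread: the “other terms” the commenter asks about belong to the textbook exchange relation for two gray, diffuse surfaces, which carries both emissivities, both areas, and a view factor. A sketch of that standard relation, offered as background only (the variable names are mine, not anyone’s climate calculation):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def gray_body_exchange(T1, T2, eps1, eps2, A1, A2, F12):
    """Net radiative exchange (W) between two gray, diffuse surfaces.

    Reduces to Q = sigma * A * (T1^4 - T2^4) only in the special case
    eps1 = eps2 = 1 and F12 = 1 (two blackbodies seeing only each other).
    """
    resistance = ((1 - eps1) / (eps1 * A1)
                  + 1.0 / (A1 * F12)
                  + (1 - eps2) / (eps2 * A2))
    return SIGMA * (T1 ** 4 - T2 ** 4) / resistance
```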
Nonsense.
Two objects one hot and one cold: energy flows (heat) from the hot to the cold (EXCLUSIVELY) until they come to equilibrium. The only way to reverse this energy flow is by adding work in the form of a refrigeration cycle.
IR instruments are designed, fabricated and applied based on temperature sensing elements. Power flux is inferred based on an assumed emissivity.
Assuming 1.0 for the earth’s surface or much of molecular anything else is just flat wrong.
The Instruments & Measurements
But wait, you say, upwelling LWIR power flux is actually measured.
Well, no it’s not.
IR instruments, e.g. pyrheliometers, radiometers, etc., don’t directly measure power flux. They measure a relative temperature compared to heated/chilled/calibration/reference thermistors or thermopiles and INFER a power flux using that comparative temperature and ASSUMING an emissivity of 1.0. The Apogee instrument instruction book actually warns the owner/operator about this potential error, noting that ground/surface emissivity can be less than 1.0.
That this warning went unheeded explains why SURFRAD upwelling LWIR with an assumed and uncorrected emissivity of 1.0 measures TWICE as much upwelling LWIR as incoming ISR, a rather egregious breach of energy conservation.
This also explains why USCRN data shows that the IR (SUR_TEMP) parallels the 1.5 m air temperature, (T_HR_AVG) and not the actual ground (SOIL_TEMP_5). The actual ground is warmer than the air temperature with few exceptions, contradicting the RGHE notion that the air warms the ground.
Sun warms the surface, surface warms the air, energy moves from surface to ToA according to Q = U A dT, same as the insulated walls of a house.
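As a toy illustration of the inference step described above (not the actual SURFRAD processing chain), the flux an IR instrument reports scales directly with whatever emissivity it assumes:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def inferred_flux(T_sensed, eps_assumed=1.0):
    """Flux an IR instrument would report for a sensed temperature,
    given an assumed emissivity (a toy version of the inference step)."""
    return eps_assumed * SIGMA * T_sensed ** 4

print(inferred_flux(289.0, 1.0))   # ~396 W/m^2, the textbook figure cited in this thread
print(inferred_flux(289.0, 0.95))  # ~376 W/m^2 if the true emissivity were 0.95
```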
Nonsense.
All objects radiate, unless they are at absolute zero.
Net energy flows from the hot object to the cold object, but energy IS flowing in both directions.
This is a reply to your claim that “Two objects one hot and one cold: energy flows (heat) from the hot to the cold (EXCLUSIVELY) until they come to equilibrium.” Wrong. Energy flows in both directions (unless one happened to be at absolute zero); however, the energy flowing from the hotter object to the colder one is greater than the energy flow in the opposite direction. The result is that the NET flow is unidirectional until equilibrium. But flow =/= net flow.
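A short numerical illustration of the gross-versus-net distinction made here; the 288 K and 255 K values are just convenient round numbers from this thread:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T_hot, T_cold = 288.0, 255.0  # example temperatures only

emit_hot = SIGMA * T_hot ** 4    # ~390 W/m^2, emitted regardless of surroundings
emit_cold = SIGMA * T_cold ** 4  # ~240 W/m^2, likewise
print(emit_hot - emit_cold)      # the NET flow, from hot toward cold
```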
You are absolutely correct, CinW. Very close to the Earth’s surface, a downward-facing calculation using MODTRAN will reproduce the Stefan-Boltzmann flux with an emissivity of 0.97 just about exactly. The typical earth materials have emissivities averaging to about 0.97.
As one rises away from the Earth’s surface the calculated effective emissivity of the downward view will decline, eventually to a value of 0.63 or so, because of the intervening IR active gasses.
Claiming the SB law applies only to a cavity in vacuum is an utterly immaterial argument. The lack of a cavity is why emissivity is less than one for surfaces in vacuum.
I think you mean each of the 2 separated fires is a bit smaller than the original single fire…view factor considerations…but I’m thinking draft is an important factor for sticks 10 mm apart versus 0 mm…
Kevin kilty – September 7, 2019 at 6:50 pm
Utterly silly claim, ….. with no basis in fact.
The atmosphere is constantly moving across the surface of the Earth in weather patterns, so it is unlikely that they will ever reach equilibrium unless a weather pattern becomes stuck and the surface is given time to reach equilibrium with the atmosphere. The surface is heated by solar radiation but cools or warms through thermal interaction with the atmosphere close to it. My model of the thermal Earth does not have any back radiation; there is local thermal equilibrium between the surface and overlying atmosphere if the atmosphere remains static to give time for equilibrium to be reached.
Donald P, ….. I criticized Kevin kilty simply because the ppm density of Kevin’s stated “IR active gasses” is pretty much constantly changing, with H2O vapor being the dominant one. Also, the IR being radiated from the surface is not polarized, meaning, ……. the higher the elevation from the emitting surface, ….. the more diffused or spread out the IR radiation is. Just like the visible light from a lightbulb decreases in intensity (brightness) the farther away the viewer is.
Samuel C Cogar,
Before launching into someone, you ought to know what you are talking about. Run some models using the U of Chicago wrapper for MODTRAN and see what you get looking down close to the surface and again high in the atmosphere. I have run hundreds of MODTRAN models and they are very educational. By the way, MODTRAN is among the most reliable codes of any sort around (Tech. Cred. 9), so do not hide behind “it’s just a model”.
I have no idea why you do not understand the impact of IR active gasses in an atmosphere. The ramifications involve the sensors and controls in millions of boilers, furnaces, power plants, etc. Every day, all day long.
Nick, buses could drive through the holes in your experiment. You can’t disprove the negative term Thot^4 - Tcold^4 in the SB equation with a boiling kettle, because the instrumentation on many fired heaters and industrial furnaces confirms it every hour, every day, worldwide. I’ve designed some of them. SB is right, so there is a RGHE resulting from CO2 and H2O in the atmosphere. I know H2O and CO2 absorb and emit IR from many years of calculating it and reading instruments that confirm it. End of story.
MarkW – September 7, 2019 at 5:21 pm:
“Nonsense.
All objects radiate, unless they are at absolute zero.
Net energy flows from the hot object to the cold object, but energy IS flowing in both directions.”
No reply function under your post so I put this here… please forgive.
As a non-scientist, I have trouble visualizing this. How can an object lose (emit) and gain (absorb) energy at the same time? What is the mechanism? (in simple terms)
Mike
How do things lose and gain energy [not heat] at the same time?
Consider two flashlights (torches in the UK) pointing at each other. The light from each shines out from the bulb and is, in part, received by the other. Now, suppose the batteries in one start to fade and the emission of light decreases. Will this affect the amount of light emerging from the other one? Not at all. Nothing about one light affects what the other does. They both shine as they are able, or not if they are turned off.
Nick S above is thinking about conduction of heat, not radiation of energy. Different rules apply for that. There are three modes of energy transfer: conduction, convection and radiation. People with no high school science education frequently confuse conduction and radiation, lumping both into “transfer”.
Light is not conducted through the air from one flashlight to the other – it is radiated, and this would happen even if there was no air at all.
Now consider that the original IMAX projector had a 25 kilowatt short arc Xenon bulb in it which produced enough light to brightly illuminate that hundred foot wide screen. Point one at a flashlight. Is the flashlight’s radiance in any way “countered” or “dimmed” or “enhanced”? No not at all. They are independent, disconnected systems with a gap between that can only be bridged by the radiation of photons.
Infra-red radiation is a form of light, light with a wavelength longer than what we can perceive. Some insects can see IR, some snakes, not us. Some can see UV. We can’t see that either. Not being able to see it doesn’t mean it is not flowing like the visible photons from a flashlight. An IR camera can see the IR radiation. The temperature is converted to a colour scale for convenience. Basically it is a size-for-size wavelength conversion device.
It happens that all material in the universe is capable of emitting photons, but not nearly equally, however. Non-radiative gases are so-termed because they don’t emit (much) IR, but they will emit something if heated high enough. That doesn’t happen in the atmosphere.
It isn’t quite true that all objects will radiate energy down to absolute zero. That only applies to black objects or gases with absorption bands in the IR. We are only talking about IR radiation when we discuss the climate.
Something very interesting and rather counter-intuitive is that an object such as a piece of steel will have a certain emissivity, say 0.85. (Water is almost absolutely black in IR, BTW.) When the steel is heated hundreds of degrees, until it is glowing yellow, for example, the emissivity rating stays essentially the same.
If you heat a black rock from 0 to 700 C, it can be seen easily in the dark, glowing, but it is still “black”, it is just very hot, radiating energy like crazy. Hold your hand up to it. Feel the radiation warm your skin. Your skin is radiating energy too, back to the hot rock. Not nearly as much, so you gain more than you lose.
A glowing object retains (pretty much) the emissivity that it has at room temperature. We see it glow because our eyes are colder than the rock. For this reason, missiles tracking aircraft with “heat-seeking technology” chill the receptor to a very low temperature, often using de-compressed nitrogen gas which is stored nearby. When the missile is armed and “ready” it means the gas is flowing and the sensor is chilled. If the pilot doesn’t fire it within a certain time, the gas is depleted and the missile is essentially useless.
When the receptor is very cold, it “sees” the aircraft much more easily, even if the skin temperature is -60C, so it works.
IR radiation is like stretched light. Almost any solid object emits it all the time, in all directions. When the amount received from all the objects in a room balances with what the receiving object emits, its temperature stops changing. That is the very definition of thermal equilibrium. In=Out=stable temperature. It does not mean the flashlights stopped shining.
Crispin, excellent explanation. But Nick will not accept it and will repeat his nonsense over and over again.
I refer to a thought experiment from the first time I heard this entire line of argumentation:
Consider two stars in isolation in space.
One is at 5000 K.
One is at 6000 K.
Now bring those stars into a close orbit, far enough away so negligible mass is being transferred gravitationally, but each is intercepting a large portion of the radiation being emitted from the other one.
Clearly each star is now gaining considerable energy from the other, and the temperature of each will rise.
Each star has the same core temperature and the same internal flux from the core to the photosphere, but now each also has additional heat flux from the nearby star.
So, what happens to the temperature of each star?
It is obvious, to me at least, that both stars will become hotter.
The cooler one will make the hotter one even hotter, and the hotter one will make the cooler one hotter as well, as each star is now being warmed by energy that was previously radiating away to empty space.
Can anyone imagine or describe how the cooler star is not heating the warmer star?
My assertion is that the same logic applies to two such objects no matter what the absolute or relative temperatures of each might happen to be.
If the two objects are of identical diameter, the warmer star will be adding more energy to the cooler star than it is getting back from the cooler star.
But a situation could be easily postulated wherein the cooler star has different diameter than the warmer star, such that the flow is exactly equal from one star to the other, as can a scenario in which the cooler star sufficiently different in diameter that it is adding more energy to the warmer star than it is getting back from the other.
In this last case, the cooler object is actually warming the warmer star more than it is itself being warmed by the warmer star.
I’ve often come across that scenario or similar (which underpins the entire radiative AGW hypothesis), and it is only in the past few minutes, with the help of a bottle of wine, that the solution has flashed into my mind.
I always knew that the cooler star won’t make the warmer star hotter but it will slow down the rate of cooling of the warmer star. I think that is generally accepted.
However, the novel point which I now present is that, in addition, the warmer star then being warmer than it otherwise would have been will then radiate heat away faster than it otherwise would have done so the net effect is that the two stars combined will lose heat at exactly the same rate as if they had not been radiating between themselves.
Meanwhile the warmer star’s radiation to the cooler star will indeed warm the cooler star but being warmer than it otherwise would have been the cooler star will also radiate heat away faster than it otherwise would have done so the net effect, again, is that the two stars combined will lose heat at exactly the same rate as if they had not been radiating between themselves.
The reason is that radiation operates at the speed of light, which is effectively instantaneous at the distances involved, so all one is doing is swapping energy between the two instantaneously with no net reduction in the speed of energy loss to space of the combined two units.
In order to get any additional net heating one needs an energy transfer mechanism that is slower than the speed of light i.e. not radiation.
Therefore, conduction and convection being slower than the speed of light are the only possible cause of a net temperature rise and that can only happen if the two units of mass are in contact with one another as is the case for an irradiated surface and the mass of an atmosphere suspended off that surface against the force of gravity.
Can anyone find a flaw in that?
To make it a bit clearer, the potential system temperature increase that could theoretically arise from the swapping of radiation between the two stars is never realised because it is instantly negated by an increase in radiation from the receiving star.
One star radiates a bit more than it should for its temperature and the other radiates a bit less than it should for its temperature but the energy loss to space is exactly as it should be for the combined units so no increase in temperature can occur for the combined units.
The S-B equation is valid only for a single emitter. If one has dual emitters the S-B equation applies to the combination but not to the discrete units.
The radiative theorist’s mistake is in thinking that the radiation exchange between two units slows down the radiative loss for BOTH of them. In reality, radiation loss from the warmer unit is slowed down but radiative loss from the cooler unit is speeded up and the net effect is zero.
Unless the energy transfer is slower than the speed of light the potential increase in temperature cannot be realised.
Which leaves us with conduction and convection alone as the cause of a greenhouse effect.
I should have said “…now each also has additional energy flux from the nearby star.”
When energy is absorbed by an object, in most cases it will increase in temperature, that is, it will warm up.
Exceptions clearly exist, as when energy is added to a substance undergoing a phase change and the added energy does not show up as sensible heat but rather exists as latent heat in the new phase of the material.
But in general conversational parlance, I think most of us understand what concept is being conveyed when one uses the word “heat”, when what is actually meant is more precisely termed “energy’.
I wasn’t happy with my previous effort so try this instead:
Consider two objects in space, one warmer than the other and exchanging radiation between them.
Taking a view from space and bearing in mind the S-B equation that mass can only radiate according to its temperature, what happens to the temperatures of the individual objects?
The warmer object can heat the cooler object via a net transmission of radiation across to it so the temperature of the cooler object can rise and more radiation to space can occur from the cooler object.
However, the cooler object will be drawing energy from the warmer object that would otherwise be lost to space.
From space the warmer object would appear to be cooler than it actually is because the cooler object is absorbing some of its radiation.
The apparent cooling of the warmer object would be offset by the actual warming of the cooler object so as to satisfy the S-B equation when observing the combined pair of units from space.
So, the actual temperature of the two units combined would be higher than that predicted by the S-B equation but as viewed from space the S-B equation would be satisfied.
That scenario involves radiation alone and since radiation travels at the speed of light the temperature divergence from S-B for the warmer object would be indiscernible for objects less than light years apart and for objects at such distances the heat transmission between objects would be too small to be discernible.
So, for radiation purposes for objects at different temperatures the S-B equation is universally accurate both for short and interstellar distances.
The scenario is quite different for non-radiative processes which slow down energy transfers to well below the speed of light.
As soon as one introduces non-radiative energy transfers the heating of the cooler object (a planetary surface beneath an atmosphere) becomes magnitudes greater and is easily measurable as compared to the temperature observed from space (above the atmosphere).
So, in the case of Earth, the view from space shows a temperature of 255k which accords with radiation to space matching radiation in from the sun.
But due to non-radiative processes within the atmosphere the surface temperature is at 288k.
The same principle applies to every planet with an atmosphere dense enough to lead to convective overturning.
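For reference, the 255 K figure quoted here is the standard effective-temperature estimate; the usual textbook arithmetic, assuming a solar constant of about 1361 W/m^2 and an albedo of 0.3:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # W/m^2, solar constant (assumed round value)
ALBEDO = 0.3      # planetary albedo used in the usual textbook estimate

# Balance absorbed solar (spread over the sphere) against emitted thermal
T_eff = (S0 * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(round(T_eff))  # ~255 K, versus ~288 K observed at the surface
```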
Stephen,
Thank you for responding.
Very interesting thoughts you have added.
I did of course realize that there would be, after some delay (perhaps an exceedingly brief delay?), a new equilibrium temperature, and if this is hotter, then the immediate effect will be an increase in the output of the star.
I have to step out at the moment and will comment more fully later this evening, but for now a few brief thoughts in response to your thoughtful comments:
– How far into a star can a photon impinging upon that star go before being absorbed? Probably different for different wavelengths, no?
– How fast can the star transfer energy from the side of the star facing the other star, to the side facing empty space? I had not considered it, but most stars are known to rotate, although the thought experiment did not stipulate this. Stars are very large. If the stars are not rotating, will it not take a long time for energy to make its way to the far side?
– If the star warms up on the side facing the other star, will it not tend to shine most of the increased output towards the other star? Each point on the surface is presumably radiating omnidirectionally. If the surface now has another input of energy, will it not have to increase output? If its output is increased, is that synonymous with, or equivalent to, an increase in temperature?
-If it takes a long time (IOW not instantaneous) for energy to be transferred to the far side, will not most of the increased output be aimed right back at the other star?
OK, got to dash now, but you have got me thinking… my thought experiment only went as far as the instantaneous change that would occur, not to the eventual result when a new equilibrium was reached, but several questions arise when that is considered.
Stars can be cooler AND simultaneously more luminous… in fact this happens to all stars as they move into the red giant branch on the H-R diagram, to give one example.
So, will the stars each expand when heated from an external source, and not get hotter, but instead become more luminous while staying the same temp?
I suppose now we will have to have a look at published thoughts on the subject, and maybe measurements of the relative temp of similar stars when in isolation and when in binary and trinary close orbits with other stars.
How fast does gas conduct energy, and how fast does a parcel of gas on the surface convect, and how efficient is radiation inside a star? Does all of the incident energy really just shine right back out? If it happens instantly, wont it just shine back at the first star, so they are now sending photons back and forth (hoo boy, I see where this is going!)
BTW…all honest questions…I do not know for sure what the answers are.
How sure are you about your view on this?
I think to keep it simple at first, let us just consider the case where the stars are the same diameter.
Does it matter how close they are and/or how large they actually are?
Thanks again for responding…few have done so over the years to this thought experiment.
Nicholas,
I’m sure I am right on purely logical grounds.
I have been confronted with this issue many times but only now has it popped into my mind what the truth is.
You mention a number of potentially confounding factors but none of them matter.
Whatever the scenario, the truth must be that the S-B equation simply does not apply to discrete units where radiation is passing between them.
If viewing from outside the system, then one will be radiating more than it ‘should’ and one will be radiating less than it ‘should’, with a zero net effect viewed from outside.
However, the discrepancy is indiscernible for energy transfers at the speed of light. For slower energy transfers the discrepancy becomes all too apparent hence the greenhouse effect induced by atmospheric mass convecting up and down within a gravity field rather than induced by radiative gases.
At its simplest:
S-B applies to radiation only between two locations only, a surface and space.
Add non radiative processes and/or more than two locations and S-B does not apply.
A planetary surface beneath an atmosphere open to space involves non radiative processes (conduction and convection) and three locations (surface, top of atmosphere and space).
The application of S-B to climate studies is an appalling error.
It seems “Podsiadlowski (1991)” may have explored the effects of irradiation on the evolution of binary stars, in particular with regard to high X-ray flux.
I am sure there must be plenty of literature on how binary stars affect each other’s evolution, but most of what I find in a quick look has to do with mass transfer situations.
Be back later, but:
http://www-astro.physics.ox.ac.uk/~podsi/binaries.pdf
Page 38 is where I got to for now.
This is paywalled:
http://adsabs.harvard.edu/abs/1991Natur.350..136P
Just reading some easily found papers, I have come across a few references to what happens in such cases, which are actually common: it is thought most stars are binary.
http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?bibcode=1985A%26A…147..281V&db_key=AST&page_ind=0&data_type=GIF&type=SCREEN_VIEW&classic=YES
The second paragraph of this paper starts out stating: “In general, the external illumination results in a heating of the atmosphere.”
Second paragraph begins:
“For models in radiative and convective equilibrium, not all the incident energy is re-emitted”
The details are, to my surprise, apparently quite complex, and have been considered by various stellar physicists going back to at least Chandrasekhar in 1945.
This one is a relatively old paper, 1985 it appears, and is of course paywalled.
I think I read once on here that there is a way to read most paywalled scientific papers without paying. Maybe someone can help on this.
But most papers on this topic are concerned with the apparently more interesting effects of mass transfer in binary systems, the primary mechanism for which is something called Roche Lobe Overflow (RLOF). Along the way one learns about stars called “redbacks” and “black widows”, among others.
https://iopscience.iop.org/article/10.1088/2041-8205/786/1/L7/pdf
Limb brightening, grey atmospheres, stars in convective equilibrium…have to read up on these and refresh my memory…I only took a couple of classes in astrophysics.
“Standard CBS models do not take into account either evaporation of the donor star by radio pulsar irradiation (Stevens et al. 1992), or X-ray irradiation feedback (Büning & Ritter 2004). During an RLOF, matter falling onto the NS produces X-ray radiation that illuminates the donor star, giving rise to the irradiation feedback phenomenon. If the irradiated star has an outer convective zone, its structure is considerably affected. Vaz & Nordlund (1985) studied irradiated grey atmospheres, finding that the entropy at deep convective layers must be the same for the irradiated and non-irradiated portions of the star. To fulfill this condition, the irradiated surface is partially inhibited from releasing energy emerging from its deep interior, i.e., the effective surface becomes smaller than 4πR₂² (R₂ is the radius of the donor star). Irradiation makes the evolution depart from that predicted by the standard theory. After the onset of the RLOF, the donor star relaxes to the established conditions on a thermal (Kelvin–Helmholtz) timescale, τ_KH = GM₂²/(R₂L₂) (G is the gravitational constant and L₂ is the luminosity of the donor star). In some cases, the structure is unable to sustain the RLOF and becomes detached. Subsequent nuclear evolution may lead the donor star to experience RLOF again, undergoing a quasi-cyclic behavior (Büning & Ritter 2004). Thus, irradiation feedback may lead to the occurrence of a large number of short-lived RLOFs instead of a long one. In between episodes, the system may reveal itself as a radio pulsar with a binary companion. Notably, the evolution of several quantities is only mildly dependent on the irradiation feedback (e.g., the orbital period).”
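For a sense of scale, the Kelvin–Helmholtz timescale in that passage is easy to evaluate; here is a minimal sketch plugging in solar values (standard constants, not numbers from the quoted paper):

```python
# Kelvin-Helmholtz (thermal) timescale: tau_KH = G * M^2 / (R * L)
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # solar mass, kg
R = 6.957e8     # solar radius, m
L = 3.828e26    # solar luminosity, W

tau_seconds = G * M**2 / (R * L)
tau_years = tau_seconds / 3.156e7   # seconds per year
print(f"tau_KH for the Sun: ~{tau_years:.1e} years")  # ~3e7 years
```

So a Sun-like donor star relaxes to new surface conditions on a timescale of tens of millions of years, which is why the thermal adjustment in these systems is anything but instantaneous.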
“A planetary surface beneath an atmosphere open to space involves non radiative processes (conduction and convection) and three locations (surface, top of atmosphere and space).”
I agree with this completely.
There is no reason to think that radiative properties of CO2 dominate all other influences, and many reasons to believe its influence at the margin is very small, if not negligible or zero. If it is negligible or zero there are many possible reasons for it being so.
One need not be able to explain the precise reasons, however, to know that there is no causal correlation, at any time scale, between CO2 and the temperature of the Earth.
“The application of S-B to climate studies is an appalling error.”
I am still trying to figure out why there is such a variety of views on this point.
I confess I find this baffling.
I do not know who is right.
My thought experiment is conceived to look narrowly at the question of whether or not radiant energy from a cool object impinges upon a warmer object, and what happens if and when it does.
How fast everything happens seems to me to be a separate question.
The speed of light is very fast, but it is not instantaneous.
I can find many references confirming that when photons are absorbed by a material, the effect is generally to make the material become warmer, because energy is added.
I have not found anything that says that the temperature of the substance that emitted the photons changes that.
“The warmer object can heat the cooler object via a net transmission of radiation across to it so the temperature of the cooler object can rise and more radiation to space can occur from the cooler object.
However, the cooler object will be drawing energy from the warmer object that would otherwise be lost to space.
From space the warmer object would appear to be cooler than it actually is because the cooler object is absorbing some of its radiation.
The apparent cooling of the warmer object would be offset by the actual warming of the cooler object so as to satisfy the S-B equation when observing the combined pair of units from space.”
I had not seen this previously.
I have to disagree.
Perhaps I misunderstand, or perhaps you misspoke.
The warmer star appears cooler because the cooler star is intercepting some of its radiation?
But radiation works by line of sight. And whatever photons the cooler star absorbs cannot have any effect on how the warmer star radiates.
Here is how it must be in my view:
Each star had, when isolated in space, a given temp, which was a balance between the flux from the core and the radiation emitted from the surface. Flux from the core is either via radiation or convection according to accepted stellar models, and these tend to be in discrete zones.
When the stars are brought into orbit near each other (and let’s stipulate circular orbits around a common center of mass, in a plane perpendicular to the observer (us), so they are not at any time eclipsing each other from our vantage point), each is now being irradiated by the other. And radiation emitted in the direction of the other star by either one of them is either reflected or absorbed. Each star emits across a wide range of energies, and the optical depth of the irradiated star to these wavelengths varies depending on the wavelength of the individual photons.
Since in the new situation the flux leaving the core remains the same, and since the surface area of each star that is losing energy to open space is now diminished, each one will have to become more luminous. Each star now has an additional flux of energy reaching its surface, due to being irradiated.
Since each star is absorbing some energy from the other, each will initially get hotter.
The stars will each respond by expanding, because that portion of the star has first become hotter.
I’m not happy with my description either so still working on it. There is something in it though which is niggling at me but best to leave it for another time.
The thing is that one should be considering two objects, rather than two stars, both of which are being irradiated by a separate energy source; so the issue is one of timing, which involves the delay caused by the transfer of radiative energy throughput to and fro between the two irradiated objects.
I have previously dealt with it adequately in relation to objects in contact with one another such as a planet and its atmosphere which involves non radiative transfers but I need to create a separate narrative where the irradiated objects are not in contact so that only radiative transfers are involved.
The general principle of a delay in the energy throughput resulting in warming of both objects whilst not upsetting the S-B equation should apply for radiation just as it does for slower non radiative transfers but it needs different wording and I’m not there yet.
Your comments are helpful though.
No, no refrigerators without power cords, heat flows from hot to cold, unless you put work into it. Not handwavium, standard physics, yes classical. No helping you. I can only stop others from accepting your erroneous view.
Heat does not flow in radiation as in conduction. Bodies radiate. Two bodies not at absolute zero will radiate, and each body will capture some radiation from the other. There will be a net energy gain in some cases (large hot object to small cold one: the cold one gains net heat, for example). If the bodies are spheres in space, a lot of the radiation just travels away through “space”, except where areas intersect (view factor). Even if a cold body “sees” a hot one, it still radiates photons to it. There is no magic switch turning off the radiation. The cold body may send 1000 photons to the hot one, but the hot one may send a trillion to the cold one.
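The arithmetic behind this is just the Stefan-Boltzmann law applied to each body separately; a minimal sketch (idealized black bodies and a unit view factor are assumptions of the illustration, not claims about any particular pair of stars):

```python
# Radiative exchange between two black bodies (unit view factor assumed).
# Both bodies emit; the NET flux is from hot to cold, but the cold body's
# emission toward the hot one never switches off.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux(T):
    """Black-body radiant exitance, W/m^2."""
    return SIGMA * T**4

T_hot, T_cold = 5800.0, 3000.0       # example temperatures, K
hot_to_cold = emitted_flux(T_hot)    # ~6.4e7 W/m^2
cold_to_hot = emitted_flux(T_cold)   # ~4.6e6 W/m^2: nonzero!
net = hot_to_cold - cold_to_hot      # net transfer is hot -> cold
print(hot_to_cold, cold_to_hot, net)
```

The cold body’s flux is smaller but never zero, so the net exchange, not any one-way beam, is what the Second Law constrains.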
Please explain how a room-temperature thermal imaging camera works. It measures temperatures down to -20°C whilst its sensor sits at ambient temperature, in an operating environment of up to 50°C.
This is an UNCOOLED microbolometer sensor.
For objects cooler than the microbolometer, less radiation is focussed on the array, so the array is only slightly warmed.
For objects hotter than the microbolometer, more radiation is focussed on the array, so the array is warmed more.
The array cannot be cooled unless you believe in negative IR energy!
There is a continual exchange of IR from hot to cold and cold to hot. The NET radiation is from hot to cold. BUT the cold still adds energy to the hot!
FLIR data sheet
https://flir.netx.net/file/asset/21367/original
Detector type and pitch Uncooled microbolometer
Operating temperature range -15°C to 50°C (5°F to 122°F)
Please tell me how the Uncooled microbolometer knows what the temperature is?
What function of the radiation tells the meter what temperature it is at, i.e. what is it that “warms it up a bit”?
The bolometer works by turning radiation into heat in each pixel of the array. Different temperatures of radiation from different parts of the object heat different pixels more or less. The individual pixels are constructed a couple micrometers away from the chip base. All the pixels can maintain a fairly steady temperature by radiating from the backside into the base chip.
The temperature is measured by the varying resistance of each pixel. Once a stable image has formed additional energy is going to be going into the base chip. The pixels are separated enough to not allow much transfer of heat to adjacent pixels.
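A toy model of that readout chain may help (the responsivity and thermal-conductance numbers below are invented for illustration, not taken from any FLIR datasheet):

```python
# Toy microbolometer pixel: absorbed IR warms the pixel slightly above
# the substrate; the resistance change is what gets read out.
# All parameter values are illustrative only.
alpha = -0.02   # temperature coefficient of resistance, 1/K (VOx-like, negative)
R0 = 1.0e5      # pixel resistance at substrate temperature, ohms
G_th = 1.0e-7   # thermal conductance pixel -> substrate, W/K

def pixel_readout(absorbed_power_W):
    """Steady-state pixel warming and resulting resistance."""
    dT = absorbed_power_W / G_th          # pixel rises above substrate, K
    return R0 * (1.0 + alpha * dT), dT

# A warmer scene focuses more power on the pixel than a cooler one;
# both warm the pixel, just by different amounts.
for P in (2e-9, 5e-9):                    # absorbed power, W
    R, dT = pixel_readout(P)
    print(f"P={P:.1e} W -> dT={dT:.3f} K, R={R:.1f} ohm")
```

The pixel never needs to be cooled by the scene; a cold scene simply warms it less, and the electronics map the resistance difference to a scene temperature.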
Yes, if a black body had a temperature of 16°C, it would radiate at 396 W/m².
Notice that the Earth does not have a constant temperature all over. In some places the temperature is 288 K, at others it’s 293 K or 283 K.
A black body at 288 K will radiate 390.7 W/m².
The average for black bodies radiating at 293 K and 283 K will be the average of
(293/288)^4 × 390.7 W/m² and (283/288)^4 × 390.7 W/m², i.e. the average of
418.546 and 364.266 W/m², which is 391.406 W/m², higher than 390.7.
For temperatures of 298 K and 278 K, still averaging 288 K, the wattage will be the average of
(298/288)^4 × 390.7 W/m² and (278/288)^4 × 390.7 W/m², i.e. the average of
447.856 and 339.198 W/m², which is 393.527 W/m², some 2.827 W/m² greater than 390.7.
The average of
(302/288)^4 × 390.7 W/m² and (274/288)^4 × 390.7 W/m² is
the average of 472.390 W/m² and 320.092 W/m², which is 396.241 W/m².
The greater the variation in temperatures from “average”, the greater Wattage per square meter radiated from Earth’s surface, even though average temperatures stay the same.
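That is Jensen’s inequality applied to the convex function T⁴; a few lines of Python reproduce the numbers above:

```python
REF_EMISSION = 390.7  # W/m^2 emitted by the reference 288 K black body, as above

def mean_emission(temps_K, T_ref=288.0):
    """Average black-body emission of a set of temperatures,
    scaled so that T_ref emits REF_EMISSION W/m^2."""
    return sum((T / T_ref) ** 4 * REF_EMISSION for T in temps_K) / len(temps_K)

for pair in [(293, 283), (298, 278), (302, 274)]:
    print(pair, round(mean_emission(pair), 2))
# approx 391.41, 393.53, 396.24 W/m^2: every pair averages 288 K, yet the
# mean emission exceeds 390.7 W/m^2 and grows with the spread. Averaging T
# and then taking T^4 is not the same as averaging T^4.
```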
Bingo!
Plus Engineers will physically test heat transfer under controlled conditions to ensure their calculations match reality.
Energy flows in an electromagnetic field, high energy to low, and unlike running red lights, the Second Law of Thermodynamics is inviolable. Period.
Tom, all I read on here is photons, never energy. Just about everybody on here says photons are photons.
But surely they cannot all be equal? Otherwise there would be no SW, no Near IR and no LWIR.
My understanding is that photons are electrically neutral with no energy of their own, and they flow within an electromagnetic field between the energy-emitting and energy-absorbing molecules of surfaces coupled by the electromagnetic field. They may be considered to mediate or denominate a flow of energy out of and into the molecules. They have the following basic properties:
Stability,
Zero mass and energy at rest, i.e., nonexistent except as moving particles,
Elementary particles despite having no mass at rest,
No electric charge,
Motion exclusively within an electromagnetic field (EMF),
Energy and momentum dependent on EMF spectral emission frequency.
Motion at the speed of light in empty space,
Interactive with other particles such as electrons, and
Destroyed or created by natural processes – e.g., radiative absorption or emission.
Photons have energy. Photons with energy above 10 eV can remove electrons from atoms (i.e., ionize them).
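For scale (standard physics, not specific to this thread): photon energy follows the Planck relation, and with hc ≈ 1240 eV·nm,

\[
E = h\nu = \frac{hc}{\lambda} \quad\Rightarrow\quad \lambda_{10\,\mathrm{eV}} \approx \frac{1240\ \mathrm{eV\,nm}}{10\ \mathrm{eV}} = 124\ \mathrm{nm},
\]

so ionizing 10 eV photons lie in the far ultraviolet, while a 10 µm LWIR photon carries only about 0.124 eV.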
My previous response to this was evidently not sufficiently clear (or acceptable), which is unfortunate since there seems to be a good deal of misunderstanding about energy within an electromagnetic field and photons that mediate that flow. As mediators or markers of that energy they have no existence except within the field and as indication that it exists. Information concerning that is available and can clear up most if not all of the mystery.
The photon is a boson of a special sort, sometimes called a force particle, as bosons are intrinsic to physical forces like electromagnetism and possibly even gravity. In 1924, in an effort to fully understand Planck’s law of thermodynamic equilibrium arising from his work on blackbody radiation, the physicist Satyendra Nath Bose (1894 – 1974) proposed a method to analyze photons’ behavior. Einstein, who edited Bose’s paper, extended its reasoning to matter particles. The basic “gauge bosons” mediate the fundamental forces of physics; these four gauge bosons have been experimentally tracked if not observed. They are:
The photon – the particle of light that transmits electromagnetic energy and acts as the gauge boson mediating the force of electromagnetic interactions,
The gluon – mediating the interactions of the strong nuclear force within an atom’s nucleus,
The W Boson – one of the two gauge bosons mediating the weak nuclear force, and
The Z Boson – the other gauge boson mediating the weak nuclear force.
Plus the unstable Higgs boson, which lends the weak-force gauge bosons the mass they otherwise lack.
I am also to blame for not fully understanding your point. As a health (radiation) physicist (but educated as a generalist physicist), I am rather entrenched in conceptualizing photons as energetic wave packets, whose deposited energy has consequences, cell damage, heating, etc. And, with that said, however one conceptualizes the photon, the absorption or scattering of a photon imparts energy in the receptor.
“Most people don’t understand that at this distance from the sun objects get hot (394 K) ”
Off hand I’d say ALL people don’t understand it because it isn’t true.
Thank-you Charles and thank-you Anthony, for this and all you do.
Thank YOU for the hard fought effort to post the paper!
Thanks, but no thanks are necessary, Sunsettommy. I was compelled to do it. Compelled. My sanity demanded it.
I’m just very glad that the first slog is over.
Of course, now comes the second slog. 🙂 But still, that’s an improvement.
Fantastic paper and comments, Pat!
Being trained in the atmospheric science major I was, it was well understood from radiation physics derived post Einstein by the pioneers of the science that CO2 is only a GHG of secondary significance in the troposphere because of the hydrological cycle and cannot control the IR flux to space in its presence. It has no controlling effect on climate.
These conclusions were derived empirically from the calculations and the only thing that changed this was the advent of these horrible models you cite, the lies that were told about them to get grant money and the continued lies being told about them that they are accurate and can be used today to make public policy with.
It is $ money that is the motivating factor behind the lying. Both for the taxpayer funded grant money keeping the climate hysteria gravy train rolling in the universities and for the political class that saw an opportunity to exploit this fraud through creating a fake Rx that carbon taxation will fix it.
This terrible corruption has spread through the public university system and must be stopped. The political class falls back on the universities that promote climate hysteria as a means of defending their horrible ideas about carbon taxes and cap and trade.
You are correct that a hostile response to you will be forthcoming. It is always what happens when funding for fraud needs to be cut off and the perpetrators are threatened with unemployment as a result.
Thanks, Chuck.
Gordon Fulks has written of your struggles in Oregon. You’ve had to withstand a lot of abuse, telling the truth about climate and CO2 as you do.
Chuck’s testimony before an OR legislative committee:
https://olis.leg.state.or.us/liz/2018R1/Downloads/CommitteeMeetingDocument/145657
Hopeless but valiant struggle in defense of science against the false religion and corrupt political ideology of CACA.
“Being trained in the atmospheric science major I was, it was well understood from radiation physics derived post Einstein by the pioneers of the science that CO2 is only a GHG of secondary significance in the troposphere because of the hydrological cycle and cannot control the IR flux to space in its presence. It has no controlling effect on climate.”
A brilliant summation of reality. CO2 doesn’t “drive” jack shit. Just like ALWAYS. A quick review of the Earth’s climate history shows that atmospheric CO2 does NOT “drive” the Earth’s temperature. Nor will it ever. This is, and will always be (until the Sun goes Red Giant and makes Earth uninhabitable or swallows it up), a water planet.
But this is what happens when so-called “scientists” obsess about PURELY HYPOTHETICAL situations (i.e., doubling of atmospheric CO2 concentration with ALL OTHER THINGS HELD EQUAL, which of course will NEVER HAPPEN), and extrapolating from there with imaginary “positive feedback loops” which simply don’t exist here in the REAL world.
Many, many thanks Pat. 6 years to get a paper reviewed!! Holy Cow! You have the patience of Job. The world is a better place because of your tenacity! It demonstrates that there is hope after all.
Thank you Professor Frank from this Irishman.
For you to even exist in such a social, religious, and scientific desert as the country of my birth has become, is a minor miracle.
As you say, without people like Anthony and his wonderful band of realist contributors and helpers we would be lost.
Just look at a few crazy headlines this past week
A “scientist” proposes we start eating cadavers, my Pope proposes we stop producing and using all fossil fuels NOW to prevent runaway global warming.
So help me God to leave this madhouse soon.
Keep hope Patrick Healy.
Things have gotten much worse in the past, and we’ve somehow muddled our way back to better things. 🙂
Patrick,
Pat lives in the worst of the USA’s madhouses, ie the SF Bay Area, albeit outside of the verminous, rat-infested, human-fecal-matter-encrusted, diseased and squalid City.
He attended college and grad school in that once splendid region*, earning a PhD in chemistry from Stanford and enjoying a long career at SLAC.
*In 1969, Redwood City still billed itself as “Climate Best by Government Test!”
An excellent paper and commentary. I will reference it often.
Thanks, Rs. Your critical approval is welcome.
..and thank you Pat…for “he persisted”
Appreciated, Latitude. 🙂
Thank you, Patrick, for this magnificent defense of science and reason
@ Pat Frank,
I have been waiting for 20+ years for someone to publish “common sense” commentary such as yours, which gives reason for discrediting 99% of all CAGW “junk science” claims and silly rhetoric.
I’m not sure they will believe you any more than they have ever admitted to believing my learned scientific opinion, ….. but here is hoping they will.
Cheers, ….. Sam C
Bravo Pat! Given the vagaries of weather relative to the stability of climate, we should expect a reduction in predictive uncertainty with time. Yet the models predict just the opposite and become less reliable as time progresses.
I haven’t made it through all the comments (most of which have nothing to do with your paper) but did note a few detractors posting links.
I also noted you have thus far ignored these folks. IMHO, you should continue to do so until they post quotes (or paraphrases) purporting to refute the thesis of your paper.
The models ignore all non radiative energy transfer processes. Thus all deviations from the basic S-B equation are attributed falsely to radiative phenomena such as the radiative capabilities of so called greenhouse gases.
They have nothing other than radiation to work with.
Thus the fundamental error in the Trenberth model which has convective uplift as a surface cooling effect but omits convective descent as a surface warming effect.
To make the energy budget balance they then have to attribute a surface warming effect from downward radiation but that cannot happen without permanently destabilising the atmosphere’s hydrostatic equilibrium.
As soon as one does consider non radiative energy transfers it becomes clear that they are the cause of surface warming since they readily occur in the complete absence of radiative capability within an atmosphere which is completely transparent to radiation.
My colleague Philip Mulholland has prepared exhaustive and novel mathematical models based on my conceptual descriptions for various bodies with atmospheres to demonstrate that the models currently in use are fatally flawed as demonstrated above by Pat Frank.
https://wattsupwiththat.com/2019/06/27/return-to-earth/
The so called greenhouse effect is a consequence of atmospheric mass conducting and convecting within a gravity field and nothing whatever to do with GHGs.
Our papers have been serially rejected for peer review so Anthony and Charles are to be commended for letting them reach an audience.
Stephen,
Emissivity & the Heat Balance
Emissivity is defined as the ratio of the radiative heat leaving a surface to the theoretical maximum, or BB radiation, at the surface temperature. The heat balance defines what enters and leaves a system, i.e.
Incoming = outgoing, W/m^2 = radiative + conductive + convective + latent
Emissivity = radiative / total W/m^2 = radiative / (radiative + conductive + convective + latent)
In a vacuum (conductive + convective + latent) = 0 and emissivity equals 1.0.
In open air full of molecules other transfer modes reduce radiation’s share and emissivity, e.g.:
conduction = 15%, convection =35%, latent = 30%, radiation & emissivity = 20%
Actual surface emissivity: 63/160 = 0.394.
Theoretical surface emissivity: 63/396 = 0.16
So where have you included energy returning to the surface in the form of KE (heat) recovered from PE (not heat) in descending air?
Don’t the advanced models have a 1-D finite difference Navier Stokes model built in (with analytical spreading)? Shouldn’t that be able to account for some two-way convection?
Not that I am aware of. It isn’t in the Trenberth diagrams.
Can you demonstrate otherwise ?
Maybe they use 3D NS; I looked at this a few years ago, and thought I was correct:
https://books.google.com/books?id=XASqDQAAQBAJ&pg=PA79&lpg=PA79&dq=GCM+1-D+Navier+stokes+model&source=bl&ots=SMfkD_olvq&sig=ACfU3U0eQ6nQIquAfhBfXO7EiGDnF2UsLA&hl=en&sa=X&ved=2ahUKEwiq9vK2uMLkAhWIvp4KHchlA80Q6AEwG3oECAoQAQ#v=onepage&q=GCM%201-D%20Navier%20stokes%20model&f=false
The ‘state of the art’ computer models do pretend to solve the Navier-Stokes equations. But they are really 2D+1, in the sense that the vertical is made of only a very few layers (something like 12 to 15, or of that order). That means they cannot really model convection. They cannot really model anything at a scale that matters: convection, clouds and so on.
Not that it would matter, anyway, they would output exponentially amplified garbage no matter what they do.
Nick S
“Emissivity = radiative / total W/m^2 = radiative / (radiative + conductive + convective + latent)
In a vacuum (conductive + convective + latent) = 0 and emissivity equals 1.0.”
This description is seriously defective.
Emissivity is not calculated in that manner. If it was, everything that radiates in a vacuum would be rated on a different scale. Emissivity is based on an absolute scale. Totally black is 1.0. Polished cadmium, silver or brass can reach as low as 0.02. Gases have an emissivity in the IR range of essentially zero. Molecular nitrogen, for example.
Generally speaking, brick, concrete, old galvanised roof sheeting, sand, asphalt roofing shingles and most non-metal objects have an emissivity of 0.93 to 0.95. High emissivity materials include water (0.96 to 0.965), which everyone knows covers 70% of the earth. Snow is almost pitch black in IR. Optically white ice radiates IR very effectively. The Arctic cools massively to space when it is frozen over.
“For example, emissivities at both 10.5 μm and 12.5 μm for the nadir angle were 0.997 and 0.984 for the fine dendrite snow, 0.996 and 0.974 for the medium granular snow, 0.995 and 0.971 for the coarse grain snow, 0.992 and 0.968 for the sun crust, and 0.993 and 0.949 for the bare ice, respectively.”
https://www.sciencedirect.com/science/article/abs/pii/S0034425705003974
That part of the ground that is not shaded by clouds has an emissivity of about 0.93 and the water and ice is 0.96-0.99. Clouds have a huge effect on the amount of visible light reflected off the top, but that same top radiates in IR with a broad range.
http://sci-hub.tw/https://doi.org/10.1175/1520-0469(1982)039%3C0171:SAFIEO%3E2.0.CO;2
Read that to see how to calculate emissivity from first principles. In the case of clouds, the answer is a set of curves.
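For the standard relation itself: a grey surface radiates M = εσT⁴. A minimal sketch using emissivity values of the kind quoted above (illustrative round numbers, not figures from the linked paper):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def exitance(emissivity, T_K):
    """Grey-body radiant exitance M = eps * sigma * T^4, in W/m^2."""
    return emissivity * SIGMA * T_K**4

# emissivities roughly as quoted in the comment above
for name, eps in [("water", 0.96), ("concrete", 0.94),
                  ("snow", 0.99), ("polished metal", 0.02)]:
    print(f"{name}: {exitance(eps, 288.0):.1f} W/m^2 at 288 K")
```

Note that ε here is a material property on an absolute scale, not a ratio of radiation to total heat transfer.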
There is a core problem with the IPCC’s calculation of radiative balance and that is the comparison of a planet with an atmosphere containing GHG’s to a planet with no atmosphere at all, and a surface emissivity of 1.0. The 1.0 I can forgive, but the sheer foolishness of making that comparison, instead of an atmosphere with and without GHG’s, is inexplicable. Read anything by Gavin, the IPCC or Trenberth. That is how they “explain it”. They have lumped heating by convective heat transfer with radiative downwelling. Unbelievable. In the absence of (or presence of much more) greenhouse gases, convective heat transfer continues. What the GHG’s do is permit the atmosphere itself to radiate energy into space. Absent that capacity, it would warm continuously until the heat transfer back to the ground at night equalled the heat gained during the day. That would persist only at a temperature well above the current 288K.
These appalling omissions, conceptual and category errors are being made by “climate experts”? Monckton points out they forgot the sun was shining. I am pointing out they forgot the Earth had an atmosphere.
Crispin,
Good response to Nick’s unique viewpoint… sure wish your sci-hub link would open though… have you got a paper name and author to search?
Crispin,
Got it using https://doi.org/10.1175/1520-0469(1982)039%3C0171:SAFIEO%3E2.0.CO;2
“…Dr. Luo chose four reviewers, three of whom were apparently not conflicted by investment in the AGW status-quo…”
I would love to hear more about this reviewer’s issues.
Reviewers are anonymous, for excellent reasons.
You mean the one negative reviewer, Michael J.? S/He made some of the usual objections I encountered so many times in the past, documented in the links provided above.
One good one, which was unique to that reviewer, was that the linear emulation equation (with only one degree of freedom) succeeded because of offsetting errors (requiring at least two degrees of freedom).
That objection was special because climate models are tuned to reproduce known observables. The tuning process produces offsetting parameter errors.
So, the reviewer was repudiating a practice in universal application among climate modelers.
So it goes.
By the way, SI Sections 7.1, 8, 9, 10.1 and 10.3 provide some examples of past objections that display the level of ignorance concerning physical error analysis so widespread among climate modelers.
Thank you. I knew he/she needed to remain anonymous but was curious about the comments of the big dissenter.
Great article, thanks; never forget models are opinions(!), and opinions have limited value in science.
Scientific models are supposed to embody objective knowledge, oebele. That trait is what makes the models falsifiable, and subject to improvement.
That trait — objective knowledge — is also what makes science different from every other intellectual endeavor (except mathematics, which, though, is axiomatic).
As you stated: “scientific models are supposed to embody objective knowledge.” As the definition of climate is the average of weather during a 30 year period, why not use a 100 year period? The models will give you a different outcome. The warming/cooling is in the eye of the beholder (the model maker), because he/she fills in variables… guess work: opinions.
From my own work, Oebele, I’ve surmised that the 30 year duration to define climate was chosen because it provides enough data for a good statistical approximation.
So, it’s an empirical choice, but not arbitrary.
Thanks Pat for an exceptional exposition on the massive cloud uncertainty in models.
May I recommend exploring and distinguishing the massive Type B error of the divergence of surface-temperature-tuned climate model Tropospheric Tropical Temperatures versus Satellite & Radiosonde data (using BIPM’s GUM methodology), and comparing that with the Type A errors, and with the far greater cloud uncertainties you have shown. E.g., see
McKitrick & Christy 2018;
Varotsos & Efstathiou 2019
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018EA000401
https://greatclimatedebate.com/wp-content/uploads/globalwarmingarrived.pdf
PS Thanks for distinguishing between accuracy and uncertainty per BIPM’s GUM (ignored by the IPCC).
PPS (Considering the ~60 year natural Pacific Decadal Oscillation (PDO), a 60 year horizon would be better for an average “climate” evaluation. However, that further exacerbates the lack of accurate global data.)
Pat,
I have always had a suspicion that the 30-year period was chosen to avoid capturing the natural 60-year cycle in the climate. If you choose to bias your data base to the upswing part of the natural cycle then you can hide the impact of the next downturn. As we now are beginning to see, the natural weather cycle has switched from 30 years of zonal-dominated flow towards 30 years of meridional-dominated flow. Here in the UK we are expecting an Indian Summer as the next meridional weather event brings a late summer hot plume north from the Sahara.
During this summer the West African Monsoon has sent moist air north across the desert towards the Maghreb (See Images Satellites in this report for Agadir) and produced a catastrophic flood at Tizert on the southern margin of the Atlas Mountains in Morocco on Wednesday 28th August.
I am sure that the 30 year period which defines a climate was chosen long before the advent of climate alarmism.
This same time period was how a climate was defined at least as far back as the early 1980s when I took my first classes in such subjects as physical geography and climatology.
So I do not think this time period was chosen for any purpose of exclusion or misrepresentation.
I believe it was most likely chosen as a sufficiently long period of time for short term variations to be smoothed out, but short enough so that sufficient data existed for averages to be determined, back when the first systematic efforts to define the climate zones of the Earth were made and later modified.
Philip Mulholland and Nicholas McGinley,
I think you are both “sort of correct”.
In 1976, Lambeck identified some 15 diverse climate indices which showed a periodicity of about 60 years – the quasi 60 year cycle. The 30-year minimum period for climate statistics up to that time was just a rule of thumb; it was probably arrived at because it represented the minimum period which covered the observed range of variation in data over a half-cycle of the dominant 60 year cycle – even before a wide explicit knowledge of the ubiquity of this cycle. Since that time, and long after clear evidence of the presence of the quasi 60-year cycle in key datasets, I believe that there has been a wilful resistance in the climate community to adopt a more adult view of how a time interval should be sensibly analysed and evaluated. In particular, climate modelers as a body reject that the quasi-60 year cycle is predictably recurrent. They are obliged to do so in order to defend their models.
kribaez,
Thank you for your support. In my opinion the most egregious aspect of this wholly disgraceful nonsense of predictive climate modelling is the failure to incorporate changes in delta LOD as a predictor of future climate trends. It was apparent to me in 2005 that a change was coming, signalled by the LOD data (See Measurement of the Earth’s rotation: 720 BC to AD 2015).
So, when in the summer of 2007 I observed changes in the weather patterns in the Sahara I was primed to record these and produce my EuMetSat report published here.
West African Monsoon Crosses the Sahara Desert.
This year, 12 years on from 2007 and at the next solar sunspot minimum, another episode of major weather events has occurred this August in the western Sahara. A coincidence, it’s just weather? Maybe that is all it is, but how useful for the climate catastrophists to be able to weave natural climate change into their bogus end-of-times narrative.
Philip Mulholland,
I agree with you. On and off for about 6 years, I have been trying to put together a fully quantified model of the effects of LOD variation on energy addition and subtraction to the climate system. AOGCMs are unable to reproduce variations in AAM. To the extent that they model AAM variation, it is via a simplified assumption of conservation of angular momentum with no external torque. This is probably not a bad approximation for high frequency events like ENSO, but is demonstrably not valid for multidecadal variation. A big problem however is that if you convert the LOD variation into an estimate of the total amount of energy added to and subtracted from the hydrosphere and atmosphere using standard physics, it is of the right order of magnitude, but too small to fully explain the variation in heat energy estimated from the oscillatory amplitude of temperature variation over the 60 year cycles. Some of the difference is frictional heat loss, but I believe that the greater part (of the energy deficiency) is explained by forced cloud variation associated with LOD-induced changes in tropical wind speed and its direct effect on ENSO. This is supported by the data we have post-1979.
While the latter is still hypothesis, I can demonstrate with high confidence that the 60-year cycle is an externally forced variation and not an internal redistribution of heat. I have not published anything on the subject as yet.
Excellent work, and great post.
I always thought weather was a non-linear chaotic system; in which case, if it can be modelled, it is no longer chaotic.
oebele bruinsma
Models are complex hypotheses that need to be validated, and if necessary, revised.
Great article. Thank you Pat.
Thanks, John. 🙂
Yes, agree, I’ve bookmarked this so the citation will be to hand as needed.
This looks like a giant step forward to me.
We can only hope so, Robert. 🙂
Frank,
It would be useful to give a practical example of accuracy and of precision, to show various folks the difference between the two concepts.
Here’s a nice graphic, Willem: https://www.mathsisfun.com/accuracy-precision.html
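And a quick numerical version of the same distinction (synthetic numbers, invented for the example): precision is the scatter of repeated measurements; accuracy is their offset from the true value.

```python
import random
random.seed(1)

TRUE_VALUE = 10.0

def simulate(bias, spread, n=1000):
    """Return (mean error, standard deviation) of n simulated measurements."""
    xs = [TRUE_VALUE + bias + random.gauss(0.0, spread) for _ in range(n)]
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
    return mean - TRUE_VALUE, sd

# precise but inaccurate: tight scatter, large offset from truth
print(simulate(bias=2.0, spread=0.1))   # ~(2.0, 0.1)
# accurate but imprecise: no offset, wide scatter
print(simulate(bias=0.0, spread=2.0))   # ~(0.0, 2.0)
```

Averaging more measurements shrinks the second number, but no amount of averaging removes the first; that is the accuracy problem climate model tuning cannot fix.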
“There’s plenty of blame to go around, but the betrayal of science garners the most. Those offenses would not have happened had not every single scientific society neglected its duty to diligence.
From the American Physical Society right through to the American Meteorological Association, they all abandoned their professional integrity, and with it their responsibility to defend and practice hard-minded science. Willful neglect? Who knows. Betrayal of science? Absolutely for sure.
Had the American Physical Society been as critical of claims about CO₂ and climate as they were of claims about palladium, deuterium, and cold fusion, none of this would have happened. But they were not.
The institutional betrayal could not be worse; worse than Lysenkoism because there was no Stalin to hold a gun to their heads. They all volunteered.”
All I can say is WOW! Thank you.
Interesting observation in that last bit about no one holding a gun to their head. Says much about an aspect of human nature to want to go along with the mob which appears to be on the right side, whether true or not.
I see it as the tyranny of a collectivist psychology, goldminor.
Many seem to yearn for it.
Craven is the word you’re looking for. They will go along with whichever appears to be the winning side. Fear of the shame and embarrassment of being on the losing side is a powerful manipulative device.
Thanks, tgasloli. It didn’t seem a time to hold back.
Pat Frank’s powerful article is the most important ever to have been published at WattsUpWithThat.com. I have had the honor to know Pat for many years, and I know well the long and painful struggles he has been through to get his ground-breaking paper published, and how much he has suffered for the science to which he has devoted his career.
I watched him present his results some years ago at the annual meeting of the Seminars on Planetary Emergencies of the World Federation of Scientists. The true-believing climate Communists among the audience of 250 of the world’s most eminent scientists treated him with repellent, humiliating contempt. Yet it was obvious then that he was right. And he left that meeting smarting but splendidly unbowed.
Pat has had the courage to withstand the sheer nastiness of the true-believers in the New Superstition. He has plugged away at his paper for seven years and has now, at last, been rewarded – as have we all – with publication of his distinguished and scam-ending paper.
It is the mission of all of us now to come to his aid and to ensure that his excellent result, building on the foundations laid so well by Soon and Baliunas, comes as quickly as possible into the hands of those who urgently need to know.
I shall be arranging for the leading political parties in the United Kingdom to be briefed within the next few days. They have other things on their minds: but I shall see to it that they are made to concentrate on Pat’s result.
I congratulate Pat Frank most warmly on his scientific acumen, on his determination, on his great courage in the face of the unrelenting malevolence of those who have profiteered by the nonsense that he has so elegantly and compellingly exposed, and on his gentlemanly kindness to me and so many others who have had the honor to meet him and to follow him not only with fondness, for he is a good and upright man, but with profoundest admiration.
Thank-you for that, Christopher M. I do not know how to respond. You’ve been a good and supportive friend through all this, and I’ve appreciated it.
Thank-you for what I am sure is your critical agreement with the analysis. You, and Rud, and Kip are a very critical audience. If there was a mistake, you’d not hesitate to say so.
I recall having breakfast with you in the company of Debbie Bacigalupi, who was under threat of losing her ranch from the arrogated enforcement of the Waters of the US rule by President Obama’s EPA. She expressed reassurance and comfort from your support.
You also stood up for me during that very difficult interlude in Erice. It was a critical time, I was under some professional threat, and you were there. Again, very appreciated.
You may have noticed, I dedicated the paper to the memory of Bob Carter in the Acknowledgements. He was a real stalwart, an inspiration, and a great guy. He was extraordinarily kind to me, and supportive, in Erice and I can never forget that.
Best to you, and good luck ringing the bell in the UK. May it toll the end of AGW and the shame of the enslavers in green.
Lord Monckton, I have previously negatively challenged your detailed posts, and I beg to differ yet again but in a positive way.
The three most important fundamental science posts at WUWT (except of course WE, often a bit of diagonal parking in a parallel universe) are this one, and your two on your irreducible equation and on your fundamental error.
My reasons for so saying are the same for all three. They force everyone here at WUWT to go back to the ‘fundamental physics’ behind AGW, and rethink the basics for themselves.
Nullius in Verba.
Pat, I am totally blown away by the posted comment about your paper (which I hope to read very soon). Very powerful.
And, RI, you will not remember but you put me on the true path regarding modeling some time ago for which I am truly grateful.
Now, a question: what about the “Russian Model”? Similarly limited in terms of uncertainty?
The answer to your question will be found in the long wave cloud forcing error the Russian model produces, JRF. Whatever it is.
Given the likelihood that the Russians don’t have some secret physical understanding no one else possesses, one might expect their model is on target by fortuitous happenstance.
Thank you, sir. As I understand it, the Russian Model ignores CO2 and factors in solar but I look at the models with great skepticism regarding any predictive power. I remember seeing a video on iterative error in computer models (cannot remember the presenter), which combined with Rud Istvan’s tutoring on tuning and, now, your thoughts on uncertainty certainly amplify that skepticism.
I remember an issue of NatGeo on “global warming” some 15-20 years ago. It had a big pullout, as the magazine will do from time to time, and on one side it showed the “Hockey Stick” with a second line showing “mean global temperature”. The two lines were rising in tandem until the “mean global temperature” line took a right turn on the horizontal but ended after a few years as “the Pause” began. Although skeptical before that time, that was the point where I began looking at climate data in earnest and thinking something was very amiss.
Oh, I dropped Nat Geo not long after that issue.
Pat … first – huge congratulations on your herculean efforts. That your persistence was finally rewarded is a huge accomplishment – for all of climate science.
As to the Russian INM-CM4 (and now INM-CM5) model, if I recall this model had a higher deep-ocean heat capacity and used a CO2 forcing that was appx 1/3rd lower than the other 101 CMIP5 models.
Which even to a novice like me makes sense. The majority of the models overestimate equilibrium climate sensitivity and as such predict significantly more warming than measured temp data shows.
Something even Mann, Santer et al. agree with in their (somewhat) recent paper:
Causes of differences in model and satellite tropospheric warming rates
“We conclude that model overestimation of tropospheric warming in the early twenty-first century is partly due to systematic deficiencies in some of the post-2000 external forcings used in the model simulations. ”
http://www.meteo.psu.edu/holocene/public_html/Mann/articles/articles/SanterEtAlNatureGeosci17.pdf
“Pat Frank’s powerful article is the most important ever to have been published at WattsUpWithThat.com”
That says a lot given the level of stuff published on this site!
“worse than Lysenkoism because there was no Stalin to hold a gun to their heads. They all volunteered.”
They are money grubbing, uncaring, genocidal psychopaths to be sure.
Thank you so much for sticking it out and finishing the job. So many others would have just given up. Your intellectual honesty is only outdone by your intestinal fortitude!
I have been trying to spread it as much as I can on Twitter.
With some resistance from people who just don’t get it. True Believers.
They waste a lot of my time, as I do try to explain in plain terms.
But sometimes it is like talking to a brick wall.
Monckton of Brenchley: Pat Frank’s powerful article is the most important ever to have been published at WattsUpWithThat.com.
I concur. Really well done.
I can still remember the day our PM signed on to the Kyoto accord. My director of the laboratory told me to calm down and look (through the eyes of management) forward to more $$$ for research.
All submissions were round-filed IF they did not pay homage to the CAGW meme. After a few years of this charade I was fortunate enough to retire. I have not avoided discussions on the climate issue, but I have been emotionally depleted by self-righteous fools, an abundant lot indeed.
I am very pleased to read this publication and greatly appreciate the author’s dedication and perseverance. Bravo!
In my area of engineering, we called certain things “stupid marks”.
Bandaids, stitches, bruises, etc.
Looks like a whole lot of alarmists are revealed to be adorned with “stupid marks”.
😉
CtM asked me to look at this as the claims ‘are rather strong’. I went and read the published paper, and then took a quick look at the SI. Called CtM back and said gotta post this. Would urge all here to also read the paper. Hard science at its best. Rewarding.
This paper should have been published long ago, as it is rigorous, extremely well documented, and with robust conclusions beyond general dispute. Simple three part analysis: (1) derive an emulator equation for a big sample of actual CMIP5 results (like Willis Eschenbach did) showing delta T result is linear with forcing, (2) go at what IPCC says is the climate model soft underbelly, clouds, and derive the total cloud fraction (TCF) difference between the CMIP5 models and the average of MODIS and ISCCP observed TCF (a rigorous measurement of the TCF model to observational accuracy limits), then propagate that potential inaccuracy forward using the emulator equation. QED.
Or as we said where I studied: QFED.
Is it not somewhat improper to mix old English (“F”) with Latin (“QED”)?
How do you know he didn’t mean fornix/fornicis?
https://latin-dictionary.net/definition/20925/fornix-fornicis
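Pulling Rud’s two-step recipe above into code: a minimal sketch of a linear emulator plus root-sum-square propagation of the annual long-wave cloud-forcing calibration error. The constants are round numbers of the kind reported in the paper (a coefficient near 0.42, a 33 K greenhouse temperature, a baseline forcing near 33.3 W/m², and a ±4 W/m² annual LWCF error); treat them as illustrative and check the paper for the exact values.

```python
# Sketch of the paper's two-step logic: (1) a linear emulator of GCM
# air-temperature projections, (2) root-sum-square propagation of the
# annual LWCF calibration error. All constants are approximate.
F0 = 33.3      # baseline greenhouse forcing, W/m^2 (approximate)
A = 0.42       # emulator coefficient (approximate)
DT_GH = 33.0   # net greenhouse temperature, K
U_LWCF = 4.0   # annual LWCF calibration error, W/m^2 (approximate)

def emulated_anomaly(cumulative_forcing):
    """Linear emulator: temperature anomaly from accumulated forcing."""
    return A * DT_GH * cumulative_forcing / F0

def projection_uncertainty(n_years):
    """Root-sum-square growth of the per-step uncertainty over n annual steps."""
    u_step = A * DT_GH * U_LWCF / F0   # ~1.7 K per step
    return (n_years * u_step ** 2) ** 0.5

print(emulated_anomaly(4.0))        # ~1.7 K for ~4 W/m^2 of added forcing
print(projection_uncertainty(100))  # ~ +/-17 K after a century of annual steps
```

The point of the sketch is only the shape of the result: the projected anomaly is a few kelvin, while the propagated calibration uncertainty after a century is an order of magnitude larger.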
Thank-you, Rud. You’re a critical reviewer and your positive assessment is an endorsement from knowledge.
Yours is the most important comment. On the one diagram in the article there is the right side panel. What happens when you weight the possibilities? We believe the climate has a current-state equilibrium. An error may go away from it, but then what does an equilibrium do, if it exists? It corrects for random drift. -15 C in the diagram. The fact that we haven’t been there in the past 200 years tells us about the system. So whatever the error compounding or drift problem is, their models don’t do that once they’re adjusted. The -15 C could happen I guess, but it didn’t. So that something could happen, this -15 C that didn’t happen, isn’t a good argument. The same criterion could apply to any model.
So with chaos, small errors propagate. Yet the models that demonstrate basic chaos typically include two basins of attraction. Which to me are equilibrium deals. The things that stop wild results like -15 C. It just makes the change a swap to another state. Small error propagation in nature is handled. There’s equilibrium and once in a while a jump to another state.
I don’t know that this error propagation should have traction?
“To push the earth’s climate into the glaciated state would require a huge kick from some external source. But Lorenz described yet another plausible kind of behavior called “almost-intransitivity.” An almost-intransitive system displays one sort of average behavior for a very long time, fluctuating within certain bounds. Then, for no reason whatsoever, it shifts into a different sort of behavior, still fluctuating but producing a different average. The people who design computer models are aware of Lorenz’s discovery, but they try at all costs to avoid almost-intransitivity. It is too unpredictable. Their natural bias is to make models with a strong tendency to return to the equilibrium we measure every day on the real planet. Then, to explain large changes in climate, they look for external causes—changes in the earth’s orbit around the sun, for example. Yet it takes no great imagination for a climatologist to see that almost-intransitivity might well explain why the earth’s climate has drifted in and out of long Ice Ages at mysterious, irregular intervals. If so, no physical cause need be found for the timing. The Ice Ages may simply be a byproduct of chaos.”
Chaos: Making a New Science, James Gleick
Ragnaar, just wow. CtM and I had this exact argument for over half an hour concerning the Frank paper and natural ‘chaos node’ stability stuff in, by mathematical definition, N-1 Poincare spaces.
As someone who has studied this math (and peer review published on it) rather extensively for other reasons, a few observations:
It isn’t as severe in systems as projected. Real world analogy from Gleick’s book: a chaotically leaking kitchen faucet never bursts into a devastating flood. It just bifurcates, then goes back to initial drip conditions. Plumbers know this.
I botched the line that had nature in it in my above. But a dripping faucet works fine. The idea is to say the real world does this. So it’s very likely the climate uses the same rules. The errors in modeling a dripping faucet do or do not compound? We can determine the equilibrium value of the drips per minute. Observation would be one way. Input pressure and constriction measurement would be another. Each drip may deviate by X amount of time. But the system is pushing like a heat engine. The water pressure is constant and the constriction is constant or at least has an equilibrium value. The washer may be slightly moving. So the error could be X amount of time per drip. Now model this through 100 drips or Y amount of time. Unless the washer fails or shifts, it can be done.
Ragnaar, the ±15 C in the graphic is not a temperature, it’s an uncertainty. There is no -15 C, and no +15 C.
You are making a very fundamental mistake, interpreting an uncertainty as a temperature. It’s a mistake climate modelers made repeatedly.
The mistake implies you do not understand the uncertainty derived from propagated calibration error. It is root-sum-square, which is why it’s ‘±.’
The wide uncertainty bounds, the ±15 C, mean that the model expectation values (the projected air temperatures) convey no physical meaning. The projected temperatures tell us nothing about what the future temperature might be.
The ±15 C says nothing _at_all_ about physical temperature itself.
CtM, be reassured. There is nothing in my work, or in the graphic, that implies an excursion to 15 C warmer or cooler.
Uncertainty is not a physical magnitude. Thinking it is so, is a very basic mistake.
Your comment about drip rate assumes a perfectly constant flow; a perfect system coupled with perfect measurement. An impossible scenario.
If there is any uncertainty in the flow rate and/or in the measurement accuracy, then that uncertainty builds with time. After some certain amount of time into the future, you will no longer have an accurate estimate about the number of drops that will have fallen.
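In symbols, the root-sum-square propagation described above is simply

\[
u_n \;=\; \sqrt{\sum_{i=1}^{n} u_i^{2}} \;=\; u\sqrt{n} \quad \text{(for a constant per-step uncertainty } u\text{),}
\]

which grows with the number of steps no matter how the unknown physical error happens to behave at any single step.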
Dr. Frank,
More clearly, my issue is that the use of proxy linear equations, despite being verified under multiple runs, may not extend to an error analysis because of the chaotic attractors, or attraction to states that exist in the non-linear models. My math is not advanced enough to follow the proofs.
Simply stated, the behavior could be so different i.e more bounded, that errors don’t accumulate in the same manner. Rud assured me that you covered that.
Let me add that in ±15 C uncertainty the 15 Cs are connected with a vertical line. They are not horizontally offset.
If someone (such as you Ragnaar) supposes the ±15 C represents temperature, then the state occupies both +15 C and -15 C simultaneously.
One must then suppose that the climate energy-state is simultaneously an ice-house and a greenhouse. That’s your logic.
A delocalized climate. Quantum climatology. Big science indeed.
But it’s OK, Ragnaar. None of the climate modelers were able to think that far, either.
Thank you. The question is what happens with what I’ll call error propagation? What I think you’re saying is this error per iteration gives us roughly plus or minus 15 C at a future time as bounds.
You’re talking about what the models fail to do I think. I’ll say they are equilibrium driven either explicitly or forced to do that, maybe crudely. Your right side plot reminds me of some chaos chart I’d seen.
I am at the point where a linear model works just as well as a GCM for the GMST. A linear model breaks down with chaos though. Both the simple model and the CMIPs have this problem.
Can we get to where your right hand panel above is a distribution?
Here’s what I think you’re doing: Taking the error and stacking it all in one direction or the other. As time increases the error grows. And I am trying to reconcile that with the climate system. Which is ignoring the GCMs. But my test for them anyway is their results. Do they do the same thing? My Gleick quote above adds context. He suggests the GCMs sometimes do, but are prevented from doing so. If I was heading a GCM team, I’d do that too. If Gleick is not wearing a tinfoil hat, he may be a path to understanding why GCMs don’t run away?
So we have your error propagation with a huge range. And GCMs not doing that. And the climate not doing that. A runaway is what I’ll call chaos. Chaos is kept in check both by the climate and the models. This means most of the time, we don’t get an error propagation as your range indicates.
Let’s say I am trying to market something here. And let’s say that’s an understanding of what you’re saying. I am not there yet. 99% of the population isn’t there yet. Assume you’re right. The next step is to market the idea. And that can involve a cartoon understanding of your point. It worked for Gore. In the end it doesn’t matter if you’re right. It matters if your idea propagates. At least as far as Fox News.
Uncertainty is not a measure of that which is being measured or forecast. It is a measurement of the instrument or tool that is doing the measuring or forecasting. In the case of climate, we believe that the system is not unbounded, whether we use the chaos theory concept of attractors or some other concept. That the uncertainty estimates are greater than the bounds of the system simply means that the instrument or tool (in this case a model) cannot provide any useful information about that which is being measured or forecast.
For example, a point source of light can be observed at night. If one observes that source of light through a camera lens and the lens is in focus, then the light will be seen as a point source. However, if the lens is defocused, then the point source of light will appear to be much larger. That is analogous to the measure of uncertainty. The point source of light has not changed its size. It just appears to be larger, because of the unfocused lens. In the same manner, the state of the system does not change because the uncertainty of the tool used to measure or forecast the system is estimated to be much larger. The system that we think is bounded continues to be bounded, even though the measure of uncertainty exceeds the bounds of the system. All the uncertainty then tells us is that we cannot determine the state of the system or forecast it usefully, because the uncertainty is too large. In the same manner, an unfocused camera lens cannot tell us how large the point of light is, because the fuzziness caused by the unfocused camera lens makes the point source of light appear to be much larger than it actually is.
Ragnaar, the issue of my analysis concerns the behavior of GCMs, not the behavior of the climate.
The right-side graphic is a close set of vertical uncertainty bars. It is not a bifurcation chart, like the one you linked. Its shape comes from taking the square root of the summed calibration error variances.
The propagation of calibration error is standard for a step-wise calculation. However, it is not, as you have it, that "as time increases the error grows."
It is that as time grows the uncertainty grows. No one knows how the physical error behaves, because we have no way to know the error of a calculated future state.
Maybe the actual physical error in the calculation shrinks sometimes. We cannot know. All we know is that the projection wanders away from the correct trajectory in the calculational phase-space.
So, the uncertainty grows, because we have less and less knowledge about the relative phase-space positions of the calculation and the physically correct state.
What the actual physical error is doing over this calculation, no one knows.
Again, we only know the uncertainty, which increases with the number of calculational steps. We don’t know the physical error.
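To make the arithmetic concrete, here is a minimal sketch of how a root-sum-square uncertainty compounds over a step-wise calculation. The per-step uncertainty and the step count below are invented for illustration; they are not figures from the paper.

```python
import math

# Minimal sketch of root-sum-square (RSS) compounding of a per-step
# calibration uncertainty. u_step and n_steps are hypothetical values,
# chosen only for illustration.
u_step = 1.0    # constant calibration uncertainty per step (illustrative)
n_steps = 100   # number of iterations, e.g., years in a projection

# The uncertainty after N steps is the square root of the summed
# per-step variances: u_N = sqrt(sum(u_i^2)) = sqrt(N) * u_step here.
u_total = math.sqrt(sum(u_step**2 for _ in range(n_steps)))
print(f"uncertainty after {n_steps} steps: +/-{u_total:.1f}")  # +/-10.0

# This is a widening ignorance envelope, not a predicted error: nothing
# here says the calculated state swings between the +/- bounds.
```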
Hi Charles, please call me Pat. 🙂
The central issue is projection uncertainty, not projection error. Even if physical error is bounded, uncertainty is not.
Thank-you Phil. You nailed it. 🙂
Your explanation is perfect, clear, and easy to understand. I hope it resolves the point for everyone.
Really well done. 🙂
Pat Frank: Ragnaar, the issue of my analysis concerns the behavior of GCMs, not the behavior of the climate.
It’s awfully good of you to hang around and answer questions.
You may have to repeat that point I quoted often, as it’s easy to forget and some people have missed it completely.
Here is NIST's description of error propagation, with equations.
2.5.5. Propagation of error considerations
Citing the derivation by Goodman (1960)
Leo Goodman (1960). “On the Exact Variance of Products” in Journal of the American Statistical Association, December, 1960, pp. 708-713.
https://www.itl.nist.gov/div898/handbook/mpc/section5/mpc55.htm
https://www.semanticscholar.org/paper/On-the-Exact-Variance-of-Products-Goodman/f9262396b2aaf7240ac328911e5ff1e46ebbf3da
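Goodman's result is easy to check numerically. Here is a small sketch, with invented means, standard deviations, and sample size, comparing his exact variance of the product of two independent variables against a Monte Carlo estimate.

```python
import numpy as np

# Monte Carlo check of Goodman (1960): for two *independent* random
# variables X and Y,
#   Var(XY) = mu_x^2 * var_y + mu_y^2 * var_x + var_x * var_y.
# All numbers below are invented for illustration.
rng = np.random.default_rng(0)
mu_x, sd_x = 10.0, 2.0
mu_y, sd_y = 5.0, 1.0

exact = mu_x**2 * sd_y**2 + mu_y**2 * sd_x**2 + sd_x**2 * sd_y**2

x = rng.normal(mu_x, sd_x, 1_000_000)
y = rng.normal(mu_y, sd_y, 1_000_000)
print(f"Goodman exact variance: {exact:.1f}")        # 204.0
print(f"Monte Carlo variance:   {np.var(x * y):.1f}")  # ~204
```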
No. We do know that the glacial cycle responds to changes in the orbit of the Earth caused by the Sun, the Moon, and the planets. Since the early '70s we have had hard evidence that benthic sediments reproduce Milankovitch frequencies with less than 4% error. James Gleick shows a worrisome ignorance of what he talks about.
From one of the men who solved the mystery:
https://www.amazon.com/Ice-Ages-Solving-John-Imbrie/dp/0674440757
Try to use that reasoning to explain the Younger Dryas and other D-O events. You can’t.
These look more like the climate moving to another ‘strange attractor’ and back again than a smooth orbital or declination change.
Because the YD and D-O events do not depend on orbital changes. That doesn't mean that we don't know what drives the glacial cycle. We have known since 1920, and we have had proof since 1973. But lots of people are not up to date, still in the 19th century.
Pat, nice going. Way to hang in there.
One of the listed reviewers is Carl Wunsch of MIT. Couldn't get much more mainstream in the oceanographic research community. I am not familiar with Davide Zanchettin, but his publication record is significant, as is Dr. Luo's. Was there another reviewer who is not listed? If so, do you know why?
Thanks, Mark. I was really glad they chose Carl Wunsch. I’ve conversed with him in the rather distant past, and he provided some very helpful insights. His review was candid, critical, and constructive.
I especially admire Davide Zanchettin. He also provided a critical, dispassionate, and constructive review. It must have been a challenge, because one expects the paper to have impacted his work. But still, he rose to the standards of integrity. All honor to him.
I have to say, too, that his one paper with which I’m familiar, a Bayesian approach to GCM error, candidly discussed the systematic errors GCMs make and is head-and-shoulders above anything else I’ve read along those lines.
There were two other reviewers. One did not dispute the science, but asked that the paper be shortened. The other was very negative, but the arguments were of the wanting climate-modeler standard with which I was already very familiar.
Neither of those two reviewers came back after my response and rendered a final recommendation. So, their names were not included among the reviewers.
Excellent article with profound meaning, but I believe that in the current propaganda-driven world it will be completely ignored. Google will probably brand it as tripe. Sincere THANK YOU to Pat Frank.
“… but I believe in the current propaganda driven world it will be completely ignored.”
That’s how to bet.
Thanks, Terry. It’s early yet. Let’s see who notices it.
Christopher Monckton is going to bring it to certain powers in the UK. Maybe a fuse will be lit. 🙂
What do you suppose would happen if I posted the link to this page on my wife’s Facebook page?
And of course I hope it was understood that the point is for everyone with a social media presence to do the same, to help produce some noise about it.
Have him pass it on to Nigel Farage. They are a bit preoccupied with Brexit at the moment.
How about to the staffs of all GOP members of Congress and POTUS?
Also file it as an amicus brief in Mann v. Steyn and Steyn v. Mann.
Congratulations Pat on finally getting this done.
Thanks, dp. 🙂
Pat Frank ==> Congratulations on your hard won battle to get your paper published! Marvelous!
Thanks very much, Kip. 🙂
And congratulations from an old veteran of the forum. Saw this article retweeted on several market forums so it is getting around.
Well done Pat!
There is a Dr. Pat Frank video on YouTube that is my all-time favorite:
https://www.youtube.com/watch?v=THg6vGGRpvA
That video has been very important to me personally, as it clearly and concisely lays out the problems of propagation of errors in climate models in a way I could readily understand.
I am not at all a scientist, but I did apparently receive a really good grounding in scientific error propagation in my high school studies, and it has always astonished me that the climate modelers and other climate "scientists" seemed oblivious to those problems.
I am no longer a daily WUWT reader – just busy leading my life… But I am so grateful to see this and I thank Anthony and Dr. Frank for their perseverance.
I have downloaded the paper, its supporting info and the previous submission and comments. I appear to have many hours of interesting reading ahead of me!
Thanks,
Dave Day
DD, I had not known about that video. Many thanks. A great, simple layman's explanation of the mathematical essence of his later, now-published paper.
Thanks for the link. I enjoyed watching it.
Thanks, Dr. Pat Frank, for your presentation on YouTube called "No Certain Doom."
Thanks for your comments of appreciation here, folks. They’re appreciated right back. 🙂
My Dad calls this sort of thing a “guesstimate”. He’s really smart.
I want to put an ENSO meter on my car’s dashboard.
Congrats, Pat. Well done and well deserved.
Loved your “No Certain Doom” presentation of ~ 3 years ago. The link to it is in my favorites list, and I refer to it and share it often with (approachable) warmists.
Tracking the propagation of error over time (in time-based models) is fundamental to determining any model's ability to make accurate projections. It sets the limits to the accuracy of the projections. All errors "feed back through the loop" in each iteration, compounding each time "around."
The $billions spent on climate models that incorporate these known large error bounds, and that go out more than a couple of years, is deliberate fraud… unless the errors are reported for each time interval. AND THIS IS NOT DONE with the climate models used by the IPCC in their propaganda, and by US policy makers.
Nobody who works with time-based models that make projections IS UNAWARE OF THIS. It's elementary and VERY OBVIOUS.
Again, this is deliberate fraud.
I should have specified that non-random errors accumulate at each iteration; random errors can, and generally do, largely cancel out.
All the climate models have been shown to have non-random errors… AND THEY ALL HAVE THE SAME ERROR(S).
See: https://www.youtube.com/watch?v=THg6vGGRpvA
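For anyone who wants to see that distinction numerically, here is a minimal sketch with invented numbers: zero-mean random errors partially cancel in an iterated sum, growing only as the square root of the step count, while a constant systematic bias accumulates linearly.

```python
import numpy as np

# Minimal sketch (invented numbers) contrasting random and systematic
# error accumulation over an iterated calculation.
rng = np.random.default_rng(1)
n_steps, n_runs = 100, 10_000

# Zero-mean random per-step errors partially cancel: the spread of the
# accumulated error grows only as sqrt(n_steps) * sd.
random_total = rng.normal(0.0, 0.1, (n_runs, n_steps)).sum(axis=1)
print(f"random errors: spread ~ {random_total.std():.2f}")  # ~1.0 = sqrt(100)*0.1

# A constant per-step bias never cancels: it accumulates linearly,
# reaching n_steps * bias.
bias_total = n_steps * 0.1
print(f"systematic bias: total = {bias_total:.1f}")          # 10.0
```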
This is something I have been long awaiting. Now, how is this going to be brought to the attention of and reported in the MSM? Or, is this going to be swept under the carpet in the headlong rush to climate hysteria?
In answer to your second question, probably.
In answer to your first question, find an honest MSM outlet owner who will let his editor/s report on this paper.
My background is in neuroscience, in which ‘modeling’ is very popular. Right from the beginning, it was obvious that the ‘models’ are grossly simplistic compared to a rat brain, let alone a human one: they remain so, despite publicity about the coming of the ‘thinking robot’. When the first model-based projections of climate came out, I was skeptical, and have been a disbeliever from day one. I may not know enough physics to contribute substantively here, but I sure do know about propagation of error.
Your comment mirrors my own experience as someone trained in the epidemiology of human genetics. I have seen, over and over again, confirmation bias, ignoring of other possible explanations, ignoring of confounding factors, and mixing of causation with association in climate modelling. And I've seen assumptions on proxies that make me just want to gag. I cut my teeth as a scientist on the idea that only a fool assumes an extrapolation is guaranteed to happen. I too lack the knowledge of physics to contribute substantively here. I admit that I can barely read some of those differential equations in Dr. Frank's paper, but he's sure nailed it. Well done.
At some point the hype on climate alarmism is going to go too far and people will start speaking up. Something will turn the tide. I personally began doubting this pseudoscience when I asked an innocent question about error bars, got called a troll in the pay of big oil, and was banned from an online discussion group. If an innocent question about error results in that kind of behaviour, it's a cult, not science.
And peer review? Bah! I had a genetics paper rejected after a negative review by a reviewer who didn't know what Hardy-Weinberg equilibrium was. I had just finished teaching it to a second-year genetics class that week, but this reviewer had never heard of it! Nor would the editor agree to find another reviewer. The paper was just tossed. Peer review requires peers to review, not pals, and certainly not ignoramuses.
Really great rant, Natalie. 🙂
You’ve had the climate skeptic experience of getting banned for merely thinking critically.
You really hit a lot of very valid points.
Salvation is only through faith. Faith is the negation of reason. True believers cannot be swayed by logic or evidence. Resistance is futile.
AndyHce, climate science has gone astray by relying on unvalidated models.
Contrast Christianity, where faith is founded on the facts of historical eyewitness evidence, especially Jesus' resurrection; e.g., William Lane Craig's popular and scholarly writings and dissertation at https://www.reasonablefaith.org/
PS: for a validated climate model by Apollo-era NASA scientists and engineers, see TheRightClimateStuff.com
My comment is not about the potential of the scientific method to provide insight into reality, but about the massive belief systems that have at times seen heretics, witches, demons, and other dangers all around them. A seemingly major belief system now finds a growing sea of deniers everywhere. The believers are mostly immune to reason. The above-expressed hope/belief about change is based on the false premise that logic and evidence can matter. This is no different than when the expressed beliefs are openly labeled religious.
I could go on about the wide range of groups, both large and small, calling themselves Christian though having widely varying beliefs about what that means. No small number of them are fixated on the idea that their own group has the only true path to whatever end they imagine. Fortunately, the majority of these, but hardly all, do not seem to be violent towards other views. However, that isn't relevant here, as this climate thing is its own religion.
I noticed TheRightClimateStuff.com says at one point, about the 180 ppm bottom of atmospheric CO2 during the last ice age glaciation, "This was dangerously close to the critical 150 ppm limit required for green plants to grow." Make that required for the most-CO2-needy plants to grow. The minimum atmospheric concentration of CO2 required for plants to grow and reproduce ranges from 60-150 ppm for C3 plants and is as low as below 10 ppm for C4 plants, among the plants studied in
Plant responses to low [CO2] of the past, Laci M. Gerhart and Joy K. Ward
https://pdfs.semanticscholar.org/0e23/5047cba00479f9b2177e423e8d31db43229d.pdf
For religious faith to have value, it must be based not upon evidence, but belief. That’s why Protestant theology relies upon the Hidden God, a view also found in some Catholic Scholastics. Those, like Aquinas, who sought rational proofs for God’s existence didn’t value faith alone, as did Luther and Calvin.
As Luther said, “Who would be a Christian, must rip the eyes out of his reason” (Wer ein Christ sein will, der steche seiner Vernunft die Augen aus).
CACA pretends to have evidence which it doesn't. GIGO computer games aren't physical evidence. So it's a faith-based belief system, not a valid scientific hypothesis. Indeed, it was born falsified, since Earth cooled for 32 years after WWII, despite rising CO2. And the first proponents of AGW, i.e., Arrhenius in the late 19th century and Callendar in the early 20th, considered man-made global warming beneficial, not a danger. In the 1970s, others hoped that AGW would rescue the world from threatening global cooling.
Don,
Yup, CAM and C4 plants can get by on remarkably little CO2, but more is still better for them. In response to falling plant food in the air over the past 30 million years, C4 pathways evolved to deliver CO2 to Rubisco.
But most crops and practically all trees are C3 plants. I’d hate to have to subsist on corn, amaranth, sugar cane, millet and sorghum. In fact, without legumes to provide essential amino acids, I couldn’t. Would have to rely on animal protein fed by these few plants.
Allegedly some warm-climate legumes are C4, but I don't know what species they are. I imagine they are fodder rather than suitable for human consumption.
Religious faith is not the negation of reason. It is the transcendent result of right reasoning.
Nonsense, but totally off topic.
Well said Natalie.
“ … but I sure do know about propagation of error.”
So, Fran, I gather you disagree with Nick Stokes that root-mean-square error has only a positive root. 🙂
And with Nick's idea (and ATTP's) that one can just blithely subtract rmse away to get a perfectly accurate result. I gather you disagree with that, too? 😀
“that root-mean-square error has only a positive root”
I issued a challenge here inviting PF or readers to find a single regular publication that expressed rmse, or indeed any RMS figure, as other than a positive number. No-one can find such a case. Like many things here, it is peculiar to Pat Frank. His reference, Lauer and Hamilton, gave it as a positive number. Students who did otherwise would lose marks.
The population standard deviation (greek letter: sigma) is expressed as a positive number, yet we talk about confidence intervals as plus or minus one or more sigmas. Statistics How To states:
xxxxx xx xxxxxxxxxx xxx xxxx xxxxx xx Nick Stokes. x xxxxxxx xx xxxxx x xxxxxxx xxxxxx xx xx xxxxxxxxxxxx xxxxxx. xxx xxx xxxxxxxxxx xxxxxx. (Comment self censored)
“The population standard deviation (greek letter: sigma) is expressed as a positive number, yet we talk about confidence intervals as plus or minus one or more sigmas”
Yes, that is the convention. You need a ± just once, so you specify the number, and then use the ± to specify confidence intervals. You can’t do both (±±σ?).
It’s a perfectly reasonable convention, yet Pat insists that anyone who follows it is not a scientist. But he can’t find anyone who follows his variant.
I believe Pat Frank is following the proper convention in his paper. Your arguments are contradictory and seem to deliberately create confusion. To repeat, it seems to me that the proper convention is being followed in the paper. It was not confusing to me nor would it be to anyone reasonable. You are dwelling on self-contradictory semantics. There is no confusion in the paper.
Nick, “You need a ± just once”
You just refuted yourself, Nick.
And you know it.
You’ll just never admit it.
“I believe Pat Frank is following the proper convention in his paper.”
No, you stated the convention just one comment above. The measure, sd σ or rmse, is a positive number. When you want to describe a range, you say x±σ.
It wouldn’t be much of an issue, except Pat keeps making it one, as in this article:
“did not realize that ‘±n’ is not ‘+n.’”
That is actually toned down from previous criticism of people who simply follow the universal convention that you stated.
Nick, “ I issued a challenge here inviting PF or readers to find a single regular publication that expressed rmse, or indeed any RMS figure, as other than a positive number. No-one can find such a case. ”
I supplied a citation that included plus/minus uncertainties, and you then dropped the issue. Here.
And here is an example you'll especially like because Willy Soon is one of the authors. Quoting, "the mean annual temperatures for 2011 and 2012 were 10.91 ± 0.04 °C and 11.03 ± 0.04 °C respectively, while for the older system, the corresponding means were 10.89 ± 0.04 °C and 11.02 ± 0.04 °C. Therefore, since the annual mean differences between the two systems were less than the error bars, and less than 0.1 °C, no correction is necessary for the 2012 switch. (my bold)"
Here is another example. It's worth giving the citation because it's so relevant: Vasquez VR, Whiting WB. Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods. Risk Analysis. 2006;25(6):1669-81.
Quoting, “A similar approach for including both random and bias errors in one term is presented by Dietrich (1991) with minor variations, from a conceptual standpoint, from the one presented by ANSI/ASME (1998). The main difference lies in the use of a Gaussian tolerance probability κ multiplying a quadrature sum of both types of errors, … [where the] uncertainty intervals for means of large samples of Gaussian populations [is] defined as x ± κσ.
“[One can also] define uncertainty intervals for means of small samples as x ± t · s, where s is the estimate of the standard deviation σ.”
Here's a nice one from an absolute classic concerning error expression: "The round-off error cannot exceed ±50 cents per check, so that barring mistakes in addition, he can be absolutely certain that the total error of his estimate does not exceed ±$10," in Eisenhart C. Realistic evaluation of the precision and accuracy of instrument calibration systems. J Res Natl Bur Stand (US) C. 1963;67:161-87.
And this: "If it is necessary or desirable to indicate the respective accuracies of a number of results, the results should be given in the form a ± b…" in Eisenhart C. Expression of the Uncertainties of Final Results. Science. 1968;160:1201-4.
Let’s see, that’s four cases, including two from publications that are guides for how to express uncertainty in physical magnitudes.
It appears that one can indeed find such a case.
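For concreteness, the Vasquez and Whiting form quoted above (a Gaussian tolerance factor κ multiplying a quadrature sum of random and bias terms) looks like this in a minimal sketch; all numbers below are invented for illustration.

```python
import math

# Minimal sketch (invented numbers) of the combined-uncertainty form
# quoted from Vasquez and Whiting: a Gaussian tolerance factor kappa
# multiplying a quadrature sum of random and bias (systematic)
# uncertainty terms, reported as x +/- kappa * u.
x = 10.91         # a measured or computed mean (illustrative)
s_random = 0.03   # random (statistical) uncertainty estimate
b_bias = 0.025    # bias (systematic) uncertainty estimate
kappa = 2.0       # tolerance factor, roughly 95% for a Gaussian

u = math.sqrt(s_random**2 + b_bias**2)  # quadrature sum of the two terms
print(f"{x} +/- {kappa * u:.2f}")       # 10.91 +/- 0.08
```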
Here's another: JCGM. Evaluation of measurement data — Guide to the expression of uncertainty in measurement. Sevres, France: Bureau International des Poids et Mesures; JCGM 100:2008, produced by Working Group 1 of the Joint Committee for Guides in Metrology (JCGM/WG 1). Under section 4.3.4: "A calibration certificate states that the resistance of a standard resistor RS of nominal value ten ohms is 10.000 742 Ω ± 129 μΩ …"
Another authoritative recommendation for use of ± in expressions of uncertainty.
A friend of mine, Carl W., has suggested that you are confusing average deviation, α, with standard deviation, σ.
Bevington and Robinson describe the difference: α is the mean of the absolute deviations from the mean, and they note that "the presence of the absolute value sign makes its use [i.e., α] inconvenient for statistical analysis."
The standard deviation, σ, is described as "a more appropriate measure of the dispersion of the observations" about a mean.
Nothing but contradiction for you there, Nick.
Pat,
This is so dumb that I can’t believe it is honest. Here is what you wrote castigating Dr Annan:
“He wrote, “… ~4W/m^2 error in cloud forcing…” except it is ±4 W/m^2 not Dr. Annan’s positive sign +4 W/m^2. Apparently for Dr. Annan, ± = +.”
The issue you are making is not writing x±σ as a confidence interval. Everyone does that; it is the convention as I described above. The issue you are making is about referring to the actual RMS, or σ, as a positive number. That is the convention too, and everyone does it, Annan (where you castigated), L&H and all. I’ve asked you to find a case where someone referred to the rmse or σ as ±. Instead you have just listed, as you did last time, a whole lot of cases where people wrote confidence intervals in the conventional way x±σ.
“A friend of mine, Carl W”
Pal review?
You need to read carefully, Nick.
From Vasquez and Whiting above: “[where the] uncertainty intervals for means of large samples of Gaussian populations [is] defined as x ± κσ.
“[One can also] define uncertainty intervals for means of small samples as x ± t · s, where s is the estimate of the standard deviation σ.”
That exactly meets your fatuous challenge, "to find a case where someone referred to the rmse or σ as ± (my emphasis)."
Here's another that's downright basic physics: Ingo Sick (2008) Precise root-mean-square radius of 4He. Phys. Rev. C 77, 041302(R).
Quoting, "The resulting rms radius amounts to 1.681 ± 0.004 fm, where the uncertainty covers both statistical and systematic errors. … Relative to the previous value of 1.676 ± 0.008 fm the radius has moved up by 1/2 the error bar."
Your entire objection has been stupid beyond belief, Nick, except as the effort of a deliberate obscurantist. You’re hiding behind a convention.
RMSE is sqrt(error variance) is ±.
Period.
“That exactly meets your fatuous exception”
Dumber and dumber. You’ve done it again. I’ll spell it out once more. The range of uncertainty is expressed as x ± σ, where σ, the sd or rmse etc, is given as a positive number. That is the convention, and your last lot of quotes are all of that form. The convention is needed, because you can only put in the ± once. If you wrote σ=±4, then the uncertainty range would have to be x+σ. But nobody does that.
“You’re hiding behind a convention.”
It is the universal convention, and for good reason. You have chosen something else, which would cause confusion, but whatever. The problem is your intemperate castigation of scientists who are merely following the convention, as your journal should have required.
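For readers following this exchange, here is a minimal sketch, with invented numbers, of the mechanics both sides are describing: the square-root operation returns the positive root, and the ± sign enters when the interval about a value is written out. Which of those two facts carries the interpretive weight is exactly the point in dispute.

```python
import numpy as np

# Minimal sketch (invented numbers) of the reporting mechanics under
# discussion: the rmse is computed as the positive square root of the
# mean squared error, and the +/- sign enters when an interval about a
# value is written out.
predicted = np.array([10.2, 10.5, 10.9, 11.4])
observed  = np.array([10.0, 10.6, 11.1, 11.2])

rmse = np.sqrt(np.mean((predicted - observed) ** 2))  # positive root
x = predicted.mean()
print(f"rmse = {rmse:.2f}")                  # 0.18, a positive number
print(f"interval: {x:.2f} +/- {rmse:.2f}")   # 10.75 +/- 0.18
```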
Willful dyslexia, Nick.
If the t in t·s quoted above is from the t distribution, then s is an estimate of the standard deviation of the distribution of another sample statistic (probably the mean), which makes the formula a confidence interval or a prediction interval, the difference depending on the qualitative nature of s.
A friend of mine, Carl W.
Nick, “Pal review?”
Different last name. But I appreciate the window on your ever so honest heart, Nick.
Since when does a real number squared not have a negative root?
Nick Stokes, why make a mountain out of a molehill of misunderstanding over the common usage? See the BIPM JCGM GUM:
Both positive and negative values are given to show the range. “Y = y ± U ”
https://www.bipm.org/utils/common/documents/jcgm/JCGM_100_2008_E.pdf
David,
You’re doing it too. I’ll spell it out once more. The range of uncertainty is expressed as x ± σ, where σ, the sd or rmse etc, is given as a positive number. That is exactly what your link is saying.
But thanks for the reference. It does spell out the convention. From Sec 3.3.5:
” The estimated standard deviation (C.2.12, C.2.21, C.3.3) u, the positive square root of u2, is thus u = s and for convenience is sometimes called a Type A standard uncertainty”
Search for other occurrences of “positive square root”; there are many.
As I said above, unlike the nutty insistence on change of units, which does determine the huge error inflations here, the ± issue doesn’t seem to have bad consequences. But it illustrates how far out this paper is, when Pat not only makes up his own convention, but castigates the rest of the world who follow the regular convention as not scientists.