Reposted from Dr. Roy Spencer’s blog
September 13th, 2019 by Roy W. Spencer, Ph.D.
Have you ever wondered, “How can we predict global average temperature change when we don’t even know what the global average temperature is?”
Or maybe, “How can climate models produce any meaningful forecasts when they have such large errors in their component energy fluxes?” (This is the issue I’ve been debating with Dr. Pat Frank after publication of his Propagation of Error and the Reliability of Global Air Temperature Projections.)
I like using simple analogies to demonstrate basic concepts.
Pots of Water on the Stove
A pot of water warming on a gas stove is useful for demonstrating basic concepts of energy gain and energy loss, which together determine temperature of the water in the pot.
If we view the pot of water as a simple analogy to the climate system, with a stove flame (solar input) heating the pots, we can see that two identical pots can have the same temperature, but with different rates of energy gain and loss, if (for example) we place a lid on one of the pots.

A lid reduces the warming water’s ability to cool, so the water temperature goes up (for the same rate of energy input) compared to if no lid was present. As a result, a lower flame is necessary to maintain the same water temperature as the pot without a lid. The lid is analogous to Earth’s greenhouse effect, which reduces the ability of the Earth’s surface to cool to outer space.
The two pots in the above cartoon are analogous to two climate models having different energy fluxes with known (and unknown) errors in them. The models can be adjusted so the various energy fluxes balance in the long term (over centuries) but still maintain a constant global average surface air temperature somewhere close to that observed. (The model behavior is also compared to many observed ocean and atmospheric variables. Surface air temperature is only one.)
Next, imagine that we had twenty pots with various amounts of coverage of the pots by the lids: from no coverage to complete coverage. This would be analogous to 20 climate models having various amounts of greenhouse effect (which depends mostly on high clouds [Frank’s longwave cloud forcing in his paper] and water vapor distributions). We can adjust the flame intensity until all pots read 150 deg. F. This is analogous to adjusting (say) low cloud amounts in the climate models, since low clouds have a strong cooling effect on the climate system by limiting solar heating of the surface.
Numerically Modeling the Pot of Water on the Stove
Now, let’s say we build a time-dependent computer model of the stove-pot-lid system. It has equations for the energy input from the flame, and loss of energy from conduction, convection, radiation, and evaporation.
Clearly, we cannot model each component of the energy fluxes exactly, because (1) we can’t even measure them exactly, and (2) even if we could measure them exactly, we cannot exactly model the relevant physical processes. Modeling of real-world systems always involves approximations. We don’t know exactly how much energy is being transferred from the flame to the pot. We don’t know exactly how fast the pot is losing energy to its surroundings from conduction, radiation, and evaporation of water.
But we do know that if we can get a constant water temperature, those rates of energy gain and energy loss are equal, even though we don’t know their values.
Thus, we can either make ad-hoc bias adjustments to the various energy fluxes to get as close to the desired water temperature as we want (this is what climate models used to do many years ago); or we can make more physically based adjustments, since every computation of the physical processes that affect energy transfer has uncertain parameters, say, a coefficient of turbulent heat loss from the pot to the air. This is what modern climate models do today for their adjustments.
If we then take the resulting “pot model” (ha-ha) that produces a water temperature of 150 deg. F as it is integrated over time, with all of its uncertain physical approximations or ad-hoc energy flux corrections, and run it with a little more coverage of the pot by the lid, we know the modeled water temperature will increase. That part of the physics is still in the model.
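Here is a minimal numerical sketch, in Python, of the sort of toy model just described. All of the coefficients are invented for illustration only: tune the heat input so the uncovered pot holds 150 deg. F, then reduce the effective loss coefficient to mimic sliding the lid on, and the equilibrium temperature rises even though the flame is untouched.

```python
# Minimal "pot model" sketch: a lumped energy balance, C*dT/dt = Q_in - k_eff*(T - T_room),
# where a lid reduces the effective loss coefficient. All numbers are invented.
def equilibrium_temp(flame_power, lid_coverage, t_room=70.0,
                     loss_coeff=8.0, heat_cap=4000.0, dt=1.0, steps=20000):
    """Step the water temperature forward until it settles near equilibrium."""
    temp = t_room
    k_eff = loss_coeff * (1.0 - 0.6 * lid_coverage)       # lid cuts the loss rate
    for _ in range(steps):
        net_flux = flame_power - k_eff * (temp - t_room)  # energy gain minus energy loss
        temp += dt * net_flux / heat_cap
    return temp

# Tune the flame so the uncovered pot settles at 150 deg. F ...
flame = 8.0 * (150.0 - 70.0)                       # Q_in = k*(T - T_room) at equilibrium
print(equilibrium_temp(flame, lid_coverage=0.0))   # ~150
# ... then slide the lid halfway on without touching the flame: the water warms.
print(equilibrium_temp(flame, lid_coverage=0.5))   # ~184
```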

This is why climate models can have uncertain energy fluxes, with substantial known (or even unknown) errors in their energy flux components, and still be run with increasing CO2 to produce warming, even though that CO2 effect might be small compared to the errors. The errors have been adjusted so they sum to zero in the long-term average.
This directly contradicts the succinctly-stated main conclusion of Frank’s paper:
“LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models.”
I’m not saying this is ideal, or even a defense of climate model projections. Climate models should ideally produce results entirely based upon physical first principles. For the same forcing scenario (e.g. a doubling of atmospheric CO2) twenty different models should all produce about the same amount of future surface warming. They don’t.
Instead, after 30 years and billions of dollars of research they still produce from 1.5 to 4.5 deg. C of warming in response to doubling of atmospheric CO2.
The Big Question
The big question is, “How much will the climate system warm in response to increasing CO2?” The answer depends not so much upon uncertainties in the component energy fluxes in the climate system, as Frank claims, but upon how those energy fluxes change as the temperature changes.
And that’s what determines “climate sensitivity”.
This is why people like myself and Lindzen emphasize so-called “feedbacks” (which determine climate sensitivity) as the main source of uncertainty in global warming projections.
Well, isn’t the heart of the scientific method experiment and demonstration?
The point of this second experiment is to demonstrate that a surface with multiple outgoing heat transfer pathways cannot radiate as a BB. Just as the reflected, transmitted, and absorbed fractions of incoming radiation must sum to 1.0, the fractions of outgoing energy carried by radiative and non-radiative heat transfer processes must sum to 1.0. Radiation does not function independently from the non-radiative processes.
The immersion heater is feeding 1,180 W of power into the insulated pot of water, which is boiling at an equilibrium temperature of 200 °F (at 6,300 feet). The only significant pathway for energy out of this system is through the water’s surface.
Any surface at 200 °F radiates at 1,021 W/m^2. This is 2.38% of the 42,800 W/m^2 power input to the system. That means 97.6% of the power input is carried away by non-radiative heat transfer processes, i.e. conduction, convection and evaporation. Likewise, the significant non-radiative heat transfer processes of the atmospheric molecules render the 396 W/m^2 LWIR radiation upwelling from the surface impossible. The ocean surface cannot radiate with a 0.97 emissivity.
No 396 W/m^2 upwelling BB LWIR means there is:
No energy to power the 333 W/m^2 GHG out-of-nowhere perpetual energy loop,
No energy for the CO2/GHGs to “trap” or absorb and re-radiate “warming” the atmosphere/surface,
No RGHE or 33 C warmer and
No man-caused climate change.
https://principia-scientific.org/debunking-the-greenhouse-gas-theory-with-a-boiling-water-pot/
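For anyone who wants to check the arithmetic above: the 1,021 W/m^2 figure is just the Stefan-Boltzmann law evaluated at 200 °F, and the 42,800 W/m^2 figure implies a water surface area of roughly 0.028 m^2 (the area is not stated in the comment, so it is back-calculated here).

```python
# Rough check of the figures quoted above. The pot's surface area is not stated,
# so it is inferred from the 42,800 W/m^2 number; everything else is standard.
SIGMA = 5.67e-8                               # Stefan-Boltzmann constant, W/m^2/K^4

t_surface = (200.0 - 32.0) / 1.8 + 273.15     # 200 F expressed in kelvin (~366.5 K)
radiated = SIGMA * t_surface**4               # ideal blackbody emission at that temperature
print(radiated)                               # ~1,020 W/m^2, close to the 1,021 quoted

implied_area = 1180.0 / 42800.0               # ~0.028 m^2, a pot roughly 19 cm across
print(100.0 * radiated / 42800.0)             # ~2.4% of the input flux, as stated
```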
This second experiment validates the findings of the modest experiment.
Second experiment and exhibits:
https://www.linkedin.com/feed/update/urn:li:activity:6454724021350129664
Modest experiment:
https://www.linkedin.com/feed/update/urn:li:activity:6394226874976919552
Annotated TFK_bams09
https://www.linkedin.com/feed/update/urn:li:activity:6447825132869218304
Nick,
Wrong again.
The ocean does have an emissivity of about .97. In Dr. Roy’s kettle visualization, you correctly calculate 1021 W/sq.M for 200 F temperature, yet claim the 333 negative part of the SB equation does not exist when it comes to RGHE. Not sure how you can possibly reconcile both views in your thought processes.
[Language. Snipped. Mod]. Someone tell AOC and her Little friends they can’t have their GND or their socialism
The lid will create a higher pressure and at saturation boiling a higher temperature.
The lidded pot must “boil” at a higher temperature than the open pot.
In fact the open pot must boil at 212 F at sea level; 200 F at 6,000 feet.
The altitude for boiling water at 150 F is about 30,000 feet.
Nick,
what point are you trying to make ???
no mention of boiling, no mention of sealed lids (therefore no increase in pressure).
why didn’t you read the words before criticising ???
Too bad my earlier comment didn’t survive the “snipper.”
Just the lid alone increases pressure. No need for it to be sealed.
To be blunt, the two pot setups are not exactly the same.
Any attempt to model either stove top setup changes the parameters required and how those parameters are used within the model.
Nick specified an immersion heater, to eliminate sloppy open flame heating of the pot and subsequent wasted heat and the losses due to conductive heat loss.
A perfectly insulated pot with lid would only require a heat source of 150° and time for the contents to reach 150°. Not so, for the open uninsulated pot with an external heat source.
Nick also specified an altitude of 6,300 feet (1,920 meters) since small variations in altitude affect heating and cooling. Though, Nick specifying an immersion heater eliminates changes in gas delivery and combustion changes.
Nick ably demonstrates that a multitude of myriad parameters are necessary to model the simplest closed system demonstrations or experiments.
Open systems increase the variables required.
a multitude of myriad, that sounds like a big number 😉
“Just the lid alone increases pressure. No need for it to be sealed.” It will only increase pressure if there is ferocious boiling and the lid gap so small as to restrict the flow. No one said it was a heavy lid preventing vapour from escaping. Don’t get into nit-picking, it is a simplistic model for discussion. The lid is supposed to be an analogy of the GHE.
The lids are the problem. The lids should be sealed and have different conductivities based upon the mix of GHG’s. One lid should be water vapor only and another with water vapor plus CO2.
The pots should have about 75% of its surface area have substantial fins in order to simulate the storage of heat by the oceans.
And, I’m not sure any of this is appropriate to model the radiation. Conduction maybe.
“Just the lid alone increases pressure. No need for it to be sealed.”
But more so, the lid retards evaporation …… which causes the temperature to increase.
Why are we measuring the water and not the air temperature? Let’s change the temperature of the water to 0 C and add an ice cube. Now what is your model? That ice cube has not melted since man has been here. Doesn’t bloody matter where the lid is, does it? That is the problem: confusing temperature with a unit of energy. And totally ignoring that similar processes occur daily on earth, processes that change the temperature by more than the total of a century of global warming in one year. Think El Nino.
The “lid” idea is a red herring. It is frequently raised in debates about whether cooking stoves should be tested with the lid on or off. A 2 pound cast iron tight fitting lid with a water seal maintained by condensation does indeed raise the boiling point of water, equivalent to moving the stove down five stories in an apartment building, which is to say, the effect is negligible and swamped by air pressure changes during the day.
An even bigger question, is there an optimum global temperature? If so, what is it?
“This is analogous to adjusting (say) low cloud amounts in the climate models, since low clouds have a strong cooling effect on the climate system by limiting solar heating of the surface.”
But they also retain the warm air’s convection.
Totally unconnected to the comment to which you replied but what is that unqualified assertion based on? Hint you are talking out of your hat.
Go and find out why low clouds are fluffy cotton looking, why they are white and what they are made of and why do we see them forming at the altitude they do form at. Then post back and explain HOW they retain warm air’s convection.
Look what happens at night, in wintertimes, see the difference in temperature with clear sky and fluffy cloudy sky.
The comment you commented on was about solar radiance during the day. Are you arguing the retention is more than the blocking? If not, then you don’t dispute the net effect of the clouds.
Clouds reduce radiative heat loss to space at night but that is not what you said.
Pat Frank’s paper suggest you can’t predict if the lid is on or not, therefore the models are worthless at prediction.
Yeah, how confident can we be that the lid is on the pot or not at a given time, or how far the lid is on the pot, or whether the lid even properly fits the pot, has holes in it, is bent, is made of paper, pressed down with a hand, held at a distance slightly above, made of crystallized sugar (subject to melting after a time)?
The lid is an analogy for the greenhouse effect, which Dr. Frank admits exists because the only energy flux he analyses is the long wave cloud forcing [LWCF]. He shows the LWCF varies between models; I use the analogy of moving the lid. This isn’t difficult, folks. If you don’t understand the basics, don’t comment.
Actually, I don’t take Dr. Frank’s article as an actual admission of the greenhouse effect’s existence. I could read the paper, without this assumption. He might very well believe in it, but it is not necessary for him to believe it, in order to model the models that incorporate this belief. That’s a whole ‘nother level of argument, though, which still seems not so welcome here, and so no need to pursue it.
What Dr. Frank does, as I see it, is to reveal a level of uncertainty that looms over the model-instrument measuring error [climate models are instruments of forecasting, yes?], resulting in uncertainty about the confidence in what the model-instrument actually registers. In other words, the models (as instruments) can establish marks representing certain measures, but are these marks of the correct magnitude to represent the reality they supposedly measure? Can we be confident that the size of those “marks” are representative of reality?
I’m not sure whether he shows that the LWCF varies between models. I think the more important thing he shows is that we cannot have any confidence in even the variations between models, because the models have wired-in uncertainty about what those variations might actually be in reality. The markings can be in a certain tight range, but our confidence about how those markings represent anything real cannot be very high, because we don’t know if the instrument producing those markings is built right or not.
The models SIMULATE that segment of reality that we cannot be confident that they simulate correctly. The uncertainty is in the confidence that we can have in what the models SIMULATE. And this confidence seems ridiculously low, because the instrument of forecasting doesn’t have a confidently reliable interval built into it.
We know what a degree mark on a thermometer means, for example. But what if we did not?
Suppose, on some thermometers, the etched marks appeared where we could not know their actual meaning — they might be off by, say, 4 tenths of a degree.
We could keep measuring temperatures, and stating uncertainties about the accuracy of the thermometer in terms of +/- tenths of a degree. But those degrees themselves would NOT be known, with any confidence, to be etched on the thermometer instrument correctly.
The phrase, “calibration error”, particularly pops into my focus here. I think we are concerned with an uncertainty (and possible error in output) in how the instrument is built. How can we have any confidence in such an instrument?
I might not have the deep knowledge of this stuff, but I think I might be starting to get the basic idea of it.
If you don’t understand that an apostrophe comes between the “n” and “t” of “don’t”, then don’t comment in English. In other words, everybody’s knowledge is fragmented in one way or another, and we also make basic mistakes. Commenting is how we de-fragment our knowledge to correct mistakes, however big or small, and so I will continue to comment, as the moderator gods allow. (^_^)
+1
Actually Roy, Frank’s paper discusses LWCF as a result of the models’ code that “predicts” clouds, and all this within the context of Total GHGF, and its relevance to estimating CO2F.
Granted, as Javier noted, climate models don’t spit out an uncertainty associated with the internal error of the model; GCMs just give you a range of individual numbers based on numerous runs. Frank’s point was that GCMs don’t give you an uncertainty associated with each run, but his paper gives evidence that they should.
This is the kind of comment I was expecting. Congrats.
Dr Spencer I know you meant the greenhouse effect, I stretched the analogy maybe to breaking point.
The situation as I understand it:
1. The models are energy balanced and more or less replicate observed temperatures.
2. The impact of the difference between predicted and observed cloud cover is greater than CO2 forcing.
3. Observations of cloud cover are uncertain enough that the difference between prediction and observation could be an artefact of the measurement technique
4. Pat Frank’s concern is the uncertainty over what causes cloud cover to change is sufficient to throw model predictions into doubt. If cloud cover suddenly increased, by an amount which is within the range of model error, the CO2 effect would be overwhelmed.
5. Given that models are iterative (they use the previous state as the starting point for the next iteration), errors are amplified?
Yes, and as I recall, Dr Spencer has in the past noted that GCMs do very poorly with clouds, and a cloud change of as little as 1-3% would wipe out all CO2 forcing over the 20th century.
All Frank’s paper did was show that this uncertainty needs to be included as an error statement, and that the uncertainty is much larger than the CO2 forcing component, and as such, GCMs are meaningless.
Dr Deanster
Spot on. I am really surprised that Roy is repeating this mistake for the third time.
Roy said:
“The errors have been adjusted so they sum to zero in the long-term average.
“This directly contradicts the succinctly-stated main conclusion of Frank’s paper:
“’LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. ‘”
The first two statements above are incorrect. “Errors” cannot be “adjusted”. Propagated errors are inherent in the set of processes used as inputs, or as they arise during the iterative calculations.
There is no contradiction of Pat Frank’s main conclusion at all.
I was thinking about this for some time today and was surprised to see the basic logical error Roy made in his first response made again.
In that article Roy says, defending the position that the uncertainty is low, that various errors and uncertainties cancel each other out, as evidenced by the clustered output values.
The first claim is obviously incorrect as uncertainties do not “cancel”; they add in quadrature when they are propagated. So from the start, Pat is speaking a language that is not understood. He is calculating the uncertainty “about the final values” spit out by the model. He is NOT saying the output values have to vary a lot.
That this is not sinking in with Roy is worrying to me. If standard technical language is not being used, how can the discussion proceed?
The models have all sorts of built-in limits applied by various adjustable parameter ranges. That is a choice exercised by the modeller. No problem. But the uncertainty about the outcome of that model is not related to what final answers it produces.
Suppose you ran a model once and it came out with: for doubling CO2, the temp rises 1.5 C. What is the uncertainty of that concluding value?
Pat says it is about 4 C because of the propagation of uncertainties through the calculations.
Roy says you cannot know the uncertainty until a lot of runs are complete and then look at the range of answers produced. These two guys are not even on the same page!
Pat is talking about the propagated uncertainty and Roy is talking about the CoV. Roy says the CoV is low so the propagated uncertainty must therefore be lower than Pat’s calculated value. Well, sorry to disappoint Roy, but that is not how error propagation works. Look it up.
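A toy calculation may make the distinction concrete. The numbers below are made up, and turning the ±4 W/m^2 flux statistic into a temperature uncertainty requires Frank’s emulator, which is not reproduced here; the sketch only shows that a per-step calibration uncertainty propagated in quadrature grows with the number of iterations, while the spread of a set of tuned outputs can stay small.

```python
import math

# Propagated calibration uncertainty vs. spread of model outputs (illustrative numbers).
per_step = 4.0                      # +/- calibration error statistic per iteration
steps = 100                         # e.g. 100 annual steps

# Independent per-step uncertainties add in quadrature: u_N = u * sqrt(N).
propagated = math.sqrt(steps * per_step**2)
print(propagated)                   # 4 * sqrt(100) = 40 -- grows with the run length

# A set of tuned models can still cluster tightly (hypothetical projections, deg C):
outputs = [1.40, 1.55, 1.60, 1.50, 1.45]
mean = sum(outputs) / len(outputs)
spread = math.sqrt(sum((x - mean)**2 for x in outputs) / len(outputs))
print(spread)                       # small, and says nothing about 'propagated'
```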
I hate to bring up bridge trusses again but as an engineer I can’t help myself. Every bridge truss made has an inherent uncertainty associated with its length. You can measure 1000 of them and get an average, but the fish plates you use to connect them better allow for the uncertainty associated with each individual truss. Since a large shipment from the same run can have the same uncertainty, those uncertainties can certainly add. Your overall span can wind up either short or long. And you better allow for it. Uncertainties are not the same thing as random errors which might cancel.
I agree with the comment concerning uncertainty.
However it is not uncertainty in modeling low level cloud cover.
It is worse than uncertainty.
The cloud changes are not random. They are being driven by something.
You guys lost me a long time ago. If we are talking about a doubling of CO2 I am assuming you mean the CO2 contribution due to human activity, which I understand to be 3% of total CO2 in the atmosphere, with a 97% contribution from natural sources. The question I have is that if there were no humans on earth, then 100% of all CO2 would be naturally occurring. If so, would there still be climate change, and what would the models predict as the average global temperature?
If a tree falls in a forest, and there is no one there to hear it, did it make a sound?
“I understand to be 3% of total CO2 in the atmosphere with 97 % contribution from natural sources”
This is incorrect but… humans’ annual contribution might only be 3%, but it is cumulative and the natural sinks are not keeping pace. That is why the increase from 280-290 ppm 200 years ago to 413 ppm today is 100% anthropogenic.
loydo: you have NO (zero) proof that the beneficial CO2 increase is totally man made. You seem to struggle with basic logic.
Loydo – September 13, 2019 at 9:19 pm
Loydo, your above comment proves that you have not overcome your nurtured addiction for the taste of CAGW flavored Kool Aid.
Your above, per se, 200 years increase in atmospheric CO2 is a 100% natural source, the ocean waters of the world. As long as the ocean water continues to warm, atmospheric CO2 will increase.
When those ocean waters start cooling again, atmospheric CO2 will begin decreasing.
Loydo
I warmly suggest you to try the last pot model. It will do good to your thinking ability.
Where do you think 600Gt of fossil fuel exhaust went?
Loydo
The challenge to your assertion is that the CO2 does not rise in a manner equal to the emissions from fossil fuel. I suppose you knew that but chose to claim the cause is known anyway, hoping someone will one day validate your assertion.
800 year-buried warm water rising from the deeps (meaning warmer than normal for the past 6 centuries) causes CO2 concentration to rise without any of humanity’s exhalations. Remember the 800 year lag?
So, how is it you are sure a rise equal to half of that from fossil fuel burning is “100%” anthropogenic? It should be 200% and it could be zero. No one knows save you. Please explain.
Anyone who graduated with a technical degree should have taken Physical Chemistry and should understand the difference between systematic error and measurement error. The first experiment done was asking groups of students to measure various sticks using rulers that were on the bench. One ruler was a yardstick made by a lab tech by scribing lines by hand 1 in. apart. The other was machined on a mill to an accuracy of .001in on each 1/16in. mark.
Some students who weren’t so technically minded were surprised that any particular stick could be measured with both rulers and would give a result accurate to 1/16 in. or so. The hand-made ruler had more systematic error built in, but with enough measurements the result would be accurate, though with a wide variation. The machined rule would have similar accuracy but with a very narrow range and very little variation.
That is the difference between systematic error, precision, and accuracy. Systematic error can be averaged or countered. It simply builds exponentially until the error completely washes out the usefulness of any results.
Since climate models have been made with many assumptions about what contributes to climate changes they automatically have a built in wide range of outcomes. In addition there are numerical calculation errors that can cascade out of control, and errors in how the various processes interact and how repeatably they interact.
There have been a number of posts and reviews of papers on the subject here and other places. Systematic and numerical calculation errors don’t cancel out as random error does. With every iteration of the model the error increases exponentially to the point it tends to go to an asymptotic limit.
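A quick Monte Carlo sketch of the ruler story (all numbers invented): scatter from random error shrinks as more measurements are averaged, while a calibration offset built into the ruler survives any amount of averaging.

```python
import random

random.seed(1)
true_length = 10.000        # inches, the stick's true length (made up)
bias = 0.0625               # a systematic 1/16" offset built into the hand-made ruler
noise = 0.05                # random scatter of an individual reading, inches

readings_good = [true_length + random.gauss(0, noise) for _ in range(10000)]
readings_biased = [true_length + bias + random.gauss(0, noise) for _ in range(10000)]

print(sum(readings_good) / len(readings_good))       # ~10.000: random error averages away
print(sum(readings_biased) / len(readings_biased))   # ~10.063: the bias remains, whatever N
```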
“Systematic error can be averaged or countered. ” you missed a ‘t in your sentence.
I have been watching the discussion closely and have enjoyed it very much. I still haven’t made up my mind but am tending towards Pat Frank’s position. Part of the reason for that is highlighted by the above analogy. The models are tuned so that over the long term they generate the known temperature of the earth. Some variables may be overweighted and some underweighted, but I think Dr. Spencer’s point is that we don’t need to worry about that because we know empirically that the models can generate the correct temperature.
My problem is that with errors and a physical system that you don’t understand, it is possible for the errors to cancel or for them to accumulate. For a given condition, we can tune models so that they generate what is expected. However, because we don’t understand the physics of the system fully, we don’t know if we can rely upon the conditions to remain constant. In the above example, if the lid melts due to the higher energy flux, then the system changes and our ability to make any useful predictions is gone.
An analogy with the earth might be cloud cover. As the conditions change, the cloud lid could get denser and/or thinner and change the behaviour of the system completely, so that the previously tuned models are useless. This is what I think Dr. Frank means by error. It is a behaviour that comes from uncontrollable causes and so can’t be relied upon to behave in the same way that it did when the model was tuned.
Me too. Interesting and educating reading. I’d love to hear Dr Brown of DukeU go over Dr Frank’s paper.
Basic engineering- interpolate, don’t extrapolate (within calibrated or “curve fit” systems).
Thermodynamic rubbish.
Snip that.
Dr. Spencer,
How useful to understanding this is the spreadsheet model on your webpage ?
I can’t set it up to return anything but positive temperature increases over the 50 year time span.
Also, for the 8 parameters that can be adjusted in the model spreadsheet, can you identify which of those fields correspond to the various physical forces being discussed in this latest set of posts on the subject?
For instance, water depth is 1 meter (what does this represent ?)
Feedback Coef ?
radiative heat flux parameter (another name for ?)
non-radiative heat flux parameter (another name for ?)
CO2 increase (units are w/m2 per decade of energy rejected back to earth ?)
Thanks
There have been years of work put into calculating ECS. 3C still seems like a pretty good bet.
Adapted from Knutti et al 2017 meta-analysis.
calculating?….if you mean going back and adjusting things….they don’t understand…to get the results they want
woops…too much clouds….adjust that down…ah much better
…then if we tweak humidity a little…we’ll have it
..and don’t even mention what they’ve done to temp history….of course the models show a faster rate of warming….when you first adjust the temp history to show a faster rate of warming to fit the agenda
This one more to your liking?
Dr Spencer:
That is valid as far as it goes but ignores that some parameters are tweaking sensitivity to variable natural forcings ( like stratospheric volcanoes ).
If you program an exaggerated cooling sensitivity to volcanic eruptions you can balance that with an exaggerated warming sensitivity to GHG. That will work in your hindcasts and add a few dips for realism.
However, once there is a pause in major eruptions (e.g. since Mt Pinatubo in 1991) the erroneous balance falls apart and all you are left with is your exaggerated warming. Your models warm too quickly. This is exactly what we see.
I discussed this in detail on Judith Curry’s site, with detailed quotes from Hansen’s papers showing they intentionally abandoned physics-based modelling in favour of tweaking parameters:
https://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/
Lacis et al 1992 did physics based modelling and got close to matching data from the 1982 El Chichon eruption. Hansen et al 2002 states that they were making arbitrary changes to parameters in order to “reconcile” GCM model output with the climate record. They admit that the new model produces twice the warming in the stratosphere that was observed. An indication they are doubling the real effects, presumably resulting in a similarly exaggerated cooling in the troposphere.
Both these papers are from the same team , taking turns at being lead author.
Latitude, you forget a fabricated high-level of (of course) human-generated aerosols to explain the lack of predicted temp rise. AFAIK, those artificially high aerosol effects are still in the models.
Loydo
Sometimes you are funny! If it was 3 C it would already be up 2.5 C since 1900. It’s not. Therefore ECS is not 3 C.
Wrong. ECS means equilibrium climate sensitivity. We would not be up 2.5C since 1900. Warming takes time.
plus 3C is for a doubling of CO2 and we are a long way from doubling.
However, models which do not reproduce the early 20th c. warming at all can hardly be trusted just because a bunch of poorly constrained parameters have been rigged (sorry, “tuned”) to produce something similar to the late 20th c. warming.
The article says:
“I’m not saying this is ideal, or even a defense of climate model projections. Climate models should ideally produce results entirely based upon physical first principles.”
These statements seem awkward, convoluted, perhaps indirect or even insincere. “Climate models should ideally produce results entirely based on…”
Dr. Spencer could be being careful, or unable to speak directly (which seems necessary in science), but more likely is being subtly evasive, especially considering his comments were hoped to be responses to the compact and concise work of Dr. Frank.
I understand why you cannot agree with Dr Frank .
You are not retired yet .Your paycheck depends on it .
I understand .
There are plenty of issues I disagree with Roy about, but this is just a gratuitous insult born of ignorance.
You obviously either don’t know or don’t care about workplace politics .
The truth rattles your cage ?
So it is a sibling of most of your posts. Nice family.
I work in a research institute that is affiliated with a major university. A lukewarmer that is frequently mentioned here used to be in this same institute. In a meeting I had with the director earlier this week, he mentioned that individual and the steps that he had taken to make sure he was no longer affiliated with this institute, though that individual is still with the university. Any deviation from the party line is career damaging.
This is not to say that this necessarily applies to Dr. Spencer and his motivation, but what you say is real in many cases.
Well looks like you don’t have an intelligent response so ad hominem it is.
The pot model pic is stupid and therefore I will not re-post the article.
Very poor analogy.
The only way heat energy enters the pot system (water) is conduction. And the vast majority of the heat leaves via convective latent heat transport.
We can’t be sure how much radiative (sw) energy enters the Earth’s climate system because albedo and thus insolation to the surface is constantly changing everywhere on the (nearly) spherical Earth. The modelers tweak and turn their multiple water/water vapor/water physics parameter knobs until they get something on the output they like. Junk science.
Simple analogies are how we got into this mess with climate models from agenda driven Cargo cult modelers in the first place.
Yeah, why don’t we call it the “Pot-top Effect”.?
Well, the atmosphere does not really act as a pot lid — we know this, but it is such a cool name that kiddies can remember and the public can easily be fooled to believe in, because it relates to something everyday-practical.
Yes, let’s replace one bad analogy with another. Keep it fresh. Never mind the reality of the actual physics dictating it.
[moderation note to self: watch it RK, you’re getting mighty snarky with a respected figure in the industry]
Robert Kernodle – September 13, 2019 at 3:08 pm
Robert, …. I just hafta gotta paraphrase your comment, …… to wit:
exactly > ” The modelers tweak and turn their multiple water/water vapor/water physics parameter knobs until they get something on the output they like.”
…and jiggle a bunch of parameters they don’t even understand…and they all put in..and jiggle different parameters until they cancel them all out….and end up with this much CO2 causes this much warming
Where is the like button
I must agree, it is an absolutely atrocious analogy.
The pot without a lid is losing energy more rapidly because water molecules are being physically transported away with their energy. Why is this being considered as a viable cognate for radiative transfer?
In fact, if I am not mistaken, the water will be the same temperature in either case even with the same input (lid-on/lid-off). What will change is the pressure. Correct me if I’m wrong. The water will stay at 100 C until it’s all boiled off.
This is exactly the same error (not uncertainty) that led to the concept of “Green House Gases”. Everyone knows that greenhouses operate based on convective blocking, not CO2 increase. But we still sell AGW as the green house effect (maybe less now than 10-20 years ago). How ironic.
Beeze – September 13, 2019 at 4:45 pm
Beeeze, energy will be radiated away from the surface of the heated water in the pot …… regardless of whether or not the water has reached its boiling “point”.
Iffen you don’t believe me, just heat a pot of water to 210F at STP, then turn the stove off …… and the H2O will cool back down to room temperature without losing a single molecule of water.
You are mistaken and therefore …….You are wrong.
If lid-on, steam pressure will increase, and its temperature will also increase.
“The two common steam-sterilizing temperatures are 121°C (250°F) and 132°C (270°F).”
We seem to have missed one of the main points (which was not stated); The pot-on-the-stove model is NOT, and was not intended to be, an EXPERIMENT to demonstrate one or more of the attributes under discussion.
Since it does not well represent the earth as we understand it – from an average temp perspective – it is also weak as an explanatory tool.
Precisely. The pot without a lid is an open system and the pot with a lid fully deployed is a closed system. You get radically different behavior with mass flow.
Then there’s this statement: “But we do know that if we can get a constant water temperature, those rates of energy gain and energy loss are equal, even though we don’t know their values.”
This is absolutely not guaranteed. Temperature is constant at phase transitions so the statement above would be true for two pots of melting ice melting at radically different rates, i.e. radically different energy fluxes. Now, you can claim that the fact that it’s water implies that it’s liquid water and not near a phase boundary, but given that latent heat (of water!) is such a significant effect in the climate system, this is just bad.
Just remember, the precision of a powered instrument turned off and reading zero is perfect too. Just don’t worry about its accuracy.
Well to be fair, it’s impossible to create a proper analogy of the “greenhouse gas back radiation effect” because it’s pure pseudoscience.
Robert, re “ pseudoscience”
The amount of heat being radiated from one surface to another is
q/a = [σ/(1/ehot + 1/ecold – 1)] x (Thot^4 – Tcold^4), where σ is the Stefan-Boltzmann constant.
The back radiation is the -Tcold^4 term and is proven every day in furnaces and heat exchangers worldwide. You are simply incorrect. As far as the atmosphere:
The ground is at Thot due to being warmed by sunshine,
If the atmosphere was only N2 and O2, it would be completely transparent to infrared. In that case, Thot would be ground temperature and Tcold would be outer space at -270 C. But CO2 and H2O readily absorb and reradiate IR. Because the H2O and CO2 are the same temperature in the atmosphere as the N2 and O2, the ground radiates to “the sky” instead of outer space, and the “sky” is much warmer than outer space. You can take an IR thermometer and typically read the temperature of clouds at about freezing and blue sky down to -80, but $40 IR guns do not have proper emissivity settings to be accurate for this job. Anyway my point is that the ground temp will warm more in the daily sunshine in order to radiate the same amount of heat it receives from the sun, when there are radiating gases between the ground and outer space. That extra temperature is caused by the Sun, but is a result of the greenhouse gases mixed with the Nitrogen and Oxygen in the atmosphere. That is the Radiative Green House effect, RGHE. Prove it to yourself with some basic SB calcs. Think about your warm face radiating to the walls of your house and the walls radiating back, etc….
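For anyone who wants to plug numbers into that two-surface expression, here is a small sketch. The emissivities and temperatures are assumed round values, not measurements; the point is only that the same surface loses far less by radiation to a sky near freezing than it would to empty space.

```python
# Two-surface radiative exchange, q/a = sigma*(Th^4 - Tc^4)/(1/e_hot + 1/e_cold - 1).
# Emissivities and temperatures below are assumed round numbers for illustration.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def exchange(t_hot, t_cold, e_hot=0.95, e_cold=0.95):
    return SIGMA * (t_hot**4 - t_cold**4) / (1.0 / e_hot + 1.0 / e_cold - 1.0)

ground = 288.0                          # K, roughly 15 C
print(exchange(ground, 273.0))          # to a "sky" near freezing: ~70 W/m^2
print(exchange(ground, 3.0))            # to deep space: ~350 W/m^2
```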
Quoting the post: “The answer depends not so much upon uncertainties in the component energy fluxes in the climate system, as Frank claims, but upon how those energy fluxes change as the temperature changes.”
Hair splitting. The distinction between what Dr. Spencer is saying and what Dr. Frank is saying is hard for me to see. Changes in the energy fluxes cause temperature to change. Temperature changes cause energy fluxes to change. Temperature changes cause the rate of evaporation at the ocean surface to change, oceans cover 70% of the Earth surface, this changes low level cloud cover, this changes energy flux. To this simple observer, Dr. Spencer and Dr. Frank are saying the same thing. Can someone explain the difference to me?
I do accept that Dr. Frank did not disprove the models, he only showed (conclusively IMHO) that they are not accurate enough to compute man’s influence on climate change or any potential dangers from climate change. But, Spencer and Lindzen and others have shown that also in different ways. I just don’t see a clear and meaningful difference in what Spencer and Frank are saying.
Andy,
The distinction between Dr. Spencer’s position and Dr. Frank’s is quite simple. Dr. Frank claims that if you increase the long wave cloud forcing by 4 W/m^2 then the temperature increases by about 1.8 degrees each and every year. Dr. Spencer’s position is that if the long wave cloud forcing is changed then other parts of the climate system will adjust themselves so that the final temperature will reach a new equilibrium that will remain constant in time. Dr. Frank’s claim violates conservation of energy and so it is not plausible.
Izaak
You have lost me with your claim that Frank’s position violates conservation of energy. The official climate models are predicting a steady increase in temperature as CO2 increases. Is that a violation of conservation of energy?
Perhaps you could explain in a manner that even I could understand.
P.S. I think you misunderstand Frank’s position. He is claiming that the envelope of uncertainty increases every year because the calculations are iterative and depend on the previous value to calculate the new value.
Clyde,
If you suppose that there is no increase in CO2 forcing in a GCM then each model will stabilise at some equilibrium temperature where the outgoing radiation equals the incoming solar radiation, i.e. they conserve energy. Dr. Frank’s model claims instead that every year it is possible that the temperature rises by about 1.8 degrees for as long as you run the model and so will very quickly violate conservation of energy since the incoming solar flux does not change.
I am not sure what the “envelope of uncertainty” means but how such an envelope grows depends critically on the equations being solved. Suppose that you try and numerically calculate the terminal velocity of a skydiver by solving the equations of motion. If you only know the mass of the skydiver to 10% then your numerical answer will be out by 10% no matter how long you do the simulations. Dr. Frank’s analysis would predict that instead the error will grow the longer you run the simulations. In general if a set of differential equations converges to a fixed point then the error will also converge to a finite value and will not continue to grow.
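The skydiver case is easy to check numerically (drag coefficient and masses below are invented): however long the equations are run, a 10% error in mass shifts the computed terminal velocity by a fixed amount (roughly 5% here, since terminal velocity goes as the square root of mass), rather than by an amount that keeps growing with the length of the run.

```python
# m*dv/dt = m*g - c*v^2, integrated with and without a 10% error in the mass.
def velocity(mass, t_end, g=9.81, c=0.25, dt=0.01):
    v = 0.0
    for _ in range(int(t_end / dt)):
        v += dt * (g - (c / mass) * v * v)
    return v

for t_end in (30.0, 300.0, 3000.0):
    v_true = velocity(80.0, t_end)       # "true" mass, kg
    v_wrong = velocity(88.0, t_end)      # mass overestimated by 10%
    print(t_end, v_wrong - v_true)       # settles near 2.7 m/s and stays there
```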
” In general if a set of differential equations converges to a fixed point then the error will also converge to a finite value and will not continue to grow.”
This is true for ONE iteration of solving the differential equations. But when the input of the next iteration is the output of the previous iteration then uncertainty certainly increases with each iteration. A converging solution is no guarantee that there is no uncertainty as you admit – error converges to a finite value.
In addition, error and uncertainty are two different things. Errors may not accumulate, uncertainty does.
” error and uncertainty are two different things”
The paper is titled “Propagation of Error and the Reliability of Global Air Temperature Projections”. So which of these two is it about?
Tim Gorman,
In this instance, it is not about convergence for a solution iteration, it is about convergence of the model to a result over a period of time.
Pat Frank’s emulator uses simplifications which detach it from the energy balance equation (from which it derives). If you replace it with a simple single-body heating model, net flux = C dT/dt = F – lambda*T, for a constant forcing applied to a system in steady state flux balance, you will find that errors in F do not accumulate. They are bounded (max error in T = max error in F / lambda).
Dr Frank’s increasing uncertainty envelope comes from the ever increasing uncertainty he attributes to F.
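A few lines of code illustrate the bounded behaviour being described here (the heat capacity, lambda, and flux error are invented values): a constant error in F shifts the equilibrium temperature by at most error/lambda, no matter how long the model is integrated.

```python
# Single-body balance C*dT/dt = F - lambda*T with a constant error in F; invented parameters.
LAM, CAP, DT = 1.5, 8.0, 0.1

def respond(flux, years=500):
    t = 0.0
    for _ in range(int(years / DT)):
        t += DT * (flux - LAM * t) / CAP
    return t

print(respond(3.7))                       # response to the flux itself
print(respond(3.7 + 4.0) - respond(3.7))  # extra warming from a +4 W/m^2 error in F
print(4.0 / LAM)                          # the bound: error / lambda, not growing with time
```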
“In this instance, it is not about convergence for a solution iteration, it is about convergence of the model to a result over a period of time.”
You are missing the point. If the output of a single iteration has uncertainty associated with it then so do successive iterations based on that uncertain output. It doesn’t matter if those successive iterations converge to a result over a period of time, the value they converge to is subject to the accumulation of uncertainty over successive iterations.
For the equation you give, net flux = F – lambda*T, if the actual value of any of the components is uncertain, then any output of the equation is equally uncertain. If the models *assume* anything then their output has to be uncertain, otherwise “assumptions” would not be required.
kribaez –> Dr. Frank doesn’t deal with the energy balance in the equations at all. He has shown that the increase in temperature output in the models is linear. Using the linear simulation of the models’ output values he has shown how uncertainty can grow in iterative runs.
Arguments against Dr. Frank’s paper need to deal with how accurate his linear approximation is to the models output and if iterative runs do or do not increase uncertainty. Attempting to move the argument to one about the internals of the models is not appropriate.
I posted this late on the previous thread, by which time the party had already moved, but it is highly relevant to this conversation:-
Reposted:
It is obvious that many commenters here think that the “LW cloud forcing” is a forcing. Despite its misleading name, it is not. It forms part of the pre-run net flux balance.
Dr Frank wrote ““LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models.”
Dr Spencer replied above:- “While I agree with the first sentence, I thoroughly disagree with the second. Together, they represent a non sequitur. ”
Pat Frank implies in the above statement that the LWCF is a forcing. It is not. In his uncertainty estimation, he further assumes that any and all flux errors in LWCF can be translated into an uncertainty in forcing in his emulator. No, it cannot.
Forcings – such as those used in Dr Franks’s emulator – are exogenously imposed changes to the net TOA flux, and can be thought of essentially as deterministic inputs. The cumulative forcing (which is what Dr Frank uses to predict temperature change in his emulator) is unambiguously applied to a system in net flux balance. The LWCF variable is a different animal. It is one of the multiple components in the net flux balance, and it varies in magnitude over time as other state-variables change, in particular as the temperature field changes.
They have the same dimensions, but they are not similar in their effect.
If I change a controlling parameter to introduce a +4 W/m^2 downward change in LWCF at TOA at the start of the 500 year spin-up period in any AOGCM, the effect on subsequent incremental temperature projections is small, bounded and may, indeed, be negligible. If, on the other hand I introduce an additional 4 W/m^2 to the forcing series at the start of a run, then it will add typically about 3 deg C to the incremental temperature projection over any extended period.
The reason is that, during the spin-up period, the model will be brought into net flux balance. This is not achieved by “tweaking” or “intervention”. It happens because the governing equations of the AOGCM recognise that heating is controlled by net flux imbalance. If there is a positive/negative imbalance in net TOA flux at the aggregate level then the planet warms/cools until it is brought back into balance by restorative fluxes, most notably Planck. My hypothetical change of +4 W/m^2 in LWCF at the start of the spin-up period (with no other changes to the system) would cause the absolute temperature to rise by about 3 deg C relative to its previous base. Once forcings are introduced for the run (i.e. after this spin-up period), the projected temperature gain will be expressed relative to this revised base and will be affected only by any change in sensitivity arising. It is important to note that even if such sensitivity change were visible, Dr Frank has no way to mimic any uncertainty propagation via a changing sensitivity. It would correspond to a change in his fixed gradient which relates temperature change to cumulative net flux, but he has no degree of freedom to change this.
None of the above should be interpreted to mean that it is OK to have errors in the internal energy of the system. It is only to emphasise that such errors and particularly systemic errors can not be treated as adjustments or uncertainties in the forcing.
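The spin-up point can be illustrated with the same kind of single-body toy model (all parameters invented): a +4 W/m^2 bias in one flux component present during spin-up shifts the absolute base state by roughly 3 deg C, but the incremental warming produced by a forcing applied after spin-up is essentially the same with or without the bias.

```python
# Toy spin-up demonstration with C*dT/dt = (base flux) - lambda*T; invented parameters.
LAM, CAP, DT = 1.3, 8.0, 0.1

def equilibrate(base_flux, years=500, t_start=0.0):
    t = t_start
    for _ in range(int(years / DT)):
        t += DT * (base_flux - LAM * t) / CAP
    return t

for bias in (0.0, 4.0):                              # without / with a +4 W/m^2 flux bias
    base = equilibrate(bias)                         # spin-up: settle into flux balance
    warmed = equilibrate(bias + 3.7, t_start=base)   # then apply a 3.7 W/m^2 forcing
    print(bias, round(base, 2), round(warmed - base, 2))
# The base state shifts by ~3 C when the bias is present; the increment barely changes.
```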
kribaez, “during the spin-up period, the model will be brought into net flux balance. This is not achieved by “tweaking” or “intervention”. It happens because the governing equations of the AOGCM recognise that heating is controlled by net flux imbalance.”
This puts the onus on the modelers to take the next step and demonstrate how their models will handle this external uncertainty statistic directly.
Izaak Walton says, “Dr. Frank claims that if you increase the long wave cloud forcing by 4 W/m^2 then the temperature increases by about 1.8 degrees each and every year.”
That is not at all what Dr. Frank is claiming. The +/- 4 W/m^2 is a calibration error statistic. The propagation of this type of error informs one as to the reliability of the output of the models, but not what the output may be.
As Crispin in Waterloo put it in response to a different article on this topic (https://wattsupwiththat.com/2019/09/11/critique-of-propagation-of-error-and-the-reliability-of-global-air-temperature-predictions/), “the ±n value is an inherent property of the experimental apparatus, in this case a climate model, not the numerical output value.”
Barbara,
A calibration error statistic means that if you run your model with different values of a parameter then you will get a different output. In Dr. Frank’s paper he explicitly states that the +/- 4 W/m^2 error should be put into his emulation model to give a temperature error. This is the same as saying that a GCM with a higher value of long wave forcing will output a higher temperature. And the next year you add in a second lot of 4 W/m^2 to the emulation model to get an additional 1.8 degrees of temperature rise. A full GCM will not behave in this way and thus Dr. Frank’s emulation model is wrong.
No. That is not correct. The emulation model is used, not to calculate future output, but to calculate the reliability of the future iterations of the models. It’s calculating the increasing inability of the models to have predictive value, not what values they will predict.
Barbara,
You cannot calculate the reliability of a model without calculating its future output. It is the same thing. Suppose you have a function f(x) and you want to find the error if x is known to within 10%. The error df is given by df = f(x*1.1) – f(x*0.9). So you have to be able to model what happens for different parameter values in order to calculate the error. Hence the emulation model must be capable of providing an estimate of future temperatures before it can be used to calculate the errors associated with those predictions.
“You cannot calculate the reliability of a model without calculating its future output”.
Yes you can. Look at any basic textbook on uncertainty analysis. Your position seems to say you have no concept of uncertainty assessment.
Even better than that. Pat Frank knows a helluva lot more about the topic than anybody else on this forum. Why not pay attention to what he is saying and you might learn something
No, that’s not correct.
The errors do not (necessarily) lead to different outputted temperatures. They reduce your confidence that those temperatures are predictive in the real world — that is, that the system is an accurate model of reality.
Prove it. Pat Frank established the linear emulator was adequate to reproduce the results of many GCMs under differing conditions. To use the actual GCMs it would be most effective to enumerate all possible errors including branching for every time step over 100 years (2^100 runs for the last time step). But there may be ways to randomly sample intermediate steps while running the outer envelope (+4, +4, +4… and -4, -4…, -4…).
Izaak
Per Barbara and Jordan, please take a closer look at uncertainty analyses, especially Type B errors. You can use a Fisherman’s ruler graduated to 0.001″ to measure your fish to 10.602″ long. But if the ruler is actually only 6″ not 12″, then your fish is actually only 5.301″ long. Similarly, if you have fluxes Fi of 101 W/m2 in and Fo of 100 W/m2 out, you have a 1 W/m2 net flux. You can model and calculate the 1 W/m2 to 4 significant figures – assuming that the other 100 W/m2 of Fi remains constant. However, if Fi drops to 81 and Fo to 70, then you have an 11 W/m2 difference. There are huge known unknowns and also unknown unknowns. The IPCC models ASSUME most balancing factors remain constant – but we don’t know if they will or for how long. McKitrick & Christy 2018 test the predicted Tropical Tropospheric Temperature using independent satellite and radiosonde data after the models are tuned to surface temperatures. They found predicted trends of 285% of actual. Thus NOT proven, and NOT fit for policy purposes.
AND that is just since 1979. What about the 1000 years before – or the next 100 and then 1000 years? The actual could be far from current projections – EITHER WAY. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018EA000401
Pat establishes a degree of uncertainty in the GCMs. I like to describe uncertainty as “lost information”, whereas Pat calls it “our ignorance”.
Once information is lost, there is no way to recover it. We may introduce new assumptions to patch-up as we think fit, and we may force model solutions down “constraint corridors”. None of this can recover the lost information. Uncertainty cannot reduce, and adding more modelling assumptions (levels of abstraction) can only increase uncertainty.
And the iterative nature of the GCM’s is a key point you seem to be missing when you talked about solving ode’s.
Each stage of GCM iteration inherits the uncertainty of the previous, and then adds a bit more. Pat’s uncertainty range grows and grows, while the model marches on with its iterative solution, blissfully unaware of how far off-course it may have drifted from the real world. As the outputs relate to the future, there is no way to check and correct the GCM course.
All we can realistically do is use the GCM output as a prediction, and wait to see if reality confirms what it said.
So the uncertainty range provided by Pat cannot violate conservation of energy. All it does is give a measure of how far from reality the GCM output could be.
“This is analogous to adjusting (say) low cloud amounts in the climate models, since low clouds have a strong cooling effect on the climate system by limiting solar heating of the surface.”
Huh? If you *adjust* the amount of low cloud instead of calculating that amount based on atmospheric conditions that actually exist, then you are doing nothing other than getting the answer you want instead of a model of reality.
What you are suggesting is that you can predict an automobile’s speed by measuring the temperature of the engine with no actual knowledge of the air/fuel flow, the aerodynamics involved, or the internal loads on the engine such as a power steering pump or air conditioner compressor.
If you can’t model the internals of the Earth’s thermodynamic system then how can you pick just one variable and say it is a controlling factor? That’s an *assumption* made in order to make the calculations work, not a known physical reality.
Sit in front of a roaring campfire on a chilly autumn eve.
Raise a blanket (atmosphere) up between yourself (earth) and that campfire (sun).
Are you warmer now or colder?
Drop that blanket down.
Are you warmer now or colder?
This simple thought experiment just trashed the greenhouse effect theory which says you get warmer with the blanket & colder without it.
No atmospheric greenhouse effect, no CO2 warming, no man caused climate change.
(The atmosphere obeys Q = U A dT same as the insulated walls of a house.)
Sit on your camp chair long after the fire is dead, staring into the dark and radiating away your body heat. Hang a blanket a foot away from you, do you feel warmer or colder?
Because it stifles the convection, not because of BB IR.
The only way a surface radiates as a BB is into a vacuum.
Energy leaves my body by both non-radiative (conduction, convection, advection, latent) and radiative processes.
The blanket decreases U and dT increases.
Nick,
No, the blanket is at an intermediate temperature between your body and the campfire, and radiates between blanket and campfire, and your body and the blanket, according to the Stefan-Boltzmann equation. Since CO2 and H2O are infrared radiating gases, analogous to your blanket, you are actually proving CO2 warming and the radiative greenhouse effect with your thought experiment, except maybe it would be clearer to assume you are standing by the fire (ground surface warmed by the sun) and you hold the blanket (GH gasses) up between yourself and the cold starry night sky.
NS –> Game, set, match
Nick Schroeder – September 13, 2019 at 3:05 pm
Nick S, … “simple” is correct, …… give it up, ….. you absolutely, positively cannot “trash” the greenhouse effect theory …. by talking “trash”, …… no matter how hard you try.
Now iffen you want to claim that the atmosphere, per se, “blankets” the earth, …… FINE.
But ….. “DUH”, there is no way in hades you can raise a blanket (atmosphere) up between yourself and the campfire …… simply because that per se atmospheric “blanket” is already in place.
But iffen you wanted to lie flat on the ground …. or maybe roll a big rock up in front of the campfire and hide behind it.
California wisdom, ……. no thank you.
Dr. Spencer,
I am going to bookmark this page, and every time someone on a thread states that Pv=mRT determines surface temperature, I am going to anchor this URL with the phrase “The first law of thermodynamics determines temperature”.
“How much will the climate system warm in response to increasing CO2?”
The answer is and always has been – ZERO!!!!
The 396 W/m^2 upwelling is a theoretical “what if” with ZERO physical reality.
ZERO 396 means ZERO 333 for the GHGs to absorb/reradiate.
ZERO GHG warming.
ZERO man caused climate change.
At last!!!! I thought I was going insane, or the website had been taken over by Greenpeace.
Nick,
“The answer is…. ZERO”. No, the answer is around 1.5 to 2.5 C per doubling of CO2, plus or minus about 1 C, so not really accurate enough to declare a climate crisis.
The IPCC thinks it might be as high as 4.5 C, but the last 50 years’ temperatures do not support such a high number.
As far as your GHG warming statement:
The amount of heat being radiated from one surface to another is
q/A = [sigma/(1/e_hot + 1/e_cold - 1)] x (T_hot^4 - T_cold^4).
396 W/m^2 is the sigma*T_hot^4 term on average, and 333 W/m^2 is the sigma*T_cold^4 term, again on average, simply because the sky has a temperature much warmer than outer space.
The ground is at T_hot due to being warmed by sunshine.
If the atmosphere were only N2 and O2, it would be completely transparent to infrared, and the ground would radiate directly to outer space, which is at essentially absolute zero. In that case, the sigma*T_cold^4 term would be zero instead of 333, and the ground would radiate to outer space at 396 W/m^2 on average, instead of the 396 - 333 = 63 W/m^2 on average that is radiated from ground to sky.
Day and night are quite different, we are talking averages here. There isn’t much wrong with Trenberth’s numbers that you refer to, a few watts one way or another….
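A rough numerical check of those averages, assuming blackbody emission (emissivities of 1) and illustrative effective temperatures rather than Trenberth’s actual inputs:

```python
# Back-of-envelope check of the 396 / 333 / 63 W/m^2 figures using sigma*T^4
# with assumed effective temperatures; not a reproduction of Trenberth's budget.
SIGMA = 5.67e-8        # W/(m^2*K^4), Stefan-Boltzmann constant

T_ground = 289.0       # K, assumed mean surface temperature
T_sky = 277.0          # K, assumed effective radiating temperature of the sky

up = SIGMA * T_ground**4    # ~396 W/m^2 upwelling from the ground
down = SIGMA * T_sky**4     # ~334 W/m^2 downwelling "back radiation"
net = up - down             # ~62 W/m^2 net surface-to-sky longwave flux

print(f"up = {up:.0f} W/m2, down = {down:.0f} W/m2, net = {net:.0f} W/m2")
```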
It is quite typical that when somebody refers to the climate sensitivity values of the IPCC, he/she uses the equilibrium climate sensitivity (ECS) values which are somewhere in the range of 3.0 to 4.5 C.
I am astonished that nearly nobody has ever referred to the following statement of the IPCC in AR5, p. 1110: “ECS determines the eventual warming in response to stabilization of atmospheric composition on multi-century time scales, while TCR determines the warming expected at a given time following any steady increase in forcing over a 50- to 100-year time scale.” And further, on page 1112, IPCC states that “TCR is a more informative indicator of future climate than ECS”.
I would just remind readers that the average value of TCR/TCS is about 1.8 C. Per the IPCC, that is the right climate sensitivity value for this century.
That is from the climate models (per Dr Frank, and per my own independently grounded previous posts here on models). We can say reliably that the models are both too uncertain (Frank) and too tuned (computational intractability => large grid cells => parameterization => tuning to best hindcast => attribution problem).
The alternative energy-budget ‘observational’ approach produces a TCR of ~1.35 C.
DMacKenzie – September 13, 2019 at 7:07 pm
MacKenzie, ……I hate telling you this, ….. but you are mimicking a “junk science” claim, because the claimed “CO2 warming effect” on surface temperatures (+1.5 to 2.5 C) has NEVER been measured or scientifically proven, ….. but only calculated via use of “fuzzy math” and the mis-use of a scientific property associated with “doubling of CO2”.
It is obviously bogus when the claimed “warming” ranges from “1.5 to 2.5 C” …. to “4.5 C”, … per doubling of CO2.
RE: “The errors have been adjusted so they sum to zero in the long-term average.”
How, specifically, do you ‘adjust’ an unknown number of unknown errors such that they sum to zero in the long term average?
What assumptions are you making?
What makes you think this is a valid thing to attempt/apply to real stove top pots, let alone unreal climate models?
Exactly! The computer programmers have an acronym for it: GIGO!
J Mac
This has always troubled me. If there are unknown forcings, or very poorly characterized forcings, and other forcings are adjusted to make everything balance or behave as expected, then there is no guarantee that those particular adjustments will accomplish the same thing when the system is in a different state. One can only be certain that the ‘fix’ is valid for that particular state, and not for all states.
One would run the models with zero CO2 forcing which would result in zero warming. Any modeling errors are necessarily zero in this case. It is by design.
How do you know there would be no warming with zero CO2 forcing? You would be calibrating the model against a standard that may or may not represent reality. That alone generates an uncertainty in the output of the model.
We don’t know whether or not there would be zero warming in the real world, but I was talking about the models and modeling errors. It’s an assumption. Your comment is, like so many others here, a non sequitur.
Tom, we do know, don’t we? Glacial periods end due to increased insolation caused by orbital mechanics. Didn’t the current interglacial begin long before CO2 started increasing? Isn’t “equilibrium” an observational artifice, like the Assumed Position used in celestial navigation (useful in determining position but not itself an actual position)? It appears that on geologic time scales “equilibrium” is a moving target with no intrinsic set point.
tom,
“We don’t know whether or not there would be zero warming in the real world”
Which is exactly what I just said.
” I was talking about the models and modeling errors”
Uncertain inputs generate uncertain outputs, even in models. That uncertainty grows with every iteration that uses uncertain outputs from the previous iteration – even in models.
“Your comment is, like so many other here, a non sequitur.”
I’m not sure you know what a non sequitur is.
Tom – September 14, 2019 at 4:27 am
Tom, you are absolutely correct.
The original intent of the “climate modeling” computer programs is/was to provide undeniable “proof” to the populace that atmospheric CO2 was causing the increases in near-surface temperatures.
“DUH”, it wasn’t until a few years after Charles Keeling started making accurate measurements of atmospheric CO2 (March 03, 1958 – 315.71 ppm) that they started creating those “climate modeling” computer programs.
And given the fact that Mauna Loa CO2 ppm data was the only, per se, accurate atmospheric “entity” that the “climate scientists” had access to, it was therefore the “controlling” factor that governed the “output” of the “climate models”.
Just like a “Fortune Telling Program”, ….. it would be designed to “tell you” what they wanted you to hear.
Agree.
… and the cloud cover change is not random: it is a fact that there has been a measured reduction in low-level cloud cover in high-latitude regions.
The entire warming can be explained by that measured reduction in cloud cover, which explains why there is regional rather than global warming, and why the 1970s cooling and the 1997-to-present pause could occur.
… and as the GCMs have over a hundred different variables which must be ‘tuned’, the cult tuned the GCMs to produce the 3 C warming.
It should be noted that there are one-dimensional CO2 vs planetary studies that estimate the warming for a doubling of atmospheric CO2 to be 0.1C to 0.2C.
We should re-look at the simple one-dimensional analysis.
The only thing we appear to know is that we do not know. By adjusting assumptions a model can get the results the alarmists want. However, if the assumptions are wrong the results will not match the climate in the real world. I see no reason to spend 16 trillion dollars or stop eating meat because I know we don’t know.
I’ve been using this very same analogy to try to explain to Leif Svalgaard and Willis Eschenbach for years that it is perfectly possible for solar activity to decrease from SC21 to SC23 and still cause a temperature increase as long as solar activity is above the equilibrium (average) level. I even made a picture about it:

Reducing the fire under the pot can reduce the rate of warming without causing cooling. Only when the fire is reduced below the point of equilibrium does the pot start cooling.
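A toy energy-balance sketch of that point, with invented numbers (heat capacity, loss coefficient, flame power) that are not meant to be realistic:

```python
# Toy pot model: dT/dt = (Q_in - k*(T - T_room)) / C. Reducing the flame while
# the input still exceeds the losses slows the warming; cooling only starts once
# the flame drops below what is needed to hold the current temperature.
C = 2000.0      # J/K, assumed heat capacity of pot + water
k = 2.0         # W/K, assumed linear loss coefficient to the room
T_room = 20.0   # deg C
dt = 1.0        # s, time step

def simulate(Q_in, T0=40.0, seconds=3600):
    T = T0
    for _ in range(int(seconds / dt)):
        T += dt * (Q_in - k * (T - T_room)) / C
    return T

print("strong flame (150 W): ", round(simulate(150.0), 1), "C after 1 h")
print("reduced flame (120 W):", round(simulate(120.0), 1), "C after 1 h (still warming)")
print("equilibrium at 120 W: ", round(T_room + 120.0 / k, 1), "C")
```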
I even made a picture about it:

(javier, do you think your picture is simple enough for svalgaard?)
fonzie
Wow. That’s well below the belt. I never would have imagined you writing that.
Disappointing.
Bindi, it’s a joke (😉)…
fonzie
Whether or not something is a joke does not depend on who writes it, but on who reads it.
My point was that Leif Svalgaard isn’t a nobody, fonzie, but a great scientist, who has nevertheless often enough been the target of rather disrespectful opinions here at WUWT.
“a joke does not depend on who writes it, but on who reads it.”
Sorry Bindidon, …… like “sexual harassment”, it doesn’t matter what is said, or who hears it, …. only who said it.
Don’t be so sensitive, Bindidon. Afonzarelli knows that Leif is a well known and respected astrophysicist and a professor at Stanford. That’s why it is a joke. You just didn’t get it. Leif has said several times that he has a thick skin. He doesn’t need you to come to his defense.
Javier
+1
Perhaps it is time for Dr. Spencer to give it a rest and let the impact of Dr. Frank’s paper sink in for a while.
Roy has now inspired an ever more infrequent guest post from me, since most of the intelligently useful ones have already been posted at least once.
I will cogitate, and almost certainly provide CtM another guest proffer. Somewhen. As personal issues now take precedence.
We can measure the temperature of the water precisely. We can calculate the amount of heat absorbed by the water over time. We seem unable to do either of those things with climate.
A more appropriate analogy for climate is using the gas stove to heat a cold room in the winter with the ceiling fan on and the windows closed. Measuring temperature is problematic because it’s different depending on distance from the stove, on the incoming cold (heat loss) from conduction through the windows, and on the temperature gradients created by the ceiling fan’s air convection. All we can do is pick a few spots and measure changes, hopefully spots away from the stove and windows. The amount of heat loss to the outside we don’t know exactly. All we know for certain, within error bars, is how much gas we consume from the gas meter.
We could improve this and add buckets of water in the kitchen, placing them near the windows and stove. Then we get some humidity from evaporation and condensation on the windows. We can measure the water temperature change, which helps estimate how much heat the water absorbed. We can measure the water level and calculate how much water is lost to evaporation. Alas, unless it’s really cold we have no ice. We have no clouds or rain. No ocean currents. Climate is way more complicated.
Love it! Wish I had said that!
The analogy leads with
But for the analogy to work, the pots have to be different sizes and shapes. Some will have a large surface area on the sides and be less impacted by a lid; some will have more water and take longer to get to equilibrium.
This represents the error in the models because we want to compare to a “standard pot”, whatever that is.
When you do that, the idea
is indeed correct; however, the final temperature of the water will be unknown and the time to get there is also unknown, all due to the differences in the pots’ thermal properties.
I do want to address this specifically, though.
This is simply not true. There will inevitably be a bias over the long term, because the component errors can’t keep cancelling over all the states of the individual components of the GCM as they all change with continued forcing. That will result in warming (or cooling) as they run, away from their balanced control values.
My real concerns are the actual, large effects of phase change, and the idea of a solid boundary (lid) in the example, which does not exist in real life.
OPEN POT: When water is slowly heated, more water vapour is produced, so the energy transferred out from the open pot is by a combination of radiation, natural convection, and the mass transport of potentially-condensable water vapour (which ‘carries’ latent heat with it).
CLOSED POT: If there is a lid, some condensing vapour will give up its energy (latent heat) to the lid and heat it. Hot droplets will recycle energy back to the pot and the whole chamber will get EQUALLY hot (150F). Energy transferred above the lid is now only due to radiation from the lid surface, plus natural convection of local air impinging the external hot lid.
The lid is impervious to vapour transport. In the real world, the ‘so-called lid’ is the phase-change clouds, which can also transfer back to vapour (condense and precipitate). These are not impenetrable, solid-like lid-barriers: indeed, they are quite unlike the solid barrier of a lid.
Clouds can never be as warm as the liquid source (oceans). They result from phase change (condensation) with the associated latent heat energy released. The lid-pot system, and all of the zone between the 150 F water being heated and the lid, will reach a constant temperature. This is also unlike the atmosphere [thermal lapse rate: about 6.5 C drop per km of rise above earth]. There are also humidity gradients in the real-world atmosphere, but not inside the lidded pot.
One additional point, the ‘lid’ cannot be a carbon dioxide system, as water vapour is over 10 times more radiation-effective with long wave electromagnetic energy than carbon dioxide (radiation from earth). So the earth’s environmental temperature cannot be due to an alleged blanket/lid. Water vapour is 20-30 times higher in concentration and is the only phase-change GHG.
Energy transfer is dominated by evaporation-humidification-condensation-precipitation + water vapour-radiation. Water vapour is self-buffering; self-regulating; self-compensating and self-restoring.
One final point: in the real earth-system, the supply of solar energy comes in from above the clouds (the ‘lid’), so the supply energy must interact with the lid first (reflection, absorption, etc.). The ‘cloud-lid’ also acts as an umbrella on sunny days!
I think that the heated pot system misses a lot of these vital mechanistic issues!
Great comment. Also, the lid is solid and CO2 is a non-phase-changing gas. CO2 only gives back IR as heat when its surroundings are -80 C. So it has absolutely zero effect near ground level, and it is dispersed by convection to where it might collide with matter.
CO2 is a life supporting passenger.
Yes, and the lid should only cover 0.04% of the pot…..
Good comments all around. But I think you overdo the analogy.
You have a hard lid which is impervious, and everything under the lid is isothermal, OK, that is what Roy describes.
Consider this:
Your basic chemistry student’s distillation apparatus. You have a heated pot and, instead of a lid, a straight-tube distillation column. Heated vapor goes up the column until it cools and condenses and starts its return to the pot. The column absolutely does have a lapse rate. Further, the column is open to the air, but it does have a virtual lid. The vapor goes up until it condenses, then no further. This virtual lid is governed by thermodynamics and is just as impenetrable to the vapor as if it were made of steel. You can reflux your solution all day and not lose a whiff of vapor just so long as you do not overheat and flood the column. On Earth, this virtual lid is what we call the tropopause: the boundary between where water vapor is and where it is not. When you turn up the heat, you get more evaporation and condensation, perhaps a bit higher up the column, but you do not get a higher temperature.
Just like those islands down in the Caribbean, in the tropics, surrounded by the ocean, where it rains every day?????????????
Dear Dr Spencer,
I think the “big question” is key to the disagreement between you and Dr Frank. I think you are addressing different questions. Here’s what I mean…
Dr Spencer’s big question: “How much will the climate system warm in response to increasing CO2?”
Dr Spencer’s answer (my interpretation): The models can’t predict this because they don’t even account for the fact that climate feedbacks change as the climate changes.
Dr Frank’s big question (my interpretation of his comments): “Given the large errors in basic inputs, can the models possibly determine whether fossil fuel CO2 can have a significant effect on the climate?”
Dr Frank’s answer (my interpretation): The models cannot possibly resolve any effects of fossil CO2 because the errors in their basic inputs are huge compared to any possible influence from fossil CO2.
You very well may both be correct. But I think Dr Frank’s is the more basic and important of the two. You may be able to prove that the models do not include the important effect you mention. But if, as Dr Frank has shown, those models are not even up to the task of their stated purpose, then they are a pointless exercise in the first place.
I both like and support your analysis of the dispute (although I’m uncertain if either author will agree). Indeed, I find the models unconvincing for both reasons above and more.
Models are a very useful research tool for comparing observations with physical mechanisms hypothesized to be in play. We know that the heat capacity of water is about 1 BTU/lb per degree Fahrenheit. If one measures about 1,000 BTUs going into 500 pounds of water, you’d expect (give or take) about a 2 F gain in temperature for the system. If the thermometer reads +5 F, it’s time to check one’s instruments and/or thinking.
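A quick check of that arithmetic, using the round numbers quoted (illustration only):

```python
# dT = Q / (m * c), with water's specific heat taken as ~1 BTU/(lb*degF).
Q = 1000.0   # BTU added to the water
m = 500.0    # lb of water
c = 1.0      # BTU/(lb*degF)
dT = Q / (m * c)
print(f"expected temperature rise: {dT:.1f} F")   # 2.0 F
```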
What models are never is reality. Yes, the physical sciences textbooks are crammed with pages of equations (models) that have been derived from a combination of first principles (hypotheses, mind you) and observations. They have been well tested and often provide useful estimates of the inter-relationships between the parameters. You know many quite well. PV=nRT, F=ma, H=Cp(T2-T1). Take yer pick.
However, all of these models are incomplete and have limitations and simplifications. Back in school, we joked about the Perfect Scientific Corporation, purveyors of fine massless pulleys and frictionless inclined planes. Sometimes the incompleteness and simplifications are acceptable errors. Sometimes they are not. F=ma is pretty good, aside from frictional losses, until relativistic effects heave into sight and the mass starts its march towards infinity. Websearch for compressibility (Z), which is used in the revised version of the Ideal Gas Law: PV=ZnRT. The careful worker is mindful of both the assumptions and limitations of a given model.
Where I part company from many workers is when they try to calibrate their model to observations and then conclude that the model parameters are “complete” and interpretation of the finer points of the universe may now proceed. Bullpucky.
Back in July, Lord Monckton and Joe Born went three rounds about what may or may not properly be included in the feedback model. Lord Monckton’s addition of a solar term to the previously published feedback model and the recalculation of the GHG feedback illustrates the problem with this frankly empirical approach. The nicely treed worker has two outs: (a) argue that sunshine is not a relevant input to the climate feedback model (Have fun storming the castle!); or, (b) argue that the GHG forcing function is still menacingly high because we’re now adding a new term that has most recently been off-setting the GHG gain, but it’s played out now and we’re all going to burn! Climate hockey, anyone?
Joe took option (b) and had some fun utilizing other contributions to the feedback transfer function, demonstrating that one can get all sorts of outcomes, from melting the mountains to condensing the atmosphere (OK, I exaggerate, a bit). Lord Monckton (in his trademark colorful style) disputed the validity of these alternative model contributions with good cause. Some of Joe’s hypothesised high-gain contributions are on the thin edge of plausible.
So, was Lord Monckton justified to point to the model outputs from these scenarios and call the hypothesis into question? Yep, the model is quite good for that. However, in Lord Monckton’s first posting on the feedback model (back in July), he claimed that the recalculated GHG gain factor was proof that there is no hazardously high GHG feedback forcing. Was Joe Born justified in demonstrating that the model construct could be used to obtain any GHG gain one cares to conjure? Yes he was, given the way Lord Monckton argues that his recalculation is the one and only true outcome.
The empirical feedback models cannot sort truth from fantasy. The models on offer have many obvious shortcomings that have been amply cited recently. One can try to wish-away those contributions by burying them into some “long-term equilibrium” or some “near-invariant” base climate signal. Not a very convincing argument when the interested reader applies for details.
In the words of the Down Easter, you can’t get there from here. The models are incapable of either proving or disproving AGW, as they can be tuned to prove near anything. Sure, it’s easier to dismiss the results that stretch credulity. But given that this exercise is trying to discern between contributions amounting to tenths of a degree per decade, I have near zero confidence that fiddling with the very few parameters believed to be known allows the model to pick between the good and less-good hypotheses.
Actually, I didn’t say anything like that.
Although Lord Monckton tricks his theory out in talk about taking the sun into account, it really boils down to the proposition that as a result of the feedback theory used in, e.g., electronic circuits and control-systems theory the global-average surface temperature at equilibrium has to be so linear a function of the value it would have without feedback as to preclude high equilibrium climate sensitivity. What I say is that feedback theory requires no such thing.
Lord Monckton has more recently so changed his argument as to base it on the proposition that IPCC statements about “near invariance” are inconsistent with a function nonlinear enough to permit high climate sensitivity. Moreover, he claims that Dr. Spencer has finally been persuaded by this latest wrinkle (which Lord Monckton adumbrated in his “Wigmaleerie” post on this site).
In my view, though, the math shows that such statements are not so restrictive and that Dr. Spencer is mistaken if he has been persuaded that they are. Unfortunately, this site declined to run a head post I proposed to demonstrate that fact, so its readership won’t get the benefit of a different viewpoint.
I will submit that we’re in agreement. I appreciate your perspective on what you wrote (one seldom gets the author’s input on that usually silent discourse!). I do apologize if you think that I put words into your mouth; but you did actually demonstrate that the math allows for either high- or low-gain GHG terms. As you said, that model construct precludes neither case. “Fun” is likely a regrettable word choice in this instance, as I was not on the receiving end of what Lord Monckton considers wit.
I’ve learned to calibrate my expectations on scientific rigour in this forum. There are frequently many transgressions that would be called out for correction in a different venue. The prime example is the use (including by Dr. Spencer) of the term equilibrium when it should properly be steady-state. (Trying to correct the usage would likely create more confusion than it would fix.) Consequently, the argument that model math (a priori) makes not a proof is perhaps a bit esoteric for this venue.
My observation is that certain workers are trying to overcome that limitation by spreading a thin layer of first principles onto the math. The battle is thus shifted to excluding the gain terms that do not agree with the worker’s bias. Lord Monckton hangs his hat on the IPCC assertion of near invariance. Dr. Spencer likes his “long-term equilibrium” and the resulting nice and flat non-GHG climate signal. Gosh. That’s a pretty sweeping assumption.
The intellectual quicksand is then considering the merit of a hypothesized mechanism in light of the suspicion that the climate behaves in a low-gain and linear manner. It is a suspicion that I frankly share; but I recognize it as merely a bias, and I in no way confuse it with proof sufficient to preclude out of hand any higher-gain or non-linear contributors. We have previously spoken about the fact (via feedback theory) that an observed over-damped response offers no illumination of the gain/linearity of the individual mechanisms driving the transfer function.
If the model workers want to undertake finding and calibrating all the significant contributors to the massive heat engine that is the climate (not excluding the oceans, clouds and ice-caps), then I can get behind that intellectually. However, the proposal to just toss nearly all of the candidates into some quasi-invariant-equilibriumish bucket is politely declined.
I will make this observation: both the pro- and anti-ACGW parties have a vested interest in the near-invariant assumption. To allow for the possibility that the baseline since 1855 could have moved on its own (up or down) would be lethal to any attempt to show the effect/non-effect of CO2 on the global temperature. So, at least the contestants have that common ground.
So, I would urge you to rework your submission. Its time will come.
A better question to start the report would have been:
“How can climate models produce any meaningful forecasts when they have zero ability to hindcast what has already happened?”
Given the current CO2 ppm, and that of, say, 100 years ago, a model run backwards can be checked precisely. A hundred years ago (~1920), the average CO2 was 304 ppm. In 1820, it was 284. Now, it is 411. Obviously, if you run the models backwards, you will find that we were in an ice age in 1920 according to the models: roughly 26% less CO2 than now, and the increase since then has been roughly 35% (a quick check of those percentages is sketched below).
G’head – Run them.
Publish it.
I already know the answer – CO2 has nothing to do with the climate, so it cannot be modeled backwards. We already have a record of the CO2 levels AND the temperatures – so let’s see the models run backwards.
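A quick check of the percentages quoted above, using the stated CO2 values (nothing here beyond arithmetic):

```python
# Percent differences between the quoted CO2 concentrations.
co2_1820, co2_1920, co2_now = 284.0, 304.0, 411.0

pct_less_than_now = 100.0 * (co2_now - co2_1920) / co2_now     # ~26%
pct_rise_since_1920 = 100.0 * (co2_now - co2_1920) / co2_1920  # ~35%

print(f"1920 level is {pct_less_than_now:.0f}% below today's")
print(f"rise since 1920: {pct_rise_since_1920:.0f}%")
```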
They will simply “tweak” them, where necessary, to make them match… (that’s why the programs are super secret! lol)