A Stove Top Analogy to Climate Models

Reposted from Dr. Roy Spencer’s blog

September 13th, 2019 by Roy W. Spencer, Ph.D.

Have you ever wondered, “How can we predict global average temperature change when we don’t even know what the global average temperature is?”

Or maybe, “How can climate models produce any meaningful forecasts when they have such large errors in their component energy fluxes?” (This is the issue I’ve been debating with Dr. Pat Frank after publication of his Propagation of Error and the Reliability of Global Air Temperature Projections. )

I like using simple analogies to demonstrate basic concepts.

Pots of Water on the Stove

A pot of water warming on a gas stove is useful for demonstrating basic concepts of energy gain and energy loss, which together determine the temperature of the water in the pot.

If we view the pot of water as a simple analogy to the climate system, with a stove flame (solar input) heating the pots, we can see that two identical pots can have the same temperature, but with different rates of energy gain and loss, if (for example) we place a lid on one of the pots.

[Figure: two pots of water at the same temperature but with different energy fluxes]

A lid reduces the warming water’s ability to cool, so the water temperature goes up (for the same rate of energy input) compared to if no lid was present. As a result, a lower flame is necessary to maintain the same water temperature as the pot without a lid. The lid is analogous to Earth’s greenhouse effect, which reduces the ability of the Earth’s surface to cool to outer space.

The two pots in the above cartoon are analogous to two climate models having different energy fluxes with known (and unknown) errors in them. The models can be adjusted so the various energy fluxes balance in the long term (over centuries) but still maintain a constant global average surface air temperature somewhere close to that observed. (The model behavior is also compared to many observed ocean and atmospheric variables. Surface air temperature is only one.)

Next, imagine that we had twenty pots with various amounts of coverage of the pots by the lids: from no coverage to complete coverage. This would be analogous to 20 climate models having various amounts of greenhouse effect (which depends mostly on high clouds [Frank’s longwave cloud forcing in his paper] and water vapor distributions). We can adjust the flame intensity until all pots read 150 deg. F. This is analogous to adjusting (say) low cloud amounts in the climate models, since low clouds have a strong cooling effect on the climate system by limiting solar heating of the surface.
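
A toy numerical sketch of this kind of tuning (all coefficients below are invented for illustration; nothing here comes from an actual climate model): each "pot" gets a different lid coverage, and the flame is tuned so that every pot settles at 150 deg. F, even though their rates of energy gain and loss differ widely.

    # Toy sketch: twenty "pots" with different lid coverages, each tuned to 150 F.
    # All numbers are invented for illustration.
    T_ROOM = 70.0      # deg F, ambient temperature (assumed)
    K_LOSS = 10.0      # W per deg F, heat-loss coefficient for a bare pot (assumed)
    T_TARGET = 150.0   # deg F, desired water temperature
    LID_EFFECT = 0.8   # assumed fraction by which a full lid cuts heat loss

    def equilibrium_temp(flame_w, lid_coverage):
        # Steady state: flame input equals heat loss through the partly lidded pot.
        k_eff = K_LOSS * (1.0 - LID_EFFECT * lid_coverage)
        return T_ROOM + flame_w / k_eff

    def flame_needed(lid_coverage):
        # Tune the flame so this pot reads exactly T_TARGET.
        k_eff = K_LOSS * (1.0 - LID_EFFECT * lid_coverage)
        return k_eff * (T_TARGET - T_ROOM)

    for i in range(20):
        lid = i / 19.0                       # lid coverage from 0 (none) to 1 (full)
        flame = flame_needed(lid)
        print(f"pot {i:2d}: lid {lid:4.2f}, flame {flame:6.1f} W, "
              f"reads {equilibrium_temp(flame, lid):5.1f} F")

Every pot reads 150 F, but the tuned flame (and therefore the rate of energy gain and loss) is different for each one, which is the point of the analogy.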

Numerically Modeling the Pot of Water on the Stove

Now, let’s say we build a time-dependent computer model of the stove-pot-lid system. It has equations for the energy input from the flame, and loss of energy from conduction, convection, radiation, and evaporation.

Clearly, we cannot model each component of the energy fluxes exactly, because (1) we can’t even measure them exactly, and (2) even if we could measure them exactly, we cannot exactly model the relevant physical processes. Modeling of real-world systems always involves approximations. We don’t know exactly how much energy is being transferred from the flame to the pot. We don’t know exactly how fast the pot is losing energy to its surroundings from conduction, radiation, and evaporation of water.

But we do know that if we can get a constant water temperature, those rates of energy gain and energy loss are equal, even though we don’t know their values.

Thus, we can either make ad-hoc bias adjustments to the various energy fluxes to get as close to the desired water temperature as we want (this is what climate models used to do many years ago), or we can make more physically-based adjustments, because every computation of a physical process that affects energy transfer has uncertainties (say, in the coefficient of turbulent heat loss from the pot to the air). This is what modern climate models do today for adjustments.

If we then take the resulting “pot model” (ha-ha) that produces a water temperature of 150 deg. F as it is integrated over time, with all of its uncertain physical approximations or ad-hoc energy flux corrections, and run it with a little more coverage of the pot by the lid, we know the modeled water temperature will increase. That part of the physics is still in the model.
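
Here is a minimal time-dependent sketch of such a "pot model" (the heat capacity and loss coefficients are invented and deliberately uncertain; this is only an illustration, not anyone's actual model). Once the model is tuned to hold 150 deg. F, nudging the lid coverage upward still produces warming, exactly as described above.

    # Toy time-dependent pot model: C dT/dt = flame - k_eff * (T - T_room).
    # All coefficients are invented for illustration.
    C = 8000.0        # J per deg F, heat capacity of the water (assumed)
    T_ROOM = 70.0     # deg F, ambient temperature
    K_LOSS = 10.0     # W per deg F for a bare pot (uncertain, assumed)
    LID_EFFECT = 0.8  # assumed fraction by which a full lid cuts heat loss

    def run_pot(flame_w, lid_coverage, hours=10.0, dt_s=1.0):
        # Simple Euler integration to (near) steady state.
        k_eff = K_LOSS * (1.0 - LID_EFFECT * lid_coverage)
        temp = T_ROOM
        for _ in range(int(hours * 3600 / dt_s)):
            temp += dt_s * (flame_w - k_eff * (temp - T_ROOM)) / C
        return temp

    lid = 0.30
    k_eff = K_LOSS * (1.0 - LID_EFFECT * lid)
    flame_tuned = k_eff * (150.0 - T_ROOM)   # tuned so the model holds 150 F

    print("tuned lid        :", round(run_pot(flame_tuned, lid), 1), "F")
    print("lid nudged upward:", round(run_pot(flame_tuned, lid + 0.05), 1), "F")

The tuned run sits at 150 F; the run with a little more lid coverage settles a few degrees warmer, even though the individual loss coefficients are uncertain.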

Example Pot Model (Getty images).

This is why climate models can have uncertain energy fluxes, with substantial known (or even unknown) errors in their energy flux components, and still be run with increasing CO2 to produce warming, even though that CO2 effect might be small compared to the errors. The errors have been adjusted so they sum to zero in the long-term average.

This directly contradicts the succinctly-stated main conclusion of Frank’s paper:

“LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models.”

I’m not saying this is ideal, or even a defense of climate model projections. Climate models should ideally produce results entirely based upon physical first principles. For the same forcing scenario (e.g. a doubling of atmospheric CO2) twenty different models should all produce about the same amount of future surface warming. They don’t.

Instead, after 30 years and billions of dollars of research they still produce from 1.5 to 4.5 deg. C of warming in response to doubling of atmospheric CO2.

The Big Question

The big question is, “How much will the climate system warm in response to increasing CO2?” The answer depends not so much upon uncertainties in the component energy fluxes in the climate system, as Frank claims, but upon how those energy fluxes change as the temperature changes.

And that’s what determines “climate sensitivity”.

This is why people like myself and Lindzen emphasize so-called “feedbacks” (which determine climate sensitivity) as the main source of uncertainty in global warming projections.

247 thoughts on “A Stove Top Analogy to Climate Models”

  1. Well, isn’t the heart of the scientific method experiment and demonstration?

    The point of this second experiment is to demonstrate that a surface with multiple outgoing heat transfer pathways cannot radiate as a BB. Just as the reflected, transmitted, and absorbed fractions of incoming radiation must sum to 1.0, the fractions of outgoing energy carried by radiative and non-radiative heat transfer processes must sum to 1.0. Radiation does not function independently of the non-radiative processes.

    The immersion heater is feeding 1,180 W of power into the insulated pot of water, which is boiling at an equilibrium temperature of 200 °F (at 6,300 feet altitude). The only significant pathway for energy out of this system is through the water’s surface.

    Any surface at 200 °F radiates at 1,021 W/m^2. This is 2.38% of the 42,800 W/m^2 power input to the system. That means 97.6% of the power input is carried away by non-radiative heat transfer processes, i.e. conduction, convection and evaporation. Likewise, the significant non-radiative heat transfer processes of the atmospheric molecules render the 396 W/m^2 LWIR radiation upwelling from the surface impossible. The ocean surface cannot radiate with a 0.97 emissivity.

    No 396 W/m^2 upwelling BB LWIR means there is:

    No energy to power the 333 W/m^2 GHG out-of-nowhere perpetual energy loop,
    No energy for the CO2/GHGs to “trap” or absorb and re-radiate “warming” the atmosphere/surface,
    No RGHE or 33 C warmer and
    No man-caused climate change.

    https://principia-scientific.org/debunking-the-greenhouse-gas-theory-with-a-boiling-water-pot/

    This second experiment validates the findings of the modest experiment.

    Second experiment and exhibits:
    https://www.linkedin.com/feed/update/urn:li:activity:6454724021350129664
    Modest experiment:
    https://www.linkedin.com/feed/update/urn:li:activity:6394226874976919552
    Annotated TFK_bams09
    https://www.linkedin.com/feed/update/urn:li:activity:6447825132869218304

    • Nick,
      Wrong again.
      The ocean does have an emissivity of about .97. In Dr. Roy’s kettle visualization, you correctly calculate 1021 W/sq.M for 200 F temperature, yet claim the 333 negative part of the SB equation does not exist when it comes to RGHE. Not sure how you can possibly reconcile both views in your thought processes.

      • [Language. Snipped. Mod]. Someone tell AOC and her Little friends they can’t have their GND or their socialism

  2. The lid will create a higher pressure and at saturation boiling a higher temperature.

    The lidded pot must “boil” at a higher temperature than the open pot.

    In fact the open pot must boil at 212 F at sea level; 200 F at 6,000 feet.

    The altitude for boiling water at 150 F is about 30,000 feet.
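
    A quick back-of-the-envelope check of that altitude figure (this uses an Antoine-type vapor-pressure fit and the standard-atmosphere pressure profile, both of which are approximations chosen just for this sketch):

        # At what altitude does water boil at 150 F?
        def saturation_pressure_pa(t_c):
            # Antoine equation for water (roughly valid 1-100 C), result in Pa.
            p_mmhg = 10.0 ** (8.07131 - 1730.63 / (233.426 + t_c))
            return p_mmhg * 133.322

        def altitude_for_pressure_m(p_pa):
            # Invert the standard-atmosphere relation P = P0*(1 - 2.25577e-5*h)^5.25588.
            return (1.0 - (p_pa / 101325.0) ** (1.0 / 5.25588)) / 2.25577e-5

        t_boil_c = (150.0 - 32.0) * 5.0 / 9.0     # 150 F in Celsius
        p_sat = saturation_pressure_pa(t_boil_c)  # roughly 25.6 kPa
        alt_ft = altitude_for_pressure_m(p_sat) * 3.281
        print(f"boiling at 150 F needs ~{p_sat / 1000:.1f} kPa, i.e. ~{alt_ft:,.0f} ft")

    This lands at roughly 33,000 ft, the same ballpark as the figure quoted above.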

    • @ Nick,
      what point are you trying to make ???

      no mention of boiling, no mention of sealed lids (therefore no increase in pressure).

      why didn’t you read the words before criticising ???

      • Just the lid alone increases pressure. No need for it to be sealed.

        To be blunt, the two pot setups are not exactly the same.
        Any attempt to model either stove top setup changes the parameters required and how those parameters are used within the model.

        Nick specified

        “The immersion heater is feeding 1,180 W of power into the insulated pot of water”

        to eliminate sloppy open flame heating of the pot and subsequent wasted heat and the losses due to conductive heat loss.

        A perfectly insulated pot with a lid would only require a heat source of 150° and time for the contents to reach 150°. Not so for the open uninsulated pot with an external heat source.

        Nick also specified an altitude of 6,300 feet (1,920 meters) since small variations in altitude affect heating and cooling. Though, Nick specifying an immersion heater eliminates changes in gas delivery and combustion changes.

        Nick ably demonstrates that a multitude of myriad parameters are necessary to model the simplest closed system demonstrations or experiments.
        Open systems increase the variables required.

        • a multitude of myriad, that sounds like a big number 😉

          “Just the lid alone increases pressure. No need for it to be sealed.” It will only increase pressure if there is ferocious boiling and the lid gap is so small as to restrict the flow. No one said it was a heavy lid preventing vapour from escaping. Don’t get into nit-picking; it is a simplistic model for discussion. The lid is supposed to be an analogy of the GHE.

          • The lids are the problem. The lids should be sealed and have different conductivities based upon the mix of GHG’s. One lid should be water vapor only and another with water vapor plus CO2.

            The pots should have substantial fins over about 75% of their surface area in order to simulate the storage of heat by the oceans.

            And, I’m not sure any of this is appropriate to model the radiation. Conduction maybe.

        • Just the lid alone increases pressure. No need for it to be sealed.

          But more so, the lid retards evaporation …… which causes the temperature to increase.

          • Why are we measuring the water and not the air temperature? Let’s change the temperature of the water to 0 C and add an ice cube. Now what is your model? That ice cube has not melted since man has been here. Doesn’t bloody matter where the lid is, does it? That is the problem: confusing temperature with a unit of energy. And totally ignoring that similar processes occur daily on earth. Processes that change the temperature by more than the total of a century of global warming in one year. Think El Nino.

    • The “lid” idea is a red herring. It is frequently raised in debates about whether cooking stoves should be tested with the lid on or off. A 2 pound cast iron tight fitting lid with a water seal maintained by condensation does indeed raise the boiling point of water, equivalent to moving the stove down five stories in an apartment building, which is to say, the effect is negligible and swamped by air pressure changes during the day.

    • “This is analogous to adjusting (say) low cloud amounts in the climate models, since low clouds have a strong cooling effect on the climate system by limiting solar heating of the surface.”

      But they also retain the warm air’s convection.

      • Totally unconnected to the comment to which you replied, but what is that unqualified assertion based on? Hint: you are talking out of your hat.

        Go and find out why low clouds are fluffy cotton looking, why they are white and what they are made of and why do we see them forming at the altitude they do form at. Then post back and explain HOW they retain warm air’s convection.

        • Look at what happens at night in wintertime: see the difference in temperature between a clear sky and a fluffy cloudy sky.

          • The comment you commented on was about solar radiance during the day. Are you arguing the retention is more than the blocking? If not, then you don’t dispute the net effect of the clouds.

          • Clouds reduce radiative heat loss to space at night but that is not what you said.

            But they also retain the warm air’s convection.

            Still waiting to hear how clouds “retain convection”. What was that supposed to mean?

  3. Pat Frank’s paper suggests you can’t predict if the lid is on or not, therefore the models are worthless at prediction.

    • Yeah, how confident can we be that the lid is on the pot or not at a given time, or how far the lid is on the pot, or whether the lid even properly fits the pot, has holes in it, is bent, is made of paper, pressed down with a hand, held at a distance slightly above, made of crystallized sugar (subject to melting after a time)?

        • The lid is an analogy for the greenhouse effect, which Dr. Frank admits exists because the only energy flux he analyses is the long wave cloud forcing [LWCF]. He shows the LWCF varies between models; I use the analogy of moving the lid. This isn’t difficult, folks. If you dont understand the basics, don’t comment.

        • Actually, I don’t take Dr. Frank’s article as an actual admission of the greenhouse effect’s existence. I could read the paper, without this assumption. He might very well believe in it, but it is not necessary for him to believe it, in order to model the models that incorporate this belief. That’s a whole ‘nother level of argument, though, which still seems not so welcome here, and so no need to pursue it.

          What Dr. Frank does, as I see it, is to reveal a level of uncertainty that looms over the model-instrument measuring error [climate models are instruments of forecasting, yes?], resulting in uncertainty about the confidence in what the model-instrument actually registers. In other words, the models (as instruments) can establish marks representing certain measures, but are these marks of the correct magnitude to represent the reality they supposedly measure? Can we be confident that the size of those “marks” are representative of reality?

          I’m not sure whether he shows that the LWCF varies between models. I think the more important thing he shows is that we cannot have any confidence in even the variations between models, because the models have wired-in uncertainty about what those variations might actually be in reality. The markings can be in a certain tight range, but our confidence about how those markings represent anything real cannot be very high, because we don’t know if the instrument producing those markings is built right or not.

          The models SIMULATE that segment of reality that we cannot be confident that they simulate correctly. The uncertainty is in the confidence that we can have in what the models SIMULATE. And this confidence seems ridiculously low, because the instrument of forecasting doesn’t have a confidently reliable interval built into it.

          We know what a degree mark on a thermometer means, for example. But what if we did not?
          Suppose, on some thermometers, the etched marks appeared where we could not know their actual meaning — they might be off by, say, 4 tenths of a degree.

          We could keep measuring temperatures, and stating uncertainties about the accuracy of the thermometer in terms of +/- tenths of a degree. But those degrees themselves would NOT be known, with any confidence, to be etched on the thermometer instrument correctly.

          The phrase, “calibration error”, particularly pops into my focus here. I think we are concerned with an uncertainty (and possible error in output) in how the instrument is built. How can we have any confidence in such an instrument?

          I might not have the deep knowledge of this stuff, but I think I might be starting to get the basic idea of it.

          If you don’t understand that an apostrophe comes between the “n” and “t” of “don’t”, then don’t comment in English. In other words, everybody’s knowledge is fragmented in one way or another, and we also make basic mistakes. Commenting is how we de-fragment our knowledge to correct mistakes, however big or small, and so I will continue to comment, as the moderator gods allow. (^_^)

        • Actually Roy, Frank’s paper discusses LWCF as a result of the models’ code that “predicts” clouds, and all this within the context of Total GHGF, and its relevance to estimating CO2F.

          Granted, as Javier noted, climate models don’t spit out an uncertainty associated with the internal error of the model; GCMs just give you a range of individual numbers based on numerous runs. Frank’s point was that GCMs don’t give you an uncertainty associated with each run, but his paper gives evidence that they should.

        • Dr Spencer I know you meant the greenhouse effect, I stretched the analogy maybe to breaking point.

          The situation as I understand it:

          1. The models are energy balanced and more or less replicate observed temperatures.

          2. The impact of the difference between predicted and observed cloud cover is greater than CO2 forcing.

          3. Observations of cloud cover are uncertain enough that the difference between prediction and observation could be an artefact of the measurement technique

          4. Pat Frank’s concern is that the uncertainty over what causes cloud cover to change is sufficient to throw model predictions into doubt. If cloud cover suddenly increased, by an amount which is within the range of model error, the CO2 effect would be overwhelmed.

          5. Given that models are iterative (they use the previous state as the starting point for the next iteration), are errors amplified?

          • Yes, and as I recall, Dr Spencer has in the past noted that GCMs do very poorly with clouds, and a cloud change of as little as 1-3% would wipe out all CO2 forcing over the 20th century.

            All Frank’s paper did was show that this uncertainty needs to be included as an error statement, and that the uncertainty is much larger than the CO2 forcing component, and as such, GCMs are meaningless.

          • Dr Deanster

            Spot on. I am really surprised that Roy is repeating this mistake for the third time.

            Roy said:
            “The errors have been adjusted so they sum to zero in the long-term average.

            “This directly contradicts the succinctly-stated main conclusion of Frank’s paper:

            “’LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. ‘”

            The first two statements above are incorrect. “Errors” cannot be “adjusted”. Propagated errors are inherent in the set of processes used as inputs, or as they arise during the iterative calculations.

            There is no contradiction of Pat Frank’s main conclusion at all.

            I was thinking about this for some time today and was surprised to see the basic logical error Roy made in his first response made again.

            In that article Roy says, defending the position that the uncertainty is low, that various errors and uncertainties cancel each other out, as evidenced by the clustered output values.

            The first claim is obviously incorrect, as uncertainties do not “cancel”; they add in quadrature when they are propagated. So from the start, Pat is speaking a language that is not understood. He is calculating the uncertainty “about the final values” spit out by the model. He is NOT saying the output values have to vary a lot.

            That this is not sinking in with Roy is worrying to me. If standard technical language is not being used, how can the discussion proceed?

            The models have all sorts of built-in limits applied by various adjustable parameter ranges. That is a choice exercised by the modeller. No problem. But the uncertainty about the outcome of that model is not related to what final answers it produces.

            Suppose you ran a model once and it came out with: for a doubling of CO2 the temp rises 1.5 C. What is the uncertainty of that concluding value?

            Pat says it is about 4 C because of the propagation of uncertainties through the calculations.

            Roy says you cannot know the uncertainty until a lot of runs are complete and then look at the range of answers produced. These two guys are not even on the same page!

            Pat is talking about the propagated uncertainty and Roy is talking about the CoV. Roy says the CoV is low so the propagated uncertainty must therefore be lower than Pat’s calculated value. Well, sorry to disappoint Roy, but that is not how error propagation works. Look it up.
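
            A small numerical sketch of that distinction (all numbers invented): an ensemble of tuned runs can cluster tightly, which is the spread Roy is pointing to, while a per-step calibration uncertainty propagated in quadrature keeps growing regardless of how tight that cluster is.

                # Spread of model outputs vs. propagated calibration uncertainty.
                import math, random
                random.seed(1)

                YEARS = 100
                TREND = 0.02       # deg C per year, the tuned model trend (assumed)
                RUN_NOISE = 0.01   # deg C per year, internal variability (assumed)
                U_STEP = 0.05      # deg C, per-step calibration uncertainty (assumed
                                   # stand-in for the +/-4 W/m^2 LWCF error statistic)

                # 1) Ensemble of runs: final values cluster tightly (low CoV).
                finals = []
                for _ in range(20):
                    temp = 0.0
                    for _ in range(YEARS):
                        temp += TREND + random.gauss(0.0, RUN_NOISE)
                    finals.append(temp)
                mean = sum(finals) / len(finals)
                spread = (sum((x - mean) ** 2 for x in finals) / len(finals)) ** 0.5
                print(f"ensemble mean {mean:.2f} C, spread {spread:.2f} C")

                # 2) Propagated calibration uncertainty: adds in quadrature at each
                #    step, so it grows like sqrt(n) no matter what the runs do.
                u_total = U_STEP * math.sqrt(YEARS)
                print(f"propagated uncertainty after {YEARS} steps: +/- {u_total:.2f} C")

            The two statistics measure different things, which is the point being made here.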

          • I hate to bring up bridge trusses again but as an engineer I can’t help myself. Every bridge truss made has an inherent uncertainty associated with its length. You can measure 1000 of them and get an average, but the fish plates you use to connect them better allow for the uncertainty associated with each individual truss. Since a large shipment from the same run can have the same uncertainty, those uncertainties can certainly add. Your overall span can wind up either short or long. And you better allow for it. Uncertainties are not the same thing as random errors, which might cancel.

          • I agree with the comment concerning uncertainty.

            However it is not uncertainty in modeling low level cloud cover.

            It is worse than uncertainty.

            The cloud changes are not random. They are being driven by something.

        • You guys lost me a long time ago. If we are talking about a doubling of CO2, I am assuming you mean the CO2 contribution due to human activity, which I understand to be 3% of total CO2 in the atmosphere with a 97% contribution from natural sources. The question I have is: if there were no humans on earth, then 100% of all CO2 would be naturally occurring. If so, would there still be climate change, and what would the models predict as the average global temperature?

          • “I understand to be 3% of total CO2 in the atmosphere with 97 % contribution from natural sources”

            This is incorrect but… human’s annual contribution might only be 3%, but it is cumulative and the natural sinks are not keeping pace. That is why the increase from 280-290ppm 200 years ago to 413 today is 100% anthropogenic.

          • loydo: you have NO (zero) proof that the beneficial CO2 increase is totally man made. You seem to struggle with basic logic.

          • Loydo – September 13, 2019 at 9:19 pm

            That is why the increase from 280-290ppm 200 years ago to 413 today is 100% anthropogenic.

            Loydo, your above comment proves that you have not overcome your nurtured addiction for the taste of CAGW flavored Kool Aid.

            Your above, per se, 200 years increase in atmospheric CO2 is a 100% natural source, the ocean waters of the world. As long as the ocean water continues to warm, atmospheric CO2 will increase.

            When those ocean waters start cooling again, atmospheric CO2 will begin decreasing.

          • Loydo
            The challenge to your assertion is that the CO2 does not rise in a manner equal to the emissions from fossil fuel. I suppose you knew that but chose to claim the cause is known anyway, hoping someone will one day validate your assertion.

            800 year-buried warm water rising from the deeps (meaning warmer than normal for the past 6 centuries) causes CO2 concentration to rise without any of humanity’s exhalations. Remember the 800 year lag?

            So, how is it you are sure a rise equal to half of that from fossil fuel burning is “100%” anthropogenic? It should be 200% and it could be zero. No one knows save you. Please explain.

        • Anyone who graduated with a technical degree should have taken Physical Chemistry and should understand the difference between systematic error and measurement error. The first experiment done was asking groups of students to measure various sticks using rulers that were on the bench. One ruler was a yardstick made by a lab tech by scribing lines by hand 1 in. apart. The other was machined on a mill to an accuracy of .001in on each 1/16in. mark.

          Some students who weren’t so technically minded were surprised that any particular stick could be measured with both rulers and would give a result accurate to 1/16in. or so. The hand-made ruler had more systematic error built in, but with enough measurements the result would be accurate, though with a wide variation. The machined ruler would have similar accuracy but with a very narrow range and very little variation.

          That is the difference between systematic error, precision, and accuracy. Systematic error cannot be averaged out or countered. It simply builds until the error completely washes out the usefulness of any results.

          Since climate models have been made with many assumptions about what contributes to climate changes they automatically have a built in wide range of outcomes. In addition there are numerical calculation errors that can cascade out of control, and errors in how the various processes interact and how repeatably they interact.

          There have been a number of posts and reviews of papers on the subject here and other places. Systematic and numerical calculation errors don’t cancel out as random error does. With every iteration of the model the error increases exponentially to the point it tends to go to an asymptotic limit.
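
          A short sketch of that distinction (invented numbers): random reading errors shrink when many measurements are averaged, while a systematic offset, like a mis-scribed ruler, survives the averaging untouched.

              # Averaging beats random error but not systematic error.
              import random, statistics
              random.seed(0)

              TRUE_LENGTH = 30.00      # inches, the stick's true length (assumed)
              RANDOM_SIGMA = 0.06      # inches, reading scatter, about 1/16" (assumed)
              SYSTEMATIC_BIAS = 0.10   # inches, a mis-scribed ruler (assumed)

              n = 1000
              good = [TRUE_LENGTH + random.gauss(0, RANDOM_SIGMA) for _ in range(n)]
              biased = [TRUE_LENGTH + SYSTEMATIC_BIAS + random.gauss(0, RANDOM_SIGMA)
                        for _ in range(n)]

              print("mean with random error only :", round(statistics.mean(good), 3))
              print("mean with systematic offset :", round(statistics.mean(biased), 3))
              # The random scatter averages toward the true value; the 0.10" offset does not.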

    • I have been watching the discussion closely and have enjoyed it very much. I still haven’t made up my mind but am tending towards Pat Frank’s position. Part of the reason for that is highlighted by the above analogy. The models are tuned so that over the long term they generate the known temperature of the earth. Some variables may be overweighted and some underweighted, but I think Dr. Spencer’s point is that we don’t need to worry about that because we know empirically that the models can generate the correct temperature.

      My problem is that, with errors in a physical system that you don’t understand, it is possible for the errors to cancel or for them to accumulate. For a given condition, we can tune models so that they generate what is expected. However, because we don’t understand the physics of the system fully, we don’t know if we can rely upon the conditions to remain constant. In the above example, if the lid melts due to the higher energy flux, then the system changes and our ability to make any useful predictions is gone.

      An analogy with the earth might be cloud cover. As the conditions change, the cloud lid could get denser or thinner and change the behaviour of the system completely, so that the previously tuned models are useless. This is what I think Dr. Frank means by error. It is a behaviour that comes from uncontrollable causes and so can’t be relied upon to behave in the same way that it did when the model was tuned.

  4. Dr. Spencer,

    How useful to understanding this is the spreadsheet model on your webpage?

    I can’t set it up to return anything but positive temperature increases over the 50 year time span.

    Also, for the 8 parameters that can be adjusted in the model spreadsheet, can you identify which of those fields corresponds to the various physical forces are being discussed in these latest set of posts on the subject.

    For instance, water depth is 1 meter (what does this represent?)
    Feedback Coef?
    radiative heat flux parameter (another name for?)
    non-radiative heat flux parameter (another name for?)
    CO2 increase (units are W/m2 per decade of energy rejected back to earth?)

    Thanks

  5. There have been years of work put into calculating ECS. 3C still seems like a pretty good bet.
    Adapted from Knutti et al 2017 meta-analysis.

    • calculating?….if you mean going back and adjusting things….they don’t understand…to get the results they want
      woops…too much clouds….adjust that down…ah much better
      …then if we tweak humidity a little…we’ll have it

      ..and don’t even mention what they’ve done to temp history….of course the models show a faster rate of warming….when you first adjust the temp history to show a faster rate of warming to fit the agenda

        • Dr Spencer:

          The answer depends not so much upon uncertainties in the component energy fluxes in the climate system, as Frank claims, but upon how those energy fluxes change as the temperature changes. And that’s what determines “climate sensitivity”.

          Loydo, it’s all about tweaking.

          That is valid as far as it goes but ignores that some parameters are tweaking sensitivity to variable natural forcings ( like stratospheric volcanoes ).

          If you program an exaggerated cooling sensitivity to volcanic eruptions you can balance that with an exaggerated warming sensitivity to GHG. That will work in your hindcasts and add a few dips for realism.

          However, once there is a pause in major eruptions (e.g. since Mt Pinatubo in 1991) the erroneous balance falls apart and all you are left with is your exaggerated warming. Your models warm too quickly. This is exactly what we see.

          I discussed this in detail on Judith Curry’s site, with detailed quotes from Hansen’s papers showing they intentionally abandoned physics-based modelling in favour of tweaking parameters:

          https://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/

          Lacis et al 1992 did physics-based modelling and got close to matching data from the 1982 El Chichon eruption. Hansen et al 2002 states that they were making arbitrary changes to parameters in order to “reconcile” GCM model output with the climate record. They admit that the new model produces twice the warming in the stratosphere compared with what was observed. An indication they are doubling the real effects, presumably resulting in a similar exaggerated cooling in the troposphere.

          Both these papers are from the same team , taking turns at being lead author.

      • Latitude, you forget the fabricated high level of (of course) human-generated aerosols to explain the lack of predicted temp rise. AFAIK, those artificially high aerosol effects are still in the models.

    • Loydo

      Sometimes you are funny! If it was 3 C it would already be up 2.5 C since 1900. It’s not. Therefore ECS is not 3 C.

      • plus 3C is for a doubling of CO2 and we are a long way from doubling.

        However, models which do not reproduce the early 20th c. warming at all can hardly be trusted just because a bunch of poorly constrained parameters have been rigged (sorry, “tuned”) to produce something similar to the late 20th c. warming.

  6. The article says:

    “I’m not saying this is ideal, or even a defense of climate model projections. Climate models should ideally produce results entirely based upon physical first principles.”

    These statements seem awkward, convoluted, perhaps indirect or even insincere. “Climate models should ideally produce results entirely based on…”

    Dr. Spencer could be being careful, or unable to speak directly (which seems necessary in science), but more likely is being subtly evasive, especially considering his comments were hoped to be responses to the compact and concise work of Dr. Frank.

  7. I understand why you cannot agree with Dr Frank.
    You are not retired yet. Your paycheck depends on it.
    I understand.

    • I work in a research institute that is affiliated with a major university. A lukewarmer that is frequently mentioned here used to be in this same institute. In a meeting I had with the director earlier this week, he mentioned that individual and the steps that he had taken to make sure he was no longer affiliated with this institute, though that individual is still with the university. Any deviation from the party line is career damaging.

      This is not to say that this necessarily applies to Dr. Spencer and his motivation, but what you say is real in many cases.

  8. Very poor analogy.
    The only way heat energy enters the pot system (water) is conduction. And the vast majority of the heat leaves via convective latent heat transport.

    We can’t be sure how much radiative (sw) energy enters the Earth’s climate system because albedo and thus insolation to the surface is constantly changing everywhere on the (nearly) spherical Earth. The modelers tweak and turn their multiple water/water vapor/water physics parameter knobs until they get something on the output they like. Junk science.

    Simple analogies are how we got into this mess with climate models from agenda driven Cargo cult modelers in the first place.

    • Yeah, why don’t we call it the “Pot-top Effect”?

      Well, the atmosphere does not really act as a pot lid — we know this, but it is such a cool name that kiddies can remember and the public can easily be fooled to believe in, because it relates to something everyday-practical.

      Yes, let’s replace one bad analogy with another. Keep it fresh. Never mind the reality of the actual physics dictating it.

      [moderation note to self: watch it RK, you’re getting mighty snarky with a respected figure in the industry]

      • @ Robert Kernodle – September 13, 2019 at 3:08 pm

        Robert, …. I just hafta gotta paraphrase your comment, …… to wit:

        Yeah, why do we call it the “Greenhouse Effect”?

        Well, the atmosphere does not really act like a greenhouse — we know this, …. but it is such a cool name that kiddies can remember and the public can easily be fooled to believe in, because it relates to something everyday-practical.

    • exactly > ” The modelers tweak and turn their multiple water/water vapor/water physics parameter knobs until they get something on the output they like.”

      …and jiggle a bunch of parameters they don’t even understand…and they all put in..and jiggle different parameters until they cancel them all out….and end up with this much CO2 causes this much warming

    • I must agree, it is an absolutely atrocious analogy.

      The pot without a lid is losing energy more rapidly because water molecules are being physically transported away with their energy. Why is this being considered as a viable cognate for radiative transfer?

      In fact, if I am not mistaken, the water will be the same temperature in either case even with the same input (lid-on/lid-off). What will change is the pressure. Correct me if I’m wrong. The water will stay at 100C until it’s all boiled off.

      • This is exactly the same error (not uncertainty) that led to the concept of “Green House Gases”. Everyone knows that greenhouses operate based on convective blocking, not CO2 increase. But we still sell AGW as the green house effect (maybe less now than 10-20 years ago). How ironic.

      • Beeze – September 13, 2019 at 4:45 pm

        The pot without a lid is losing energy more rapidly because water molecules are being physical transported away with their energy. Why is this being considered as a viable cognate for radiative transfer?

        Beeeze, energy will be radiated away from the surface of the heated water in the pot …… regardless of whether or not the water has reached its boiling “point”.

        Iffen you don’t believe me, just heat a pot of water to 210F @ STP, then turn the stove off …… and the H2O will cool back down to room temperature without losing a single molecule of water.

        if I am not mistaken, the water will be the same temperature in either case even with the same input (lid-on/lid-off). What will change is the pressure. Correct me if I’m wrong.

        You are mistaken and therefore …….You are wrong.

        If lid-on, steam pressure will increase, and its temperature will also increase.

        The two common steam-sterilizing temperatures are 121°C (250°F) and 132°C (270°F).

    • We seem to have missed one of the main points (which was not stated): the pot-on-the-stove model is NOT, and was not intended to be, an EXPERIMENT to demonstrate one or more of the attributes under discussion.
      Since it does not well represent the earth as we understand it – from an average temp perspective – it also is weak as an explanatory tool.

    • Precisely. The pot without a lid is an open system and the pot with a lid fully deployed is a closed system. You get radically different behavior with mass flow.

      Then there’s this statement:

      But we do know that if we can get a constant water temperature, that those rates of energy gain and energy loss are equal, even though we don’t know their values.

      This is absolutely not guaranteed. Temperature is constant at phase transitions so the statement above would be true for two pots of melting ice melting at radically different rates, i.e. radically different energy fluxes. Now, you can claim that the fact that it’s water implies that it’s liquid water and not near a phase boundary, but given that latent heat (of water!) is such a significant effect in the climate system, this is just bad.

      Just remember, the precision of a powered instrument turned off and reading zero is perfect too. Just don’t worry about its accuracy.

    • Well to be fair, it’s impossible to create a proper analogy of the “greenhouse gas back radiation effect” because it’s pure pseudoscience.

      • Robert, re “ pseudoscience”
        The amount of heat being radiated from one surface to another is
        q/A = [k / (1/ehot + 1/ecold - 1)] x (Thot^4 - Tcold^4).
        The back radiation is the -Tcold^4 term and is proven every day in furnaces and heat exchangers worldwide. You are simply incorrect. As far as the atmosphere:
        The ground is at Thot due to being warmed by sunshine,
        If the atmosphere was only N2 and O2, it would be completely transparent to infrared. In that case, Thot would be ground temperature and Tcold would be outer space at -270 C. But CO2 and H2O readily absorb and reradiate IR. Because the H2O and CO2 are the same temperature in the atmosphere as the N2 and O2, the ground radiates to “the sky” instead of outer space, and the “sky” is much warmer than outer space. You can take an IR thermometer and typically read the temperature of clouds at about freezing and blue sky down to -80, but $40 IR guns do not have proper emissivity settings to be accurate for this job.

        Anyway my point is that the ground temp will warm more in the daily sunshine in order to radiate the same amount of heat it receives from the sun, when there are radiating gases between the ground and outer space. That extra temperature is caused by the Sun, but is a result of the greenhouse gases mixed with the Nitrogen and Oxygen in the atmosphere. That is the Radiative Green House effect, RGHE. Prove it to yourself with some basic SB calcs. Think about your warm face radiating to the walls of your house and the walls radiating back, etc….
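
        A minimal Stefan-Boltzmann sketch of that point (round-number temperatures and an assumed emissivity, chosen only for illustration): the net radiative loss from the ground is much smaller when it radiates to a sky near freezing, or even to a -80 C clear sky, than it would be radiating straight to the near-absolute-zero of space.

            # Net radiative loss from a warm surface to different "cold" targets.
            SIGMA = 5.67e-8   # W/m^2/K^4, Stefan-Boltzmann constant
            EMISS = 0.95      # assumed surface emissivity

            def net_loss(t_surface_c, t_environment_c):
                # Simple two-temperature estimate: eps * sigma * (Ts^4 - Tenv^4).
                ts = t_surface_c + 273.15
                te = t_environment_c + 273.15
                return EMISS * SIGMA * (ts ** 4 - te ** 4)

            ground_c = 15.0   # a mild ground temperature, deg C
            for label, sky_c in [("outer space", -270.0),
                                 ("clear dry sky (IR gun reading)", -80.0),
                                 ("cloud base near freezing", 0.0)]:
                print(f"{label:31s}: net loss ~ {net_loss(ground_c, sky_c):5.1f} W/m^2")

        The warmer the effective “sky” temperature, the less the ground loses by radiation, so the surface must run warmer to shed the same solar input.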

  9. Quoting the post: “The answer depends not so much upon uncertainties in the component energy fluxes in the climate system, as Frank claims, but upon how those energy fluxes change as the temperature changes.”

    Hair splitting. The distinction between what Dr. Spencer is saying and what Dr. Frank is saying is hard for me to see. Changes in the energy fluxes cause temperature to change. Temperature changes cause energy fluxes to change. Temperature changes cause the rate of evaporation at the ocean surface to change, oceans cover 70% of the Earth surface, this changes low level cloud cover, this changes energy flux. To this simple observer, Dr. Spencer and Dr. Frank are saying the same thing. Can someone explain the difference to me?

    I do accept that Dr. Frank did not disprove the models, he only showed (conclusively IMHO) that they are not accurate enough to compute man’s influence on climate change or any potential dangers from climate change. But, Spencer and Lindzen and others have shown that also in different ways. I just don’t see a clear and meaningful difference in what Spencer and Frank are saying.

    • Andy,
      The distinction between Dr. Spencer’s position and Dr. Frank’s is quite simple. Dr. Frank claims that if you increase the long wave cloud forcing by 4 W/m^2 then the temperature increases by about 1.8 degrees each and every year. Dr. Spencer’s position is that if the long wave cloud forcing is changed then other parts of the climate system will adjust themselves so that the final temperature will reach a new equilibrium that will remain constant in time. Dr. Frank’s claim violates conservation of energy and so it is not plausible.

      • Izaak
        You have lost me with your claim that Frank’s position violates conservation of energy. The official climate models are predicting a steady increase in temperature as CO2 increases. Is that a violation of conservation of energy?

        Perhaps you could explain in a manner that even I could understand.

        P.S. I think you misunderstand Frank’s position. He is claiming that the envelope of uncertainty increases every year because the calculations are iterative and depend on the previous value to calculate the new value.

        • Clyde,
          If you suppose that there is no increase in CO2 forcing in a GCM then each model will stabilise at some equilibrium temperature where the outgoing radiation equals the incoming solar radiation, i.e. they conserve energy. Dr. Frank’s model claims instead that every year it is possible that the temperature rises by about 1.8 degrees for as long as you run the model and so will very quickly violate conservation of energy since the incoming solar flux does not change.

          I am not sure what the “envelope of uncertainty” means, but how such an envelope grows depends critically on the equations being solved. Suppose that you try and numerically calculate the terminal velocity of a skydiver by solving the equations of motion. If you only know the mass of the skydiver to 10% then your numerical answer will be out by 10% no matter how long you do the simulations. Dr. Frank’s analysis would predict that instead the error will grow the longer you run the simulations. In general if a set of differential equations converges to a fixed point then the error will also converge to a finite value and will not continue to grow.

          • ” In general if a set of differential equations converges to a fixed point then the error will also converge to a finite value and will not continue to grow.”

            This is true for ONE iteration of solving the differential equations. But when the input of the next iteration is the output of the previous iteration then uncertainty certainly increases with each iteration. A converging solution is no guarantee that there is no uncertainty as you admit – error converges to a finite value.

            In addition, error and uncertainty are two different things. Errors may not accumulate, uncertainty does.

          • ” error and uncertainty are two different things”
            The paper is titled “Propagation of Error and the Reliability of Global Air Temperature Projections”. So which of these two is it about?

          • Tim Gorman,
            In this instance, it is not about convergence for a solution iteration, it is about convergence of the model to a result over a period of time.

            Pat Frank’s emulator uses simplifications which detach it from the energy balance equation (from which it derives). If you replace it with a simple single-body heating model, net flux = C dT/dt = F – lambda*T, for a constant forcing applied to a system in steady-state flux balance, you will find that errors in F do not accumulate. They are bounded (max error in T = max error in F / lambda).

            Dr Frank’s increasing uncertainty envelope comes from the ever increasing uncertainty he attributes to F.
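
            A small sketch of that single-body model (the heat capacity and feedback parameter below are assumed round numbers, not taken from any GCM): a constant +4 W/m^2 error in F shifts the equilibrium by 4/lambda and then stops; it does not grow year after year.

                # Single-body model: C dT/dt = F - lambda * T (T is an anomaly).
                C_HEAT = 3.0e8   # J/m^2/K, effective heat capacity (assumed)
                LAMBDA = 1.3     # W/m^2/K, feedback parameter (assumed)

                def equilibrate(forcing_w_m2, years=200, dt_s=86400.0):
                    # Euler-integrate the anomaly from T = 0 to near steady state,
                    # one step per day.
                    t_anom = 0.0
                    for _ in range(int(years * 365)):
                        t_anom += dt_s * (forcing_w_m2 - LAMBDA * t_anom) / C_HEAT
                    return t_anom

                F = 3.7          # W/m^2, a constant forcing (roughly 2xCO2)
                F_ERROR = 4.0    # W/m^2, a constant flux error

                t_base = equilibrate(F)
                t_biased = equilibrate(F + F_ERROR)
                print(f"with F           : {t_base:.2f} K")
                print(f"with F + 4 W/m^2 : {t_biased:.2f} K")
                print(f"difference       : {t_biased - t_base:.2f} K "
                      f"(bounded by 4/lambda = {F_ERROR / LAMBDA:.2f} K)")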

          • “In this instance, it is not about convergence for a solution iteration, it is about convergence of the model to a result over a period of time.”

            You are missing the point. If the output of a single iteration has uncertainty associated with it then so do successive iterations based on that uncertain output. It doesn’t matter if those successive iterations converge to a result over a period of time, the value they converge to is subject to the accumulation of uncertainty over successive iterations.

            For the equation you give, net flux = F – lambda T, if the actual value of any of the components are uncertain then any output of the equation is equally uncertain. If the models *assume* anything then their output has to be uncertain, otherwise “assumptions” would not be required.

          • kribaez. –> Dr. Frank doesn’t deal with the energy balance in the equations at all. He has shown that the increase in temperature output in the models is linear. Using the linear simulation of the models’ output values he has shown how uncertainty can grow in iterative runs.

            Arguments against Dr. Frank’s paper need to deal with how accurate his linear approximation is to the models output and if iterative runs do or do not increase uncertainty. Attempting to move the argument to one about the internals of the models is not appropriate.

          • I posted this late on the previous thread, by which time the party had already moved, but it is highly relevant to this conversation:-
            Reposted:
            It is obvious that many commenters here think that the “LW cloud forcing” is a forcing. Despite its misleading name, it is not. It forms part of the pre-run net flux balance.

            Dr Frank wrote ““LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models.”

            Dr Spencer replied above:- “While I agree with the first sentence, I thoroughly disagree with the second. Together, they represent a non sequitur. ”

            Pat Frank implies in the above statement that the LWCF is a forcing. It is not. In his uncertainty estimation, he further assumes that any and all flux errors in LWCF can be translated into an uncertainty in forcing in his emulator. No, it cannot.

            Forcings – such as those used in Dr Franks’s emulator – are exogenously imposed changes to the net TOA flux, and can be thought of essentially as deterministic inputs. The cumulative forcing (which is what Dr Frank uses to predict temperature change in his emulator) is unambiguously applied to a system in net flux balance. The LWCF variable is a different animal. It is one of the multiple components in the net flux balance, and it varies in magnitude over time as other state-variables change, in particular as the temperature field changes.

            They have the same dimensions, but they are not similar in their effect.

            If I change a controlling parameter to introduce a +4 W/m^2 downward change in LWCF at TOA at the start of the 500 year spin-up period in any AOGCM, the effect on subsequent incremental temperature projections is small, bounded and may, indeed, be negligible. If, on the other hand, I introduce an additional 4 W/m^2 to the forcing series at the start of a run, then it will add typically about 3 deg C to the incremental temperature projection over any extended period.
            The reason is that, during the spin-up period, the model will be brought into net flux balance. This is not achieved by “tweaking” or “intervention”. It happens because the governing equations of the AOGCM recognise that heating is controlled by net flux imbalance. If there is a positive/negative imbalance in net TOA flux at the aggregate level then the planet warms/cools until it is brought back into balance by restorative fluxes, most notably Planck. My hypothetical change of +4 W/m^2 in LWCF at the start of the spin-up period (with no other changes to the system) would cause the absolute temperature to rise by about 3 deg C relative to its previous base. Once forcings are introduced for the run (i.e. after this spin-up period), the projected temperature gain will be expressed relative to this revised base and will be affected only by any change in sensitivity arising. It is important to note that even if such a sensitivity change were visible, Dr Frank has no way to mimic any uncertainty propagation via a changing sensitivity. It would correspond to a change in his fixed gradient which relates temperature change to cumulative net flux, but he has no degree of freedom to change this.

            None of the above should be interpreted to mean that it is OK to have errors in the internal energy of the system. It is only to emphasise that such errors and particularly systemic errors can not be treated as adjustments or uncertainties in the forcing.

          • kribaez, “during the spin-up period, the model will be brought into net flux balance. This is not achieved by “tweaking” or “intervention”. It happens because the governing equations of the AOGCM recognise that heating is controlled by net flux imbalance.”

            This puts the onus on the modelers to take the next step and demonstrate how their models will handle this external uncertainty statistic directly.

      • Izaak Walton says, “Dr. Frank claims that if you increase the long wave cloud forcing by 4 W/m^2 then the temperature increases by about 1.8 degrees each and every year.”

        That is not at all what Dr. Frank is claiming. The +/- 4 W/m^2 is a calibration error statistic. The propagation of this type of error informs one as to the reliability of the output of the models, but not what the output may be.

        As Crispin in Waterloo put it in response to a different article on this topic (https://wattsupwiththat.com/2019/09/11/critique-of-propagation-of-error-and-the-reliability-of-global-air-temperature-predictions/), “the ±n value is an inherent property of the experimental apparatus, in this case a climate model, not the numerical output value.”

        • Barbara,
          A calibration error statistic means that if you run your model with different values of a parameter then you will get a different output. In Dr. Frank’s paper he explicitly states that the +/- 4 W/m^2 error should be put into his emulation model to give a temperature error. This is the same as saying that a GCM with a higher value of long wave forcing will output a higher temperature. And the next year you add in a second lot of 4 W/m^2 to the emulation model to get an additional 1.8 degrees of temperature rise. A full GCM will not behave in this way and thus Dr. Frank’s emulation model is wrong.

          • No. That is not correct. The emulation model is used, not to calculate future output, but to calculate the reliability of the future iterations of the models. It’s calculating the increasing inability of the models to have predictive value, not what values they will predict.

          • Barbara,
            You cannot calculate the reliability of a model without calculating its future output. It is the same thing. Suppose you have a function f(x) and you want to find the error if x is known to within 10%. The error df is given by df = f(x*1.1) - f(x*0.9). So you have to be able to model what happens for different parameter values in order to calculate the error. Hence the emulation model must be capable of providing an estimate of future temperatures before it can be used to calculate the errors associated with those predictions.

          • “You cannot calculate the reliability of a model without calculating its future output”.

            Yes you can. Look at any basic textbook uncertainty analysis. Your position seems to say you have no concept of uncertainty assessment.

            Even better than that. Pat Frank knows a helluva lot more about the topic than anybody else on this forum. Why not pay attention to what he is saying and you might learn something

          • No, that’s not correct.

            The errors do not (necessarily) lead to different outputted temperatures. They reduce your confidence that those temperatures are predictive in the real world — that is, that the system is an accurate model of reality.

          • Prove it. Pat Frank established the linear emulator was adequate to reproduce the results of many GCMs under differing conditions. To use the actual GCMs it would be most effective to enumerate all possible errors, including branching for every time step over 100 years (2^100 runs for the last time step). But there may be ways to randomly sample intermediate steps while running the outer envelope (+4, +4, +4… and -4, -4…, -4…).

          • Izaak
            Per Barbara and Jordan, please take a closer look at uncertainty analyses, especially Type B errors. You can use a fisherman’s ruler graduated to 0.001″ to measure your fish to 10.602″ long. But if the ruler is actually only 6″ not 12″, then your fish is actually only 5.301″ long. Similarly, if you have fluxes Fi of 101 W/m2 in and Fo of 100 W/m2 out, you have a 1 W/m2 net flux. You can model and calculate the 1 W/m2 to 4 significant figures – assuming that the other 100 W/m2 of Fi remains constant. However, if Fi drops to 81 and Fo to 70, then you have an 11 W/m2 difference. There are huge known unknowns and also unknown unknowns. The IPCC models ASSUME most balancing factors remain constant – but we don’t know if they will or for how long. McKitrick & Christy 2018 test the predicted Tropical Tropospheric Temperature using independent satellite and radiosonde data after the models are tuned to surface temperatures. They found predicted trends of 285% of actual. Thus NOT proven, and NOT fit for policy purposes.

            AND that is just since 1979. What about the 1000 years before – or the next 100 and then 1000 years? The actual could be far from current projections – EITHER WAY. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018EA000401

      • Pat establishes a degree of uncertainty in the GCMs. I like to describe uncertainty as “lost information”, whereas Pat calls it “our ignorance”.

        Once information is lost, there is no way to recover it. We may introduce new assumptions to patch-up as we think fit, and we may force model solutions down “constraint corridors”. None of this can recover the lost information. Uncertainty cannot reduce, and adding more modelling assumptions (levels of abstraction) can only increase uncertainty.

        And the iterative nature of the GCMs is a key point you seem to be missing when you talked about solving ODEs.

        Each stage of GCM iteration inherits the uncertainty of the previous, and then adds a bit more. Pat’s uncertainty range grows and grows, while the model marches-on with its iterative solution, blissfully unaware of how far off-course it may have drifted from the real world. As the outputs relate to the future, there is no way check and correct the GCM course.

        All we can realistically do is use the GCM output as a prediction, and wait to see if reality confirms what it said.

        So the uncertainty range provided by Pat cannot violate conservation of energy. All it does is give a measure of how far from reality the GCM output could be.

  10. “This is analogous to adjusting (say) low cloud amounts in the climate models, since low clouds have a strong cooling effect on the climate system by limiting solar heating of the surface.”

    Huh? If you *adjust* the amount of low clouds instead of calculating that amount from the atmospheric conditions that actually exist, then you are doing nothing other than getting the answer you want instead of a model of reality.

    What you are suggesting is that you can predict an automobile’s speed by measuring the temperature of the engine with no actual knowledge of the air/fuel flow, the aerodynamics involved, or the internal loads on the engine such as a power steering pump or air conditioner compressor.

    If you can’t model the internals of the Earth’s thermodynamic system then how can you pick just one variable and say it is a controlling factor? That’s an *assumption* made in order to make the calculations work, not a known physical reality.

  11. Sit in front of a roaring campfire on a chilly autumn eve.

    Raise a blanket (atmosphere) up between yourself (earth) and that campfire (sun).

    Are you warmer now or colder?

    Drop that blanket down.

    Are you warmer now or colder?

    This simple thought experiment just trashed the greenhouse effect theory which says you get warmer with the blanket & colder without it.

    No atmospheric greenhouse effect, no CO2 warming, no man caused climate change.

    (The atmosphere obeys Q = U A dT same as the insulated walls of a house.)

    • Sit on your camp chair long after the fire is dead, staring into the dark and radiating away your body heat. Hang a blanket a foot away from you, do you feel warmer or colder?

      • Because it stifles the convection, not because of BB IR.
        The only way a surface radiates as a BB is into a vacuum.
        Energy leaves my body by both non-radiative (conduction, convection, advection, latent) and radiative processes.
        The blanket decreases U and dT increases.
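
        For the record, here is a minimal sketch (assumed numbers) of the Q = U A dT relation being invoked: for a fixed steady heat flow Q through a fixed area A, lowering U forces dT up.

```python
# Q = U * A * dT at steady state: for the same heat flow Q through the same
# area A, halving the overall heat-transfer coefficient U doubles dT.
Q = 100.0   # W, assumed steady heat flow
A = 2.0     # m^2, assumed area
for U in (10.0, 5.0, 2.5):          # W/(m^2*K): progressively better insulation
    dT = Q / (U * A)
    print(f"U = {U:4.1f} -> dT = {dT:5.1f} K")
```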

    • Nick,
      No, the blanket is at an intermediate temperature between your body and the campfire, and it radiates between the blanket and the campfire, and between your body and the blanket, according to the Stefan-Boltzmann equation. Since CO2 and H2O are infrared-radiating gases, analogous to your blanket, you are actually proving CO2 warming and the radiative greenhouse effect with your thought experiment – except maybe it would be clearer to assume you are standing by the fire (ground surface warmed by the sun) and you hold the blanket (GH gases) up between yourself and the cold starry night sky.

    • Nick Schroeder – September 13, 2019 at 3:05 pm

      Raise a blanket (atmosphere) up between yourself (earth) and that campfire (sun).

      Are you warmer now or colder?

      Drop that blanket down.

      Are you warmer now or colder?

      This simple thought experiment just trashed the greenhouse effect theory which says you get warmer with the blanket & colder without it.

      Nick S, … “simple” is correct, …… give it up, ….. you absolutely, positively cannot “trash” the greenhouse effect theory …. by talking “trash”, …… no matter how hard you try.

      Now iffen you want to claim that the atmosphere, per se, “blankets” the earth, …… FINE.

      But ….. “DUH”, there is no way in hades you can raise a blanket (atmosphere) up between yourself and the campfire …… simply because that per se atmospheric “blanket” is already in place.

      But iffen you wanted to lie flat on the ground …. or maybe roll a big rock up in front of the campfire and hide behind it.

      California wisdom, ……. no thank you.

  12. Dr. Spencer,

    I am going to bookmark this page, and every time someone on a thread states that Pv=mRT determines surface temperature, I am going to anchor this URL with the phrase “The first law of thermodynamics determines temperature”.

  13. “How much will the climate system warm in response to increasing CO2?”
    The answer is and always has been – ZERO!!!!

    The 396 W/m^2 upwelling is a theoretical “what if” with ZERO physical reality.
    ZERO 396 means ZERO 333 for the GHGs to absorb/reradiate.
    ZERO GHG warming.
    ZERO man caused climate change.

    • Nick,
      “The answer is…. ZERO”. No, the answer is around 1.5 to 2.5 C per doubling of C02, plus or minus about 1C., so not really accurate enough to declare a climate crisis.
      The IPCC thinks it might be as high as 4.5 C but the last 50 years temperatures do not support such a high number.
      As far as your GHG warming statement:
      The amount of heat being radiated from one surface to another is
      q/a = [k/(1/ehot + 1/ecold - 1)] x (Thot^4 - Tcold^4), where k is the Stefan-Boltzmann constant and the e’s are emissivities.
      396 is the Thot^4 term on average, and 333 is the Tcold^4 term, again on average, simply because the sky has a temperature much warmer than outer space.
      The ground is at Thot due to being warmed by sunshine,
      If the atmosphere were only N2 and O2, it would be completely transparent to infrared, and the ground would radiate directly to outer space, which is effectively at absolute zero. In that case the Tcold^4 term would be zero instead of 333, and the ground would radiate to outer space at 396 W/sq.M on average, instead of the 396 - 333 = 63 W/sq.M on average that is radiated from ground to sky.
      Day and night are quite different, we are talking averages here. There isn’t much wrong with Trenberth’s numbers that you refer to, a few watts one way or another….
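
      For what it’s worth, a minimal sketch reproduces the rough magnitudes quoted above, treating surface and sky as blackbodies (emissivity 1) at assumed effective temperatures:

```python
# Stefan-Boltzmann estimates of the fluxes quoted above (illustrative temperatures).
SIGMA = 5.670374e-8          # W/(m^2*K^4)
T_surface = 289.0            # K (~16 C), assumed global-average surface
T_sky = 277.0                # K, assumed effective radiating temperature of the sky
up = SIGMA * T_surface**4    # ~396 W/m^2 upwelling
down = SIGMA * T_sky**4      # ~334 W/m^2 back-radiation (close to the 333 quoted)
print(f"up ~{up:.0f} W/m^2, down ~{down:.0f} W/m^2, net ~{up - down:.0f} W/m^2")
# With a transparent atmosphere radiating to ~0 K space, the net loss would be the full ~396.
```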

      • It is quite typical that when somebody refers to the climate sensitivity values of the IPCC, he/she uses the equilibrium climate sensitivity (ECS) values which are somewhere in the range of 3.0 to 4.5 C.

        I am astonished that nearly nobody has ever referred to the following statement of the IPCC in AR5, p. 1110: “ECS determines the eventual warming in response to stabilization of atmospheric composition on multi-century time scales, while TCR determines the warming expected at a given time following any steady increase in forcing over a 50- to 100-year time scale.” And further, on page 1112, IPCC states that “TCR is a more informative indicator of future climate than ECS”.

        I would just remind readers that the average value of TCR/TCS is about 1.8 C. Per the IPCC, that is the right climate sensitivity value for this century.

        • That value comes from the climate models (per Dr Frank, and my own independently grounded previous posts here on models). We can say reliably that it is too uncertain (Frank) and too compromised by the chain: computational intractability => large grid cells => parameterization => tuning to best hindcast => attribution problem.

          The alternative energy budget ‘observational’ approach produces a TCR ~ 1.35.

      • DMacKenzie – September 13, 2019 at 7:07 pm

        Nick,

        No, the answer is around 1.5 to 2.5 C per doubling of C02, plus or minus about 1C., so not really accurate enough to declare a climate crisis.

        MacKenzie, ……I hate telling you this, ….. but you are mimicking a “junk science” claim because the claimed “CO2 warming effect” on surface temperatures (+1.5 to 2.5 C) has NEVER been measured or scientifically proven, ….. but only calculated via use of “fuzzy math” and the mis-use of a scientific property associated with “doubling of C02”.

        It is obviously bogus when the claimed “warming” ranges from “1.5 to 2.5 C” …. to “4.5 C”, … per doubling of C02.

  14. RE: “The errors have been adjusted so they sum to zero in the long-term average.”
    How, specifically, do you ‘adjust’ an unknown number of unknown errors such that they sum to zero in the long term average?
    What assumptions are you making?
    What makes you think this is a valid thing to attempt/apply to real stove top pots, let alone unreal climate models?

    • J Mac
      This has always troubled me. If there are unknown forcings, or very poorly characterized forcings, and other forcings are adjusted to make everything balance or behave as expected, then there is no guarantee that those particular adjustments will accomplish the same thing when the system is in a different state. One can only be certain that the ‘fix’ is valid for that particular state, and not for all states.

    • One would run the models with zero CO2 forcing which would result in zero warming. Any modeling errors are necessarily zero in this case. It is by design.

      • How do you know there would be no warming with zero CO2 forcing? You would be calibrating the model against a standard that may or may not represent reality. That alone generates an uncertainty in the output of the model.

        • We don’t know whether or not there would be zero warming in the real world, but I was talking about the models and modeling errors. It’s an assumption. Your comment is, like so many other here, a non sequitur.

          • Tom, we do know, don’t we? Glacial periods end due to increased insolation caused by orbital mechanics. Didn’t the current interglacial begin long before CO2 started increasing? Isn’t “equilibrium” an observational artifice, like the Assumed Position used in celestial navigation (useful in determining position but not itself an actual position)? It appears that on geologic time scales “equilibrium” is a moving target with no intrinsic set point.

          • tom,

            “We don’t know whether or not there would be zero warming in the real world”

            Which is exactly what I just said.

            ” I was talking about the models and modeling errors”

            Uncertain inputs generate uncertain outputs, even in models. That uncertainty grows with every iteration that uses uncertain outputs from the previous iteration – even in models.

            “Your comment is, like so many other here, a non sequitur.”

            I’m not sure you know what a non sequitur is.

          • Tom – September 14, 2019 at 4:27 am

            One would run the models with zero CO2 forcing which would result in zero warming. Any modeling errors are necessarily zero in this case. It is by design.

            Tom, you are absolutely correct.

            The original intent of the “climate modeling” computer programs is/was to provide undeniable “proof” to the populace that atmospheric CO2 was causing the increases in near-surface temperatures.

            “DUH”, it wasn’t until a few years after Charles Keeling started making accurate measurements of atmospheric CO2 (March 03, 1958 – 315.71 ppm) that they started creating those “climate modeling” computer programs.

            And given the fact that Mauna Loa CO2 ppm data was the only, per se, accurate atmospheric “entity” that the “climate scientists” had access to, it was therefore the “controlling” factor that governed the “output” of the “climate models”.

            Just like a “Fortune Telling Program”, ….. it would be designed to “tell you” what they wanted you to hear.

    • Agree.

      … and the cloud cover change is not random and it is a fact that there has been a reduction in low level cloud cover in high latitude regions.

      The entire warming can be explained by the measured reduction in cloud cover, which explains why there is regional warming rather than global warming, and why the 1970s cooling and the 1997-to-present pause could occur.

      .. and as the GCMs have over a hundred different variables which must be ‘tuned’, the cult tuned the GCMs to produce the 3C warming.

      It should be noted that there are one-dimensional CO2-versus-planetary-temperature studies that estimate the warming for a doubling of atmospheric CO2 to be 0.1C to 0.2C.

      We should re-look at the simple one-dimensional analysis.

    • The only thing we appear to know is that we do not know. By adjusting assumptions a model can get the results the alarmists want. However, if the assumptions are wrong the results will not match the climate in the real world. I see no reason to spend 16 trillion dollars or stop eating meat because I know we don’t know.

  15. with a stove flame (solar input) heating the pots

    I’ve been using this very same analogy to try to explain to Leif Svalgaard and Willis Eschenbach for years that it is perfectly possible for solar activity to decrease from SC21 to SC23 and still cause a temperature increase as long as solar activity is above the equilibrium (average) level. I even made a picture about it:
    https://i.imgur.com/CErrHrT.png
    Reducing the fire under the pot can reduce the rate of warming without causing cooling. Only when the fire is reduced below the point of equilibrium does the pot start cooling.
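
    A minimal sketch of that point, using a made-up first-order energy balance C dT/dt = F(t) - k T (all numbers illustrative): a slowly declining input still warms the pot until it finally drops below the loss term.

```python
# Pot-style energy balance with a declining input F(t): the pot keeps warming
# as long as F(t) > k*T, and only starts cooling once F(t) falls below k*T.
C, k = 10.0, 1.0            # heat capacity and loss coefficient (arbitrary units)
T = 20.0                    # initial temperature, below the input's equilibrium
dt = 0.1
for step in range(600):
    t = step * dt
    F = 30.0 - 0.05 * t     # input slowly decreasing, but initially above k*T
    T += dt * (F - k * T) / C
    if step % 100 == 0:
        print(f"t={t:5.1f}  F={F:5.2f}  T={T:5.2f}")
# Here T rises for roughly the first 30 time units, then begins a slow decline
# once the falling input dips below the loss term k*T.
```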

  16. Perhaps it is time for Dr. Spencer to give it a rest and let the impact of Dr. Frank’s paper sink in for a while.

  17. Roy has now inspired an ever more infrequent guest post, since most of the intelligently useful ones have already been posted at least once.
    I will cogitate, and almost certainly provide CtM another guest proffer. Somewhen. As personal issues now take precedence.

  18. We can measure the temperature of the water precisely. We can calculate the amount of heat absorbed by the water over time. We seem unable to do either of those things with climate.

    A more appropriate analogy for climate is using the gas stove to heat a cold room in the winter with the ceiling fan on and the windows closed. Measuring temperature is problematic because it’s different depending on distance from the stove, the incoming cold (heat loss) from conduction through the windows, and the temperature gradients created by the ceiling fan’s air convection. All we can do is pick a few spots and measure changes – hopefully spots away from the stove and windows. The amount of heat loss to the outside we don’t know exactly. All we know for certain, within error bars, is how much gas we consume, from the gas meter.

    We could improve this and add buckets of water in the kitchen, placed near the windows and stove. Then we get some humidity from evaporation and condensation on the windows. We can measure the water temperature change, which helps estimate how much heat the water absorbed. We can measure the water level and calculate how much water is lost to evaporation. Alas, unless it’s really cold we have no ice. We have no clouds or rain. No ocean currents. Climate is way more complicated.

  19. The analogy leads with

    Next, imagine that we had twenty pots with various amounts of coverage of the pots by the lids:

    But for the analogy to work, the pots have to be different sizes and shapes. Some will have a large surface area on the sides and be less affected by a lid; some will have more water and take longer to get to equilibrium.

    This represents the error in the models because we want to compare to a “standard pot”, whatever that is.

    When you do that, the idea

    “This is why climate models can have uncertain energy fluxes, with substantial known (or even unknown) errors in their energy flux components, and still be run with increasing CO2 to produce warming, even though that CO2 effect might be small compared to the errors. The errors have been adjusted so they sum to zero in the long-term average.”

    is indeed correct; however, the final temperature of the water will be unknown, and the time to get there is also unknown – all due to the differences in the pots’ thermal properties.

    • I do want to address this specifically, though.

      The errors have been adjusted so they sum to zero in the long-term average

      This is simply not true. There will inevitably be a bias over the long term, because the components can’t keep cancelling errors over all the states of the individual components of the GCM as they all change with continued forcing. That will result in warming (or cooling) as they run, away from their balanced control values.

  20. My real concerns are the actual and real large effects of phase change, and the idea of a solid boundary (lid) in the example, which does not exist in real life.

    OPEN POT: When water is slowly heated, more water vapour is produced, so the energy transferred out from the open pot is by a combination of radiation, natural convection, and the mass transport of potentially-condensable water vapour (which ‘carries’ latent heat with it).

    CLOSED POT: If there is a lid, some condensing vapour will give up its energy (latent heat) to the lid and heat it. Hot droplets will recycle energy back to the pot and the whole chamber will get EQUALLY hot (150F). Energy transferred above the lid is now only due to radiation from the lid surface, plus natural convection of local air impinging the external hot lid.

    The lid is impervious to vapour transport. In the real world, the ‘so-called lid’ is the phase-change clouds, which can transfer back to vapour or condense and precipitate. These are not impenetrable, solid-like lid-barriers: indeed they are quite unlike the solid barrier of a lid.

    Clouds can never be as warm as the liquid source (the oceans). They result from phase change (condensation), with the associated latent heat released. The lid-pot system, and all of the zone between the heated 150F water and the lid, will reach a constant temperature. This is also unlike the atmosphere [thermal lapse rate: a 6.5 C drop per km rise above the earth]. There are also humidity gradients in the real-world atmosphere, but not inside the lidded pot.

    One additional point: the ‘lid’ cannot be a carbon dioxide system, as water vapour is over 10 times more effective than carbon dioxide at interacting with long-wave electromagnetic energy (radiation from earth). So the earth’s environmental temperature cannot be due to an alleged CO2 blanket/lid. Water vapour is 20-30 times higher in concentration and is the only phase-change GHG.

    Energy transfer is dominated by evaporation-humidification-condensation-precipitation + water vapour-radiation. Water vapour is self-buffering; self-regulating; self-compensating and self-restoring.

    One final point: in the real earth system, the solar energy supply comes in from above the clouds (the ‘lid’), so it must interact with the lid first (reflection, absorption, etc.); the cloud lid also acts as an umbrella on sunny days!

    I think that the heated pot system misses a lot of these vital mechanistic issues!

    • Great comment. Also, the lid is solid while CO2 is a non-phase-changing gas. CO2 only gives back IR as heat when its surroundings are -80C. So it has absolutely zero effect near ground level, and it is truly dispersed by convection where it might collide with matter.

      CO2 is a life supporting passenger.

    • Good comments all around. But I think you overdo the analogy.
      You have a hard lid which is impervious, and everything under the lid is isothermal, OK, that is what Roy describes.
      Consider this:
      Your basic chemistry student’s distillation apparatus. You have a heated pot and instead of a lid, a straight tube distillation column. Heated vapor goes up the column until it cools and condenses and starts it’s return to the pot. The column absolutely does have a lapse rate. Further, the column is open to the air but it does have a virtual lid. The vapor goes up until it condenses, then no further. This virtual lid is governed by thermodynamics and is just as impenetrable to the vapor as if it was made of steel. You can reflux your solution all day and not lose a whiff of vapor just so long as you do not overheat and flood the column. On Earth, this virtual lid is what we call the tropopause. The boundary between where water vapor is, and where it is not. When you turn up the heat, you get more evaporation and condensation, perhaps a bit higher up the column, but you do not get a higher temperature.
      Just like those islands down in the Caribbean, in the tropics, surrounded by the ocean, where it rains every day?????????????

  21. Dear Dr Spencer,
    I think the “big question” is key to the disagreement between you and Dr Frank. I think you are addressing different questions. Here’s what I mean…

    Dr Spencer’s big question: “How much will the climate system warm in response to increasing CO2?”
    Dr Spencer’s answer (my interpretation): The models can’t predict this because they don’t even account for the fact that climate feedbacks change as the climate changes.

    Dr Frank’s big question (my interpretation of his comments): “Given the large errors in basic inputs, can the models possibly determine whether fossil fuel CO2 can have a significant effect on the climate?”
    Dr Frank’s answer (my interpretation): The models cannot possibly resolve any effects of fossil CO2 because the errors in their basic inputs are huge compared to any possible influence from fossil CO2.

    You very well may both be correct. But I think Dr Frank’s is the more basic and important of the two. You may be able to prove that the models do not include the important effect you mention. But if, as Dr Frank has shown, those models are not even up to the task of their stated purpose, then they are a pointless exercise in the first place.

    • I both like and support your analysis of the dispute (although I’m uncertain if either author will agree). Indeed, I find the models unconvincing for both reasons above and more.

      Models are a very useful research tool for comparing observations with physical mechanisms hypothesized to be in play. We know that the heat capacity of water is about 1 BTU/lb per degree Fahrenheit. If one measures about 1,000 BTUs going into 500 pounds of water, you’d expect (give or take) about a 2F gain in temperature for the system. If the thermometer reads +5F, it’s time to check one’s instruments and/or thinking.
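
      That sanity check is trivial to write down, for example:

```python
# Energy-in vs expected temperature rise, using water's ~1 BTU/(lb*F) heat capacity.
btu_in = 1000.0     # measured energy input, BTU (the figure used above)
mass_lb = 500.0     # pounds of water
cp = 1.0            # BTU per lb per deg F
rise = btu_in / (mass_lb * cp)      # -> 2.0 F
print(f"expected rise ~{rise:.1f} F; a +5 F reading says check your instruments")
```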

      What models never are is reality. Yes, the physical sciences textbooks are crammed with pages of equations (models) that have been derived from a combination of first principles (hypotheses, mind you) and observations. They have been well tested and often provide useful estimates of the inter-relationships between the parameters. You know many quite well. PV=nRT, F=ma, H=Cp(T2-T1). Take yer pick.

      However, all of these models are incomplete and have limitations and simplifications. Back in school, we joked about the Perfect Scientific Corporation, purveyors of fine massless pulleys and frictionless inclined planes. Sometimes the incompleteness and simplifications are acceptable errors. Sometimes they are not. F=ma is pretty good, aside from frictional losses, until relativistic effects heave into sight and the mass starts its march toward infinity. Web-search for compressibility (Z), which is used in the revised version of the Ideal Gas Law: PV=ZnRT. The careful worker is mindful of both the assumptions and limitations of a given model.

      Where I part company from many workers is when they try to calibrate their model to observations and then conclude that the model parameters are “complete” and interpretation of the finer points of the universe may now proceed. Bullpucky.

      Back in July, Lord Monckton and Joe Born went three rounds about what may or may not properly be included in the feedback model. Lord Monckton’s addition of a solar term to the previously published feedback model, and the recalculation of the GHG feedback, illustrate the problem with this frankly empirical approach. The nicely treed worker has two outs: (a) argue that sunshine is not a relevant input to the climate feedback model (Have fun storming the castle!); or (b) argue that the GHG forcing function is still menacingly high because we’re now adding a new term that has been offsetting the GHG gain, but it’s played out now and we’re all going to burn! Climate hockey, anyone?

      Joe took option (b) and had some fun utilizing other contributions to the feedback transfer function, demonstrating that one can get all sorts of outcomes, from melting the mountains to condensing the atmosphere (OK, I exaggerate, a bit). Lord Monckton (in his trademark colorful style) disputed the validity of these alternative model contributions, with good cause. Some of Joe’s hypothesised high-gain contributions are on the thin edge of plausible.

      So, was Lord Monckton justified in pointing to the model outputs from these scenarios and calling the hypothesis into question? Yep, the model is quite good for that. However, in Lord Monckton’s first posting on the feedback model (back in July), he claimed that the recalculated GHG gain factor was proof that there is no hazardously high GHG feedback forcing. Was Joe Born justified in demonstrating that the model construct could be used to obtain any GHG gain one cares to conjure? Yes he was, given the way Lord Monckton argues that his recalculation is the one and only true outcome.

      The empirical feedback models cannot sort truth from fantasy. The models on offer have many obvious shortcomings that have been amply cited recently. One can try to wish away those contributions by burying them in some “long-term equilibrium” or some “near-invariant” base climate signal. Not a very convincing argument when the interested reader presses for details.

      In the words of the Down Easter, you can’t get there from here. The models are incapable of either proving or disproving AGW, as they can be tuned to prove near anything. Sure, it’s easier to dismiss the results that stretch credulity. Given that this exercise is trying to discern between contributions amounting to tenths of a degree per decade, I see near-zero confidence that fiddling with the very few parameters believed to be known allows the model to pick between the good and less-good hypotheses.

      • Actually, I didn’t say anything like that.

        Although Lord Monckton tricks his theory out in talk about taking the sun into account, it really boils down to the proposition that as a result of the feedback theory used in, e.g., electronic circuits and control-systems theory the global-average surface temperature at equilibrium has to be so linear a function of the value it would have without feedback as to preclude high equilibrium climate sensitivity. What I say is that feedback theory requires no such thing.

        Lord Monckton has more recently so changed his argument as to base it on the proposition that IPCC statements about “near invariance” are inconsistent with a function nonlinear enough to permit high climate sensitivity. Moreover, he claims that Dr. Spencer has finally been persuaded by this latest wrinkle (which Lord Monckton adumbrated in his “Wigmaleerie” post on this site).

        In my view, though, the math shows that such statements are not so restrictive and that Dr. Spencer is mistaken if he has been persuaded that they are. Unfortunately, this site declined to run a head post I proposed to demonstrate that fact, so its readership won’t get the benefit of a different viewpoint.

        • I will submit that we’re in agreement. I appreciate your perspective on what you wrote (one seldom gets the author’s input on that usually silent discourse!). I do apologize if you think that I put words into your mouth; but you did actually demonstrate that the math allows for either high- or low-gain GHG terms. As you said, that model construct precludes neither case. “Fun” is likely a regrettable word choice in this instance, as I was not on the receiving end of what Lord Monckton considers wit.

          I’ve learned to calibrate my expectations of scientific rigour in this forum. There are frequently many transgressions that would be called out for correction in a different venue. The prime example is the use (including by Dr. Spencer) of the term equilibrium when it should properly be steady-state. (Trying to correct the usage would likely create more confusion than it would fix.) Consequently, the argument that model math (a priori) does not make a proof is perhaps a bit esoteric for this venue.

          My observation is that certain workers are trying to overcome that limitation by spreading a thin layer of first principles onto the math. The battle is thus shifted to excluding the gain terms that do not agree with the worker’s bias. Lord Monckton hangs his hat on the IPCC assertion of near invariance. Dr. Spencer likes his “long-term equilibrium” and the resulting nice and flat non-GHG climate signal. Gosh. That’s a pretty sweeping assumption.

          The intellectual quicksand is then considering the merit of a hypothesized mechanism in light of the suspicion that the climate behaves in a low-gain and linear manner. It is a suspicion that I frankly share; but I recognize it as merely a bias, and I in no way confuse it with proof sufficient to preclude out of hand any higher-gain or non-linear contributors. We have previously spoken about the fact (via feedback theory) that an observed over-damped response offers no illumination of the gain/linearity of the individual mechanisms driving the transfer function.

          If the model workers want to undertake finding and calibrating all the significant contributors to the massive heat engine that is the climate (not excluding the oceans, clouds and ice caps), then I can get behind that intellectually. However, the proposal to just toss nearly all of the candidates into some quasi-invariant-equilibriumish bucket is politely declined.

          I will make this observation: both the pro- and anti-AGW parties have a vested interest in the near-invariant assumption. To allow for the possibility that the baseline since 1855 could have moved on its own (up or down) would be lethal to any attempt to show the effect/non-effect of CO2 on the global temperature. So, at least the contestants have that common ground.

          So, I would urge you to rework your submission. Its time will come.

  22. A better question to start the report would have been:
    “How can climate models produce any meaningful forecasts when they have zero ability to hindcast what has already happened?”
    Given the current CO2 ppm, and that of, say, 100 years ago, a model run backwards can be checked precisely. A hundred years ago (~1920), the average CO2 was 304 ppm. In 1820, it was 284. Now, it is 411. Obviously, if you run the models backwards, you will find that we were in an ice age in 1920 according to the models. Less CO2 by 25% than now, yet the increase has been 33% since then.
    G’head – Run them.
    Publish it.
    I already know the answer – CO2 has nothing to do with the climate, so it cannot be modeled backwards. We already have a record of the CO2 levels AND the temperatures – so let’s see the models run backwards.

    • They will simply “tweak” them, where necessary, to make them match… (that’s why the programs are super secret! lol)

  23. “This is why people like myself and Lindzen emphasize so-called “feedbacks” (which determine climate sensitivity) as the main source of uncertainty in global warming projections.”

    Which is why it is important to properly resolve Lord Monckton’s criticism of the feedback mathematics.

    • I apologize in advance if this seems snide; it isn’t meant to be. But I have to ask what your criterion is for deciding whether it has been properly resolved. It seems to me that the only way in which you can resolve it for yourself is actually to master that aspect of feedback theory.

      Now, with the possible exception of the nonlinearity question, the feedback-theory aspect his criticism deals with is the most-rudimentary aspect possible; indeed, it’s so basic that the control-systems texts I’ve seen don’t even deal with it; they’re concerned with non-equilibrium, usually vector, states, which require differential equations. His criticism in contrast deals only with scalar equilibrium, for which algebra is adequate.

      Nonetheless, my experience is that the vast majority of readers here won’t have the patience to work through the math, so to most of those who don’t take Lord Monckton’s pronouncements as gospel it will seem unresolved.
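
      For readers who do want a taste of the math, here is the textbook scalar-equilibrium form (my illustration, not Lord Monckton’s or Joe Born’s specific formulation; the 1.1 C no-feedback value is an assumed placeholder):

```python
# Scalar equilibrium feedback algebra: dT_eq = dT_ref / (1 - f), where dT_ref is
# the assumed no-feedback response and f the feedback fraction. Modest changes in
# f produce large changes in the equilibrium response as f approaches 1.
dT_ref = 1.1    # assumed no-feedback warming for doubled CO2, deg C
for f in (0.0, 0.3, 0.5, 0.65, 0.75):
    print(f"f = {f:.2f} -> dT_eq = {dT_ref / (1 - f):.1f} C")
```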

    • Yes, even in something as small as a pot there are temperature gradients, hot spots, etc. Measuring the average temperature of water in a real pot is not trivial.

  24. I want one of those pots with a lid whose coverage is a function of the water temperature. I’m tired of lid off where it takes forever to boil and lid on where it boils over. We need the Iris Lid. I hereby give the patent to public domain for the benefit of humanity.

  25. How can we get more warming of the surface—on a surface that is 70% water—if we do not have more heat of condensation popping out somewhere in the overlying air column?

  26. Another item about pots boiling: at 1 Atmospheric Pressure, the water is going to boil at 100° C, and as long as there’s water, the temperature is not going to change no matter how much you turn up the burner.

    I suspect that convection, conduction, and cloud formation work on earth’s temperature the same way. A lot of added energy will go into latent heat.
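
    The latent-heat point is easy to quantify with standard textbook values for water:

```python
# Sensible vs latent heat for water (textbook values).
cp = 4186.0      # J/(kg*K), specific heat of liquid water
L_vap = 2.26e6   # J/kg, latent heat of vaporization near 100 C
per_degree = cp * 1.0
print(f"warm 1 kg of water by 1 K : {per_degree:9.0f} J")
print(f"evaporate 1 kg of water   : {L_vap:9.0f} J  (~{L_vap / per_degree:.0f}x more)")
# Once evaporation dominates, added energy raises the phase-change rate, not the temperature.
```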

  27. For all their complexity, Frank has reduced climate models to a simple linear extrapolation of radiative forcing. That does not surprise me because that is what they have been tuned to achieve. He then looks under the hood of climate models and finds huge variation between one of the output variables, cloud fraction, and what has been measured. The error is so large it swamps any possible variation caused by CO2.

    If climate models had predictive merit they would not need to be tuned. They would apply the fundamental physics to give accurate results for any climate variable in any location on Earth.

    The fact that models are tuned without any CO2 forcing to maintain a constant temperature over time does not mean they do not have large errors. It simply means the large errors cancel each other out through the tuning process that targets no temperature change.

    My simple model of Earth’s climate system has greater predictive ability than highly complex ocean–atmosphere coupled models. Water vapour increases rapidly over a water surface above 20C, thereby limiting tropical ocean surface temperature to around 30C; sea ice forms at -2C, limiting heat loss from polar oceans; water vapour condenses as air moves from tropics to poles, creating reflective clouds that limit heat input to the oceans. The area-averaged global sea surface temperature is 16C – exactly the area average of 30C at the tropics and -2C at the poles. The model remains accurate provided there is sea ice at both poles.

    The fundamental error in climate models is the attention given to the atmosphere. This stems from the model background in weather forecasting. The energy in the climate system is in the oceans. That energy drives weather.

  28. That is all fine and good, but until someone can explain why “a watched pot never boils”, I think I’ll just stick to using the microwave, radioactivity be damned.

  29. Its all very interesting, but so what. It reminds me of my earlier days as
    a Police prosecutor dealing with a rape case.

    As rape seldom occurs in front of witnesses, it comes down to – unless there
    is physical injury to the victim – a case of he said, she said.

    The “Warmer” lobby are free to arrange their evidence to suit what they
    want it to show. So the defence side, the sceptics, have to counter that.

    In the Court of public opinion, results count, so we have to play the same
    game as the Warmers..

    Using the same basic IPCC data as the warmers use, but with items such
    as clouds, which I understand they do not like to use, we could hopefully come up
    with totally different results.

    Then you can challenge the warmers as to how your models show for
    example cooling, while theirs show warming. This would be far better than
    lots of words such as today’s article, which most people including myself
    have difficulty in understanding.

    MJE VK5ELL

  30. “The big question is, “How much will the climate system warm in response to increasing CO2?” The answer depends not so much upon uncertainties in the component energy fluxes in the climate system, as Frank claims, but upon how those energy fluxes change as the temperature changes.

    And that’s what determines “climate sensitivity”.

    This is why people like myself and Lindzen emphasize so-called “feedbacks” (which determine climate sensitivity) as the main source of uncertainty in global warming projections.”

    Since I first entered this debate it was clear that this is the only point worth debating

    Instead, skeptics waste all their time and energy, undermining their own credibility, by insisting
    on silly arguments that are dead wrong. Like Pats. Like Hellers. Like Salbys. Like Sky dragons.
    These folks earned the “D” word appellation, problem is ya’ll get tarred with it by refusing to disown
    their crap.

    1. It is getting warmer, there was an LIA. The temperature record is not fake. Sorry Heller

    2. GHGs warm the planet, they do not cool the planet. Sorry Ned.

    3. C02 is a GHG. Its a powerful trace gas that helps plants grow, and warms the planet

    4. Humans are responsible for the increase in C02. Sorry Salby.

    There is no point in debating these. Oh yes yes, because all knowledge is uncertain you can
    try to debate them. But you will fail.

    Which leaves only Three Open and interesting questions:

    A) How much C02 will we emit? is RCP 8.5 crazy? (yup)
    B) How much will doubling c02 warm the planet? is Nic Lewis right ( good debate)
    C) How do we balance our investments in
    1. Mitigation ( reducing GHGS)
    2. Adaptation ( preparing for the weather of the PAST)
    3. Innovation

    Note: “C” isn’t a science debate.

    Now I want you all to notice something. Nowhere in #1-4 are models even mentioned. We know it’s getting warmer by looking at the record. No GCMs required. Plants know it’s getting warmer. Insects know this. Migrating animals know this. The damn dumb ice knows this. It is silly to deny it. It is silly to argue (as some do) that the record is a hoax. A few years ago the GWPF started its own audit of the temperature record with a panel of great skeptical statisticians. They stopped. They quit.

    Next we know GHGs (water, CO2, CH4) warm the planet; they do not cool it. We even know how they warm the planet. They reduce the rate of cooling to space. We’ve known this for over 100 years. No GCMs required.

    Next we know C02 is a GHG. basic lab work. No GCMS required.

    And last we know that our emissions are responsible for the rise in CO2. No GCMs required.

    None of the core science, core physics even requires a GCM. Even if every GCM is fatally flawed we will still know: we put the c02 there; c02 causes warming.

    • History also shows us that the contribution of CO2, at rates several times larger than today, to warmer temperatures is so trivial that it is overwhelmed by other natural climate forcings. This makes items 1, 2, and 3 only of academic interest, and completely unimportant.

      • Agree completely. Theoretical hogwash is what this is. Astronomical and geological influence aside, the Earth has a self regulating climate – hence life. Co2, very obviously, has little to nothing to do with the climate.

      • ” contribution of CO2, at rates several times larger than today”

        Today’s rate is almost 3ppm/year. When exactly was it “several times larger”?

    • First of all, Mr. Mosher, you paint all skeptics with the same brush. That’s bullsh*t and you’ve been around on this forum long enough to know it.

      Second of all, the debate for the average skeptic has never been if CO2 causes warming. The debate is the ramifications of continuing fossil fuel use versus the ramifications of not continuing fossil fuel use. The first is unknown and highly debatable. The second can be pretty much predicted with a great deal of certainty, which is death in the billions.

      Third, almost no one is actually in the debate. Russia not in. China not in. Africa not in. In fact almost the whole world is making noises but doing nothing but increasing fossil fuel use.

      So stop painting those of us who actually have some perspective as if we’re all a bunch of idiot hill billies who never went to school and wouldn’t have graduated if we did. You’re no better than Nick Stokes. You have plenty of knowledge to bring to the discussion but instead you insist on p*ssing on everyone like you’re the only one who knows anything about anything.

      • “you paint all skeptics with the same brush”

        No he doesn’t, he clearly delineates.

        “the debate for the average skeptic has never been if CO2 causes warming”

        Unfortunately this is not true. It might be for you and some others, but for many – like Mike below (“…you cannot use the co2 hypothesis…”) – the debate is exactly that. They are the D team that Steven rightly identifies.

        • bullsh*t.

          He says, and I quote:

          problem is ya’ll get tarred with it by refusing to disown
          their crap.

          I’ve personally disowned all but one of the sources that Mosher listed, and I’ve never looked at Heller’s work so I do not have an opinion. The Sky Dragons have been banned from this site. Ned was totally demolished on this site. Salby has been widely criticized on this site. Yet Mosher complains that “y’all get tarred with it by refusing to disown their crap”. Seems to me they’ve been pretty much as disowned as you can get. They’re endorsed by a few repeat commenters who have no influence and don’t represent the majority of skeptics, but Mosher paints them as if they do.

    • Mosher……
      3. ”C02 is a GHG. Its a powerful trace gas that helps plants grow, and warms the planet”

      So what cooled the planet from the 40’s to the 70’s? Did someone remove the C02 that caused the warming back then? If you add and it warms, then you must remove to cool, right? That’s your argument, not mine. Or doesn’t it work in reverse?
      If you cannot answer that, you cannot use the co2 hypothesis with any accuracy. Sorry Mosher.

    • +++

      You’ll get loads of disagreement from the D team, Steven, but you won’t find Spencer’s, Curry’s, Christy’s, Lindzen’s or even Watts’ names among them. Funny about that.

    • Mosh, we know a little bit more: a doubling of the CO2 content in the atmosphere generates an additional forcing of 3.8 W/m² at the TOA. No model needed. The question is: how much warming will that additional amount of energy generate? Nic Lewis used the best available observational data and found (with an EBM) 1.3 °C on short timescales (TCR) and 1.8 °C on longer ones, reaching an equilibrium in the oceans (ECS). The GCM mean gives higher values. Why? This is the main remaining question.

      A hot trace: patterns. The GCM projections do not reflect the patterns of observed warming in the SST. In the observations there are pronounced differences, e.g. in the tropical east Pacific; not so in the GCMs. Those differences bring the observed sensitivity down. One possibility: the observed patterns are random, some kind of internal variability. In that case the models would be right in the long run. The other possibility: the “patterns” are not a product of chance, they are a product of the forcing itself. This would point to GCM deficits, and the (in all, cooling) patterns – La Niña-like patterns in the east Pacific – would persist with ongoing forcing. Two new papers bolster this solution: https://www.researchgate.net/publication/335650196_Indian_Ocean_Warming_Trend_Reduces_Pacific_Warming_Response_to_Anthropogenic_Greenhouse_Gases_An_Interbasin_Thermostat_Mechanism/link/5d7931ce4585151ee4af4295/download ; https://www.researchgate.net/publication/334012764_Strengthening_tropical_Pacific_zonal_sea_surface_temperature_gradient_consistent_with_rising_greenhouse_gases . This would point to reliable sensitivity estimates following Nic Lewis also in the future, and some work to do for modelers. The question is not whether GHGs warm the planet; the question in science is: how much?
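
      To put rough numbers on that, here is a sketch using the commonly cited simplified forcing expression dF = 5.35 ln(C/C0) W/m² – an approximation for illustration, not Nic Lewis’s actual EBM – with the pre-industrial and present-day CO2 values quoted elsewhere in this thread:

```python
import math

# Simplified CO2 forcing and the warming implied by the sensitivities quoted above.
F_2X = 5.35 * math.log(2.0)                 # ~3.7 W/m^2 per doubling (3.8 quoted above)
dF_to_date = 5.35 * math.log(411.0 / 280.0) # forcing for the rise from ~280 to ~411 ppm
for label, sens in (("TCR ~1.3 C", 1.3), ("ECS ~1.8 C", 1.8)):
    print(f"{label}: implied warming for CO2 to date ~{sens * dF_to_date / F_2X:.2f} C")
```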

    • The discussion catalyzed by Frank is whether current GCMs can have useful predictive skill, nothing more, nothing less. How are your points relevant to that?
      Regards,
      Ethan Brand

    • “The big question is, “How much will the climate system warm in response to increasing CO2?” The answer depends not so much upon uncertainties in the component energy fluxes in the climate system, as Frank claims, but upon how those energy fluxes change as the temperature changes.

      And that’s what determines “climate sensitivity”.

      Is it just me, or does this statement strike anybody else as ridiculous, because it assumes the very thing that is being questioned?

      How can it not depend upon uncertainties in component energy fluxes and yet depend on changes in those fluxes, if the uncertainties in question are those very fluxes? The statement simply denies the importance of dealing with those uncertainties properly, in order to use those uncertain fluxes to make the calculations anyway and assert them as being reliable.

        • Okay, I guess I’ll have a go at it:

          “The big question is, “How much will the climate system warm in response to increasing CO2?” The answer depends not so much upon uncertainties in the component energy fluxes in the climate system, as Frank claims, but upon how those energy fluxes change as the temperature changes.
          And that’s what determines “climate sensitivity”.

          I spoke on this in my earlier comment … https://wattsupwiththat.com/2019/09/13/a-stove-top-analogy-to-climate-models/#comment-2794834

          This is why people like myself and Lindzen emphasize so-called “feedbacks” (which determine climate sensitivity) as the main source of uncertainty in global warming projections.”

          Feedbacks involve clouds, whose uncertainty is the issue in question. So you put aside a proper treatment of uncertainties here, use simulations based on those uncertainties as justification for ignoring the treatment of uncertainty in the feedbacks, and then proceed to talk about feedbacks as the main source of uncertainty – when feedbacks INCLUDE cloud uncertainties, the very thing in question – as if uncertainties in clouds were not part of the uncertainty in the feedbacks you wish to focus on. This is a circular refusal to focus on the issue. You can’t say that uncertainties in component energy fluxes are the wrong focus while, at the same time, claiming vindication for the very cloud energy fluxes whose uncertainty is the issue.

          Since I first entered this debate it was clear that this is the only point worth debating.

          There is no debate in a foregone conclusion that simply dismisses the other side of the debate, in which you subsume as proven in your assertion something that has yet to be proven.

          Instead, skeptics waste all their time and energy, undermining their own credibility, by insisting on silly arguments that are dead wrong. Like Pats. Like Hellers. Like Salbys. Like Sky dragons. These folks earned the “D” word appellation, problem is ya’ll get tarred with it by refusing to disown their crap.

          … empty condemnation, per my previous statements.

          1. It is getting warmer, there was an LIA. The temperature record is not fake. Sorry Heller.

          Depends on what you mean by “getting warmer”, and so what, if it’s only by a minimal amount? Small concession that you acknowledge an LIA (“Little Ice Age”). Stating that the temperature record is not fake does not make it so.

          2. GHGs warm the planet, they do not cool the planet. Sorry Ned.

          Merely stating this, while dismissing arguments to the contrary, does not make it so.

          3. C02 is a GHG. Its a powerful trace gas that helps plants grow, and warms the planet.

          GHG (“Green House Gas”) has always been an absurd label, since there were never any gases that act like a greenhouse, and to continue using the name after this fact became clear is a plain signal that the idea of a roof trapping heat is the idea being pushed by the mere utterance of this mistaken, long-outdated label. CO2 is an infrared-active gas (an IAG, if you will). The GHG-label encourages and enables the continued confusion of convection with radiation.

          4. Humans are responsible for the increase in C02. Sorry Salby.

          Merely stating this, while ignoring strong arguments to the contrary does not make it so. Sorry, Steven.

          There is no point in debating these. Oh yes yes, because all knowledge is uncertain you can try to debate them. But you will fail.

          … empty, over-bloated confidence.

          Which leaves only Three Open and interesting questions:

          … only three that YOU want to consider. There are much larger and even more interesting questions, which I’ll get to in a sec.

          A) How much C02 will we emit? is RCP 8.5 crazy? (yup)

          Better Question: Does it matter how much CO2 we emit? — RCP 8.5 science fiction? (yup)

          B) How much will doubling c02 warm the planet? is Nic Lewis right ( good debate)

          Better Question: What does it matter how much a doubling of CO2 will warm the planet? Is Nic Lewis even a relevant reference here, since the very fact that CO2 warms the planet at all can still be effectively argued? (better debate)

          C) How do we balance our investments in
          1. Mitigation ( reducing GHGS)
          2. Adaptation ( preparing for the weather of the PAST)
          3. Innovation
          Note: “C” isn’t a science debate.

          Okay, a good question, as long as you remove #1.

          Now I want you all to notice something. Nowhere in #1-4 are models even mentioned. We know it’s getting warmer by looking at the record. No GCMs required. Plants know it’s getting warmer. Insects know this. Migrating animals know this. The damn dumb ice knows this. It is silly to deny it. It is silly to argue (as some do) that the record is a hoax. A few years ago the GWPF started its own audit of the temperature record with a panel of great skeptical statisticians. They stopped. They quit.

          All those statements are married to models in one way or another, whether the actual word, “model” appears in them or not.

          (1) I am breathing, (2) eating regular meals, (3) walking upright consistently, (4) observing the passage of day into night. I want you to notice something — nowhere in #1-4 is living (being alive) even mentioned. Yet the actions I describe are married to the concept of living (being alive). Word-elimination games get us nowhere, especially when those words can be read between the lines.

          Condemning argumentation of strongly unsettled points smacks of desperation to silence the other side.

          Why was the GWPF audit stopped? Who stopped it? Was it merely stopped, impeded, out of funding, or shut down by a refusal to cooperate with the investigators? I need details to know what bearing this has on the actual state of the temperature records.

          Next we know GHGs (water, CO2, CH4) warm the planet; they do not cool it. We even know how they warm the planet. They reduce the rate of cooling to space. We’ve known this for over 100 years. No GCMs required.

          Sorry, but all crap, according to sound arguments to the contrary.

          Next we know C02 is a GHG. basic lab work. No GCMS required.

          What I know is that “GHG” is a poor choice for a name — basic communication clarity. And GCMS have the still questionable, badly-named assumption programmed into their workings.

          And last we know that our emissions are responsible for the rise in CO2. No GCMs required.

          Sorry, crap again.

          None of the core science, core physics even requires a GCM. Even if every GCM is fatally flawed we will still know: we put the c02 there; c02 causes warming.

          … empty preaching, per previous statements, and per numerous sound counterarguments to the contrary. The core physics and core science you refer to give physics and science a bad name. “Abhorrent physics” and “abhorrent science” might be better labels.

          • Good comment Robert.

            The GCMs and the (cough) “core physics” require a tropospheric hotspot to provide a thermodynamic source for the assumed extra downwelling IR due to increasing pCO2.

            But this is not observed. Somewhere out of this train wreck, Mosher still seems to “know” that increasing pCO2 causes surface warming.

          • “Somewhere out of this train wreck, Mosher still seems to “know” that increasing pCO2 causes surface warming.”

            Every alarmist has the same problem Mosher has: They assume things not in evidence and then extrapolate from there. Not scientific at all. It’s more like wishful thinking.

    • “We know it’s getting warmer by looking at the record. ”

      Actually we do *NOT* know that it is getting warmer, not if you mean maximum temperatures are increasing. We know the average temperature is increasing, but that can also be because minimum temperatures are going up. In fact, maximum temperatures could be decreasing while minimum temperatures increase more, and the average would still be going up! The second you begin using averages you lose sight of what is actually happening; the true data becomes masked.

      If you look at the number of cooling-degree days at various sites around the globe it becomes pretty obvious that maximum temperatures are not going up. If they were going up the number of cooling-degree days would be going up also, but they are not, at least not over the past three years.

      So what would cause nighttime (i.e. minimum) temperatures to go up but not maximum daytime temperatures? CO2? I highly doubt that. It would have the same impact on IR radiation whether it is day or night.
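
      The arithmetic of that point is simple – a rising average can coexist with falling maximums if minimums rise faster (made-up numbers):

```python
# Average temperature rising while maximums fall, because minimums rise faster.
tmax = [30.0, 29.8, 29.6, 29.4]   # maximums drifting down
tmin = [10.0, 10.6, 11.2, 11.8]   # minimums rising faster
for year, (hi, lo) in enumerate(zip(tmax, tmin)):
    print(f"year {year}: Tmax={hi:.1f}  Tmin={lo:.1f}  Tavg={(hi + lo) / 2:.1f}")
# Tavg climbs from 20.0 to 20.6 even though Tmax falls by 0.6.
```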

    • Models are the only way to attempt to predict the future. Saying CO2 is a greenhouse gas (by the way, what do greenhouses have to do with CO2? They operate on convective blocking) and will cause warming means nothing. How much warming? Is it significant? Is it beneficial? What are the natural cycles? Etc.

      All arguments and policy decisions come down to the models, and that is a reality. Without some predictive capability it is all hand waving.

    • Mosher, what appellation should be applied to the Warmist cranks, so that both sides can distance themselves?

      You know, like those who say sea level rise is an imminent threat (it’s not).

      Those who say climate change is responsible for every newsworthy weather event? (It’s not)

      Or that Greenland or Antarctica are melting down (they’re not).

      Surely you are equally annoyed with the cranks on your side, who distract from the interesting bits.

    • Mosher wrote: “1. It is getting warmer, there was an LIA. The temperature record is not fake. Sorry Heller”

      That’s “Hockey Stick” thinking. I can imagine Steven visualizing his favorite Hockey Stick as he recites the above.

      The truth is it has only been warming since the 1970’s, and has only warmed today to levels that are equal to or less than the temperature levels in the 1930’s. All the regional, unmodified temperature charts from around the world say so.

      So it is really not warming in the sense that Mosher implies, which is that it is warming more than is normal for the Earth; in fact the warming today is no greater than the Earth has seen in the recent past. The regional chart for the US actually shows we have been in a temperature decline since 1934. The US is currently about 1C cooler than in 1934. That’s not what I would describe as “warming”.

      The only thing that shows warming is Steven’s bogus, bastardized Hockey Stick chart, which resembles no other unmodified regional chart on Earth. Sorry Mosher.

  31. Your pot model is wrong!

    The lid should be UNDER the pot!
    Or if you prefer to keep the lid on top, the heat source should come from the TOP, not the bottom!

    Then you would have a more accurate model!

  32. The problem here is that the only “experiment” is the actual one running on Planet Earth v1.0.

    Everything else is an in silico animation that can do anything the programmers wants it to do.

    Leap Tall Buildings in a single bound? … yep.
    Stop speeding locomotives with an outstretched hand? … yep.
    Fly through the air with the power of thought? … yep.

    In junk.

    Pat Frank just showed the in silico output from dozens of junk GCMs has no clothes. The Emperor is naked.

    His analysis may not be perfect – and yes, Dr Spencer, some models do close the energy balance at the TOA (at least to within what we know) and others do not – but the fact that the internals of a complex system simulation are all wrong means we have no idea where the energy went in these simulations.

    My suspicion is there is a lot more heat lost in the polar radiative losses than the models account for. Probably a problem of the models’ implementation of the Earth’s surface versus the real geodesy of the Earth.

    Simply put, I suspect the polar TOAs are not being given enough radiative control (energy loss) in the models, which is why, when modellers balance the TOA, their models run too hot. They got the geodesy, and thus the heat transport, wrong. An inspection of Pat Frank’s Figure 4 strongly argues that point. The modelers do not know where the energy is going in their models. Thus everything becomes increasingly more junk as they time step.

  33. Dr. Spencer,

    Two issues that haven’t been mentioned in detail, but may play a role here:

    1) A previous post on WUWT demonstrated the rate of temperature increase over a 30-year span from 1910 to 1944 was nearly identical to that from 1975 to 2009, based on the HADCRUT4 set. (If someone can remember the post, please give the attribution.) IF, as you frequently contend, the atmosphere is in equilibrium, then the temperature excursion in the first half of the century must have arisen from long-term statistical fluctuations in the natural forcing. The excursion in the second half is commonly attributed to carbon dioxide forcing, along with its positive feedbacks. Your Figure 1 from your previous post shows the forcing during a 100-year annealing period on an unfortunately large scale, but from what I can make out, the longest excursion is 10 years. Converting the forcing magnitude to temperature variation is somewhat difficult, but if you have that handy for this case, I’d like to see it. This suggests to me that a zero energy gain during the annealing period is not a sufficient condition to assume a proper background to separate that effect from the carbon dioxide perturbation during the run.

    2) The primary effect of OCO appears to be modeled strictly as radiative energy transfer. The collisional transfer to nitrogen, oxygen, and water vapor is going to carry a lot of that energy through the atmosphere, and will be a fairly strong function of pressure altitude and temperature. It appears to me that these effects would fall into the category of what Pat Frank asserts, namely that excluded computations can lead to a directional bias in the resulting temperature trajectories, and any distinction between natural and anthropogenic contributions is not possible.

    I appreciate the work that both of you have put into this discussion.

  34. All this discussion reminds me of a similar situation I was involved in some years ago. I had a theory about how the electrical resistivity of an alloy changed as precipitation of a second phase occurred, based upon the electron mean free path. Another group of scientists had an alternative theory based upon the anisotropy of conduction electron scattering. We went at each other tooth and nail in the scientific journals arguing our cases, each convinced that “our” theory best explained the observed phenomenon. We finally realized that application of either theory first made the effect of the other seem small and that in fact, both were necessary to fully explain observation.

    It seems to me that the same might apply in this discussion. The various protagonists argue the merits of their own cases, whereas it is likely that all reasonable cases contribute to the whole picture of why climate models are not suited to the purpose of determining the effect of CO2 on global temperature.

    I suggest this for the following reasons:

    GCMs are not based solely upon first-principle physics and so have to be “tuned” so that they produce a stable result over time before the CO2 button is pressed. This implicitly assumes that either the tuning is valid over all states of the system or the system state remains unchanged when CO2 is introduced. Neither has been verified, and Frank’s analysis shows how uncertainty is introduced and propagated if just one factor (LWCF) is allowed to drift over possible values. Mosher and Lindzen correctly argue that uncertainties in the feedback processes similarly throw GCM predictions into serious doubt. Spencer correctly argues that the lack of inclusion of specific factors like water vapour feedback renders the GCMs worthless. There is also the fact that none of the GCMs can account for the long-term historical (natural?) record, and none of them properly incorporate the effects of external drivers like orbital effects, the influence of cosmic rays on cloud formation, and so on.

    Similarly, the physical data record has some serious issues to resolve. Changes in atmospheric CO2 seem to lag behind temperature changes on most if not all time scales, violating the notion of CO2 as a major causal factor; surely this is a most critical piece of evidence requiring validation. The land surface temperature record has serious deficiencies, such as why only the night-time temperatures around cities show a significant increase over recent times (obviously UHI) and yet these have a strong influence in the homogenisation process, leading to the suspicion that any changes in the “global temperature average” (surface or near surface) may reflect increased urbanisation rather than any direct CO2 effect; similarly, how much of the satellite record might be due mainly to urbanisation, “natural” causes or CO2 effects, and so on. All of this is compounded by cherry-picking data from the historical record to push particular agendas.

    My apologies for any errors in reducing all the well-constructed arguments of the various proponents to just a few words. All I am trying to suggest is that maybe we would all be better served if we started to take a more holistic approach to the issue of climate change: all the reasons why GCMs fail in accounting for any influence of CO2; what are the critical physical data required to invalidate a hypothesis, and how well are they established? This does not mean that new ideas should not be tested rigorously; of course they should, and the current discussion on the Frank proposal is a good example of such testing. What I am suggesting is that if they are found to have merit, further arguing for one over another is just not necessary; it is actually probably counterproductive.

    I am of course assuming that all concerned are driven by scientific altruism and not personal or political agendas!

  35. One aspect of this debate seems to be going unmentioned. Folks keep discussing the GCM’s as if they are experiments… and produce results from measurements of some real process.

    But, they are NOT experiments. They are programs, and only produce the output they are programmed to produce. The fact that 20 models produce similar results doesn’t prove a darn thing… because they are programmed to produce that result… they can’t produce any other!

    In fact, if you wanted to, you could write another program that could examine the inputs and code for the computer models, and predict the outputs… WITHOUT EVER RUNNING THE MODELS. Computer models are NOT experiments.

    And, although Dr. Frank seems correct in asserting that error propagation results in significant uncertainty… it doesn’t matter. The problem with the GCMs is not the uncertainty or error propagation… it is the models themselves, and the fact that computer models don’t prove anything as they just regurgitate the result you tell them to.

    • “Computer models are NOT experiments.”

      I know very little physics. But this is so basic that anybody should be able to understand it.

      For a model to be skillful, it must meet two criteria, at the least:

      1. It must be derived from nature

      2. It must be successfully tested against nature

      That’s how we have high certainty of F=ma, but very low certainty of pretty much any computer model.

  36. After reading the comments I feel that the majority of people have not understood the basic feature of the GCMs. A direct quote: “This is why climate models can have uncertain energy fluxes, with substantial known (or even unknown) errors in their energy flux components, and still be run with increasing CO2 to produce warming, even though that CO2 effect might be small compared to the errors. The errors have been adjusted so they sum to zero in the long-term average.”

    I would say this even more simply: Climate models do not adjust or change cloud effects on an annual basis. They keep the cloud forcing effect constant during the coming years. Therefore there is no propagation error of cloud effects in the model runs. Dr. Spencer says this in another way that the sum of the long-term average is zero. For me, it is the same thing.

    It looks like nobody who really knows the properties of GCMs has ever commented on this issue.

    • If the cloud forcing effect is highly uncertain itself then it contributes to the uncertainty of the model output. That uncertainty adds with each successive iteration of the model. Merely holding the cloud forcing effect constant doesn’t change this one iota.

      • In fact it is probably the very reason the models fail. If there were a compensatory mechanism, it would have dampened the perturbation (at least in the actual GCMs as opposed to the emulator).

    • I have a problem with this part: “Climate models do not adjust or change cloud effects on an annual basis. They keep the cloud forcing effect constant during the coming years”

      Not in the accuracy of the statement about how GCMs deal with clouds but with
      A) The assumption that in reality cloud cover is stable. How long a record of satellite images/measurements of global cloud cover do we have?
      B) Even if it is stable, what about the placement of cloud cover? A cloud in the tropics prevents ocean warming more than a cloud in the far north/south does, due to the angle of the solar energy.

  37. I reckon Dr Spencer’s post actually confirms the main tenets of Dr Frank’s article.

    “with all of its uncertain physical approximations or ad-hoc energy flux corrections”

    In other words: if models were not subjected to constant ad-hoc adjustments and tinkering they would run wild, possibly in the region of +/-20 C over the next decades. Yet when a model wants to go haywire, at that very moment a modeller appears to the rescue, like a deus ex machina, and by ad-hoc adjustments and corrections forces the model predictions to look more sensible.

    “This is why climate models can have uncertain energy fluxes, with substantial known (or even unknown) errors in their energy flux components, and still be run with increasing CO2 to produce warming, even though that CO2 effect might be small compared to the errors. The errors have been adjusted so they sum to zero in the long-term average.”

    That’s unclear to me. If the errors have been adjusted, that means they are not errors anymore. If we can nicely balance out all the energy terms in our model, that means there are no unknown errors anymore, at least in our mathematical model: all variables representing energy fluxes balance out precisely.

    • That is what I expected: someone thinks that climate modelers adjust a correction term on a yearly basis to get the wanted temperature output. It is not so. That kind of model would have no meaning, and GCMs do not behave like that.

      It seems to be too difficult to accept this basic simple property of GCMs: cloud forcing is not variable in those models but is a constant effect. The IPCC says it this way: cloud feedback has not been applied. It means that cloud forcing is not changing according to temperature variations or according to GH gas concentrations.

      • “It seems to be too difficult to accept this basic simple property of GCMs: cloud forcing is not variable in those models but is a constant effect. ”

        Except clouds are *not* a constant effect. Cloud cover is highly variable from day to day, month to month, and year to year. If cloud cover is highly variable then its “effect” is highly variable also. And since that variability has a direct impact on the entire thermodynamic system we know as Earth, any model not accounting for that variability has to have a highly uncertain result.

        • Exactly. Above I had written about the placement of clouds having an effect, as well as the daily, monthly, and yearly possibilities you correctly cover:

          “Even if it is stable, what about the placement of cloud cover? A cloud in the tropics prevents ocean warming more than a cloud in the far north/south does, due to the angle of the solar energy”

    • The errors have been adjusted to make them look like they are zero. The uncertainties cannot be so adjusted, and remain undiminished (but unnoticed, it would appear).

      The uncertainties are there at the outset, and can never be reduced.

      Some hope for uncertainty cancellation, in the way a noise process might revert to the mean in aggregate. Model uncertainty doesn’t do that, especially in an iterative model.

  38. Anton Eagle plus the following comment have it correct.

    A suggestion: we are told by the Warmers that it’s only from the late 1950s that we have had a problem with CO2.

    But what about the massive increase in war industry from about 1937, with lots and lots of CO2? So did it start to get warmer then?

    Why, it got cooler, and we were told to prepare for the Ice Age. Lots of “what to do” stuff from some of today’s warmers of course, but now it’s heat.

    What to do? Let’s panic: the end-of-the-world stuff is nice.

    So what about telling the Warmers to take their wonderful intelligent computers back to, say, 1937, when industry started to produce massive amounts of war material and of course lots of CO2?

    Then see what the PCs come up with, especially as to why it got colder by the 1970s.

    MJE VK5ELL

  39. What happens to the pot model if we introduce another factor which can have a much greater effect than the lid configuration? Let us say that this is the calorific value of the flame (cloud coverage) and we have no means of measuring it.

    We can make assumptions, we can still run our model, we can still tweak the gas flow and the lid position and we can still calculate the relative effects.

    But do we have the same confidence in the results? We immediately have an attribution problem and if the calorific value happens to be dominant (but unknown) what confidence do we have in the calculated effect of the lid?

  40. Interesting, but “greenhouse gases” aren’t a lid, unless the lid has massive golf ball sized holes in it. The energy can still escape. The question is whether it can escape at the same rate as having no lid. They are called greenhouse gases because that’s where all this nonsense came from. Someone observed that a greenhouse is warmer and wanted to know why. The earth isn’t a greenhouse. It is an open system.

    • “unless the lid has massive golf ball sized holes in it. ”

      Exactly, the “lid” should only cover 0.04% of the pot!

  41. Charles, I can appreciate the dilemma you’re pointing out, here. But it’s your example of a simplified model that still has the problem. If you ASSUME that the system must balance, and then you build a model that forces it to balance, you’ve done nothing to understand the system. You’ve only employed circular logic that can assist you not at all.

    It’s easy to say we might know roughly what the air temperature is above the pot of water warmed by a flame, or how the lid will affect the temperature immediately above the pot. But since the pot lid is the only thing you’re allowing to change, you’re not modeling the open system that the climate is. Clouds do not ONLY “trap” heat. They also reflect light directly back into space. To model the Earth’s climate, your example must also allow the strength of the effect of the lid to change the size of the flame. If the size of the pot’s lid reduces the amount of heat from the flame, then your output changes completely. If the size of the pot’s lid increases the amount of heat from the flame (as CAGW theory posits), then why has the Earth’s climate not gone into meltdown mode from some period of earlier warming and already boiled away the oceans? Since we don’t know with any precision how that works in the climate, we can’t model it. But given the fact that we still have liquid water on the planet, we ought to assume negative feedback rather than the positive feedback the climate models assume. None of the models is helping to answer the question of what the nature of the feedback is. And their circular reasoning will never help answer the question of feedback, because they’re, well, circular.

    Your model is too simple to be analogous to the Earth’s climate, and so a pointless exercise.

  42. I’m struggling with this part of Dr. Spencer’s analysis:

    “This is why climate models can have uncertain energy fluxes, with substantial known (or even unknown) errors in their energy flux components, and still be run with increasing CO2 to produce warming, even though that CO2 effect might be small compared to the errors. The errors have been adjusted so they sum to zero in the long-term average.”

    In particular, the last sentence. How can you “sum errors” to zero? The errors here, as I understand it, are an expression of uncertainty about a model being an accurate expression of the real world. What trick is this, that lets you net out uncertainty by *adding* more errors into the system? The only way you can net errors out, in the model, is by tuning for a particular preconceived outcome, but that’s a world apart from actually claiming to model reality.

    Dr. Spencer seems to be saying the models can be tuned and parameterized to consistently output “reasonable” (within the bounds of what we’d expect in the real world) and “precise” (consistent with itself and other models) predictions.

    What Dr. Frank seems to be saying is, so what. You should have very little confidence that they’re modeling reality. Dr. Spencer even seems to concede this point, when he says,

    “Climate models should ideally produce results entirely based upon physical first principles. For the same forcing scenario (e.g. a doubling of atmospheric CO2) twenty different models should all produce about the same amount of future surface warming. They don’t.”

    It’s like the two professors are talking past each other: Dr. Spencer saying “it can be done” and Dr. Frank saying “yes, but it’s without meaning”.

    The analogy I can think of is in financial modeling, because I’ve seen a lot of shenanigans there. Suppose you have a business and you want to forecast your profits. You hire a consultant who builds a super fancy model based on first principles you’ve established together. For example, you know your profits next year depend an awful lot on how many wingdings you sell in Kalamazoo at Christmas, among other things. And your profits next year depend on your profits the previous year (say, because it affects how many wingdings you can make and ship to Kalamazoo).

    And, your consultant builds a model whose output looks totally reasonable and even correctly hindcasts your profits. Notably, the models produce higher profits when a “consultant impact” parameter is present.

    Would you trust that? Probably not. Then, Dr. Frank walks in and points out that the consultant’s historical model is wildly incorrect at predicting the number of wingdings sold at Christmas in Kalamazoo. It’s been tuned to produce the “right” final result. It’s a model of something. But it’s certainly not a model of your business.

  43. Roy Spencer is correct. The models are spun up to a steady state. The errors suggested by Frank do not appear in this steady state. Think of attribution. We know the errors are there, we say. Where? In there. We know we caused more than half the warming. Where? It’s in there.

    You can get about everything wrong in a CMIP. But if you get CO2 right, what does it matter? The next step is to get CO2 and water vapor as a change in that GHG right. And then the next step is add ocean storage. Lewis and Curry tried to do this. You can still have everything else wrong as long as you get the spin up steady state right and the CO2 and water vapor right and the ocean storage right. That an error can be really really big, doesn’t mean it matters.

    We have CMIPs. They aren’t going anywhere. We may not like their shortcomings, but it’s like not liking money: you don’t trust it because it’s not backed by gold.

    I found a mistake once in an audit of a company. It could have caused everything to crash and to indicate the company had no money. It didn’t turn out that way. The company was fine. The thing I found could’ve caused something to happen but it didn’t.

      • For studying changes. My point was to eliminate an argument. To the extent the error is supposed to be a problem, that is cancelled, I think, because the model stays in balance or seeks it.

    • Or, if “you can get about everything wrong,” then why not simplify the model down to what you think needs to be right, and proceed from there?

      • Then you end up with a simple linear model, where people like Dr. Frank start asking standard questions using standard procedures about suitability and assumptions and uncertainties.

        All the extra unnecessary knobs provide a handy excuse of complexity.

  44. Because the models don’t capture the physics, they are “tuned” to what they’re supposed to predict. Just ask the IPCC about this. They admit it themselves: the models are formulated with data containing correlates of the physical effects they are purportedly trying to predict. In Data Science, and Mathematical Modeling generally, that’s called “leakage”. It’s a no-no. It’s cheating. It’s like saying: tell me how much rain and frost, etc., we’ll get and I can predict crop yields. The point is to predict the rain and frost, etc.

  45. Roy,

    Error propagation is a red herring when the models don’t even have the complete or correct physics. One example is cloud seeding. Models are not able to predict cloud cover. That’s like saying we have a model for the stovetop pot without (crucially) knowing the flame size, in your analogy. But if the models incorporate the “greenhouse effect” as conventionally defined, we have an even bigger problem. The latter is predicated on an application of the Stefan-Boltzmann equation which gives unphysical results in familiar situations. More specifically, “greenhouse effect” calculations involve adding radiant heat fluxes (IR intensities) together and then solving for temperature in J = s T^4, where s is a constant and J is the light intensity. For example, suppose we have two candles, each at temperature T. If we apply the SB equation as is conventionally done in the climate science milieu (i.e., adding light intensities and then solving for T), we will deduce a temperature for the candles together of 2^(1/4) T. Try it yourself; take the above equation and plug in 2J (one J for each candle), then solve for T (yes, two candles means twice the intensity if you capture all the energy). But, of course, adding two same-temperature sources does not give a higher-temperature result in the real world. BTW, the term “radiative forcing” is misleading and should not be used. Radiation does not “force” temperature up unless the receiving medium is cooler than the radiating one. Insisting on using “forcing” subconsciously leads folks to assume radiation in all circumstances results in heating. So that’s one issue in climatology: inaccurate language leads to unphysical formulations, in this case of SB.
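    A minimal numeric check of the two-candle arithmetic described above (a sketch only; the flame temperature is an invented illustrative number):

    ```python
    # Naively adding two equal Stefan-Boltzmann intensities and solving
    # J = sigma * T**4 for the combined "temperature", as described above.
    sigma = 5.670374419e-8   # W m^-2 K^-4
    T = 1300.0               # illustrative candle-flame temperature, K

    J = sigma * T**4                        # intensity of one candle
    T_combined = (2 * J / sigma) ** 0.25    # solve 2J = sigma * T_new**4

    print(T_combined / T)    # 2**0.25 ~ 1.19, i.e. about 19% "hotter"
    ```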

    The only way the atmosphere could possibly result in higher temperatures at the surface, all else being equal, is to change its composition such that it is a better insulator of radiant energy flux. Even if we assume that CO2 is a better IR insulator than whole air (though I have not seen experimental support for that), then in principle it could cause warming. If CO2 is indeed more IR-insulating than whole air, then the question is scale, especially relative to other phenomena. There is only one CO2 molecule per 2500 air molecules in the atmosphere today. If we took all of the CO2 and raised its temperature by dT degrees, its impact on the surrounding air would be to raise its temperature, at most, by dT / 2500 (because air has greater heat capacity than CO2, due to the presence of water vapor). That’s the approximate scale of theoretical impact, 0.0004 dT to its surroundings. So, to effect a 0.1C change on its surroundings, CO2 would have to be separately raised by 250C somehow from the surface of the earth. If IPCC models are to be believed, that would mean CO2 would have to be heated up by 2,500C to justify an increase of at least 1C of surrounding air. Shall I continue this reductio ad absurdum?

  46. That is what I expected: someone thinks that climate modelers adjust a correction term on a yearly basis to get the wanted temperature output. It is not so. That kind of model would have no meaning, and GCMs do not behave like that.

    It seems to be too difficult to accept this basic simple property of GCMs: cloud forcing is not variable in those models but is a constant effect. The IPCC says it this way: cloud feedback has not been applied. It means that cloud forcing is not changing according to temperature variations or according to GH gas concentrations.

    • Antero Ollila
      Even if cloud forcing is treated as a constant instead of a variable, it is a necessary parameter and has associated error for which only the upper and lower bounds are estimated. That error has to be taken into account for assigning uncertainty to the output of the chain of calculations. There are two ways in which the error can be handled. 1) The extreme upper-bound value is added to the nominal value, and the extreme lower-bound is subtracted; the calculations are then performed for both values. 2) The calculations are only performed for the nominal value of the parameter, and the propagation of error is performed separately. The latter approach is the easiest and quickest. However, all too frequently, the associated error is overlooked.

      In summary, when determining a calculated output, using nominal values will give an estimate of the mean value from the chain of arithmetic operations. However, as Frank is demonstrating, the uncertainty can grow so rapidly that the mean value has almost no meaning because the possible range has become so large.
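      A minimal sketch of the two approaches just described, for a hypothetical one-parameter calculation (the function and all numbers are invented for illustration only):

      ```python
      # Hypothetical chain of calculations y = a*x + c, where c is a constant
      # parameter (say, a fixed cloud-forcing term) known only to +/- dc.
      def model(x, a, c):
          return a * x + c

      a, x = 2.0, 10.0
      c, dc = 5.0, 1.5            # nominal value and its bound (illustrative)

      # Approach 1: run the calculation at the extreme parameter values.
      y_lo = model(x, a, c - dc)
      y_hi = model(x, a, c + dc)

      # Approach 2: run only the nominal case and propagate the error
      # separately (here dy/dc = 1, so the propagated uncertainty is dc).
      y_nom = model(x, a, c)
      dy = 1.0 * dc

      print(y_lo, y_hi)           # 23.5 26.5
      print(y_nom, "+/-", dy)     # 25.0 +/- 1.5
      ```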

  47. I try one more example. The TCS of the IPCC is about 1.8 C, and there is only one effect besides CO2: positive water feedback. The IPCC says that water feedback about doubles the original CO2 warming effect. Do they mention any other effects, like albedo changes, cloud feedback changes or anything else? No, they do not.

    This does not mean that I regard the IPCC’s model as correct and as fully explaining the temperature increase since 1750. I do not think so, and my own reproduction of the CO2 radiative forcing study shows that the real CO2 forcing is about 41% of the 3.7 W/m2 per 560 ppm.

    I do not think that Dr. Spencer would try once again.

  48. I know this post probably will be lost ….. BUT ….

    The more appropriate experiment would be to record the temperatures of the pot with lids of various porosities and create a record. Then get a computer model that uses an algorithm that inaccurately predicts the size of the lid going back in time… and then claim that the model is accurately predicting the temp of future lids.

    That is what Frank’s paper did.

    You’ll note it doesn’t directly calculate the forcing from a particular lid; that is settled physics. The issue is predicting which lid will apply.

  49. In all this discussion the term “error” has been used with two different meanings. One meaning is the difference between a value and the true value. This is a bias. Biases can add and subtract, and thus can cancel in part. The other meaning is a statistical error, a measure of the certainty of a value. Statistical errors always add and cannot cancel out. We need to be much more careful in our use of these terms in this discussion.
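    A minimal numeric sketch of that distinction, using invented component errors: signed biases can largely offset one another, while statistical uncertainties combine in quadrature and only grow:

    ```python
    import math

    # Invented component errors, purely for illustration (units arbitrary).
    biases = [+2.0, -1.5, -0.4]          # signed offsets; they can cancel
    uncertainties = [2.0, 1.5, 0.4]      # 1-sigma uncertainties of the same sizes

    net_bias = sum(biases)
    combined_u = math.sqrt(sum(u**2 for u in uncertainties))

    print(round(net_bias, 2))      # 0.1  -> near zero after offsets cancel
    print(round(combined_u, 2))    # 2.53 -> the quadrature sum does not cancel
    ```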

  50. I have to confess that the more I learn about climate models the less confidence I have in their usefulness. As Steven Mosher points out, it is the feedbacks that are important. Unfortunately the major feedbacks can be positive or negative depending on the circumstances and the really important ones like cloud cover cannot be modelled anyway for a number of reasons. So what use is the CMIP6 series? We know that it has the same predictive skills as the previous models (none).

    The take away message is that policy is being based on model output but politicians have not got a clue what that means. They hear the tripe put out by ER and an adolescent who should be in school. Words like crisis, emergency and tipping point are being cynically used to panic the decision makers, the public and the children. Tales of extreme weather dominate our daily news but the data shows otherwise. But how many scientists point this out? Instead we see government scientists cherry picking temperature measurements, comparisons and dates in order to claim doubtful records at every opportunity, not to mention re-writing the data, editing the historic record and changing the gradients of the temperature charts.

    Climate science has become a cesspit of misinformation. Where are the true scientists?

    We see some here, and we should salute their honesty and integrity. But the majority keep their heads down while others ruthlessly exploit the scam that climate change has become.

    I fully expect this comment to be removed, but I hope it is allowed to remain. The current debate is about whether our ignorance of the model inputs renders the output meaningless. It has more importance than simply academic interest. For that reason, I hope the current impasse can eventually be resolved.

  51. Wow, I think this comment is very interesting (I haven’t read all of the thread, so if this has been discussed I apologize in advance)

    “This is why climate models can have uncertain energy fluxes, with substantial known (or even unknown) errors in their energy flux components, and still be run with increasing CO2 to produce warming, even though that CO2 effect might be small compared to the errors. The errors have been adjusted so they sum to zero in the long-term average.”

    I think this is where Pat and Roy are at odds. Yes, the climate models have been “adjusted” to produce output that makes sense. By definition, when you have to adjust the errors for output to make sense you get the physics wrong. As I said in the last post by Roy, the way to think about Pat’s paper is that it shows how much adjusting needs to be done in order for the climate models to work.

    I really like analogies, but I don’t think Roy’s is a very good one. I would start with a pot without a lid and a pot with a very porous screen over it. I would next ask the question: what happens to the temperature if the screen mesh is made slightly smaller? My guess is that the answer is that the convection forces increase slightly through the slightly smaller openings. I believe this is what Steven Wilde would argue is happening. A slight change in convective forces offsets any effect of higher CO2 levels.

    • Nelson
      “By definition, when you have to adjust the errors for output to make sense you get the physics wrong.”
      This is the kind of mistake we are all making (not me, of course) when we use the term “error” to discuss fixed biases and then in the next sentence use the term to mean statistical error, as Pat is doing. In this case Nelson means bias when he uses the term error. He is talking about accuracy, not statistical error. In all these computations the quadrature sum of the statistical errors of ALL the parameters accumulates (appropriately) through all the iterations. It is obvious to me that the statistical error will quickly reach unreasonable numbers and the calculated answer has no validity. In the ’60s, when computers were still small (our huge computer had 4K of magnetic core memory), I dabbled in programs for combining statistics. That was eye-opening. 😉

      • John Andrews
        When I got my first Atari computer I attempted to model the terminal velocity of a falling object, using the approach of System Dynamics modeling. Things were going great at first, with reasonable results. Then, as the object approached what I thought was a reasonable terminal velocity, the results started to oscillate wildly. The problem was round-off error and division by numbers approaching zero.

        I’m of the opinion that people all too frequently plug numbers into equations without giving thought to whether the inputs are reasonable or what the associated uncertainties are. Hence the old GIGO.
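        A minimal sketch of one common way such oscillations arise: an explicit (Euler) time step that is too coarse. This is not necessarily the original Atari program’s failure mode (round-off and near-zero division, as described above); it is just an illustration with invented numbers:

        ```python
        # Explicit Euler integration of dv/dt = g - c*v (linear drag, illustrative).
        # The analytic terminal velocity is g/c. A small step converges to it;
        # a step larger than 2/c overshoots and oscillates with growing amplitude.
        def fall(dt, steps, g=9.81, c=1.5):
            v = 0.0
            for _ in range(steps):
                v += dt * (g - c * v)
            return v

        print(9.81 / 1.5)                  # ~6.54 m/s, analytic terminal velocity
        print(fall(dt=0.01, steps=2000))   # ~6.54, well behaved
        print(fall(dt=1.5, steps=60))      # huge value: the oscillation blew up
        ```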

  52. “Modeling of real-world systems always involves approximations. We don’t know exactly how much energy is being transferred from the flame to the pot. We don’t know exactly how fast the pot is losing energy to its surroundings from conduction, radiation, and evaporation of water.

    But we do know that if we can get a constant water temperature, that those rates of energy gain and energy loss are equal, even though we don’t know their values.”

    That’s a gross misrepresentation of the real world climate system, where ENSO and the AMO act as negative feedbacks to net changes in climate forcing, controlling low cloud cover and lower troposphere water vapour also as negative feedbacks. The AMO is always warm during each centennial solar minimum. Playing the internal variability zero sum game completely obscures the negative feedbacks.

  53. It seems to me a large part of the controversy is caused by people trying to imagine the physical meaning of an uncertainty measure. People do this because the uncertainty bounds Dr. Frank arrives at exceed the physical bounds people think are reasonable for the climate system.

    The problem of uncertainty values getting larger than what is physically possible is always there when using a probability distribution that runs from -infinity to +infinity.

    This “problem” sidetracking the discussion could be avoided if Dr. Frank used his emulator and its associated uncertainty propagation to answer the question “How long can a climate simulation run until the uncertainty bounds start approaching the physical bounds?”. From his paper it is clear the outcome would be much shorter than the 80 years left in this century, illustrating that the climate models are not fit for purpose.
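    A minimal sketch of that calculation, assuming Frank-style accumulation of a fixed per-year uncertainty in quadrature; both numbers below are placeholders, not values taken from the paper:

    ```python
    import math

    u_per_year = 1.8        # assumed per-year temperature uncertainty, K (placeholder)
    physical_bound = 10.0   # assumed physically plausible excursion limit, K (placeholder)

    # With u(N) = u_per_year * sqrt(N), the bound is reached after:
    n_years = (physical_bound / u_per_year) ** 2
    print(math.ceil(n_years))   # ~31 years for these placeholder numbers
    ```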

    • kletsmajoor,
      “It seems to me a large part of the controversy is caused by people trying to imagine the physical meaning of an uncertainty measure. ”
      It may be a problem for some people, but that is not the main argument against Dr Frank’s approach. The problem is that (a) he is treating the LCF as though it is actually a forcing (It isn’t) and (b) he is arguing that any and all uncertainty in LCF will accumulate year-on-year in quadrature into temperature uncertainty (It doesn’t).

      • kribaez,

        In his paper Dr. Frank says at page 2:

        “To be kept in view throughout what follows is that the physics of climate is neither surveyed nor addressed; nor is the terrestrial climate itself in any way modeled. Rather, the focus is strictly on the behavior and reliability of climate models alone, and on physical error analysis.”

        To analyse uncertainty propagation it is not necessary to understand the physical system described by a set of equations. Only the mathematical properties of those equations matter.

        In my view the question Dr. Frank has to answer is whether his emulator is good enough to represent the uncertainty propagation properties of CMIP5 climate models. I think he does that in his paper. As an engineer I’m no stranger to error analysis, and as far as I can see his math is correct.

        • “To analyse uncertainty propagation it is not necessary to understand the physical system described by a set of equations. Only the mathematical properties of those equations matter.”
          I agree.

          It is the mathematical properties of the system which force the net flux to zero over the spin-up period, and ensure that any change in LCF has a limited effect on temperature. Sampling from an LCF distribution, say U(-4, +4) returns a tightly constrained GSAT with an uncertainty distribution of ca U(-3, +3) for an average GCM, not a distribution with a range of 100K.

          I have decided that Dr Frank has perhaps made a serious statistical error by using SI10.2. It would explain why he believes that he can accumulate all uncertainty in LCF despite high autocorrelation, and why he is resistant to the idea of offset errors in net flux even though the system is mathematically obliged to reduce net flux to zero during the spin-up. In both instances, the application of SI10.2 would lead him to wrong conclusions.

          • “why he is resistant to the idea of offset errors in net flux even though the system is mathematically obliged to reduce net flux to zero during the spin-up ”

            You just identified one of the biggest uncertainties associated with the models. How do we know the net flux should reduce to zero during the spin-up? In fact, the Earth has been warming since the end of the last ice age, leading to the conclusion that the net flux is *not* zero and hasn’t been for thousands of years. When you force the model to output a stable system when we know the system is not stable then there is an in-built uncertainty from the very beginning!

          • Tim Gorman,
            I don’t disagree. However, the aim of the spin-up period is not so much to “match history”, but to test the long-term characteristics of the AOGCM. If in reality, there is a post-1700 long-term upward drift or an unmatched oscillatory behaviour in actual global temperature then the subsequent projection of the GCM might still represent a valid estimate of the incremental temperature change caused by the input forcing series. It just becomes improper to compare that with observed temperature change without taking into account the underlying natural variation component.

            But Dr Frank is not challenging the view that the radiative flux balance controls temperature gain. His uncertainty calculation is sourced from a component of that flux balance.

            In practice, it is not necessary to assume that the net flux balance is exactly zero. It is sufficient to show that it is well-bounded in order to highlight the problem with Dr Frank’s approach. This can be done using either physics or a statistical argument. But Dr Frank won’t accept the statistical argument until (a) he recognises that an error in a component of the flux balance is not the same as an error in the net flux imbalance, and
            (b) he accepts that his Equation S10.2 is just wrong when we are dealing with correlated data.

          • kribaez: “However, the aim of the spin-up period is not so much to “match history”, but to test the long-term characteristics of the AOGCM. If in reality, there is a post-1700 long-term upward drift or an unmatched oscillatory behaviour in actual global temperature then the subsequent projection of the GCM might still represent a valid estimate of the incremental temperature change caused by the input forcing series.”

            I note carefully your inclusion of the word “might”. That alone indicates that there is uncertainty in what the AOGCM outputs. Thus it follows that it is unknown whether the model gives a valid estimate or not. If that uncertainty is greater than the incremental temperature change the model outputs, then the model is useless for projecting anything. And *that* is what Frank’s paper shows.

            “In practice, it is not necessary to assume that the net flux balance is exactly zero. It is sufficient to show that it is well-bounded in order to highlight the problem with Dr Frank’s approach.”

            Being well-bounded is not a sufficient criterion. If the limits of the boundaries are greater than what is being output then, again, the output is useless for projecting anything. Again, that is what Frank is showing in his paper.

        • You also raised the question of whether Dr Frank’s emulator is “good enough”.
          One of its disadvantages is that it masks the relationship between net flux and temperature. Indeed, it converts that relationship into a relationship between forcing and temperature, which is a poor man’s approximation. There are much more accurate ways of emulating an AOGCM, notably by convolution (superposition) of the AOGCM’s step-forcing data. This has the advantage of a more accurate match to temperature but also offers a solution to net flux.

          Having said that, if he could convince me that he really had found a source of massive accumulating uncertainty in net flux, then I would give Dr Frank’s emulator a “let”, since the resulting uncertainty is large enough on its own to declare the GCMs unreliable. However, I am not convinced he has found such a source.
          Additionally, the errors and uncertainties which he describes as errors in the internal energy state of the system, and which are real if there is an error in LCF, will tend to change the sensitivity of the system, in simple terms the gradient of his relationship between temperature and cumulative forcing. However, with this formulation of emulator, Dr Frank has no degree of freedom left, or indeed calculation basis, to assess the uncertainty in temperature introduced by this type of error in sensitivity.
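          A minimal sketch of the convolution (superposition) idea mentioned above: build the temperature path by superposing a step-response function over the year-to-year increments of a forcing series. The single-exponential response and all numbers are illustrative assumptions, not a fit to any actual AOGCM:

          ```python
          import math

          # Illustrative step response in K per (W/m^2): an assumed sensitivity
          # and relaxation time, not taken from any model's step-forcing data.
          def step_response(t_years, sensitivity=0.8, tau=4.0):
              return sensitivity * (1.0 - math.exp(-t_years / tau))

          def emulate(forcing):
              """Superpose the step response over the forcing increments."""
              temps = []
              for n in range(len(forcing)):
                  t = 0.0
                  for k in range(1, n + 1):
                      dF = forcing[k] - forcing[k - 1]       # increment in year k
                      t += dF * step_response(n - k + 1)     # felt n-k+1 years later
                  temps.append(t)
              return temps

          # Hypothetical forcing series (W/m^2), one value per year.
          print([round(t, 3) for t in emulate([0.0, 0.5, 1.0, 1.2, 1.5, 2.0])])
          ```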

  54. Somewhere up above, I tried to explain why it is totally improper to treat a calibration error in LCF as though it translated into an uncertainty in forcing.
    Perhaps, there is a very simple way to demonstrate why what Dr Frank is doing is silly.
    Every GCM has at least 500 years of spin-up before any runs are carried out. This allows the GCM to reach a net flux balance. Except that this is, in fact, not perfectly true. There is at least one forcing series during the spin-up which assures a small variation around a zero net flux balance, and that is the TSI variation associated with the 11 year solar cycle. There are also small autocorrelated stochastic fluctuations in net flux and temperature. So, it is perfectly legitimate to say that the 500 year spin-up for initialisation is identical in character to the calculations which appear in the runs. If Dr Frank is correct that his calibration of LCF translates into a year-on-year uncertainty in forcing for the GCM projections, then it is equally valid to say that the same uncertainty calculation can be applied to the 500-year spin-up period, so let us do so.
    If we apply the same propagation methodology, we find that after 500 years, the uncertainty of the absolute temperature of the planet in 1850 is (+/-) 50K. Those model labs who have run the spin-up for even longer periods are of course carrying even larger uncertainty estimates.
    If, on the other hand, I adjust the LCF at the start of this run within an error-range of (+/-)4 W/m^2 the associated uncertainty in absolute temperature in 1850 is well-bounded at (+/-)3K irrespective of whether I run the spin-up for 500 years or 1000 years.
    So perhaps Dr Frank can explain what interpretation I should put on the (+/-) 50K?

    • Thank you very much for your comments; it’s always heartening to see that not all the lucid thinkers have given up on this site. (I see “Jeff Id” is still lurking.)

      I hasten to add that you could be misrepresenting Dr. Frank’s latest work for all I know. Having been unable to make sense of what he previously did on the subject, I haven’t yet been able to face slogging through this one.

      If I do screw my courage to the sticking place, though, I’m sure your comments will make the ordeal easier. And I’ll admit I know what I expect to find.

    • You still haven’t accepted the difference between error and uncertainty.

      You keep harping on “precision”, i.e., how much error is contained in a number output by a program. If you wished, you could program your model to output temperatures to one ten-thousandth of a degree with an error of +/- 0.000001 degree. This does not affect the uncertainty of the result.

      Yes, the uncertainty could be as large as +/- 50 degrees. What does this mean? It means that with the uncertainty of clouds used by Dr. Frank, you can not predict the temperature with any certainty regardless of how precise the output is.

      Here is a question. Would you bet your life that the models give the correct output? If you have to think, even for a second, then you are uncertain. The precision of the prediction doesn’t matter, does it?

      • Jim Gorman,
        My first degree was in Maths and Statistics. Post-grad engineer for almost 40 years, with a very heavy slice of numerical modeling, error propagation, uncertainty analysis, importance analysis, integrating multisourced information calibrated over different scales, probabilistic characterisations and risked decision-making. I could throw in a bunch more, but you know what? I think I do understand the difference between error and uncertainty, precision and accuracy, and even the Bayes-frequentist paradox, but, there again, I may have just been fooling a lot of people for a long time.

        “Would you bet your life that the models give the correct output?” No, I think they are useless for informing decision-making. I can give you a long list of the reasons. But the fact that the models are useless does not make Dr Frank’s analysis correct.

        Here are three questions for you in return. Dr Frank sets out an equation in his SI, labeled S10.2.
        Can you see anything wrong with that equation?
        Would you use that equation to accumulate uncertainty if there was a strong year-to-year autocorrelation in the data?
        What is the variance of A-B if A and B co-vary with a correlation of 1?
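        For reference, the identity behind the last question is Var(A-B) = Var(A) + Var(B) - 2*Cov(A,B), which is zero when the variances are equal and the correlation is 1. A minimal numeric check with invented numbers:

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.normal(0.0, 2.0, 100_000)   # invented series with sd = 2
        B = A.copy()                        # correlation with A of exactly 1

        # Var(A-B) collapses to ~0, whereas quadrature (which assumes
        # independence) would claim Var(A) + Var(B) ~ 8.
        print(np.var(A - B))
        print(np.var(A) + np.var(B))
        ```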

    • kribaez
      You remarked about “… that is the TSI variation associated with the 11 year solar cycle.” It is more than just the small variation in insolation. The spectral distribution of energy changes in what is probably a more significant way, with estimates of the increase in UV from 5-15% during high sunspot activity.

      • Clyde,
        There is increasing independent evidence that the solar impact on climate goes beyond accounting for the relatively small TSI variation, I agree, and it certainly raises some questions about the completeness of the governing equations used in GCMs. However, it is not directly relevant to the question on the table, which is whether a calibration of one single input into the net flux is sufficient unto itself to justify Dr Frank’s methodology for the estimation of uncertainty, even if one accepts ad argumentum the validity of those governing equations.

  55. I am a simple EE. Yet I understand the difference between error and uncertainty. I’ve designed a ton of multistage analog circuits. You don’t need to lecture me about error and uncertainty.

    You sound young enough for me to expect you’ve never built a complicated analog computer, which is what climate modelers should be using. Numerical solutions of differential equations aren’t always the best for identifying non-linearities and uncertainties.

    Your bona fides don’t really impress me, nor do your equations or programming skills. Have you ever worked with a machinist? My father was an outstanding one. Most of them can explain uncertainty to you, in concrete terms.

    • Jim Gorman,
      I retired 9 years ago and have 3 grandchildren if that gives you a clue about my age, and my Dad was a toolmaker. I still have his measuring instruments.
      No lecturing involved in my comment, I assure you. It was a response to your suggestion that I didn’t understand the difference between error and uncertainty.
      My questions to you were not a test incidentally. I was drawing your attention to what I believe is a major conceptual error in Dr Frank’s paper. Did you look at S10.2?

  56. I’m no scientist (which might soon become very obvious), but it seems to me that this analogy could be improved a little if a mirror (that reflected the heat) was placed under the pot. The heat source (representing the sun) should remain constant. If the analogy includes a lid to represent the warming effect of GHG, then shouldn’t it also include something to represent the shielding effect (by way of clouds)?
    The mirror and the pot lid should move together. Don’t ask me what their relative speeds should be though. Does the mirror move in the same direction as the pot lid, or the opposite direction? Do they move at the same speeds? I have no idea.

  57. Loydo September 13, 2019 at 9:19 pm

    “I understand to be 3% of total CO2 in the atmosphere with 97 % contribution from natural sources”

    “This is incorrect but… human’s annual contribution might only be 3%, but it is cumulative”
    ______________________________________________________

    Loydo, no matter if “human’s annual contribution [] is cumulative”;

    The sum of, and the proportion of, CO2 in the atmosphere lags the temperature differences in that atmosphere: it is predetermined, determined by the temperature of the atmosphere.
