Some Dilemmas of Climate Simulations

Guest post by Wallace Manheimer

Much of the recommendation that the world should modify its energy infrastructure to combat climate change, at a cost of tens to hundreds of trillions of dollars, is based on computer simulations. While this author is not what is called a ‘climate scientist’, a great deal of science is interdisciplinary, and experience from one field can often fertilize another. That is the spirit in which this opinion is offered. The author has spent a good part of his more than 50-year scientific career developing and using computer simulations to model complex physical processes. Based on this experience, he offers a brief opinion on what computer simulations can and cannot do, along with some examples. He sees three categories of difficulty in computer simulations, ranging from mostly accurate to mostly speculative, and makes the case that climate simulations are the most speculative.

First, consider the case where the configuration and the equations describing the complex system are known, and where the system can be modified in known ways to test the accuracy of the simulation in a variety of circumstances.  An example is the development of the gyrotron and gyroklystron, powerful high-frequency microwave tubes.  They are based on giving an electron beam energy transverse to the guiding magnetic field and tapping this transverse energy to produce the radiation.  In the last two decades of the 20th century, I was involved in the theoretical, simulation, and experimental parts of this effort.  I participated in writing one of the first simple simulation schemes capable of examining the nonlinear behavior of the electron beam coupled to the radiation (1). The simulation schemes became more complex and complete as the project developed.  The project, and the simulations, were successful.  Figure (1) is a plot, with its caption, of the power and efficiency of a gyroklystron as calculated by simulation, along with the experimental results taken from (2).  Gyrotrons are now used for heating fusion plasmas, and gyroklystrons have been used to power W band (94 GHz) radars.  Figure (2) is a photo of the 10 KW average power, 94 GHz radar WARLOC at the Naval Research Lab.  Prior to gyroklystrons the highest average power 94 GHz radar was about 1 Watt.

Figure 1: Numerical simulations of the power and efficiency of a 4 cavity gyroklystron and the actual measurements. Clearly the simulations are reasonably accurate.

Figure 2: The NRL WARLOC radar

Second, consider the case where the configuration is well known and can be varied in a controlled way, but the relevant physics is not.  In my career I have spent a considerable amount of time working on laser fusion. The largest effort is the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory (LLNL) in Livermore, California. The lab built a gigantic laser, costing billions, which produces about a megajoule of ultraviolet light energy in a pulse lasting several nanoseconds.   Figure (3) is a photo of the laser bays, in a dedicated building roughly half a kilometer in each direction.   Their target configuration places the millimeter-size target in the middle of a cylindrical can called a hohlraum.  The laser is focused on the interior walls of the hohlraum, producing X-rays which impinge on the target. The target compresses and heats, so that fusion reactions can take place.  LLNL did many computer calculations and simulations of the process and concluded that the fusion energy should be ten times the laser light energy, i.e. Q = 10 (3,4). When they did the experiment, they found, to their dismay, that Q ~ 10⁻² on a good day. Their estimate missed by more than a factor of 1000! What went wrong?  The problem is that there is a great deal of physics going on in the target which is not well understood: instabilities driven by the interaction of the laser with the plasma, instabilities of the fluid implosion, generation of small numbers of extremely energetic electrons and ions, generation of intense magnetic fields, unpredicted mixing of various regions of the target, and expansion of the hohlraum plasma, all in a very complex and rapidly changing geometry. Don’t get me wrong; LLNL is a first-class lab, which hires only the very best scientists and computer engineers. The problem is that the physics is too complex, too unforgiving.  In the 8 years since the end of their ignition campaign, when they had hoped to achieve Q = 10, they have only succeeded in getting Q up to ~2-3%: better, but still nowhere near what they had promised in 2010.

It is worth noting that there was considerable opposition to the construction of NIF (5), opposition based on two suppositions: first, that LLNL could not get the laser to work, and second, that the target and its interaction with the laser were much too complex, with too many aspects of the physics uncertain and beyond the reach of their simulations.   The skeptics were half wrong and half right.  LLNL got the laser to work well (years late and billions over the original budget), and this is certainly a significant achievement.  However, the skeptics were correct in their assessment that the target would not fuse.

Figure 3: The laser bays at the NIF laser at LLNL

Now let us go to the third level of difficulty: cases where neither the configuration nor the basic physics needed for a simulation is well known. Add to that the fact that, unlike with NIF, it is not possible to repeat experiments in any controlled way. When this author first got to NRL, the problem we were all working on was to figure out the plasma processes going on in a nuclear-disturbed upper atmosphere, that is, High Altitude Nuclear Explosions (HANE). When a nuclear bomb, or multiple nuclear bombs, explode in the upper atmosphere, the upper atmosphere becomes an ionized plasma. With the strong flows generated there, the behavior is governed not by conventional fluid mechanics but by the nonlinear behavior of plasma instabilities. The key was to work out a theory of these extremely complicated processes by particle simulations as well as analytic theory. These results would then be put into the other computer codes used for radar, tracking, communication, electronic warfare, and so on. An unclassified version of our conclusions is in (6).  Was our theory correct? Who knows. Will anyone ever do the experiment? Hopefully not. If the experiment is done and the theory does not work, will there be an opportunity to continue to work on it and improve it?  Nobody will be alive to do it.

This author makes the case that the climate computer simulations, on which governments have spent billions, are of this third level of complexity, if not an even higher one.  The motivation is that we are presumed to be in a ‘climate crisis’ which is an existential threat to humanity.  For these climate simulations, the basic physical system is almost certainly much more complicated than the LLNL laser target configuration, for which the simulations failed; the scientists at Livermore at least know what they are starting out with and can vary it. First, there is the fact that these climate simulations involve the entire earth. To do the simulations, the earth is broken up into a discrete grid, both around the surface and vertically. Since the computer can only handle a finite number of grid points, the points are dozens of miles apart horizontally. But many important atmospheric effects occur on a much smaller scale. For instance, cities are usually warmer than the surrounding countryside, so the computer calculation has to somehow approximate this effect, since it occurs on a space scale smaller than the grid spacing. Then there is a great deal of uncertain physics. The effect of clouds is not well understood.  Also, what effects arise from the deep ocean, the effect of CO2 on water vapor, aerosols and their content and size, cosmic rays, the turbulent ocean, the turbulent atmosphere, uncertain initial conditions, variations in solar radiation, and solar flares? What impurities are in the atmosphere, and where and when were they here or there?  All these effects are handled by a method called ‘tuning’ the code.  When I was an undergraduate, we used to call these ‘fudge factors’.  For years this has been kept under wraps by the various code developers; more recently some of it has been discussed in the scientific literature (7).  The different modelers use very different tunings. It does not inspire confidence, at least with this experienced scientist.
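
To make the sub-grid problem concrete, here is a minimal Python sketch; it is not taken from any real climate code, and every number in it is invented for illustration. A fine-scale temperature field with a few small warm patches is averaged onto a coarse grid, a nonlinear quantity (radiation, which scales as T^4) is computed both ways, and the mismatch is closed with a single adjustable factor, which is the essential mechanism behind a ‘tuning’ or ‘fudge factor’.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up fine-scale temperature field, 1 km resolution over 1000 km x 1000 km:
# smooth background (288 K) plus local variability and a few small warm "cities".
fine = 288.0 + rng.normal(0.0, 3.0, size=(1000, 1000))
for _ in range(20):                       # 20 hypothetical 5 km x 5 km hot spots
    i, j = rng.integers(0, 995, size=2)
    fine[i:i+5, j:j+5] += 8.0

# A coarse "model" grid: one cell per 100 km, i.e. each coarse cell is the
# average of 100 x 100 fine cells (roughly GCM-like spacing).
coarse = fine.reshape(10, 100, 10, 100).mean(axis=(1, 3))

# Compute a nonlinear quantity (outgoing radiation ~ sigma*T^4) both ways.
sigma_sb = 5.67e-8
flux_fine = (sigma_sb * fine**4).mean()      # what the real field radiates
flux_coarse = (sigma_sb * coarse**4).mean()  # what the coarse grid "sees"

print("mean flux from fine field : %.3f W/m^2" % flux_fine)
print("mean flux from coarse grid: %.3f W/m^2" % flux_coarse)

# Because T^4 is nonlinear, averaging the temperature first loses part of the
# flux.  A modeler would close the gap with an adjustable factor -- in effect
# a tuning (fudge) factor chosen to reproduce the known answer.
tuning_factor = flux_fine / flux_coarse
print("tuning factor needed      : %.6f" % tuning_factor)
```

On this toy field the required correction is tiny, but a real model contains many such adjustable parameters, for clouds, aerosols, convection, and so on, and, as discussed above, their values are chosen so that the model reproduces known data.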

With that introduction to climate simulations, let’s see how good these simulations are at predicting the earth’s rising temperature.  Figure (4) is a slide presented in congressional testimony by John Christy of the University of Alabama in Huntsville, along with his caption (8).  Also on the graph are the actual temperature measurements.  Christy and Roy Spencer are the two scientists mainly responsible for obtaining and archiving the satellite-based earth temperature measurements.  The fact that he prepared this data for congressional testimony indicates to me that he took extraordinary care in setting it up.   I certainly believe it.  Notice that nearly all of the curves vastly overestimate the temperature increase.  As Yogi Berra said, “It’s tough to make predictions, especially about the future.”  The curves cannot be making random errors; if they were, there would be about as many that underestimated the temperature rise.  Hence it seems that a bias for a temperature increase is built into the models.  Possibly there is other, more recent literature showing perfect agreement from 1975 to 2020 and then predicting disaster in 30 years.  But how credible would that be in the light of Christy’s viewgraph?  It brings to mind John von Neumann’s famous parable: “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”  Figure 5, from (9), is a plot of the elephant wiggling his trunk, done with five parameters.

Figure 4: John Christy’s viewgraph on the accuracy of numerical models for predicting temperature rise, which he presented in congressional testimony.

Figure 5: The elephant wiggling its trunk
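
For readers who would like to see the elephant drawn rather than take it on faith, here is a short Python sketch. The parameter values and the assignment of their real and imaginary parts to low-order Fourier coefficients follow reference (9) (the same construction appears in the John D. Cook blog post linked in the comments below); the fifth complex parameter (the “wiggle” parameter in the paper) places the eye and moves the trunk.

```python
import numpy as np
import matplotlib.pyplot as plt

# The four complex parameters (plus a fifth for the eye/"wiggle") as
# published in Mayer, Khairy & Howard, Am. J. Phys. 78, 648 (2010).
p1, p2, p3, p4 = 50 - 30j, 18 + 8j, 12 - 10j, -14 - 60j
p5 = 40 + 20j   # the "wiggle" parameter: places the eye/trunk point

def fourier(t, coeffs):
    """Truncated Fourier series: sum_k Re(c_k) cos(kt) + Im(c_k) sin(kt)."""
    f = np.zeros_like(t)
    for k, c in enumerate(coeffs):
        f += c.real * np.cos(k * t) + c.imag * np.sin(k * t)
    return f

def elephant(t):
    # Distribute the real and imaginary parts of p1..p4 among the low-order
    # Fourier coefficients of x(t) and y(t), following the paper's assignments.
    cx = np.zeros(6, dtype=complex)
    cy = np.zeros(6, dtype=complex)
    cx[1] = 1j * p1.real            # B1_x = 50
    cx[2] = 1j * p2.real            # B2_x = 18
    cx[3] = p3.real                 # A3_x = 12
    cx[5] = p4.real                 # A5_x = -14
    cy[1] = p4.imag + 1j * p1.imag  # A1_y = -60, B1_y = -30
    cy[2] = 1j * p2.imag            # B2_y = 8
    cy[3] = 1j * p3.imag            # B3_y = -10
    return fourier(t, cx), fourier(t, cy)

t = np.linspace(0, 2 * np.pi, 1000)
x, y = elephant(t)
plt.plot(y, -x)                     # body outline (rotated to stand upright)
plt.plot(p5.imag, p5.imag, 'o')     # the eye, placed by the fifth parameter
plt.axis('equal')
plt.title("von Neumann's elephant, four complex parameters")
plt.show()
```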

Yet, at least in part due to these faulty simulations, there is a large move afoot to switch our energy supply from coal, gas, oil, and nuclear to solar and wind.  The cost of this would be astronomical.  Figure (6) is a graph of the worldwide cost of the switch to solar and wind in the last few years (10).  The costs are in the neighborhood of half a trillion dollars per year.   Yet according to (10), this is not nearly enough.  Here is a quote:

While climate finance has reached record levels, action still falls far short of what is needed under a 1.5 ˚C scenario. Estimates of the investment required to achieve the low-carbon transition range from USD 1.6 trillion to USD 3.8 trillion annually between 2016 and 2050

Figure 6: Global climate finance, in dollars, from 2013 to 2018

Of course, this author realizes that there is a place for computer simulations in earth science, just as in every other science.   But is it really worth this kind of societal effort, ~$50-100 trillion, to attempt this energy transformation, which probably is not even possible (11,12) and likely is not even needed (13)?   Especially when it is based, at least in part, on faulty computer simulations.   Furthermore, since windmills and solar panels take up a great deal of land and use a tremendous amount of materials (concrete, steel, rare earths….), how sure can we be that the enormous effects of this transformation would be environmentally beneficial?  Where are the simulations that examine these aspects?   Perhaps the climate modelers should have a little more humility, and a little less hubris and certainty, in the face of the transformation they are telling society to make.  This costs real money!

It is interesting that, as Christy points out, there is one curve that got it about right: the Russian model!  In 1995, in the Yeltsin era in Russia, I took an 8-month sabbatical as a visiting professor in the physics department of Moscow State University.  I learned there that Russia has had a very strong, independent scientific tradition dating at least from the time of Peter the Great, when he set up the Russian Academy of Sciences.  Even during the Communist era, the Academy was as independent of party control as any organization there could be.  So how could the Russian modelers have gotten it right when all the western models got it wrong?

My answer perhaps descends into speculation and might be judged frivolous, but it seems to the author well worth recording.  In the United States and the West, we do not arrest or execute dissident scientists, as the Russians did under the worst abuses of Stalin.  However, we do punish dissident scientists in other ways: we simply cut off or deny their funding.  In fact, most vocal skeptics are retired or emeritus; they do not have to worry about their next grant.  In my April 2020 essay in Forum on Physics and Society (https://www.aps.org/units/fps/newsletters/202004/media.cfm), I listed 9 expert skeptical scientists in the area of climate science (several in the NAS).   Except for Spencer and Christy, who perform an indispensable service for NASA and NOAA, I believe none are able to get any funding for their research.  In fact, none even seem to be able to publish in the standard scientific literature; they use blogs.  I know of one expert at an Ivy League university, an NAS member, who expressed skepticism of the standard dogma (14).   He was in charge of a large project in biophysics, which abruptly got canceled (15).  He stopped being a public skeptic then, and told me so in 2015. Was his project terminated because of his climate heresies?  Who knows, but it happened about when his climate stance gained publicity.

It is unlikely Russia has the same worry about climate change that we do.  Perhaps Russian scientists do not have to ‘tune’ their codes to obtain politically correct results.  BTW, if anyone is interested in my experience in Russia, I wrote a diary as a pdf file.  Email me and I will send it to you.

To conclude, computer simulations are a vital and powerful scientific (and societal) tool.  But in using them we should be cognizant of the fact that the ‘tunings’ we apply, and the physics uncertainties we approximate, are weak links in a chain, and a chain is only as strong as its weakest link.  If these tunings allow the simulation to properly reproduce the known data, that does not mean it will continue to do so as new data comes in.  Remember the elephant.  We should be especially cognizant of the fact that these ‘tunings’ might well be chosen to please sponsors.   And of course, we should never forget GIGO.

References:

1. P. Sprangle and W. Manheimer, Coherent nonlinear theory of a cyclotron instability, Physics of Fluids 18, 224 (1975)

2. M. Blank et al., Demonstration of a 10 kW average power 94 GHz gyroklystron amplifier, Phys. Plasmas 6(12), 4405 (1999)

3. J. Lindl et al., The physics basis for using indirect drive targets on the National Ignition Facility, Phys. Plasmas 11, 329 (2004)

4. S. W. Haan et al., Point design targets and requirements for the 2010 ignition campaign on the National Ignition Facility, Phys. Plasmas 18, 051001 (2010)

5. NIF Moves Forward Amid Controversy, Physics Today 54(1), 21 (2001), https://physicstoday.scitation.org/doi/full/10.1063/1.1349602

6. M. Lampe, W. Manheimer, and K. Papadopoulos, Anomalous Transport Coefficients for HANE Applications Due to Microinstabilities, NRL Memorandum Report NRL-MR-3076, AD-A0014411 (1975)

7. Paul Voosen, Climate scientists open their black boxes to scrutiny, Science 354(6311), 401 (October 28, 2016)

9. J. Mayer, K. Khairy, and J. Howard, Drawing an elephant with four complex parameters, Am. J. Phys. 78, 648 (2010)

10. Barbara Buchner et al., Global Landscape of Climate Finance 2019, Climate Policy Initiative, November 2019, https://climatepolicyinitiative.org/publication/global-landscape-of-climate-finance-2019/

11. Mark Mills, The “New Energy Economy”: An Exercise in Magical Thinking, Manhattan Institute Report, March 2019, https://media4.manhattan-institute.org/sites/default/files/R-0319-MM.pdf

12. Wallace Manheimer, Midcentury carbon free sustainable energy development based on fusion breeding, Sections I and II, IEEE Access 6, 64954-64969 (December 2018), https://ieeexplore.ieee.org/document/8502757

13. www.CO2coalition.org

14. APS News, December 2009, volume 18, #11

15. Gabriel Popkin, Trailblazing cancer-physics project accused of losing ambition, Nature 524, August 5, 2015, http://www.nature.com/news/trailblazing-cancer-physics-project-accused-of-losing-ambition-1.18122

To contact the author, use the contact form under the about tab and I will forward messages. ~ctm

173 thoughts on “Some Dilemmas of Climate Simulations”

  1. What a very interesting and useful post, sir. Thank you very much.

    In my own experience I found a modeling anomaly that you may find of some interest. What I discovered by accident is that, in the case of carbon budgets, when climate science ran into anomalous results because of a statistics error, they decided to fix it by adding more and more variables to their climate model. Here are the comical but true details.

    https://tambonthongchai.com/2020/04/09/climate-statistics/

    • a great deal of science is interdisciplinary, and experience from one field often can fertilize another.

      If there is one field of study where there is no shortage of “fertilizer” it’s climatology !

      What it needs is good dose of disinfectant.

      • Figure (2) is a photo of the 10 KW average power, 94 GHz radar WARLOC at the Naval Research Lab. Prior to gyroklystrons the highest average power 94 GHz radar was about 1 Watt.

        I’m always suspicious of folks writing about this sort of thing that can’t get the units right. It’s like an authority on literature getting ‘their’ and ‘there’ mixed up. It jumps out of the page at me.

        10 kW and 1 watt , please.

        Please can we have the equation for the elephant? I’ve always doubted that it could really be done, and assumed it was not to be taken literally.

        It seems like there is a sinewave (three terms already) modulated by some other cyclic fn. If that is five parameters, I’ll bet there are a few implicit unity variables not being counted.

          • The WARLOC radar definitely had an average power of ~10 KW, and earlier 94 GHz radars, based on EIO tubes, definitely had an average power of ~1 Watt. The equation for the elephant was given in one of my references.

            • Ah, thanks. That is obviously the source of the graphic supplied above. Now I see what the messy trunk is about; I thought it was supposed to be tusks in the monochrome rendition supplied.

            It also shows that my guess at the generating functions was spot on. I’d hesitated to call the second fn sinusoidal because of the triple “trunk”. Had it been reproduced faithfully, it would have made more sense.

            Four *complex* parameters is a bit of a cheat since it is clearly 8 but that confirms my initial impression that von Neumann was not literal in his original comment.

    • Technically nothing wrong with the climate or corona virus graphs if read properly. The grand mistake in both is showing an average line which is actually meaningless. The lower line of the climate simulation is then (barely) within actual data. Of course not showing the middle dashed line in the corona virus graphs makes them pretty useless for forecasting as do climate models which are probably wider if more runs were included. They are more akin to the wide margin error circles used in hurricane forecasting while the hurricane is still in the Atlantic.

  2. Wallace: Superb critique!
    Visualizing von Neumann’s elephant wriggling his trunk with 5 parameters
    To see what you can do with complex parameters, see:
    Elephant with a wriggling trunk
    https://www.youtube.com/watch?v=w1GU27P_sqA
    Elephant
    https://www.youtube.com/watch?v=KfNPAXplLbc

    How to fit an elephant John D. Cook (With code)
    https://www.johndcook.com/blog/2011/06/21/how-to-fit-an-elephant/

    “Drawing an elephant with four complex parameters” by Jurgen Mayer, Khaled Khairy, and Jonathon Howard, Am. J. Phys. 78, 648-649 (2010), DOI:10.1119/1.3254017. (Paywall)

  3. True evil is concealed. Perhaps better to have Stalin openly arresting dissidents than the soft censorship wielded by the globalists.

    • It certainly takes less muscle and ammunition, this soft censorship. They have been fine tuning Totalitarianism and elevated it to an art form.

      It has become Central Authoritarianism by membership, breeding and voluntary acceptance. More Brave New Worldish if you will.

  4. I, too, like Dr. Manheimer spent my career building and using models to simulate system-level performance. And, I have said, a number of times to this forum, even with carefully “tuned” models, one can only have confidence in the results of a simulation that interpolates between data points that were used in the tuning. Extrapolations must be made with caution…

  5. Here is a demonstration — peer-reviewed, published, and beyond rational doubt — that climate model cloud error alone makes their simulations utterly unreliable.

    • STILL pimping that error filled, theoretically flawed thesis, Dr. Frank? It was technically outed almost a decade before you finally slid it thru. Folks, download it and find even ONE relevant tech citation, many months after it got by.

      Actually, a left handed compliment for peer review, folks. They err on the side of allowing a few stinkers, with the goal of encouraging alt. views….

      • The question remains, how do climate modellers handle errors?

        Based on my personal interactions with some of them, they ignore the errors in their projections.

        • It’s not that they ignore errors, they just don’t look for them because if they are wrong, the political implications are too harsh to consider.

        • Modelers insist that all simulation errors are constant offsets and subtract away when taking projection anomalies, Brooks. It’s beyond clueless.

      • Can you provide any support to where Dr. Frank’s paper was “technically outed” in the literature, or are you just providing evidence of the author’s assertion of how climate change orthodoxy is enforced?

          • No problem. Here’s the ATTP take down. I think you can follow it to the point-by-point ~40 minute YouTube video as well. And a few Google keystrokes will get you to Nick Stokes’ smite on this forum not too long ago. NO statistically significant rebut. Oh, forgot, Roy Spencer, the climate skeptic, did his own….

          Also, follow the ONE cite the paper got, and tell me how it relates to the context of that citation. I.e., NO technically relevant citations, after many months…

          https://andthentheresphysics.wordpress.com/2019/09/08/propagation-of-nonsense/

          • ATTP? Sorry mate, invoking one of the aforementioned climate change orthodoxy enforcers doesn’t cut it. Nick’s a capable modeler, but he never could grasp, or perhaps refused to grasp, the difference between error analysis (Frank) and model repeatability (“statistics”). Ditto Dr. Roy. Interesting you would invoke him, since he and Christy have done the work, as per figure 4, above, that buries any semblance of climate model output to physical reality alive.

          • Mr. ATTP (Ken Rice) produced probably the most inept attack of all.

            He couldn’t figure out where the calibration uncertainty came from despite that entire sections of the paper introduce and discuss it.

            In that link, he mistakes a 20-year calibration average of simulation error for a base-state constant offset error.

            He credits Patrick Brown’s video, which did not survive critical analysis.

            Ken Rice showed his acuity in the comments section there by not displaying any understanding of calibration or resolution.

            Even the truly fatuous ‘year^-1’ index criticism raises its head again, despite that Nick Stokes knows it’s wrong.

            The entire comments section is a monument to smug dismissals.

          • From the refutation you gave:

            “However, I’ll briefly summarise what I think is the key problem with the paper. Pat Frank argues that there is an uncertainty in the cloud forcing that should be propagated through the calculation and which then leads to a very large, and continually growing, uncertainty in future temperature projections. The problem, though, is that this is essentially a base state error, not a response error. This error essentially means that we can’t accurately determine the base state; there is a range of base states that would be consistent with our knowledge of the conditions that lead to this state. However, this range doesn’t grow with time because of these base state errors.”

            As usual the author starts out talking about “uncertainty” and then jumps to “error” as if they are the same thing. THEY ARE NOT THE SAME THING.

            And in a model that uses iterative steps, where the base state for the next iteration depends on the base state of the prior step, the uncertainty adds through each iteration. It is *not* an error term that carries through unchanged through multiple iteration steps.

            What will it take for so-called climate scientists to finally understand the difference between uncertainty and error? These folks would fail a second-year electrical engineering course, e.g. trying to find the stall load for a motor. You have an uncertainty for things like field current, belt tension, and ammeter readings. You can run the same experiment two times in a row and get different answers. This isn’t because of errors, it is because of the uncertainty in the controls and measurement devices. Have you ever taken apart a wire-wound rheostat? Do you understand what the term “granularity” means? Do you understand the term “slop” as applied to setting something like a rheostat? It isn’t an error term, it is an uncertainty term.

          • Good to see you here, Tim. 🙂

            Willing to put your shoulder again into the Sisyphean labor of explaining the difference between error and uncertainty to those determined to not accept it. Very admirable. And appreciated.

        • In my experience, all it takes for a skeptical paper to be “outed” is for one of the big names in climate science to declare, “you’re wrong”.

          • It’s more hilarious than that. Many years ago I wrote a critique of renewable energy. It was the first time I used the name I use here online. Ever.

            Imagine my surprise when it was refuted by I think skepticalscience.com as being ‘from that well known climate denier, Leo Smith’

            I think that, more than anything, illustrated to me that there was more than a genuine difference of opinion going on.

      • Great mindless smear, blob, bereft of any substantive content. As usual for you lot.

        Let’s see you or anyone else disprove it.

        We note you hiding behind a fake name. So let’s add moral cowardice to your list of personal qualifications.

        The list that includes ignorance of science and disposed to partisan smears.

        • I would like to take another example of uncertainty which average people can understand. In track, people used to take the times with a stopwatch. Some judges stopped too fast, which gave better timing results. Others were late in stopping, and still others were sometimes too fast or too slow in stopping the stopwatch. Now what would be the uncertainty in the added results of a 100 m run taken 10 times, with an average time of 10 seconds per run and an average stopping error of 1/10 of a second (too early or too late = +/- 0.1 [s])? Of course the added result will be in the range of 100 [s] (10 x 10 [s]) +/- 1 [s] (10 x (+/- 0.1 [s])), which is a range between 99 and 101 [s]. The absolute uncertainties add up and don’t cancel out. The relative error stays the same: 0.1/10 is the same as 1/100 = (+/-) 1%.

          • What I forgot to mention: for a simple SUM the absolute errors are added, for a product the relative errors are added. For complex mathematical functions such as computer simulations, everything gets out of hand. These people are not able to calculate the uncertainties and thus leave them out, which is fundamentally wrong.

          • For a simple sum, x = u + v, errors are combined as the root-sum-square. See Bevington & Robinson Data Reduction and Error Analysis for the Physical Sciences. 3rd ed., p. 48.
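
A small Python sketch of the stopwatch example above, comparing the worst-case linear sum of the ±0.1 s timing errors with the root-sum-square combination that applies to independent random errors, and checking the latter with a quick Monte Carlo. It treats ±0.1 s as a one-sigma value; the numbers are simply the ones used in the comment.

```python
import numpy as np

n_runs = 10          # ten 100 m runs, each nominally 10 s
per_run_err = 0.1    # each hand-timed run uncertain by +/- 0.1 s (taken as 1 sigma)

# Worst case: every timing error has the same sign (the comment's +/- 1 s figure).
linear_sum = n_runs * per_run_err

# Independent random errors combine in quadrature (root-sum-square),
# as in Bevington & Robinson for x = u + v + ...
rss = np.sqrt(n_runs * per_run_err**2)

# Quick Monte Carlo check: draw each run's error independently and look at
# the spread of the summed time over many trials.
rng = np.random.default_rng(1)
errors = rng.normal(0.0, per_run_err, size=(100_000, n_runs))
total_spread = errors.sum(axis=1).std()

print("worst-case (linear) bound : +/- %.2f s" % linear_sum)
print("root-sum-square estimate  : +/- %.2f s" % rss)
print("Monte Carlo std of total  : %.2f s" % total_spread)
```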

      • bigoilbob
        You said, “It was technically outed almost a decade before you finally slid it thru.” Why should I believe you? You aren’t even using your real name. How about a citation for your claim? Anybody can make a claim. It takes a lot more than a few words to make an acceptable claim. Are you up to it?

        • Clyde, I began my first submission in 2013. The editor wouldn’t even send the manuscript out to review. In all it took 6 years and many submissions to get published. I described the process at WUWT here, here, and here.

          If you like, you can download and inspect most of the seriously incompetent reviews I faced and answered, here. (45 MB zip file. Webroot scanned for viruses).

          All but three or four of the reviewers evidenced zero understanding of physical error analysis. Propagation of error through a calculation was an entirely foreign concept to them. They uniformly argued that offsetting errors produced physically reliable predictions of novel states. Among other mistakes.

          I previously uploaded the anonymized reviews of the published version, to allow their inspection as well, but Frontiers asked me to remove access.

          • “All but three or four of the reviewers evidenced zero understanding of physical error analysis. Propagation of error through a calculation was an entirely foreign concept to them. ”

            And apparently every superterranean statistician/technologist, left, right, center, who has touched it, still can’t comprehend your sideways “understanding” of it

            Folks, contrast Pat’s blockage with the (final) bow to science by Laura Resplandy. Both with a peer reviewed paper. Both, justifiably technically outed by Nic Lewis. It was obvious that Resplandy was embarrassed, but she finally withdrew hers. In spite of the fact that the point she was trying to make was not nearly as audacious as that of Dr. Frank. Per Elizabeth Cook, “Sometimes it takes balls to be a woman”…

          • “Both, justifiably technically outed by Nic Lewis.”

            Sorry folks, I can’t back this up. NO idea what Nic Lewis thinks. But still strong on the fact that the Pat FrAnk paper is ONLY VIABLE IN SUBTERRANEA….

          • Blob: “And apparently every superterranean statistician/technologist, left, right, center, who has touched it, still can’t comprehend your sideways “understanding” of it

            I can’t speak for anyone else, but evidently you can’t comprehend it.

            As to the climate modelers’ non-understanding of error propagation, go ahead and download the reviews. They document that fact.

            Gavin’s assumption of constant error is wrong right at the start. His offset clock time falls completely flat.

            The correct diagnosis of the problem involves recognizing the impact of uncontrolled variables on the intermediate and final results of a sequential set of calculations..

            Uncertainty grows across the sequence. If you don’t get that, blob, it’ll evidence that you’ve never been an engineer.

          • Blob; “Folks, contrast Pat’s blockage …

            Pretty funny. Folks, contrast blob’s posture of rectitude with his tedious rhetorical cant.

            He has yet to mount any worthwhile argument or critique. It’s all argument from demonstrably wrong authority.

            None of these people seem able to distinguish precision from accuracy, inductive probability from deductive result, or statistical conjecture from physical prediction.

            Those distinctions are the sine qua non of science. Anyone unwilling or unable to distinguish them is not, or cannot be, a scientist or engineer.

          • You would be amazed at how many so-called scientists just cannot shake the belief that the “error of the mean” describes the precision of the mean. If you just have enough data points you can average data recorded to the nearest tens digit, divide by √N, and increase the precision to 1/10ths or even 1/100ths. They’ve never heard of significant figures being applied to measurements!

            Uncertainty? Why, you can reduce that to almost nothing by first calculating anomalies out to two decimal places (even for 1880 data), and again dividing by √N.

            I asked a couple of them if they would buy 2×4’s from a company that told them the lengths were accurate to 1/4 inch, but they then ended up with a ceiling that had a wave of 4 inches. “Well, no!” When you point out that the lumber yard did exactly what they were doing: “Well, no, it’s not the same thing!”

          • Pat,
            Yes, I have been following your efforts here for some time. I personally don’t see any problems with what you have done and agree in principle with your paper. However, I don’t consider myself an expert in things that even experts argue about. I’ve asked my neighbor, who is a senior mathematician and former department chair at the Air Force Institute of Technology, to review your paper for comment. Unfortunately, the COVID-19 mess has him stuck at home trying to convert all his teaching material to online. It will probably be some time before I can brow beat him into looking at your paper. But, I’m of the opinion that those of the ilk of bigoilbob don’t really understand the issue and are simply regurgitating the objections of the likes of Schmidt, who I don’t think understand it either. Schmidt has a vested interest in not understanding it, however.

        • “It was technically outed almost a decade before you finally slid it thru.” Why should I believe you? You aren’t even using your real name. How about a citation for your claim? ”

          Ok.

          “As Gavin Schmidt pointed out when this idea first surfaced in 2008, it’s like assuming that if a clock is off by about a minute today, that tomorrow it will be off by two minutes, and in a year off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today.”

          You can click your way to the original Gavin Schmidt take down, ~12 years ago.

          https://andthentheresphysics.wordpress.com/2019/09/08/propagation-of-nonsense/

          “You aren’t even using your real name.”

          Nor do half of the posters here. When I started posting, I was an active petroleum engineeering manager. An industry that doesn’t tolerate non PC views well. Your tender tissues concerns over “incivility” pale against your blatant hypocrisy.

          • Wow! If there is an ERROR in a clock’s mechanism that causes it to lose one minute / day, then it will indeed be off by two minutes in two days, 365 minutes in a year, etc. I’d ask that you point that out to your friend Gavin, but I don’t think logical reasoning is a big part of your skill set.

          • Missed the last portion of your post: There’s an old joke about engineers, or “engineeers” as you spell it, i.e., ‘I can’t spell engineer, but now I are one’. Based on your postings here, I find it hard to believe you were ever an engineer or were ever employed in the oil industry.

          • “it’s like assuming that if a clock is off by about a minute today”

            This totally highlights the difference between uncertainty and error. If the clock is consistently off then that is a calibration *error*. If it is “about” one minute off then that is uncertainty! The word “about” implies that you don’t actually know how much it is off. That is measurement uncertainty. And that uncertainty adds with each iterative reading of the clock!

            If you take a reading at one minute then the uncertainty is +/- x. If you take a reading at two minutes then the overall uncertainty interval grows; it has to. It will be +/- (fx), where f is some factor. Uncertainty adds by the root-sum-square. If the uncertainty for each individual iteration is the same for each interval then you get

            overall uncertainty = sqrt(x**2 + x**2 + x**2 + …) for however many iterations you make.

            If you take a ruler that is exactly off by 1″ and lay it end to end ten times then your overall measurement will be off by 10 x 1″ = 10 inches. Thus the error accumulates with each iteration.

            If that ruler is off by *about* one inch, say +/- 0.1 inch, then at the end of the ten iterations you won’t know *exactly* how far off your measurement will be. The uncertainty interval will have grown with each iteration, and after the tenth iteration your overall uncertainty will be 0.1 x sqrt(10) ≈ 0.32 inch.

            If you know the *exact* error of the ruler then you can compensate for it with each iteration. You simply add 1″ (or subtract 1″ if it is too long) with each iteration.

            But how do you compensate for uncertainty? You don’t know how much to add or subtract with each iteration. If you can’t compensate for it then your uncertainty grows just like your uncompensated error will grow with each iteration.

            This is the point the climate modelers miss with their models. They think it is just a matter of using a “fudge factor” to compensate for errors. The problem is that they try to use the same *exact* fudge factor applied to each iteration as if they know what the *exact* error is physically. They *assume* there is no uncertainty associated with their model inputs.

            This is a common mistake by mathematicians and computer programmers. They think they can calculate an average out to ten decimal places when the inputs are only accurate to one decimal place! By God, that’s what the math and the computer program spit out, so it must be *exactly* correct. No understanding of significant digits at all! And no understanding of uncertainty intervals versus calibration errors at all.
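
To put numbers on the ruler example above, here is a brief Python sketch (values invented, as in the comment): a ruler with an exactly known 1-inch bias accumulates a 10-inch error over ten placements, while a ruler whose per-placement error is only known to about ±0.1 inch (treated as one sigma) accumulates an uncertainty that grows as sqrt(10) × 0.1, which a quick Monte Carlo confirms.

```python
import numpy as np

rng = np.random.default_rng(2)
n_lays = 10                 # lay the ruler end to end ten times

# Case 1: systematic, exactly known error -- the ruler reads 1" short every time.
bias = 1.0
systematic_total_error = n_lays * bias            # accumulates linearly: 10"

# Case 2: the per-placement error is only known to about +/- 0.1"
# (treated here as a one-sigma value).  Simulate many ten-placement
# measurements and look at the spread of the total.
sigma = 0.1
totals = rng.normal(0.0, sigma, size=(100_000, n_lays)).sum(axis=1)

print("systematic error after 10 placements : %.1f inch" % systematic_total_error)
print("predicted random-walk uncertainty    : %.2f inch" % (sigma * np.sqrt(n_lays)))
print("Monte Carlo spread of the total      : %.2f inch" % totals.std())
```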

          • bigoilbob
            Gavin Schmidt did a p!ss poor job of describing an analogy. If the clock keeps perfect time, then the 1 minute error, at T-naught, is a constant bias and can be corrected. If the clock runs consistently slow or fast, then it is a linear, time-dependent bias that can similarly be corrected if the rate of change is known accurately and precisely. The problem is when the speed of the clock is erratic, say because of orientation or temperature changes and there isn’t any calibration information on what is causing the change in speed, or how much it changes. You are then left with an indeterminate error that is, at best, bounded by a probability. Typically, when something is known imprecisely, the best estimate is accompanied by a 1 or 2 sigma error-estimate. Therein lies the problem! If the speed of the clock, or the cloud forcing, has an indeterminate error that is probabilistic, and cannot be described with a function with respect to time, then that probabilistic error has to be propagated every time the measurement is used in a calculation! It is a direct consequence of the use of significant figures and precision in calculations. It is a problem akin to round-off errors in recursive calculations.

            https://wattsupwiththat.com/2017/04/12/are-claimed-global-record-temperatures-valid/

            I can accept your anonymity as long as you provide a citation and don’t expect people to accept your word alone.

          • @Clyde Spencer

            “Gavin Schmidt did a p!ss poor job of describing an analogy.”

            Well no. The piss poor job is your understanding of it. I think I’ll go with the appeal of authority of Gavin and 150 years worth of progressive understanding of Engineering Statistics over – what? Please provide me with even ONE above ground statistician who agrees with Pat Frank’s spectral take on error propagation. Not rhetorical, ONE.

            All you need to do is check out ANY post 1980 – present hindcast coupled with the Trumpian YUGE Pat Frank error band, to see how IMPROBABLE it would be that all of the present day expected v actual parameters fall nearly spot on the Pat Frank P50 line.

            “I can accept your anonymity as long as you provide a citation and don’t expect people to accept your word alone.”

            Well, I’ve already provided many, if you click from link to link, so I’m quite relieved that you “accept my anonymity”.

            BTW, for those who doubt my oiliness, you infer that you would know the difference. So, quiz me if you can, I’m ready. Or perhaps STFU otherwise….

            Clyde, don’t forget to link us to ANY superterranean statistician who agrees with Dr. Frank….

          • “Well no. The piss poor job is your understanding of it. I think I’ll go with the appeal of authority of Gavin and 150 years worth of progressive understanding of Engineering Statistics over – what? Please provide me with even ONE above ground statistician who agrees with Pat Frank’s spectral take on error propagation. Not rhetorical, ONE.”

            You continue to highlight the problem without realizing you are doing so!

            Statisticians are MATHEMATICIANS. Mathematicians are not trained in either significant digits or in uncertainty in measurements.

            Take this entry from wikipedia: “In metrology, measurement uncertainty is the expression of the statistical dispersion of the values attributed to a measured quantity. All measurements are subject to uncertainty and a measurement result is complete only when it is accompanied by a statement of the associated uncertainty, such as the standard deviation. By international agreement, this uncertainty has a probabilistic basis and reflects incomplete knowledge of the quantity value. It is a non-negative parameter.[1]”

            Uncertainty doesn’t have a standard deviation. It does *not* have a probability density. This entry was written by a statistician whose hammer is statistics and who sees everything as a statistical nail.

            When I tell you that a LIG thermometer has a +/- 1 deg uncertainty there are lots of things that go into that uncertainty that have no relation to probability. For example, the height of the person reading the thermometer can change, and thus the parallax for the reading changes, resulting in different readings for different people. There is no probability function that describes this phenomenon! Therefore the uncertainty interval cannot be represented on a probabilistic basis.

            The true value can lie anywhere in the uncertainty interval. It may or may not be at the mean of all the readings. Therefore the law of large numbers that statisticians like to refer to is useless in this situation. You can calculate the mean of all the readings more and more precisely, but that doesn’t imply that the mean is the true value.

            The law of large numbers only applies when you are measuring the same thing with the same device each time. When you have different people reading the thermometer then in essence you have a different measuring device each time.

            Statisticians and mathematicians can’t seem to ever get this into their brains. They have a hammer and everything they see is a nail. And you seem to be of that same vein.
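
As a hypothetical illustration of the parallax point above, the Python sketch below gives each observer a fixed personal offset and then averages a large number of readings: the “error of the mean” shrinks toward zero, but the mean itself settles on the true value plus the average observer bias. The offsets, counts, and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
true_temp = 20.0                         # deg C, the value we wish we knew

# Each observer has a fixed personal bias (e.g. from parallax) -- drawn once.
n_observers = 5
observer_bias = rng.uniform(-0.5, 0.5, size=n_observers)

# Collect many readings, cycling through the observers; each reading also
# carries a small random component.
n_readings = 100_000
who = rng.integers(0, n_observers, size=n_readings)
readings = true_temp + observer_bias[who] + rng.normal(0.0, 0.2, size=n_readings)

mean = readings.mean()
sem = readings.std() / np.sqrt(n_readings)   # the "error of the mean"

print("true value           : %.3f" % true_temp)
print("mean of all readings : %.3f +/- %.4f (standard error)" % (mean, sem))
print("average observer bias: %.3f" % observer_bias.mean())
# The standard error is tiny, yet the mean is off by the average bias --
# taking more readings cannot remove a systematic offset.
```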

        • Frank From NoVa

          “There’s an old joke about engineers, or “engineeers” as you spell it, i.e., ‘I can’t spell engineer, but now I are one’.”

          Discredited for a typo? How Cliffey Claviney of ya’. Sorry you can’t keep up technically, but big mistake to diss my oiliness. Don’t remember any Franks From NoVa when I was pulling slips on a Sooner Trend workover rig, under age, ~50 years ago.

          • No, discredited for not being able to understand a joke. And also for not understanding that systematic errors, whether in a clock or a climate model, accumulate. Btw, working a rig floor is honorable work – however I’m curious as to how someone who purports to have some knowledge of petroleum production gets comfortable with the idea that the earth’s climate never changed until we started using fossil fuels….

          • bigoilbob
            If you think that the simple two sentences:

            Frank confuses the error in an absolute value with the error in a trend. It is equivalent to assuming that if a clock is off by about a minute today, that tomorrow it will be off by two minutes, and in a year off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today.

            is even remotely a refutation of Pat Frank’s analysis, then it demonstrates that you neither understand the problem and its nuances, nor are capable of logical analysis of simple problems. You seem to be at your best when citing what you think are authorities. Schmidt’s terse remark is so devoid of understanding that it is almost a non sequitur.

        • “As Gavin Schmidt pointed out when this idea first surfaced in 2008, it’s like assuming that if a clock is off by about a minute today, that tomorrow it will be off by two minutes, and in a year off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today.”

          It doesn’t seem like that at all, Bob. It seems more like Schmidt has confused uncertainty with error. The uncertainty of a proposition is not the same thing as the accuracy of the instrument measuring it.

          In Schmidt’s example, the clock’s report of time is incorrect when compared with the actual time, thus indicating some issue with the mechanism, initial calibration or whatever, it doesn’t matter. What matters is that’s an error of the instrument, but that’s not the same thing as the initial uncertainty with the first proposition of attempting to measure time at all.

          Pat’s argument deals with uncertainty, which is a variable that (as I understand it anyway) exists before the instrument is ever checked for accuracy, thus is something else entirely. “Uncertainty” and “error” are two wholly distinct concepts that purport to describe wholly different thing-a-maBigOilBob’s.

          Take for example the proposition, “God exists.” Since this proposition is wholly untestable, we could say the intrinsic uncertainty value is 100%. The uncertainty value, however, has no relation to the proposition’s actual truth value. The proposition is either objectively true or false regardless of its intrinsic uncertainty. Thus, even though it’s 100% scientifically uncertain that God exists, that bears not on the actual truth value of his existence.

          Dr. Frank may or may not choose to confirm whether or not I’ve described this theory accurately, but I can be sure that I’ve thought of it this way rather precisely for some time.

          (*mah!* did ya see what I did there Dr. Frank?)

          • In that quote, Gavin is just demonstrating that he has no idea how to work with errors.

            The way climate science works, they assume that if the clock is off by 1 minute at 2pm, off by 1 minute at 3pm and again off by 1 minute at 4pm, they can then average all those readings together and have a reading that is accurate to 20 seconds.

          • sycomputing

            You are correct of course, as are many above. I also find it hard to believe bigoilbob is a graduated engineer BSc, while not knowing the difference between a systematic error, an experimental error and the propagation of uncertainty.

            Knowing that a clock is off by one minute on day 1 in no way predicts that it will be off by two minutes on day 2. It might still be 1 minute off – we don’t know because we don’t know if it was set incorrectly (mis-calibrated) or it cannot keep time (inaccurate). The former is experimental and the latter is systematic.

            A common mistake reflected in NASA-GISS work is the idea that making multiple measurements of different things (one each) with an instrument increases the accuracy of an aggregated result. Unbelievable. Essentially this “argument” is what underlies the claim that a computer simulation of the atmosphere can predict the global temperature 100 years from now more accurately than 1 year from now. That is not even “science”.

          • Mark:

            That’s clever. If you don’t mind I may use that later, with attribution of course. It’s a little like the joke we have in Texas:

            Q: “How many Aggies does it take to change a light bulb?”
            A: “Five. One to hold the bulb and 4 to rotate the ladder.”

            Crispin:

            I also find it hard to believe bigoilbob is a graduated engineer BSc . . .

            Well that’s understandable, but there’s one other prestigious regular commenter around here that’s a PhD something or other in the movement of liquids (I think anyway), and he doesn’t get it either. He even published his own article here at WUWT against Pat’s theory. He didn’t really talk about Pat’s theory at all, but rather, error propagation.

            Here I am, a mathematical moron nobody with an undergrad liberal arts degree (Philosophy) and I’m just not having too much trouble grasping the underlying concepts of the subject matter.

            It just makes one shake one’s head in wonderment at the goings on in the world of today . . . ???

          • sycomputing
            I think it was a paternal grandfather who was the actual expert in liquids.

          • syscomputing, crispin

            It’s nice to see that there are others out there that actually understand the concept of uncertainty.

            ” “Uncertainty” and “error” are two wholly distinct concepts that purport to describe wholly different thing-a-maBigOilBob’s.”

            “A common mistake reflected in NASA-GISS work is the idea that making multiple measurements of different things (one each) with an instrument increases the accuracy of an aggregated result.”

            If only more so-called “climate scientists” understood these two points the idiot global warming models would be seen for what they are – useless.

            I would only add these observations:

            – a global average temp is useless. Environmental impacts occur at the edges of the temp envelope, not at the average. Trying to predict environmental impacts using the average temp is an exercise in futility.

            – climate impacts are *regional*, not global. Trying to tell people in the US or in Siberia that global *warming* is going to kill them when their maximum temps are actually going down only leads to the dismissal of the claims by reasonable people. If the warming is happening in central Africa then that is where the impact will be and is where the models should concentrate.

          • Clyde:

            I suspect there’s an obvious hidden clue in your comment that mine own unfortunate moronity fails me to conceive and understand. Regardless, I thank you!

            Tim:

            That you should remark in my direction in such a way (I’ve seen you take down the “Nickers” for his “failure” to understand Pat’s theory) flatters me more than you could know.

            Many thanks!

      • Re:

        … the computer can only handle a finite number of grid points, the points are dozens of miles apart horizontally. But many important atmospheric effects occur on a much smaller scale. … Then there is a great deal of uncertain physics. The effect of clouds is not well understood. …

        To name just 2 problems with climate models and physics:

        1. The Navier-Stokes equations have not been solved. (these govern the flow of fluids — including, of course, air and water )

        If you don’t understand those equations, then you really can’t have an intelligent conversation about climate.

        [26:10 – 27:15 in Christopher Essex video embedded in this post: https://wattsupwiththat.com/2015/02/20/believing-in-six-impossible-things-before-breakfast-and-climate-models/ ]

        2. Re: clouds and other weather phenomena, the parameterization grid cells are 100’s of km wide. Thunderstorms, for instance, will be invisible. Thunderstorms….. that is a lot of energy transfer data which is simply invisible to the computer running the code for the climate model. Parameterization is, essentially, “fake physics.”

        [42:10 – 47:40 Essex video]

        *****************************

        I highly recommend watching Dr. Essex’s entire lecture. Here are the main issues:

        8 Important Underlying Points to Bear in Mind

        1. Solving the closure problem.

        2. Computers with infinite representation.

        3. Computer water and cultural physics.

        4. Greenhouses that don’t work by the greenhouse effect.

        5. Carbon-free sugar.

        6. Oxygen free carbon dioxide.

        7. Nonexistent long-term natural variability.

        8. Nonempirical climate models that conserve what they are supposed to conserve.

        {at ~ 1:07:06 in the video}

        *********

        Bottom line: Dr. Pat Frank is in excellent company –> Dr. Manheimer and Dr. Essex and Dr. Lindzen and Dr. Hal Lewis and Dr. Fred Singer ….. and that is just the first few feet of an enormous rock of a list upon which the AGW trolls have gone hard aground. Lol, it really is kind of pitiful to watch them, wild-eyed, revving their engines, crying out, “Full ahead!” while the knots on the pitometer read: “0.”

      • Good morning, chaamjamal! 🙂

        (in case you come back here Tuesday morning)

        It really is cool to see you here, reading so diligently, sitting at your computer so far away from most of us. Hope all is well with you and yours.

    • All anyone needs to do is look up at the sky and notice all the different cloud configurations and realize the only way to model the impact of clouds is with Fudge Factors. So Climate Models are all based on Garbage Assumptions picked by those with an end in mind.
      While looking at the clouds notice how much cooler you are when the clouds are there, except at night where the reverse is true.
      Clouds are a major player in the climates. Water is King of the Climate, the Sun the Emperor and CO2 a tiny bit player.

  6. ⭐⭐⭐⭐⭐
    × 1,000,000

    Any HONEST person who has ever spent time building computer models knows full well that attempting to accurately model a highly complex (possibly chaotic), dynamic, non-linear, multi-variate system is futile.

    • I wouldn’t go so far as to say futile, but definitely a fool’s errand if one expects perfect performance.

      “All models are wrong, but some are useful.” -George Box

      • The correction to the quote should have been

        “All models are wrong, but a small number are of limited use.”

        • Well beyond a small number.
          Every car and airplane made is extensively modeled before the first one is made.
          Every circuit that is designed is heavily modeled prior to first layout.
          Every integrated circuit is heavily modeled prior to the first one being made.

          Just because climate science misuses models is not evidence that all models are bad.
          You might as well proclaim that since the climate alarmists claim to be scientists, that all science is useless.

          • MarkW,
            Yes, but we then go about prototype testing where we find all the model mistakes before final design production in every one of the fields mentioned. The problem with models is they work very well in the very narrow environment they were designed for, but fail miserably once any single parameter gets outside the zone of linearization.

            All models are wrong, some provide usable design starting points.

          • The standard progression in almost any engineering field is:
            1) Models
            2) Small scale prototypes
            3) Large scale prototypes
            4) Small scale production runs
            5) Full production

          • The advantage to models is that you can try out hundreds of what if scenarios in a couple of days. As long as what you model isn’t too different from what has gone before, the chances are that the model will behave fairly accurately.
            The thing you have to remember is that the further away you get from the tried and true, the better the chances are that your model will run off the rails.

          • Well beyond a small number.
            Every car and airplane made is extensively modeled before the first one is made.
            Every circuit that is designed is heavily modeled prior to first layout.
            Every integrated circuit is heavily modeled prior to the first one being made.

            Just because climate science misuses models is not evidence that all models are bad.
            You might as well proclaim that since the climate alarmists claim to be scientists, that all science is useless.

            You are not an engineer, are you?

            An enormous amount of engineering is not modelled at all, simply because at the time of its inception the ability to do so was limited, and in many cases still is.

            Nearly all electronic design done when I was active in the field was started with a few rough back of envelope calculations to see if it was possible (and sometimes not even those) then built and tweaked until it worked, or until a Boffin calculated that it never could work.

            Only with the advent of digital computers did modelling most engineering tasks even become possible. Somewhere online there is a video about the development of the 1960s Ford Cosworth racing V8 Formula One engine: “we removed metal from the crankcase until it broke, then we put the last bit back again”.

            Formula one teams still use wind tunnels because modelling turbulent flow to sufficient accuracy is still impossible.

            This massive ‘Physics Envy’, the notion that the world obeys theoretical laws and can be successfully modelled by them, obscures the fact that engineering, from the first palaeolithic human chipping away at two pieces of flint and learning how to make an edge, was tens of thousands of years ahead of the ability to successfully model such a process.

            Romans built stone arches with very little idea as to why they stayed up. You need post Newtonian mathematics for that.

            Many mediaeval cathedrals fell down.

            Even something that is extensively modelled – like a large computer chip – is never completely modelled, because it cannot be. Take something very simple like propagation delay from one part of the circuit to another, or through a logic gate or gate array. This depends on the random levels of doping in the chip semiconductor. It varies. You can do worst-case models that tell you the chip will not work. Or best-case models that show you its performance will be stunning at 10x overclock. Or a Monte Carlo analysis that will show you that you should on average get a yield of about 90% of usable chips AFTER TESTING. At room temperature, if your silicon is pure.
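
            As a rough illustration of the Monte Carlo timing analysis described above (a toy sketch with made-up numbers, not any real design flow): sample the doping-dependent delay of each stage, sum along the critical path, and count the fraction of parts that meet a timing budget.

                import random

                def yield_estimate(n_chips=10000, stages=20, mean_delay=1.0, sigma=0.05, budget=20.3):
                    """Fraction of simulated chips whose critical-path delay (ns) meets the budget."""
                    passing = 0
                    for _ in range(n_chips):
                        # each stage delay varies with the random doping level
                        path_delay = sum(random.gauss(mean_delay, sigma) for _ in range(stages))
                        if path_delay <= budget:
                            passing += 1
                    return passing / n_chips

                print(yield_estimate())   # roughly 90% with these made-up numbers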

            In my own life at one point I was tasked with designing a radio. At a given point it became noisy. I knew, but had forgotten how I knew, that it was on account of the German tuner the German customer wanted us to use (I had started with Japanese).
            A consultant was brought in to model the part I had designed, and in due course I was exonerated.

            ‘I could have told you that a month ago.’ ‘How?’ ‘Look,’ I said, removing the tuner head from the circuit board and substituting a 50 ohm resistor, ‘No noise!’

            Far from making your point, Mark, the reality shows the exact reverse. In engineering extensive modelling is the exception, not the rule. Only massive capital projects use it, because the reality is that even with today’s computer power, building a prototype is how most of it is done. And then stepwise refinement – development – takes over.

            The model and the real thing are the same. Instead of a digital computer, Reality ™ is where the testing happens.

            Climate modellers would probably have more success building a scale model planet out in space somewhere …

            Engineers largely do not model. They build and test. And even when they do model, they can come terribly unstuck when Reality ™ presents them with a case they didn’t model for, or indeed test against. Bridges that failed, engines that disintegrated, planes that fell out of the sky… shuttles that exploded after liftoff.

            That last is a classic example of ‘modelling failure’. O-ring elasticity is a function of temperature. Too cold and it won’t seal. How cold is too cold? Only testing will tell, because it’s COMPLICATED. So don’t operate where we haven’t tested. You might not get away with it. Real-world materials with impurity levels don’t fit into nice linear equation models very well.

            The last resort of an engineer is mathematical modelling. The first resort is prototyping, the second is scale modelling.

            Why? Because the mathematical models cost more and take longer to build and are less reliable than the physical ones.

            A friend was involved in the development of the first ARM chips, for the BBC micro. Every few hours, they would crash. Investigation showed a race condition between asynchronous events, which could be ameliorated by inserting a wait state in one part of the code. The failure rate was now one a week.
            They added another; the models implied thirty years to a crash. ‘That’s good enough – it’s more likely to be struck by a cosmic ray that wrecks the code in that time, and anyway, people will just go “what?” and reboot it, and it will never happen again in all probability.’

            THAT is how engineering goes. They only modelled as a last resort, and then only a tiny part of it.

            Most software cannot be modelled, by the way.

          • Yes, I’m sure many models are essential for important engineering tasks.
            But I think there are two kinds of models. The first type assesses how a system will respond to specific inputs and conditions, for example a model of a bridge or jumbo jet. I believe these kinds of models are required by law for engineering projects where lives could be at risk. Of course, the models must be properly validated, which climate models are not.

            The second type of model is one that attempts to predict the future. A weather model that predicts the next few days may do quite well, but beyond that weather forecasts are futile. Forecasts for a few months ahead usually fail spectacularly e.g. the UK’s “BBQ summer” some years ago.

            And yet climate scientists would have us believe that weather and climate forecasts decades and even centuries into the future are valid. I don’t know which planet these people live on, but it certainly isn’t the earth.
            Even the IPCC has specifically stated that climate forecasts are not possible due to the chaotic nature of the climate system – and yet they do it anyway.

            So, in a nutshell: many engineering and scientific models are valid and probably very useful. But models that claim to predict the future beyond a few days are mostly junk.
            Chris

        • I think you confuse the concept. In engineering, in aeronautics and automobiles, the models are nothing more than a confirmation-check phase: you don’t use them to predict, you use them to check 🙂

          If we took that concept into Climate Models you would simply use models to verify what the theory says within the current measured range. The problem comes when you try to use the models to predict things outside the measured range.

          The issue is that most use the short form of the original quote without context and without understanding what the George Box paper was about. Because some used the quote stupidly, Box restated it in a longer form in 1987; it is listed for reference on the wiki page
          https://en.wikipedia.org/wiki/All_models_are_wrong

          All models are approximations. Essentially, all models are wrong, but some are useful. However, the approximate nature of the model must always be borne in mind….

          That last sentence is the problem I have with the way many are using the quote.

          So you’re right that models are used in Engineering and Cars etc., but that is not what the quote is about; the quote was about using models to predict things.

  7. We already know the climate models in CMIP3 and CMIP5 have failed. Their key prediction of the much discussed tropical hotspot, a region centered in the 6-10 km band of the troposphere predicted to be warming faster than the lower troposphere, is not observed in the radiosonde datasets nor in the satellite-based AMSU-derived temperature profiles. This is a colossal failure of a key prediction of the climate modelers’ hypothesis of a 2x – 4x water vapor (WV) amplification of the heating built into the simulations. Without the WV amplification effect as programmed and tuned for in the models, the GHG effect of CO2 forcing is likely around 1.6 ºC/2xCO2 or less. This climate response value of equilibrium climate sensitivity is nowhere near the alarmist value (something much greater than 2.0 ºC/2xCO2) needed by the power-hungry UN bureaucrats, globalists, and other ecozealot-socialists.

    All of these outwardly anti-fossil fuel groups with different underlying motivations unite in a common cause: to eliminate Western-style free-market capitalism while economically neutering the US as a super-power able to confront China’s imperialistic ambitions in Asia and the Pacific.

    That so many so-called scientists have gone along with the perversion of the scientific method into this post-modern science, wherein maintaining consensus matters more than failed hypothesis predictions, tells us that pseudoscience has taken over our once-premier science academies, which have been politicized in pursuit of desired policy outcomes. We see this clearly here in the US with AAAS and NAS pushing for policy advocacy regardless of failed predictions. On one level this is noble-cause corruption. On another level it is the fact that politically driven policy injection into the government science grant-making bodies politicized who got rewarded and who didn’t, with the obvious natural selection of survival of pseudoscience over science.

    To begin to fix this, we must get to the root cause and not treat the symptoms. That we now have vast cadres of academic pseudoscientists, hungry for grants to acquire university tenure and spread out across so many non-climate disciplines feeding off unrealistic emissions scenarios (RCP8.5), is a symptom, not a cause. And this all falls back to the fundamental failure to discard the GCM outputs (CMIP3/5 and soon to be 6) that are utter failures of science.

    • The root cause is that political agendas have interfered with science, yet those who have polluted the science with politics are blind to the damage they have caused because in their tiny deluded minds they consider that destroying science for a political goal is a means justified by the ends.

      • These are the same people who, once they get power, have no problem with executing those who disagree with them. Justified by the same “noble” ends.

    • I agree 100% with your post Joel O’Bryan.
      The models all run hot, and politicians and world leaders and most of the world’s population have been taken in by this scam.
      Surely it is time for a rethink now, before more damage is done to the world economy.
      Unfortunately we will not see a change until the general populations of the democratic countries realize that the whole climate change movement is based on lies, as you have written above.
      CO2 is essential to life on earth, and a doubling of CO2 in the atmosphere can only raise the earth’s temperature by 0.6 C, that is point six of one degree Celsius, as the effect of CO2 on temperature is logarithmic.
      Any rise above this can only be caused by positive water vapour feedback.
      This is the problem that the climate models cannot model, as water vapour in the atmosphere forms clouds, and clouds both cool and warm the earth.
      Graham from NZ.
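
      A minimal sketch of the logarithmic relation invoked above, using the commonly cited approximation dF = 5.35*ln(C/C0) W/m^2 (about 3.7 W/m^2 per doubling); the sensitivity factor lambda is left as a free parameter, so the 0.6 C figure and larger published figures simply correspond to different assumed lambdas:

          import math

          def co2_warming(c_ppm, c0_ppm, lambda_k_per_wm2):
              """Temperature change for a CO2 change, assuming dF = 5.35*ln(C/C0) W/m^2."""
              forcing = 5.35 * math.log(c_ppm / c0_ppm)    # W/m^2
              return lambda_k_per_wm2 * forcing            # degrees C

          print(co2_warming(560, 280, 0.16))   # ~0.6 C, the sensitivity implied by the comment
          print(co2_warming(560, 280, 0.30))   # ~1.1 C, the commonly quoted no-feedback response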

    • Corrupted Science is but one of the smaller heads of a politically very powerful hydra….with many powerful heads.

      These Big Heads (in approx. decreasing power): The Press. The Unelected Embedded Government (Deep State). Democrat Party. Academia. Govt funded Science. Big Social Media. Law/Courts/Leftist Prosecutors. Finance/Banking. UN. Leftist Billionaires. Now even the Church (leftist Pope).

      This powerful syndicate has already destroyed justice in the US. Leftists commit crimes in broad daylight (e.g. Hillary ignores subpoenas then destroy the evidence…no charges) and seldom are hard questions even asked…almost never any indictments/prosecutions. This, while conservatives only need to be lied about in the press (Cavanaugh) — guilty until proven innocent and oftentimes solid proof isn’t enough to overcome the inevitable weeks long propaganda storms.

      Remarkably, most of this evil operation is financed by the taxpayers. We get to pay for our own destruction.

      Does anybody honestly expect that the good guys are going to win this war? No mistake…this is a war…a very well financed war with only one of the armies fighting and that fighting army has all the guns and all the ammunition.

  8. The problem with climate science is not that the physics is not well known, but that the well-known physics is not being applied, that being the Stefan-Boltzmann Law and Conservation of Energy, which are all that apply for a top-down simulation that can deterministically establish the climate sensitivity with little uncertainty. The problem is that when these first-principles laws of physics are applied, the resulting sensitivity is far too low to justify mitigation. The solution they found is to dive into bottom-up simulations where the physical interactions are not well known, nor are the coefficients quantifying them, which provides enough wiggle room for the simulations to be wrong enough to get the answers they want while still seeming plausible.

    The average surface temperature is about 288 K and the average planetary emissions are about 240 W/m^2. The SB Law is E = σeT^4, where E is the emissions, T is the temperature, σ is the Stefan-Boltzmann constant, and e is the effective emissivity. Plug in T and E and solve for e, and it comes out to about 0.62. Since in the steady state E is equal to the total incident forcing F, the deterministic sensitivity is dT/dF = dT/dE, which can be trivially calculated as 1/(4eσT^3), resulting in a sensitivity of about 0.3 C +/- 10% per W/m^2. Compare this to the IPCC’s presumed and obviously wrong ECS of 0.8 C +/- 50% per W/m^2, whose lower limit of 0.4 C per W/m^2 already exceeds the upper limit prescribed by the laws of physics.
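
    As a quick check of the arithmetic above, a minimal sketch using the round numbers quoted in the comment:

        SIGMA = 5.67e-8                              # Stefan-Boltzmann constant, W/m^2/K^4
        T = 288.0                                    # average surface temperature, K
        E = 240.0                                    # average planetary emissions, W/m^2

        e = E / (SIGMA * T**4)                       # effective emissivity, ~0.62
        sensitivity = 1.0 / (4 * e * SIGMA * T**3)   # dT/dE, ~0.3 C per W/m^2
        print(e, sensitivity)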

    To get the correct answers from bottom up simulations, the testing loop must be closed by comparing against a top down simulation of the same system. This lesson was learned by the semiconductor industry decades ago when it comes to modeling the behavior of chips whose complexity makes the climate look trivial by comparison.

  9. By reflecting away 30% of the ISR the atmospheric albedo cools the earth much like that reflective panel behind a car’s windshield.

    For the greenhouse effect to perform as advertised, “extra” energy must radiate upwards from the surface. Because of the non-radiative heat transfer processes of the contiguous atmospheric molecules, such ideal BB upwelling “extra” energy does not exist.

    There is no “extra” energy for the GHGs to “trap” and “back” radiate and no greenhouse warming.

    With no greenhouse effect what CO2 does or does not do is moot.

    • Nick,

      The GHG effect certainly does exist, it’s just that the ECS applied to it by the IPCC and its prefabricated self serving consensus is too big by a factor of between 3 and 4.

      The ‘extra’ energy isn’t extra at all; it is just the W/m^2 emitted by the surface in excess of the W/m^2 received from the Sun. This works out to about 0.62 W/m^2 per W/m^2 of forcing. Offsetting the extra 620 mW/m^2 per W/m^2 of forcing is the entire contribution by clouds and GHG’s towards making the surface warmer than it would be based on the solar forcing alone. The origin of the power offsetting the ‘extra’ emissions is the half of the radiant energy emitted by the surface that is captured by atmospheric GHG’s and clouds and returned to the surface in the future, while the remaining half is ultimately emitted into space to offset the solar input. The delay between absorption and emission is important for understanding why there’s no actual ‘extra’ energy involved; it’s just old surface emissions that were absorbed and returned to the surface at a later time.

      Note that clouds are a more powerful warming influence than GHG’s, since clouds are broadband absorbers of surface emissions, as opposed to the narrow-band behavior of GHG’s; moreover, clouds cover 2/3 of the planet’s surface, nullifying the GHG effect between the clouds and the surface, since the clouds would be absorbing and re-emitting the energy absorbed by GHG’s anyway.

      You and others must stop saying that there’s no GHG effect. THIS IS INCORRECT. There definitely is a GHG effect, it’s just far smaller than the alarmists require. Saying it doesn’t exist doesn’t help the skeptical cause and only invites the denier epithet. Quantum mechanics undeniably explains and quantifies the GHG effect as the absorption and emission of photons as the electron shells of molecules change state. This is undeniable first-principles physics that I’m very familiar with; it has been tested extensively and reinforces my skeptical position, rather than disputing it, which you seem to think is the case. While Quantum Mechanics may be above your pay grade, it doesn’t mean you can deny its applicability just because you don’t understand it.

      • Nick is correct. The atmosphere cools.

        It can’t be any other way. If the lack of an atmosphere made it colder, how would an atmosphere form?

        You need kinetic energy to create the atmospheric pressure, which counters gravitational pressure.

        The surface would be hotter without an atmosphere, all other things (solar, geothermal) being equal.

        • Nick is wrong, as are you.

          As long as the temperature is warm enough for an atmosphere to not liquify or freeze, there will be an atmosphere.

          As long as a molecule is above absolute zero, it will have kinetic energy.

        • Nick, Zoe,

          All things being equal, if the Earth had 1 bar of an N2 only atmosphere, no water and no other GHG’s, the albedo would be the same 0.1 as the Moon and the average surface temperature would be slightly below freezing. In fact, if the Earth had a 10 bar N2 only atmosphere or no atmosphere at all, the average surface temperature would be the same. If you think otherwise, starting from the temperature required to match the forcing received by the Sun with no atmosphere at all, what law of physics can be applied to quantify the surface temperature as a function of this starting temperature and the atmospheric pressure at the surface?

          A gravity-induced lapse rate starts at the temperature of whatever surface is in DIRECT thermal equilibrium with its energy source (the Sun). With an atmosphere that’s transparent to both the incident solar forcing and the outgoing LWIR emissions, that surface is the surface of the planet below, and the temperature decreases with decreasing atmospheric pressure at increasing altitudes. I suspect that what’s confusing you is Venus, and the reason is that you don’t recognize that the surface in DIRECT equilibrium with the Sun is the cloud tops, not the solid surface below, and the temperature increases with increasing atmospheric pressure as you go towards the surface. Note as well that, unlike Earth clouds, Venusian clouds are uncoupled from the solid surface below, while Earth clouds are tightly coupled via the hydro cycle.

          The fact that the actual atmosphere is not completely transparent to LWIR emitted by the surface owing to both GHG’s and clouds can be thought of as analogous to back pressure in the path between the surface and space requiring those surface emissions to be larger than the incident energy in order for balance at TOA to be achieved.

          Once more I see the conflation of the effects of energy transported by photons and energy transported by matter. I understand why, since Trenberth made the same error decades ago, where he conflates radiant energy and non-radiant energy like latent heat or thermals, and it’s never been corrected. The most significant differentiator is that only radiant energy can leave/enter the planet to/from space.

          • Do you have any evidence that the molecules that make up a blanket are good CO2 absorbers? Beyond that, anything that is above absolute zero will emit photons, with the frequency of the photons being determined by the temperature of those molecules.

            This is not the same thing as what CO2 molecules do.

            Beyond that, as any fool should know, blankets work by trapping warm air next to the body. Anyone who wants to use a blanket as an analogy for what CO2 does has demonstrated a complete ignorance of both CO2 and blankets.

          • Zoe,

            Your analogy is misleading. While a blanket will absorb the LWIR emitted by a body and warm itself, about half of the energy warming the blanket is re-radiated from the top of the blanket. Most of the warming by a blanket is accomplished by trapping air warmed by conduction with the heat source wrapped up within the blanket (your body).

            Conduction and convection have absolutely nothing to do with the planet’s radiant balance. This is why it’s called a RADIANT balance. It only involves a balance in the exchange of photons between the planet and space. The GHG effect is only relevant to the photons involved in that exchange, and only at very specific wavelengths. A similar effect happens with clouds, except that photons of all wavelengths are affected. Note that, just like GHG’s, clouds only return about half of what they absorb to the surface, while the remaining half escapes to space.

            The top-level math couldn’t be any simpler. The surface at 288 K emits about 390 W/m^2 of photons. About 300 W/m^2 of these are absorbed by clouds and GHG’s, leaving 90 W/m^2 of photons to pass through and escape into space. 240 W/m^2 of photons arrives from the Sun (after reflection), and to offset that, 150 W/m^2 of photons must come from the atmosphere and be added to the 90 W/m^2 passing through. This is half of the 300 W/m^2 absorbed. The remaining 150 W/m^2 is added to the 240 W/m^2 of solar forcing to offset the 390 W/m^2 of photons emitted by the surface, and all the photons are in perfect balance.
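
            A minimal tally of the round numbers above, just to show how they close (the figures are the comment’s, not measured data):

                surface_emission = 390.0     # W/m^2 emitted by the surface at 288 K
                absorbed = 300.0             # W/m^2 absorbed by clouds and GHGs
                window = surface_emission - absorbed          # 90 W/m^2 escaping directly
                solar_in = 240.0             # W/m^2 of post-albedo solar input

                to_space = window + absorbed / 2              # 90 + 150 = 240, matches solar_in
                back_to_surface = absorbed / 2                # 150 W/m^2 returned to the surface
                surface_budget = solar_in + back_to_surface   # 240 + 150 = 390, matches emission

                assert to_space == solar_in and surface_budget == surface_emission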

            We can take this first-order, top-level model and extend it to account for as many second-order effects as you can name, for example latent heat and convection, and the basic characteristics of the radiant fluxes stay the same, leading to a deterministic ECS that’s less than the IPCC’s lower bound.

            Also, the Moon is on average colder than Earth because it has no GHG’s or clouds, not because it doesn’t have an atmosphere. Although, given its 672-hour day, its night-time lows will be colder than Earth’s and its daytime highs will be hotter than Earth’s, whether or not it has an atmosphere.

          • “150 ‘must’ come from the atmosphere”

            lol, no.

            ~330 comes from geothermal
            ~165 comes from solar

            398 goes into surface upwelling IR, and ~107 goes into sensible and latent heat.

          • Zoe,

            Do you live inside an active volcano? I can’t figure out where your misconception is coming from. Believe it or not, I’m trying to help you understand the actual science, you just need to be open to accepting the ground truth. This is the last time I’ll try to help you understand where you went wrong.

            You must embrace the concept of falsification, which is trivially accomplished for your geothermal hypothesis (as well as for the IPCC’s claimed ECS). The IPCC doesn’t accept falsification as relevant to climate science, and this contributes to why they’ve been so wrong for so long.

            Based on the temperature gradient of the thermocline (believe it or not, water at a sufficient thickness is an insulator), the NET energy transfer from the deep ocean is on the order of 1 W/m^2 and consistent with the accepted average of the global geothermal input to the climate from the bottom of the oceans which is significantly larger than at the surface. You can also apply this test to the temperature gradient at the surface.

            I suggest you investigate Fourier’s Law of heat conduction and everything will become clear. For 330 W/m^2 to come from geothermal, the temperature gradient in the rock near the surface would need to be more than 80 degrees per meter (assuming a thermal conductivity of about 4 W per meter per degree), requiring a core temperature of many millions of degrees. The actual near-surface gradient is closer to 0.025 degrees per meter.
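
            A one-line check of the Fourier’s-law numbers above (q = k * dT/dz), using the comment’s assumed conductivity of 4 W per meter per degree:

                k = 4.0                  # assumed rock thermal conductivity, W/(m*K)
                q_claimed = 330.0        # W/m^2, the geothermal flux being disputed
                q_accepted = 0.091       # W/m^2, the commonly quoted average geothermal flux

                print(q_claimed / k)     # required gradient, ~82 degrees per meter of depth
                print(q_accepted / k)    # ~0.023 degrees per meter, near the observed ~0.025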

            If as you say, 330 W/m^2 is geothermal, we wouldn’t even need a Sun to keep the planets surface warm and winter temperatures at the poles in the dead of winter would be well above freezing despite the absence of incoming solar energy.

            Sorry to burst your bubble, but I’ve presented 4 different tests that falsify your hypothesis. The scientific method demands that you modify your hypothesis when even just one test fails. This is how science is supposed to work. While the method hasn’t been relevant to climate science since the inception of the IPCC, it doesn’t mean that you should follow suit.

          • So by definition, you are never wrong, it’s just that nobody else is smart enough to understand you. I’ll keep that in mind.

          • Zoe,

            I understand your hypothesis, and since it leads to your claim that 330 W/m^2 of geothermal energy contributes to the radiant balance, it’s prima facie preposterous and I don’t need to read your paper to know that. I know I said I wasn’t going to help you anymore, but I was curious as to why you’re so convinced of something so absurd, so I decided to read your paper to see what you did wrong.

            Your error is that you misunderstand Fourier’s law of conduction. You can’t just plug in the geometry of your concrete block to determine the temperature at the other end. In the heat transfer equation, the heat flux is not a free variable; the equation assumes there’s another heat source to maintain 50 C on the other side, and only then will you achieve the calculated heat flux. In fact, the temperature on the other side of the concrete block can be arbitrary, and an appropriate heat flux will emerge. The salient point is that both sides of the insulating/conducting block must be forced to a temperature. Note that insulators and conductors only differ by their k values.

            If your concrete block were in a vacuum, given enough time, the entire block would become 75 C, and the input power would be far more than SB requires to maintain .8 m^2 at 75 C, since the block is radiating over a lot more area and that radiated energy will need to be replaced.

            Now, what was that you were saying that only smart people understand what you’re doing? Well, it doesn’t look like you know what you’re doing.

          • “the entire block will become 75C”

            So here you agree that
            CHF will become 0 W/m^2
            And CSR = e*s*(273.15+75)^4

            You must have somehow missed that.

            So when you are told that geothermal heat flux is 91 mW/m^2, you are given a differential measure which has nothing to do with what can emerge at the top. This number is not enough information to know what can emerge at the top.

            Now go on to the next link.

          • “The salient point is that both sides of the insulating/conducting block must be forced to a temperature.”

            The force is strictly from one side.

            Please don’t tell me you’re one of those idiots who believes that boiling a pot of water will result in the water temperature at the top being whatever the air and walls of a room give it.

            Remember, the bottom heat supply is constant, not a one-shot deal.

            The sun also has a small internal flux …

            You see, conduction is limited by an inverse length component, and radiation is not. You therefore can’t equate the two.

            Molecules can walk (conduct) and chew gum (radiate) at the same time, and equating the rates of the two is just DUMB.

            See the videos in my link.

            The stated geothermal flux of about 91 mW/m^2 is an absolute average flux density, not a differential flux density, and represents the rate at which the core is cooling through the surface. The only differential involved is between the temperature of the core and the temperature of the surface. If there were no Sun, the temperature of the surface would be the temperature of space, close to 0 K, but it would converge to about 36 K, corresponding to the geothermal flux of about 100 mW/m^2. Owing to the T^4 relationship between W/m^2 of emissions and temperature, the 100 mW/m^2 contributes less than 0.02 K to a surface temperature of 288 K in response to the Sun.
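
            A minimal check of the two figures above, treating the surface as a black body (with the lower effective emissivity used elsewhere in this thread the perturbation comes out somewhat larger, but still hundredths of a degree):

                SIGMA = 5.67e-8                          # W/m^2/K^4
                F_geo = 0.1                              # W/m^2, rounded geothermal flux

                T_geo_only = (F_geo / SIGMA) ** 0.25     # ~36 K if geothermal were the only input

                T = 288.0                                # present average surface temperature, K
                dT = F_geo / (4 * SIGMA * T**3)          # ~0.018 K perturbation at 288 K
                print(T_geo_only, dT)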

            Your pot of water analogy doesn’t apply, since convection is also involved, and your one-sided force is non-physical. I don’t know why you think I’m confusing radiation and conduction, because I’m definitely not. It’s just W/m^2 entering the surface from below by conduction and W/m^2 entering the surface by radiation from above. Joules are Joules no matter their origin, and W/m^2 is just the rate density at which Joules are arriving.

            Consider an insulated wall in your house. The inside temperature is whatever you set the thermostat to. Given the inside temperature, the k factor for the wall and its geometry, can you calculate the temperature on the outside of the wall? According to your analysis, you can, but the fact is that the temperature on the outside of the wall is whatever the outside temperature is and the heat loss (or gain) as a flux through the wall depends on the sign and magnitude of the difference in temperature between the inside and outside.

          • “the outside of the wall is whatever the outside temperature is”

            No! Because hot can warm cold. Cold isn’t king, it can be raised by hot.

            https://www.popsci.com/resizer/g6_4tBy0A4fJs49cUVAifb0oWBE=/760×506/arc-anglerfish-arc2-prod-bonnier.s3.amazonaws.com/public/HLSF4OACNUVCCGMILJX5XOQZSY.jpg

            Does it look like the outside wall of the house is simply set by the colder air outside?

            P.S. Why would you replace my boiling water on a stove example with a radiator in a house example? It’s not that different, but the scale makes things purposefully difficult to examine.

            The radiator is small compared to the house, but the stove is more comparable to the pot. The point is the pot of water becomes uniform temperature, and not CSR = CHF, but CHF = 0, CSR = Big.

            Mentioning convection is pointless because it could only steepen the gradient. Yet this doesn’t happen anyway with a pot of water. Convection is a theoretical disadvantage to my argument, yet it’s overcome anyway, and CHF goes to ZERO.

          • Zoe,

            Your GIGO visualization is based on your incorrect assumption about what Fourier’s Law quantifies. It quantifies the flux through matter between points at arbitrarily different temperatures, presuming the source of those temperatures is independent of the matter itself. It does not quantify the temperature across a distribution of matter, or the flux passing through it, when a small section of it is forced to a temperature or subjected to a specific rate of incident energy. There’s no valid way to calculate a conduction flux based on the geometry and a single temperature and then solve for the other temperature.

            Forcing a temperature and forcing an energy density as the energy input are not equivalent based on the SB Law, and you got this wrong too. If E is the SB-equivalent power emitted at T and you want to maintain a small portion of that matter at T, the incident power density required will be x*E, where x is the ratio between the surface area of the matter and the surface area receiving the energy. If you only supply E, then the steady state T will necessarily be lower by a factor of x^-0.25.
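
            A sketch of the area-ratio argument above: a body radiating from area A while receiving power density E over a smaller area a reaches a steady-state temperature lower by x^-0.25, where x = A/a (arbitrary illustrative numbers):

                SIGMA = 5.67e-8

                def steady_T(incident_wm2, receive_area, emit_area):
                    """Energy balance: incident * receive_area = SIGMA * T^4 * emit_area."""
                    return (incident_wm2 * receive_area / (SIGMA * emit_area)) ** 0.25

                T0 = 350.0
                E = SIGMA * T0**4        # SB-equivalent flux at T0
                x = 10.0                 # emitting area / receiving area

                print(steady_T(x * E, 1.0, x))               # supplying x*E maintains T0
                print(steady_T(E, 1.0, x), T0 * x**-0.25)    # supplying only E gives T0 * x^-0.25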

            Your 75 C in, 50 C out concrete block example is non-physical and cannot be reproduced by experiment, much like the feedback model used by climate science. As with any other non-physical, unverifiable model, once you’ve committed to it, all that follows is junk.

            I don’t know where you got a radiator from the house wall example. How the house is heated is irrelevant to the argument.

            I didn’t ignore your refutation argument, I dismissed it because it doesn’t refute what you said it does, due to your incorrect assumption.

            You have so much wrong, it’s useless to continue, especially since it’s abundantly clear that you can’t accept that you made an error. It’s counter productive to pollute the skeptics case with arguments that are even more fanciful than those of the IPCC, so I suggest you reassess your position.

    • The alarmists think the COVID-19 epidemic presents an opportunity for them to advance to socialism and Green New Deals.
      They couldn’t be more wrong of course.

      The failure of the COVID-19 mortality model predictions from early March, compared to the reality we see now, at the cost of a harsh economic downturn that will now turn into another Great Depression, will forever sour the public on accepting model junk again.

      And the climate scam of alarmism and demands that people cede their remaining prosperity rests squarely on the public’s acceptance of GCMs. “Fool me once, shame on you. Fool me twice, shame on me,” will be the reply to the climate scammers touting their model junk outputs.

      The climate scam is dead. And the Left has yet to realize it. For they will stay in denial of its death and will make ever louder proclamations of dooms, while the pissed-off public will give them the finger at the ballot box.

  10. Readers ==> It is an interesting exercise, for those locked down in their homes with time on their hands, to read this as a series.

    Begin with Wallace Manheimer’s original piece in Physics & Society: October 2019 Climate change: On media perceptions and misperceptions.

    Follow with Seaver Wang and Zeke Hausfather’s response: “Climate Change: Robust Evidence of Causes and Impacts”.

    Then go on to Manheimer’s response to W&H: “Media, Politics, and Climate Change, a Response to Wang and Hausfather”.

    The current post here could be considered a follow-up on the series.

  11. “A great deal of the recommendation that the world should modify its energy infrastructure to combat climate change, costing tens to hundreds of trillions of dollars, is based on computer simulations.”

    actually not.

    • Hi Steven,

      If it’s not based on GCM-projected temperatures nor on paleo reconstructions, what is it based on?

      • It’s based on a bunch of activists not having a source of income if the scam is ever over turned.

    • There is no real-world evidence that we need to do anything. All, and I repeat ALL, of the scare stories that drive this nonsense come from computer models.

    • Steven,

      What do you think it’s based on? It’s definitely not the laws of physics or the data, so if it’s also not the result of simulating nonsense, then all that’s left is fear driven ignorance reinforcing political goals, which I will also accept as the cause of this insanity.

      • It’s based on averaging intensive properties from disparate locations and calling it “global”. And it’s based on correlation = causation.

      • I would venture to guess they can only be based on unicorn flatulence and gobs of wishful thinking.
        Even the instrumental record of global temp rise of 1910-1940 is based on magical fairy dust, because Hausfather and his ilk steadfastly avoid discussing that “inconvenient problem” of an instrumentally recorded warming before CO2 level rises could be invoked to explain it.

    • Steve is actually just qualifying the “A great deal of …”

      Like the rest of us Steve knows that All of the recommendation … is based on climate models.

  12. A long-standing principle of debate is that the more assumptions you have to make to prove your argument, the weaker is your argument.

    For purposes of argument, let us replace the hundreds of major, mini, and micro assumptions made inside today’s climate models with just two very comprehensive ‘maxi’ assumptions:

    (1) The 1850-2019 HADCRUT4 Global Mean Temperature record includes the combined effects of all natural and anthropogenic climate change processes as these have evolved through time over the past one-hundred seventy years. Similar processes will operate from 2020 through 2100.

    (2) The pattern in global mean temperature change more likely than not to occur between 2020 and 2100 is that which most closely resembles the pattern that occurred between 1850 and 2019.

    The illustration below, dated April 15th 2020, contains a single-page graphical analysis for predicting where global mean temperature, as measured by HADCRUT4, will likely end up by the year 2100.

    Beta Blocker’s Year 2100 GMT Prediction Envelope

    The graphical analysis is divided into four Year 2100 GMT prediction scenarios:

    + 2.7 Scenario: The 1975-2019 trend line of +0.19 C/decade continues uninterrupted from 2020 to 2100. (Less likely to occur.)

    + 2.0 Scenario: The past pattern of GMT trends which occurred between 1850 and 2019 remains fully operative. (More likely to occur.)

    + 1.3 Scenario: The linearized long term trend of +0.05 C/decade for the period of 1850-2019 dominates through 2100. (Less likely to occur.)

    + 0.6 Scenario: A moderate GMT cooling trend of -0.06 C/decade starting in 2020 continues past 2050 and dominates through the year 2100 and beyond. (Less likely to occur.)

    The ‘more likely’ GMT pathway between 2020 and 2100 is the one most consistent with the historical pattern of warming, then cooling, then warming that occurred in an ever-upward stepwise progression between 1850 and 2019. In other words, it is the pattern which most closely resembles the HADCRUT4 pattern between 1850 and 2019.

    As a lukewarmer, my bet is on the + 2.0 Scenario. However, other outcomes certainly remain possible, even if I judge them to be less likely.

    Here is the bottom line as it concerns the value of today’s mainstream climate models.

    For purposes of public policy decision making, do the computerized climate models with their vast complexity and their many physical assumptions, large and small, have any more useful predictive power than does my very simple single-page graphical analysis?

    • Q: Do the computerized climate models have any more useful predictive power than a simple model?
      A: No, but that’s not their true purpose. Only the fool media reporter thinks they are useful predictive tools.

      What they provide are well-paying jobs for large clusters of scientists and computer engineers to produce something that looks like science to the un-trained/un-informed, while delivering politically useful results for globalists and the doomster industry to frighten the masses for purposes of political control. If those large clusters of scientists and computer engineers actually started producing projections that matched reality, then (1) the political handlers would be displeased and they’d lose their funding and thus their jobs. Then (2) the funding money would go to a group willing to produce junk output that satisfied the political interests that control the funding.

    • You want to calculate uncertainty? Look at a graph that illustrates the numbers you just elucidated. These GCM’s are supposed to be accurate and based on physics. You want to validate Pat Frank’s paper, compute the uncertainty between 0.6 and 2.7.

      • Jim Gorman: “You want to calculate uncertainty? Look at a graph that illustrates the numbers you just elucidated. These GCM’s are supposed to be accurate and based on physics. You want to validate Pat Frank’s paper, compute the uncertainty between 0.6 and 2.7.”

        My graphical analysis is all graphical. It includes no information and no conclusions taken from any GCM. The only mathematics used are the obvious ones which are clearly illustrated on, or can be readily inferred from, the graphic.

        The entire analysis is based on the readily obvious warming and cooling trends which occurred within the HADCRUT4 Global Mean Temperature record from 1850 through 2019. The entire analytical process is contained, and is readily transparent, from simply looking closely at my one page illustration. See again:

        Beta Blocker’s Year 2100 GMT Prediction Envelope

        The basis for my conclusions rests on two major assumptions:

        (1) The 1850-2019 HADCRUT4 Global Mean Temperature record includes the combined effects of all natural and anthropogenic climate change processes as these have evolved through time over the past one-hundred seventy years. Similar processes will operate from 2020 through 2100.

        (2) The pattern in global mean temperature change more likely than not to occur between 2020 and 2100 is that which most closely resembles the pattern that occurred between 1850 and 2019.

        All of the relevant physics pertinent to a change in global mean temperature are contained within my first assumption: “The 1850-2019 HADCRUT4 Global Mean Temperature record includes the combined effects of all natural and anthropogenic climate change processes as these have evolved through time over the past one-hundred seventy years. Similar processes will operate from 2020 through 2100.”

        Because we don’t know with any useful certainty how all these many physical processes work together to produce a rise or a fall in global mean temperature, no formal mathematical calculations of uncertainty are made in my analysis. Such calculations simply wouldn’t have any traction with the actual real-world physical processes and their many interactions, whatever these might be.

        And because we don’t have a reasonably good understanding of what is actually happening inside the world’s climate system, to formally generate mathematical uncertainties for my GMT prediction envelope would serve no useful purpose.

        The best that can be done on that score is to use my second assumption: “The pattern in global mean temperature change more likely than not to occur between 2020 and 2100 is that which most closely resembles the pattern that occurred between 1850 and 2019.”

        The visual demarcation of the four scenarios into ‘more likely’ versus ‘less likely’ plausibility regions does the job just as well as any formalized mathematical approach might. In this case, where so much of the physics of climate science is so uncertain, a rigorously formalized mathematical uncertainty analysis would only serve to further cloud what is readily obvious visually.

        OK, I have listed my own simple criteria for what I think is ‘more likely’ to occur as opposed to what I think is ‘less likely’ to occur. That said, anyone who has followed the long debate over the validity and the usefulness of today’s mainstream climate science, including the accuracy of the GCM’s and their Year 2100 temperature projections, can generate long lists of arguments for each of my four scenarios, even for those they think are highly unlikely.

        Moreover, anyone who wants to offer a list of scientific arguments as to why one of the four scenarios is more likely to occur than the other three is welcome — and even encouraged, I dare say — to offer those arguments.

        • “The entire analysis is based on the readily obvious warming and cooling trends which occurred within the HADCRUT4 Global Mean Temperature record from 1850 through 2019.”:

          How do you determine what is actually happening from the mean?

          You talk about warming and cooling trends. The mean, however, can go down while maximum temps are going up, minimum temps just have to go down more. So is that a “cooling” trend or a “warming trend”?

          Similarly the mean can go up while maximum temps are going down and minimum temps are going up. Is that a warming trend or a cooling trend?

          The “mean” represents a loss of data with relation to the actual temperature envelope. Why do people focus on the mean when it is meaningless? It is the maximum and minimum of the temperature envelope where most environmental impacts occur, not at the mean.

          “Moreover, anyone who wants to offer a list of scientific arguments as to why one of the four scenarios is more likely to occur than the other three is welcome — and even encouraged, I dare say — to offer those arguments.”

          If you don’t know if maximum and minimum temps are going down or up then how do you determine which of the scenarios is more likely?

          • Tim Gorman / Jim Gorman — Tim or Jim, whatever name you go by — I have a question for you. Has the world actually warmed since 1850; and if it has, by how much has it warmed?

          • Beta,

            two different people.

            Trying to measure whether the world is warming or cooling by using some kind of composite global “average” temperature is a losing proposition. Every time you take an average you lose the data that would actually tell you what is happening.

            I am a proponent of using cooling degree-days to determine if maximum temperatures are going up and heating degree-days to determine if minimum temperatures are going down.
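
            A minimal sketch of the degree-day bookkeeping being proposed, assuming daily max/min readings and the conventional 65 F base (both assumptions; the comparison works the same way with any base):

                def degree_days(daily_max, daily_min, base=65.0):
                    """Return (cooling_dd, heating_dd) totals from daily max/min temps in deg F."""
                    cdd = hdd = 0.0
                    for tmax, tmin in zip(daily_max, daily_min):
                        mean = (tmax + tmin) / 2.0     # simple daily mean
                        if mean > base:
                            cdd += mean - base         # cooling degree-days track warm days
                        else:
                            hdd += base - mean         # heating degree-days track cold days
                    return cdd, hdd

                # Comparing annual totals at one station across years is the trend check
                # described above; a falling CDD total points to cooler summer maxima.
                print(degree_days([80, 90, 60], [60, 70, 40]))    # (20.0, 15.0)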

            If you take a sample of various locations around the globe, the number of cooling degree-days is mostly going down over the past three years – meaning that it is quite likely that maximum temperatures are going down. That’s true in places as widely varied as the US, Brazil, Siberia, and Europe. The opposite is true for heating degree-days.

            Of course there are areas where the opposite is true. Places like India and central Africa.

            To me that means that climate, which is largely dependent on maximum and minimum temps and not on the “average” temp, is very regional in nature. It is not a global thing. The climate alarmists focus on trying to scare everyone on the globe that maximum temps are going to kill them all, when it is most likely a very regional phenomenon. And their solution of taking the entire globe back to the early 20th century, when electricity was not a dependable thing, in order to fix a regional problem is a typical government approach where one size fits all. The climate alarmists would be far more useful if they and their models could explain why different regions are encountering different temperature profiles, and could offer solutions tailored toward specific regions.

            Consider: the US has some of the highest concentrations of CO2 on the globe, yet most of the US appears to be cooling. If CO2 concentrations were actually the controlling factor for maximum temperatures, then how could this be happening? Central Africa has some of the lowest CO2 concentrations, yet that region seems to be the opposite of the US. Again, if CO2 concentrations are the controlling factor, then why is that happening? How would taking the US back to the mid-20th century fix what central Africa is seeing?

            As for your question, the global average saw a step-up around 1998, creating a new baseline for the global average. I have seen *no one* explain how the global average temperature can have a step function associated with it. Either some natural event occurred or the global average calculations, based either on the surface record or the satellite record, somehow changed. There simply was not some kind of huge jump in global CO2 emissions in one year. I have never seen anyone offer up a natural event that would correspond to a step function in the temperature profile. Have you?

            go here: https://www.nsstc.uah.edu/climate/2020/march/202003_bar.png

            You can graphically see the step function in 1998 quite distinctly. How would *you* explain it?

          • Re Tim Gorman’s comment: https://wattsupwiththat.com/2020/04/27/some-dilemmas-of-climate-simulations/#comment-2981249

            Tim Gorman, regardless of what we think of the true value of GMT as a tool for predicting the environmental and human impacts of a warming world, the fact is that although the data is noisy, long-term patterns lasting three or more decades are readily evident in the global mean temperature record.

            I use HADCRUT4 data as plotted by the woodfortrees.org interactive app because it is a commonly used reference index for global warming, and also because the Hadley Centre maintains the Central England Temperature record.

            My view is that when it comes to predicting where global mean temperature will go in the next eighty years, it doesn’t really matter which measurement techniques and data collection methods are being used. It doesn’t even matter if the GMT record is being consciously manipulated for purposes of promoting AGW alarmism.

            The world has been warming for the past one-hundred seventy years. A rough estimate would be a one degree centigrade rise since 1850, more or less. The odds are that the world will continue to warm, with pauses here and there along the way, and with the expected differences in regional warming rates and extents that will occur within that general long-term warming trend.

            As long as this general warming trend continues, regardless of how small or large it might be, and regardless if pauses occur here and there along the way, mainstream climate scientists will continue to defend the accuracy of their temperature prediction models.

            Those in the ruling class responsible for making public policy decisions will either continue to rely on those models in formulating their energy and environmental policies, or they won’t.

            IMHO, only when serious sacrifice starts being demanded of the average Joe and Jane voter on Main Street, only then will the debate over climate science and climate modeling accuracy go critical mass, including the lower level details of how global warming trends are being measured, recorded, and analyzed.

          • Beta,

            You simply couldn’t address any of the issues I brought up, could you.

            “The world has been warming for the past one-hundred seventy years. A rough estimate would be a one degree centigrade rise since 1850, more or less. The odds are that the world will continue to warm, with pauses here and there along the way, and with the expected differences in regional warming rates and extents that will occur within that general long-term warming trend.”

            1. WHAT HAS BEEN WARMING? Minimum temps? Maximum temps? The answer is required in order to actually address the environmental impact of the “average” going up.

            2. WHERE HAS IT BEEN WARMING? If there are expected differences in regional warming rates and extents then how can you say “THE WORLD IS WARMING”? In actuality PART OF THE WORLD IS WARMING.

            3. If you can’t tell from the average what is happening with the global temperature envelope then how can you even state the globe is warming? It’s physically impossible to tell!

            4. If the world is warming then why is the cooling degree-day data over the past three years showing a downward trend over so much of the globe? If you can’t answer this then you actually don’t know if the world is warming or not. Neither do the climate scientists.

  13. Steven is correct on this.
    Things used to push the Climate Crisis:
    – the U. N. one world government,
    – the no-meat crowd,
    – glaciers that break into the oceans,
    – socialists,
    – polar outbreaks,

    – those seeking riches (think Al Gore),
    – tornadoes,
    – a heat wave,
    – Maurice Strong,
    – John Kerry,
    – droughts,
    – floods,
    add you own

    • John
      You confuse the organizations and individuals promoting a climate crisis with the ‘facts’ they present to convince those who are not scientifically knowledgeable that there is an existential threat. Many of your ‘facts’ are in turn supported by models, such as melting glaciers, heat waves, droughts, and other extreme meteorological phenomena.

  14. Thank you for a very relevant and thoughtful post. When I teach about the science process and how it often goes wrong, I describe models as being essentially a numerical representation of a hypothesis. They represent what a scientist thinks may be the actual behaviour of a process or processes in the real world. They are not evidence of how natural processes work. The next step should always be to try to validate (or invalidate) the model. If experiment (or, in the case of climate models, observation over time) finds the natural world responding as the models predict, then the hypothesis remains viable (but not proven). In that case one would still want to examine the plethora of models using different assumptions and wildly different tunings and wonder how they reach similar predictions.

    If the real world behaves differently than models predict, as is the case almost uniformly with climate models excepting the single Russian example, then it is not the real world that is wrong and whose data needs changing, it is the models. Yet the surest sign that “climate science”, as practiced by the most popular personalities, is not actually science is that they respond to failing models by trying to game the observational data, serially adjusting temperature records, creating demonstrably unsound temperature reconstructions, redefining their model predictions after the fact and engaging in a childish, immoral campaign of denigration of anyone with the integrity to point out the failings of main stream climate science.

    In my field the same modelling issues apply to the COVID-19 predictions that are massively overestimating both the spread and mortality of the virus. Early observations would invalidate some important assumptions that went into the models.

  15. “But how credible would that be in the light of Christy’s viewgraph?”

    Yes, it is a viewgraph. And that is all. It is not from a peer reviewed paper. And that means more than just that no scientist has been asked to check the result. It means that the information you would need to check it has never been provided. The “model results” are not published results (for mid-troposphere). Christy worked them out, somehow. The “average of 4 balloon datasets” etc are not otherwise published. Christy worked them out, on some basis. What were those datasets? No-one seems to be bothered to find out. How did he do the average? How did he turn individual balloon flights into a representative global average? No-one knows. And Christy has never published his methods. So we don’t have any basis for knowing that the claimed model results and the claimed observations are comparable.

    • Hi Nick,

      I would think that someone with your modeling skills could easily “scrape” the public sources of radio sonde temperatures and satellite data to produce and publish a “viewgraph” that corroborates the GCM projections. Or maybe it’s already out there and I just missed it. Grateful if you can provide a link. Thanks!

      • It isn’t just a matter of “scraping” radiosonde data. That would give you a bunch of flights at various times on various days, at various locations. None of those are evenly distributed in space or time. How do you turn all that into a global average, over time, for a specific region, the mid-troposphere? How do you decide that it is the same region as whatever he has extracted from the model results?

        I do a lot of that for surface temperature data. At least there the region is clearly known, and the observations are frequent over time. I describe in great detail the methods I use for spatially integrating the results. With Christy’s mid-troposphere, there is nothing.
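For readers wondering what “spatially integrating” scattered observations can involve, here is a deliberately simple, hypothetical sketch in Python: bin anomalies into latitude/longitude cells, average within each cell, then weight each cell by the cosine of its latitude. This is only a baseline illustration under my own assumptions, not the method referenced above.

```python
import math
from collections import defaultdict

# Hypothetical baseline for turning scattered temperature anomalies into a
# global mean: bin observations into 5-degree latitude/longitude cells,
# average within each cell, then weight each cell by cos(latitude) so that
# small high-latitude cells do not count as much as large tropical ones.
# (Empty cells and stations exactly at a pole are ignored, for brevity.)
def global_mean_anomaly(obs, cell_deg=5.0):
    """obs: iterable of (lat, lon, anomaly) tuples; returns area-weighted mean."""
    cells = defaultdict(list)
    for lat, lon, anom in obs:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key].append(anom)

    weighted_sum = 0.0
    weight_total = 0.0
    for (lat_index, _), anoms in cells.items():
        cell_centre_lat = (lat_index + 0.5) * cell_deg   # cell-centre latitude
        weight = math.cos(math.radians(cell_centre_lat)) # area weight
        weighted_sum += weight * sum(anoms) / len(anoms)
        weight_total += weight
    return weighted_sum / weight_total

# Toy usage with made-up numbers (the last two stations fall into the same cell):
print(global_mean_anomaly([(10.0, 20.0, 0.4), (60.0, -100.0, 1.1), (62.0, -98.0, 0.9)]))
```

Even this toy version shows where the real difficulty lies: empty cells, uneven sampling in time, and the choice of cell size all affect the answer, which is exactly the kind of detail that has to be documented before two averages can be compared.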

        • Thanks Nick,

          You give the impression that the areal and temporal coverage of the radiosonde data is inadequate for the purpose of checking the GCMs, which is contrary to the fact that meteorologists have been flying these instruments on set schedules and from fixed locations for decades in order to initialize their weather models (among other uses). You also conveniently fail to mention the satellite data, which has the most consistent areal and temporal coverage of any data source. I am aware (from Clive Best’s blog) that you and he have done some good work on how to mathematically model the very problematic areal and temporal coverage of surface thermometer data. However, if you’re saying that there is no way to tie any of these data sources together, there doesn’t seem to be much sense in taking the GCM projections seriously, is there?

          • Frank,
            “You also conveniently fail to mention the satellite data, which has the most consistent areal and temporal coverage of any data source.”
            The satellite data is far more problematic than the surface data.

            “However, if you’re saying that there is no way to tie any of these data sources together, there doesn’t seem to be much sense in taking the GCM projections seriously, is there?”

            Christy has deliberately chosen a region (mid-troposphere) which actually is hard to measure (or even clearly define). It is also one that matters to us far less than the surface where we live, and have good measurements. But of course the difficulty of measuring this region does not detract from GCM results. It’s not their fault.

          • Nick,

            Responding to your comment of 2:35 pm:

            Nice deflection. The satellite adjustments you refer to are hardly “problematic” if they are made publicly and justified by changes in instruments, orbits, etc. By comparison the “adjustments” incorporated into “official” surface records, whether provided by NOAA or BOM, have rendered these data nearly meaningless for climate purposes. You’re the expert here – are the GCMs parameterized to tie back to the adjusted surface record?

          • Frank,
            “are hardly “problematic” if they are made publicly and justified by changes in instruments, orbits, etc.”
            Such large adjustments in the record in going from one version to another are certainly problematic. What will the next version bring?

            You could say the same about surface temperatures, but there, as I show, the effect of adjustments is much smaller.

            “are the GCMs parameterized to tie back to the adjusted surface record?”
            No, they aren’t tied back to the surface record at all.

          • Nick,

            Replying to your comment of 4:57 pm:

            I think applying a warming “adjustment” of about 2.5 degrees (F) to the US surface temperature record from 1919 to 2019 swamps any adjustments to the satellite data. Be that as it may, should I presume that CMIP 6 will be little changed from version 5?

        • Nick, Christy chose the region not on a whim, but because the proposed mechanism for amplified global warming predicts that it will start there. Not at the surface, not at the poles, but in that layer of the troposphere near the equator. The proposed model says this layer will warm and become more humid. We now have 30-40 years of data showing that it hasn’t. What do you do when your hypothesis predicts a measurable phenomenon and you go and measure it but it isn’t there? I would decide that, at best, my mechanism and my model need more work. Nothing like a little data to help decide whether the model has any value. How do you explain that?

          • In response to Nick’s 5:05 pm comment:

            “We revisit such comparisons here using new observational estimates of surface and tropospheric temperature changes. We find that there is no longer a serious discrepancy between modeled and observed trends in the tropics.”

            An earlier version of Santer et al. said there was no such discrepancy. Just kidding, I guess.

            If at first you don’t succeed, try, try again…

    • Nick ==> Honestly, your comment stinks of a plain old ad hominem attack on Christy, a well-respected, award-winning scientist with a long-term contract with NASA to produce the satellite-based atmospheric temperature data set, whose data and opinions you brush off as if they had been produced by some silly post-doc at an unknown mid-western college.

      This is the kind of disrespect for colleagues that gives Climate Science that rancid, old-fish smell.

    • Stokes
      You dismiss Christy’s work because it has not gone through peer review and been published. Similarly, your complaints have not been published in a peer-reviewed journal. Should we dismiss your remarks then? Or, should remarks be given some tentative acceptance based on who makes them and the logic displayed? Or, perhaps, consider this forum to be a non-traditional peer-review exercise, and give serious consideration to those who provide citations to support their position, or at least provide their real names?

      • “You dismiss Christy’s work because it has not gone through peer review”
        As I said, it is discounted not from the formal lack of peer review, but because it lacks the supporting information that would make review possible, by the sages of this forum (who don’t seem bothered about that) or anyone else.

        • What are you talking about? Climate science peer review became pal review decades ago, upon the inception of the IPCC.

          The Schlesinger paper that ‘fixed’ Hansen’s 1984 feedback paper, which together provided AR1 with the theoretical foundation for the idea that feedback amplifies an insignificant effect into a massive temperature change, was rushed into publication in an obscure journal just in time for AR1 and never subjected to adequate peer review; specifically, the only ‘reviewer’ who claimed to be an expert in feedback systems was Schlesinger himself. I’m an accomplished expert in feedback control systems, and when I reviewed Schlesinger’s paper over a decade ago, I identified several fatal flaws in his analysis.

          These flaws are, first, that approximate linearity around the mean is insufficient to satisfy Bode’s precondition of strict linearity, and second, that assuming the average response not accounted for by the incremental analysis acts as the power supply is insufficient to satisfy Bode’s precondition of an implicit source of Joules powering the gain. Two wrongs that reinforce each other don’t make a right, even if the climate alarmist religion would collapse without the errors.

          To be clear, strict linearity means that if 1 unit of input produces 2 units of output, then 100 units of input will produce 200 units of output. This is definitely not true for the relationship between W/m^2 and temperature, incrementally or otherwise.

          An example of a missing power supply analogous to how feedback was applied to the climate is to connect both the power cord and audio input of an amplifier to the signal source.

          The bottom line is that the only thing that can be remotely considered as feedback like is the 0.6 W/m^2 per W/m^2 of forcing returned to the surface by the atmosphere to offset the emissions in excess of the forcing. The ‘consensus’ claim of linear temperature feedback equivalent to 3.3 W/m^2 per W/m^2 of forcing is complete garbage with absolutely no correspondence to the ground truth.
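A minimal numerical illustration of that strict-linearity point, using the Stefan-Boltzmann law for an ideal blackbody (my own sketch, not taken from any of the papers discussed above): doubling the emitted flux does not double the temperature, even though the incremental slope near a single operating point looks roughly linear.

```python
# Stefan-Boltzmann relation between emitted flux and temperature, illustrating
# that the W/m^2 -> temperature mapping is not strictly linear even though it
# is approximately linear for small increments around one operating point.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_from_flux(flux_w_m2):
    """Blackbody temperature (K) required to emit the given flux (W/m^2)."""
    return (flux_w_m2 / SIGMA) ** 0.25

f1 = 240.0  # roughly the Earth's mean emitted flux, W/m^2
t1 = temperature_from_flux(f1)
t2 = temperature_from_flux(2.0 * f1)

print(f"T({f1:.0f} W/m^2) = {t1:.1f} K")
print(f"T({2 * f1:.0f} W/m^2) = {t2:.1f} K  (not 2 x {t1:.1f} K)")

# Local (incremental) sensitivity dT/dF = 1 / (4 * sigma * T^3): a small,
# nearly constant slope over small changes, which is not the same thing as
# strict proportionality over the whole range.
print(f"dT/dF near {t1:.0f} K = {1.0 / (4.0 * SIGMA * t1 ** 3):.3f} K per W/m^2")
```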

  16. I also developed models to simulate real-life systems, in particular the computer code for a coal-fired electrical generating station and also for a nuclear reactor generating system. This computer code provided all the necessary information to drive the meters, lights, and alarms, and to respond to the operator’s actions on switches, dials, etc., to provide a “Simulator” that looked like the control panels in the control room and made the operator think he was in the real control room operating the plant. The models used in these applications are verified, tested to extremes, and in many cases certified by government agencies. However, they only simulate what they model.

    Some here may not be aware of the fact that the Nuclear Power Plant Simulator used by the company that designed TMI-II did NOT include modeling of an unknown leak from the top of the pressurizer. (Think of the pressurizer as a shock absorber that uses a steam bubble to keep the water under the correct pressure.) With a leak at the top of the pressurizer you could lose the steam bubble and be in “Solid Pressurizer” operation, that is, a pressurizer with no steam bubble at the top. Operators were simply mandated to “Never Operate Solid.” Instead of modeling this, they simply “ASSUMED” (*) it would act just like a leak anywhere else in the system. The lack of this one small chunk of code, and the lack of operator training on at-power/accident operation of an NPP in solid conditions, changed the nuclear industry greatly. To my knowledge, no other significant lapses in NPP Simulators have been discovered in the last 40 years.

    The lack of this code was a significant factor in the mindset of the operators, management, and regulators during the accident, and in the actions taken to prevent recurrence of improper actions. The nuclear industry spent hundreds of billions making corrections to prevent an accident that should never have happened had the operators been trained properly. Worse, this accident destroyed the nuclear power industry in the USA. I strongly believe that if this accident had not happened, the USA would be generating over 50% of its electrical power today from nuclear power. Inadequately designed “models” are dangerous.

    * – The common saying about assumptions applies.

  17. Good article showing the inherent difficulty of climate modeling. There’s another factor that makes climate modeling especially difficult: the long times needed to gather data, learn, and revise models. This is very well illustrated by the CMIP-5 comparison exercise. Nearly all of the models have been invalidated, but it took almost a decade to establish that. It will now take some time to make updates, and perhaps another decade’s wait, before we can settle on the models that haven’t been invalidated.

    One small correction: there were several high-altitude nuclear tests done in the late 1950s, culminating in the Starfish Prime nuclear burst in 1962. We developed a model at TRW in the 1980s to predict the enhancement of the Earth’s radiation belts from weapon-injected electrons. While our model wasn’t completely validated, a competing model was invalidated by the Air Force Weapons Lab by comparing it to data taken after Starfish Prime. The short time needed to invalidate the competitor’s model led to the adoption of ours. So why are many of the invalidated climate models still used?

  18. The interesting question is what is different about INMCM4 (in Christy’s congressional chart) from the rest of CMIP5 that makes it ‘more accurate’. I have commented generally on this before: higher ocean thermal inertia, higher CFR (lower speculative positive cloud feedback), and a resulting lower ECS (~1.8, close to the observational energy budget estimates by Lewis and Curry of ~1.7; the basic energy-budget algebra is sketched after this comment). The lower ECS is of course ‘heretical’ because it means you can cancel the climate alarm and furlough all the ‘climate scientists’.

    INM has published several papers on the main improvements and changes for INMCM5. We don’t (well I don’t) yet have any comparisons of it to the other models comprising CMIP6.
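For context, the observational energy-budget approach mentioned in this comment reduces to a single algebraic relation. The sketch below uses that standard form with purely illustrative inputs of my own choosing, not the actual Lewis and Curry values.

```python
def energy_budget_ecs(delta_t, delta_f, delta_n, f_2xco2=3.7):
    """
    Standard energy-budget estimate of equilibrium climate sensitivity:
        ECS ~ F_2xCO2 * dT / (dF - dN)
    delta_t : change in global mean surface temperature between two periods (K)
    delta_f : change in effective radiative forcing (W/m^2)
    delta_n : change in top-of-atmosphere imbalance, i.e. ocean heat uptake (W/m^2)
    f_2xco2 : forcing from a doubling of CO2 (W/m^2), commonly taken near 3.7
    """
    return f_2xco2 * delta_t / (delta_f - delta_n)

# Purely illustrative inputs (NOT the published Lewis & Curry numbers):
print(f"ECS ~ {energy_budget_ecs(delta_t=0.8, delta_f=2.3, delta_n=0.6):.1f} K")
```

With these made-up inputs the formula returns roughly 1.7 K; the published estimates depend entirely on how carefully each input term is constructed.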

  19. S. I. Hayakawa: “The map is not the territory.”

    Simulations of future climate are less than a “map” because the assumptions, the assumed physical relationships, aren’t grounded in observation and measurement, and the future hasn’t happened yet.

    A dodgy map will not predict the future.

  20. I just don’t understand how climate science is called science.
    Science is reproducible. In order for your theory to be proven correct, others have to come behind you, do the same work, and come up with the same conclusion every time. No one can do that here, unless someone has gone into the future to retrieve all the unadjusted climate data from the year 2100.
    I watched the NFL draft this weekend, and Mel Kiper was busily projecting the careers of all the players drafted without their having played a single down in the NFL. That made me think about climate scientists, who are essentially projecting the winners, the losers, and the exact statistics, down to within a few yards catching, throwing, or rushing, of every game through the Super Bowl, after the opening Thursday night games in week 1. If only that were the case, says the bettor in me. Kiper makes some projection errors, but climate scientists are on another level and will insist their predictions are correct.
    Not only that, these climate scientists have basically gotten through a quarter of the season and can’t make a model that accurately reproduces the first four weeks of the season they just finished. Many belong to the same organization that can’t predict the winner of a game at halftime with any accuracy (for instance, our NWS being off by 8 degrees on an afternoon high-temperature prediction made for my area at 11 AM one day last week).
    Suddenly Mel Kiper doesn’t look so bad (except for his hair helmet). Maybe he can make a climate prediction model.

  21. What an informative article! I have been trying to tell my “non-skeptical” friends for years that climate is a chaotic system, and that we do not have a clue as to how many actual variables there are, or their relative importance.

    I further take note of the author’s reference to the remarkably accurate Russian model, of which I had heard only obliquely. He says “It is unlikely Russia has the same worry about climate change that we do.”

    Indeed. I suspect Russia has the same fears of “global warming” that Canada has (none), and for the same reason.

    • In Canada the loons of the left polish the turd of globull warming extra hard, so that all weather is now climate change and each new year’s climate change is extra scary, scarier than last year’s climate change.
      Amongst the zealots there is only belief in the satanic CO2 molecule’s divine power.

  22. The dirty little secret of climate science is that EVEN IF the climate models were 100% accurate and reliable, the impact studies and economic models don’t remotely justify the high-cost, low-impact mitigation strategies promoted by Green activists.

    • Also, if it needs to be subsidized (like forever, not just for startup), it is probably never more energy- or resource-efficient.

  23. Although most know that water vapor is a greenhouse gas, hardly anyone appears to have noticed that water vapor has been increasing by about 1.5% per decade. The WV increase has been more than is possible from temperature increase alone (feedback). A rather simple calculation, using data from HITRAN, which handles the quantum mechanics, shows that, at ground level, the WV increase has been about 10 times more effective at causing ground-level warming than the CO2 increase. The added cooling from increased CO2 in the stratosphere counters the small warming from CO2 at ground level, with the net result that CO2 has no significant effect on climate. https://watervaporandwarming.blogspot.com

  24. As someone who is very familiar with both computers and certain computer models, let me remind all readers that the fact that the Russian Model is tracking close to reality does NOT mean that the reasons it does so are correct. I would at least give the Russian Model much higher credibility and say it is likely closer to modeling the underlying reality than the other models, but that isn’t a fact.

    Reality of a chaotic complex system may or may not ever lend itself to a computer model. I believe the best we can ever hope for is one that is mostly right up to a chaotic change in behavior, and then once adjusted for that change is mostly right again. Whether such changes happen every ten years, 50 years, 200 years, or…wait for it…a seemingly random interval of years is anyone’s guess at this point.

    Until a climate model has predicted something very unusual, and that event occurs within a reasonable margin of error for timing and magnitude, such models are pretty much useless. Predicting that temperature will continue to increase by about the same amount as in the past is NOT a prediction of something unusual. Obviously, all the other models fail completely, having missed their predictions.
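As a concrete illustration of that sensitivity to small changes (my own toy example, nothing to do with any actual climate model), here is the classic Lorenz system: two runs that differ by one part in a billion at the start eventually bear no resemblance to each other.

```python
# Classic Lorenz '63 system, integrated with a simple forward-Euler step
# (adequate for illustration): two trajectories that start almost identically
# separate until they are completely different, which is the practical limit
# on detailed prediction of a chaotic system.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)  # differs from 'a' by one part in a billion

for step in range(1, 4001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}   separation = {separation:.3e}")
```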

  25. Correlation does not mean causation. Non-correlation certainly does not mean causation either, regardless of any model “tuned” to “prove” otherwise. All present and past data show no correlation between CO2 levels and temperature, so there is no case for CO2 causing global “warming” (which is not even proven, if one puts “adjusted” measurement data aside), period. The only correlation with temperature up to now seems to be with solar magnetic field activity. The causation has yet to be proven, but at least the data point in the right direction. If Trump gets reelected, I hope he will organize, as promised, a hearing and bring the debate where it should be: public, with ALL scientific arguments being heard. There is no room for junk or partisan science.

  26. There are three ways that energy, in whatever form, moves from the surface to space.
    1) By direct radiation which involves matters of the Greenhouse Effect and suchlike.
    2) Convection which involves matters of relative densities etc.
    3) Intrinsic buoyancy which involves the physical behaviour associated with phase change.

    IMO the last of these three has basically been ignored or subsumed in the climate models by treating water as a mere ‘also-ran’ feedback influence on the other two mechanisms, which has resulted in a considerable overestimate of the resulting temperature.

    At the phase change of water, some 694 watt-hours of energy per kilogram is converted into latent heat at constant temperature (a quick arithmetic check follows this comment), giving a value of zero to the Planck sensitivity coefficient and further providing intrinsic buoyancy to the resulting vapour.
    The obvious result of this is seen in the structure and behaviour of the clouds, which carry this latent energy up through the atmosphere towards space by this third, largely ignored mechanism, the thermodynamics of which is very different from that of convection.
    Sadly, the mindset within the modelling community appears to have difficulty in dealing with this enthalpy-based area of science, which is dissociated from matters of radiation.

    In conceptual terms the Hydro Cycle operates as a Rankine Cycle. With this in mind a great deal may be explained, not least that water actually provides a strong NEGATIVE reaction to the Greenhouse Effect, contrary to the orthodox opinion.
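As a sanity check on the 694 figure above, on my assumption that it refers to the latent heat of vaporisation per kilogram of water near 0 °C:

$$L_{\mathrm{vap}} \approx 2.50\ \mathrm{MJ\,kg^{-1}} = \frac{2.50\times 10^{6}\ \mathrm{J\,kg^{-1}}}{3600\ \mathrm{s\,h^{-1}}} \approx 694\ \mathrm{Wh\,kg^{-1}}$$

At higher temperatures the latent heat is somewhat lower (about 2.26 MJ/kg, or roughly 630 Wh/kg, at 100 °C), but the order of magnitude is the same.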

    • I’ve had several arguments recently about the same thing. There are many who have dug themselves into a radiation hole too deep to see out of. To them, radiation in and out is all there is to determining the surface temperature of the earth. They refuse to believe O2 and N2 have any effect at all, since they are non-radiative, and H2O is only important as a feedback on CO2.
