Climate Models Have Not Improved in 50 Years

Guest “how can he write this with a straight face?” by David Middleton

Even 50-year-old climate models correctly predicted global warming
By Warren Cornwall Dec. 4, 2019

Climate change doubters have a favorite target: climate models. They claim that computer simulations conducted decades ago didn’t accurately predict current warming, so the public should be wary of the predictive power of newer models. Now, the most sweeping evaluation of these older models—some half a century old—shows most of them were indeed accurate.

“How much warming we are having today is pretty much right on where models have predicted,” says the study’s lead author, Zeke Hausfather, a graduate student at the University of California, Berkeley.


Most of the models accurately predicted recent global surface temperatures, which have risen approximately 0.9°C since 1970. For 10 forecasts, there was no statistically significant difference between their output and historic observations, the team reports today in Geophysical Research Letters.


Seven older models missed the mark by as much as 0.1°C per decade. But the accuracy of five of those forecasts improved enough to match observations when the scientists adjusted a key input to the models: how much climate-changing pollution humans have emitted over the years.


To take one example, Hausfather points to a famous 1988 model overseen by then–NASA scientist James Hansen. The model predicted that if climate pollution kept rising at an even pace, average global temperatures today would be approximately 0.3°C warmer than they actually are. That has helped make Hansen’s work a popular target for critics of climate science.


Science! (As in “She blinded me with…”)

The accuracy of the failed models improved when they adjusted them to fit the observations… Shocking.

The AGU and Wiley currently allow limited access to Hausfather et al., 2019. Of particular note are figures 2 and 3. I won’t post the images here because it is a protected, limited-access document.

Figure 2: Model Failure

Figure 2 has two panels. The upper panel compares the rates of temperature change of the observations vs. the models, with error bars that presumably represent 2σ (two standard deviations). According to my Mark I Eyeball Analysis, of the 17 model scenarios depicted: 6 were above the observations’ 2σ (off the chart, too much warming), 4 were near the top of the observations’ 2σ (too much warming), 2 were below the observations’ 2σ (off the chart, too little warming), 2 were near the bottom of the observations’ 2σ (too little warming), and 3 were within 1σ of the observations (in the ballpark).

Figure 1. Less than 1 out of 5 model scenarios were within 1 standard deviation of reality.

The lower panel depicted the implied transient climate response (TCR) of the observations and the models. TCR is the direct warming that can be expected from a doubling of atmospheric carbon dioxide. It is an effectively instantaneous response. It is the only relevant climate sensitivity.

Figure 2. Equilibrium climate sensitivity (ECS) and transient climate response (TCR). (IPCC)

In the 3.5 °C ECS case, about 2.0 °C of warming occurs by the time of the doubling of atmospheric CO2. The remaining 1.5 °C of warming supposedly will occur over the subsequent 500 years. We’re constantly being told that we must hold warming by 2100 to no more than 2.0 °C relative to pre-industrial temperatures (the coldest climate of the Holocene).

Figure 3. The 2.0 °C limit. (Vox)

I digitized the lower panel to get the TCR values. For the 14 sets of observations, the implied TCR ranged from 1.5 to 2.0 °C, averaging 1.79 °C, with a very small σ of 0.13 °C. Of the 17 model scenarios, 9 exceeded the observed TCR by more than 1σ and 6 fell more than 1σ below it. Only 2 scenarios were within 1σ of the observed TCR (1.79 °C).

Figure 4. Implied TCR (°C/2xCO2), observations vs models.
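The 9/6/2 tally from the digitized values can be reproduced with a short script. The observed mean (1.79 °C) and σ (0.13 °C) come from the digitization described above; the individual model TCR values below are illustrative placeholders chosen only to match the described split, not the paper’s actual data.

```python
# Tally how many model TCR values fall above, below, or within ±1σ of the
# observed mean TCR. obs_mean and obs_sigma are from the digitization above;
# the model values are hypothetical placeholders, not Hausfather et al.'s data.

def tally_tcr(model_tcrs, obs_mean, obs_sigma):
    """Classify each model TCR as high (> mean+σ), low (< mean-σ), or within ±1σ."""
    high = sum(1 for t in model_tcrs if t > obs_mean + obs_sigma)
    low = sum(1 for t in model_tcrs if t < obs_mean - obs_sigma)
    within = len(model_tcrs) - high - low
    return {"high": high, "low": low, "within": within}

obs_mean, obs_sigma = 1.79, 0.13  # °C per 2xCO2, from the digitized observations

# Hypothetical model TCRs chosen only to reproduce the 9/6/2 split described above.
models = [2.4, 2.3, 2.2, 2.1, 2.0, 2.0, 1.95, 2.05, 2.1,   # 9 above +1σ
          1.5, 1.4, 1.6, 1.55, 1.45, 1.6,                   # 6 below -1σ
          1.8, 1.75]                                        # 2 within ±1σ

print(tally_tcr(models, obs_mean, obs_sigma))  # {'high': 9, 'low': 6, 'within': 2}
```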

A cross plot of the model TCR vs. observed TCR yields a random scatter…

Figure 5. Implied TCR (°C/2xCO2), observations vs models. The “expected trend” is what would have resulted if the subsequent observations matched the model projections.

Atmospheric CO2 is on track to reach that doubling around the end of this century.

Figure 6. Atmospheric CO2 Mauna Loa Observatory (MLO, NOAA/ESRL) and DE08 ice core, Law Dome, Antarctica (MacFarling-Meure, 2006)

An exponential trend function applied to the MLO data indicates that the doubling will occur around the year 2100. If the TCR is 1.79 °C, we will stay below the 2 °C limit and be barely above the “extremely low emissions” scenario on the Vox graph (figure 3). However, most recent observation-based estimates place the TCR below 1.79 °C. Christy & McNider, 2017 concluded that the TCR is only about 1.1 °C, less than half of the model-derived value.
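The doubling-date estimate can be sketched by fitting a simple exponential through two approximate Mauna Loa annual means and solving for 560 ppm (a doubling of the ~280 ppm pre-industrial level). This mirrors the kind of trend extrapolation described above; it is not the actual regression used on the full MLO record, and the anchor values are round approximations.

```python
import math

# Fit C(t) = C0 * exp(k*t) through two approximate MLO annual means and solve
# for 560 ppm, i.e. a doubling of the ~280 ppm pre-industrial level.
y1, c1 = 1959, 316.0   # ppm, approximate MLO annual mean
y2, c2 = 2019, 411.0   # ppm, approximate MLO annual mean

k = math.log(c2 / c1) / (y2 - y1)            # exponential growth rate, 1/yr
doubling_year = y2 + math.log(560.0 / c2) / k

# Warming expected at the doubling if TCR = 1.79 °C per 2xCO2
tcr = 1.79
warming_at_doubling = tcr * math.log2(560.0 / 280.0)  # equals tcr, by construction

print(round(doubling_year))       # ~2090, i.e. "around the end of this century"
print(warming_at_doubling)        # 1.79 °C, below the 2 °C limit
```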

Putting Climate Change Claims to the Test
Date: 18/06/19

Dr John Christy
This is a full transcript of a talk given by Dr John Christy to the GWPF on Wednesday 8th May.

When I grew up in the world of science, science was understood as a method of finding information. You would make a claim or a hypothesis, and then test that claim against independent data. If it failed, you rejected your claim and you went back and started over again. What I’ve found today is that if someone makes a claim about the climate, and someone like me falsifies that claim, rather than rejecting it, that person tends to just yell louder that their claim is right. They don’t look at what the contrary information might say.

OK, so what are we talking about? We’re talking about how the climate responds to the emission of additional greenhouse gases caused by our combustion of fossil fuels.


So here’s the deal. We have a change in temperature from the deep atmosphere over 37.5 years, we know how much forcing there was upon the atmosphere, so we can relate these two with this little ratio, and multiply it by the ratio of the 2xCO2 forcing. So the transient climate response is to say: what will the temperature be like if you double CO2 – if you increase it at 1% per year, which is roughly what the whole greenhouse effect is doing, and which is achieved in about 70 years? Our result is that the transient climate response in the troposphere is 1.1 °C. Not a very alarming number at all for a doubling of CO2. When we performed the same calculation using the climate models, the number was 2.31 °C. Clearly and significantly different. The models’ response to the forcing – their ΔT here – was over two times greater than what has happened in the real world.


There is one model that’s not too bad, it’s the Russian model. You don’t go to the White House today and say, “the Russian model works best”. You don’t say that at all! But the fact is they have a very low sensitivity to their climate model. When you look at the Russian model integrated out to 2100, you don’t see anything to get worried about. When you look at 120 years out from 1980, we already have 1/3 of the period done – if you’re looking out to 2100. These models are already falsified, you can’t trust them out to 2100, no way in the world would a legitimate scientist do that. If an engineer built an aeroplane and said it could fly 600 miles and the thing ran out of fuel at 200 and crashed, he might say: “I was only off by a factor of three”. No, we don’t do that in engineering and real science! A factor of three is huge in the energy balance system. Yet that’s what we see in the climate models.


I have three conclusions for my talk:

Theoretical climate modelling is deficient for describing past variations. Climate models fail for past variations, where we already know the answer. They’ve failed hypothesis tests and that means they’re highly questionable for giving us accurate information about how the relatively tiny forcing, and that’s that little guy right there, will affect the climate of the future.

The weather we really care about isn’t changing, and Mother Nature has many ways on her own to cause her climate to experience considerable variations in cycles. If you think about how many degrees of freedom are in the climate system, what a chaotic nonlinear, dynamical system can do with all those degrees of freedom, you will always have record highs, record lows, tremendous storms and so on. That’s the way that system is.

And lastly, carbon is the world’s dominant source of energy today, because it is affordable and directly leads to poverty eradication as well as the lengthening and quality enhancement of human life. Because of these massive benefits, usage is rising around the world, despite calls for its limitation.

And with that I thank you very much for having me.


Dr. Christy’s presentation is well-worth reading in its entirety. This is from the presentation:

Figure 7. TCR estimate from Christy & McNider, 2017.
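Christy’s ratio argument can be sketched numerically: scale the observed deep-layer temperature change by the ratio of 2xCO2 forcing to the forcing actually applied over the period. The inputs below are rough placeholder values chosen to reproduce the quoted 1.1 °C and 2.31 °C results, not Christy & McNider’s exact data.

```python
# TCR implied by a temperature change under a known cumulative forcing,
# scaled to the canonical 2xCO2 forcing. Placeholder inputs, not the
# paper's actual digitized values.

F_2XCO2 = 3.7  # W/m^2, canonical forcing for a CO2 doubling

def transient_response(delta_t, forcing):
    """TCR (°C per 2xCO2) implied by warming delta_t (°C) under forcing (W/m^2)."""
    return delta_t * F_2XCO2 / forcing

forcing_period = 1.22  # W/m^2 over the 37.5-year record (assumed value)

obs_tcr = transient_response(0.363, forcing_period)    # observed troposphere ΔT (assumed)
model_tcr = transient_response(0.762, forcing_period)  # CMIP model-mean ΔT (assumed)

print(round(obs_tcr, 2), round(model_tcr, 2))  # 1.1 and 2.31 °C
```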

Figure 3: Hansen Revisionism

Figure 3 was yet another feeble effort to resuscitate Hansen et al., 1988.

Figure 8. Scenario A is “business as usual.” Scenario C is where humans basically undiscover fire at the end of the 20th Century.

Hansen’s own temperature data, GISTEMP, tracked Scenario C (the one in which we undiscovered fire) up until 2010, only crossing paths with Scenario B during the recent El Niño.

Figure 9. Hansen’s very epic fail.

According to Hausfather et al., 2019, Scenario B was actually “business as usual”…

H88’s “most plausible” scenario B overestimated warming experienced subsequent to publication by around 54% (Figure 3). However, much of this mismatch was due to overestimating future external forcing – particularly from CH4 and halocarbons.

Hausfather et al., 2019

I think it might be impossible not to overestimate the warming effect of CH4, because that effect doesn’t seem to be present in the geologic record. The highest atmospheric CH4 concentrations of the entire Phanerozoic Eon occurred during the Late Carboniferous (Pennsylvanian) and Early Permian Periods, the only time that Earth has been as cold as the Quaternary Period.

Figure 10. CH4 levels were 3-5 times as high as modern levels during the coldest climatic period of the Phanerozoic Eon. Phanerozoic pCH4 (Bartdorff et al., 2008), pH-corrected temperature (Royer & Berner) and CO2 (Berner). Older is toward the left.

The fact is that the observations are behaving as if we have already enacted much of the Green New Deal Cultural Revolution (¡viva Che AOC!)…

Figure 11. The observations (HadCRUT4) are tracking an AOC world: RCP2.6–RCP4.5. (Modified after IPCC AR5).

Models Have Not Improved in 50 Years

This is one of the alleged #ExxonKnew models…

Figure 12. What #ExxonKnew in 1978.

“Same as it ever was”…

Figure 13. The models haven’t improved. RSS V4.0 MSU/AMSU atmospheric temperature dataset vs. CMIP-5 climate models. The yellow band is the 5% to 95% probability band. Apart from the recent El Niño, RSS has tracked cooler than more than 95% of the models. The models are in predictive mode post-2005. (Remote Sensing Systems).
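The 5%–95% band test in this caption can be sketched as follows: for each year, take the 5th and 95th percentiles across an ensemble of model runs and check whether an observation falls inside the band. The “runs” here are randomly generated stand-ins, not CMIP5 output, and all the numbers are illustrative.

```python
import random
from statistics import quantiles

# For each year, compute the ensemble's 5th/95th percentile band and flag
# whether a (synthetic) observation falls inside it. Randomly generated
# stand-in runs, not real CMIP5 model output.

random.seed(0)
n_runs, trend, spread = 40, 0.025, 0.1   # °C/yr model trend and run-to-run noise (illustrative)

years = list(range(2005, 2020))
runs = [[trend * (y - 2005) + random.gauss(0, spread) for y in years]
        for _ in range(n_runs)]

for i, year in enumerate(years):
    ensemble = [run[i] for run in runs]
    cuts = quantiles(ensemble, n=20)     # 19 cut points at 5% steps
    lo, hi = cuts[0], cuts[-1]           # 5th and 95th percentiles
    obs = 0.015 * (year - 2005)          # illustrative cooler-than-models observation
    flag = "below band" if obs < lo else ("in band" if obs <= hi else "above band")
    print(year, round(lo, 3), round(hi, 3), flag)
```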

“Same as it ever was”…

Figure 14. Whether taking the temperature in the atmosphere (UAH v6.0) or at airports (HadCRUT4), the observations track near or below the bottom of the 5% to 95% range. Apart from the recent El Niño, the observations track cooler than 95% of the models. (Modified from Climate Lab Book)

If the oil & gas industry defined accurate predictions in the same manner as climate “scientists,” Macondo (Deepwater Horizon) would have been the only failed prediction in the past 30 years… because the rig blowing up and sinking wasn’t within the 5% to 95% range of outcomes in the pre-drill prognosis.


Bartdorff, O., Wallmann, K., Latif, M., and Semenov, V. ( 2008), Phanerozoic evolution of atmospheric methane, Global Biogeochem. Cycles, 22, GB1008, doi:10.1029/2007GB002985.

Berner, R.A. and Z. Kothavala, 2001. “GEOCARB III: A Revised Model of Atmospheric CO2 over Phanerozoic Time”.  American Journal of Science, v.301, pp.182-204, February 2001.

Christy, J. R., & McNider, R. T. (2017). “Satellite bulk tropospheric temperatures as a metric for climate sensitivity”. Asia‐Pacific Journal of Atmospheric Sciences, 53(4), 511–518. doi:10.1007/s13143-017-0070-z

Hansen, J., I. Fung, A. Lacis, D. Rind, S. Lebedeff, R. Ruedy, G. Russell, and P. Stone, 1988. “Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model”. J. Geophys. Res., 93, 9341-9364, doi:10.1029/JD093iD08p09341

Hausfather, Z., Drake, H. F., Abbott, T., & Schmidt, G. A. ( 2019). “Evaluating the performance of past climate model projections”. Geophysical Research Letters, 46.

Royer, D. L., R. A. Berner, I. P. Montanez, N. J. Tabor and D. J. Beerling. “CO2 as a primary driver of Phanerozoic climate”.  GSA Today, Vol. 14, No. 3. (2004), pp. 4-10

162 thoughts on “Climate Models Have Not Improved in 50 Years”

  1. I should write to the London Times as follows;
    I believe there should be a debate between climate scientists who believe in an impending crisis and those who are sceptical. To keep it under control, all protagonists should be Nobel prize winners.
    Yours etc

    • And you should add:

      P.S. Despite his claims to the contrary, Michael Mann did not win a Nobel Prize.

      • If London were the center of western civilization that it arguably once was, you would be correct. Now, however, it is just another city overrun with illegal immigrants like New York or Los Angeles. And like the Times of those cities, the Times of London has fallen to the level where birds disdain to defecate upon it.
        Thus, however it may be styled on its masthead, it is necessary to identify the once great city which hosts a once great newspaper, whether it is London, New York or Los Angeles.

  2. So approximately 6 above, 6 below, and 6 roughly on target, which is exactly what one would expect from a random distribution.

    • When they say the models match current temps, do they mean a point temp, say the 2018 avg, or all years from the start of the model? This isn’t clear to me.

      • Yes. What is the range of the models? What is statistically significant? If the range is, say, 0.3 °C per decade and statistically significant is ±0.5, then I absolutely agree it is random.

        • If the results were random and evenly distributed the average would match observations.

          That is not what we find. The average of all model runs is three times observations. Clearly then, the outcome is being manipulated. If six are high and six are low, one could get a high average by having more runs of the hot models. After all, the “ensemble” is all runs.

          To me, it seems having models like the chronically hot U Victoria model by Weaver is incompetence, not “a spread”. It is not about opinion, it is about science. Weaver should have adjusted the sensitivity until the result matched observations. He didn’t. Instead he has created, at vast expense, an RCP6.0 which is functionally useless. It is retained to “keep the average up”.

          • True, but if they can’t pass this basic smell test, everything else is moot.
            At that point the only defense is to use only the ones they claim are accurate. It beggars belief that people actually think averaging wrong answers makes the answer closer to being correct. It’s kind of like averaging math students’ answers to a problem and saying the average is the correct answer. Even saying it will be close is wrong. What they are NOT saying is that a significant portion agree and therefore those are correct.

    • When I was a kid we had two weather stations in my city of Victoria, B.C. Canada, one on a windy hill and another at the airport. Sixty years later we have three weather stations, one on the windy hill, one on a now much larger and busier airport and one at the University. The three never agree with each other and there is usually a difference of 1 C between them, sometimes more.

  3. Zeke continues to try way too hard to justify the unjustifiable.

    Climate models CANNOT be realistic or verified or validated. Naomi Oreskes explained why herself in a 1994 Science paper (guest post on that forthcoming whenever CtM deems it appropriate).

    Or, summarizing previous guest posts here on climate models, the CFL numerical solution constraint on partial differential equations presents about a seven ORDERS of MAGNITUDE computational constraint given today’s supercomputers, forcing parameterization. The unavoidable parameter tuning to best hindcast delta T drags in the attribution problem. The warming from about 1920-1945 is indistinguishable from that from about 1975-2000. Yet the former period could NOT have been AGW, so was mostly natural. The unknowable attribution problem is how much of the latter parameter-tuning period is AGW versus natural. AGW attribution causes all but one CMIP5 model to run provably hot. QED.
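The CFL cost argument in this comment can be illustrated with back-of-envelope arithmetic: an explicit scheme’s timestep must satisfy dt ≤ dx/c, so refining a 3-D grid by a factor r multiplies the work by roughly r⁴ (r³ more cells, r more timesteps). The numbers below are illustrative, not taken from any specific GCM.

```python
# Back-of-envelope CFL scaling for an explicit 3-D scheme.

def cfl_timestep(dx_m, wave_speed_ms):
    """Largest stable explicit timestep (s) for grid spacing dx_m (CFL number 1)."""
    return dx_m / wave_speed_ms

def cost_ratio(dx_coarse_m, dx_fine_m):
    """Relative compute cost of the fine grid vs the coarse one (3-D, explicit)."""
    r = dx_coarse_m / dx_fine_m
    return r ** 4   # r**3 more cells, r more timesteps

c = 340.0                                 # m/s, rough fast-wave speed scale
dt_coarse = cfl_timestep(100_000.0, c)    # ~100 km grid -> ~294 s timestep
dt_fine = cfl_timestep(1_000.0, c)        # ~1 km grid  -> ~2.9 s timestep

print(round(dt_coarse), round(dt_fine, 1))
print(cost_ratio(100_000.0, 1_000.0))     # 1e8: refining 100 km -> 1 km costs ~10^8 more
```

Whether the resulting gap is seven or eight orders of magnitude depends on the target resolution assumed, but the scaling itself is the point.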

    • The insane thing is that, considering the complexity of the climate and all of the factors that influence it, the climate models do a pretty good job of simulating it. They are very good heuristic tools.

      They just aren’t useful predictive tools.

      • Istvan
        Who is “Zeke” ?


        A real climate model requires a thorough understanding of what causes climate change, in detail.

        No such understanding exists.

        Therefore all we have are computer games, not “climate models”, based on the personal opinions (or consensus opinion) of the programmers.

        That’s not a real model of climate change on our planet — it is a guess
        that might be right, or almost right, only by chance.

        Use enough personal opinions, to program enough computer games, and some are likely to look “right”, or in the ball park.

        That does not mean they will continue to be “right” in the future.

        Calling a computer game a “climate model” does not make it one.

        And if the computer game makes wrong predictions, as so-called climate models do, on average, that proves they are not real climate models — just failed prototypes.

        Some people are confused, believing that computers have the ability to predict the future climate, without having an accurate climate physics model as the foundation for that computer model.

        The null hypothesis of climate change has not yet been rejected — after 4.5 billion years of natural causes of climate change, it is MUCH too soon to claim natural causes are just “noise” and man made CO2 emissions have become the “climate control knob”.

        Every change of the climate in our lifetimes could have had natural causes.

    • “The unavoidable parameter tuning to best hindcast delta T drags in the attribution problem.
      They do not tune parameters for subgrid modelling from hindcast delta T.

      “The warming from about 1920-1945 is indistinguishable from that from about 1975-2000.”
      The distinction is that the latter warming is from 1975 to 2019, and shows no sign of stopping.

      • A prediction discredited through observation of pauses, reversals, and a global state of irregular order. The changes, not change, are not progressive or monotonic, in hindcast, or forecast, and there is no demonstrable skill to predict either.

      • “… 2019, and shows no sign of stopping.”

        Hahahahahaha. Send me your warming Obi Stokes we are so cold.

      • It had stopped for over a dozen years, then the super El Nino created a lot of heat that has been gradually leaving the system for the last 3 years.

        • MarkW:

          “It had stopped for over a dozen years, then the super El Nino created a lot of heat that has been gradually leaving the system for the last 3 years”

          The super El Nino was a man-made event, caused by a massive reduction (~29 Megatons) in dimming SO2 aerosol emissions between 2014 and 2016, by China, due to a Clean Air Directive implemented in 2014.

          The warming was simply due to cleansing of the atmosphere, which allowed sunshine to strike the Earth’s surface with greater intensity, and ended because the 2015 VEI4 eruptions of Chikurachki (Feb 16), Calbuco (Apr 22), and Wolf (May 25) re-introduced SO2 aerosols into the atmosphere.

          Examination of the climate data since 1850 shows that all El Ninos have been caused by reduced levels of SO2 aerosols in the atmosphere, primarily volcanic induced.

          (I have been beating this drum for a long time, but most writers still view El Ninos as mysterious events, which is why I keep trying to wake them up).

      • Adding twenty years doesn’t change the previous 25 years’ values. His point still stands undisputed. Can you be a little more illogical?

    • ..but it’s exactly what they needed to make outlandish claims..and Zeke handed it to them

      According to climate models:
      Antarctica and Greenland will melt in 50 years
      Miami will be under water in 50 years
      millions will die in 50 years

      • As with the CO2 lagging temperature in ice cores, the missing heat, and the missing tropospherical hotspot, if somebody writes a paper purporting to “fix” the problems they don’t like to admit to, then it is a guaranteed publication no matter how tenuous or bad.

  4. Volcanic activity is a wild card in all this. It’s been awfully quiet but they are going off periodically, though no really big ones. There have been studies indicating volcanic activity picks up during solar minimums, so it is quite possible we may have an unexpected surprise in the next year or two.

    Who knows, if the climate is in a cooling phase, some mountain blowing its top might be the tipping point to accelerate the cooling. Wouldn’t that be ironic? But then it is also possible eruptions under the sea are warming the waters and will provide the opposite effect.

    • The only thing crazier than trying to predict the weather decades in the future is trying to predict volcanic eruptions.

      • Simpler to characterize, but vastly more unwieldy. It’s easier to apply a smoothing function to the atmosphere and forecast a week in advance.

      • Well why don’t we write nineteen models and average the result? Surely that will tell us? Won’t it?

        (Yes, sarc)

    • We have been in a long-term ice age for the last 2.6 million years called the Quaternary Glaciation by geologists.

      Cold weather causes 20 times as many deaths as hot weather. From those statistics, it looks like the problem is that it is too cold, not that it is getting too hot. Also, everybody has bunches of clothes and shoes to keep warm.

      Water vapor is a greenhouse gas that is 100 times as potent as CO2, and no one knows how to model it. That makes the error bars ±20 degrees Celsius when they are added to the IPCC climate models. That article has been mentioned on this site.

      • With water vapor the major greenhouse gas, I am surprised there is no estimate of the impact of irrigation.
        I would think flooding fields to grow rice in California has a substantial impact on the climate in those areas.
        Lots more crops growing all over the world have added substantial water vapor to the atmosphere.

        • Gordon
          And, I suspect that pivot irrigation results in a lot more evaporation than older ditch or even flood irrigation.

        • And the effect of vapour on UHI. For example: 2 C8H18 + 25 O2 → 16 CO2 + 18 H2O. Each octane molecule converts into 9 water molecules on combustion, and all hydrocarbons do the same sort of thing. I know that the water will eventually fall as rain somewhere, but I also see it doesn’t condense into clouds and rain particularly quickly in urban areas in summer. Therefore it must be greenhousing away while in the vapour state.
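The octane reaction, balanced as 2 C8H18 + 25 O2 → 16 CO2 + 18 H2O, gives 9 moles of water per mole of octane; the per-kilogram yields follow from standard molar masses. A quick check:

```python
# Mass of H2O and CO2 produced per kg of octane burned, from the balanced
# reaction 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O. Standard molar masses.

M_OCTANE = 114.23   # g/mol, C8H18
M_WATER = 18.015    # g/mol, H2O
M_CO2 = 44.01       # g/mol, CO2

def water_per_kg_fuel():
    """kg of H2O produced per kg of octane burned (9 mol H2O per mol octane)."""
    return 9 * M_WATER / M_OCTANE

def co2_per_kg_fuel():
    """kg of CO2 produced per kg of octane burned (8 mol CO2 per mol octane)."""
    return 8 * M_CO2 / M_OCTANE

print(round(water_per_kg_fuel(), 2), round(co2_per_kg_fuel(), 2))  # 1.42 3.08
```

So every kilogram of octane burned puts roughly 1.4 kg of water vapour into the air along with its CO2.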

      • To me, a cursory look at the link here reveals potentially serious flaws in the logic driving these models, with incipient circularity within it.
        For instance, the IPCC definition of radiative forcing (RF) assumes no change in state; it is, in fact, a force. However, this force somehow morphs into an energy flux of some 1.6 W/sq.m. This is a no-no, as these parameters have different units: force as lbs. and flux as ft.-lbs. The only way the two can be linked is if detailed knowledge of the opposing forces is known.
        An electrical analogy perhaps explains: a force, or voltage, is applied to a circuit. No energy is involved unless a current flows. The only way to calculate the current is by means of a known resistance in the circuit, and hence an energy flux, with this being very specific to the physical circuit involved.
        Transposing such a specific energy-flux value to other circumstances is thus a logical error, and it appears that this is being done in the basic equation 1 shown in the link, with potentially other similar problems.
        I suspect that this form of error has been applied to considerations of the behaviour of water in the climate, which has led to anomalies; but so far I cannot identify it in valid detail.
        I need to spend time on this; this being just a cursory look.

      • Ralph, from your link:


        There ya go, GCMs in a very small nutshell. All the modellers & supercomputers can quit; a small calculator is all that’s necessary….

  5. Shame that after the billions invested, we can’t use these models to predict how much warmer it will be if we can force the atmosphere back to 800-1200 ppm of CO2. The increasing warmth might help us get there.

  6. “0.9°C since 1970.”

    Note the Cherry-picked date by Zeke.
    What happened between 1945 and 1975?
    Why was 1910-1945 so notable for its warming when CO2 levels were still low? A warming on par with the 1980-2015 warming.

  7. “The accuracy of the failed models improved when they adjusted them to fit the observations”
    Yes, because the “observations” are of the GHGs that were actually emitted. Like many people, Cornwall doesn’t really get this, but climate models don’t predict what emissions people will choose to make. They predict the consequences of the emissions that are made. That is the scenario. When you evaluate the prediction, you have to establish by observation what scenario eventuated.

    • … climate models don’t predict what emissions people will choose to make. They predict the consequences of the emissions that are made. That is the scenario. When you evaluate the prediction, you have to establish by observation what scenario eventuated.

      “Consequences of emissions” is a particular presumption open to argument, and yet the models must be programmed with it, in order to produce a given scenario, right? Certain presumptions about this are fed in, and expected consequences come out. Wow, … cy-uhntz!

    • Nick,

      Models are tuned by adjusting their many knobs and dials. It’s relatively easy to select a large set of parameters and exhaustively adjust them to best fit a complicated function to arbitrary data. The problem is that the more you constrain a broken model by the past, the worse its predictions of the future become.

      There’s very little actual physics in them. They’re mostly a bunch of table-driven heuristics based on presumptive causality. Consider ModelE from GISS. The file RADIATION.F is the most critical file for calculating the radiant balance, yet it contains many thousands of floating-point constants baked into unmaintainable, untestable, spaghetti Fortran, many of which are sparsely, if at all, documented as to their origin or purpose. And let’s not forget the megabytes of data files that configure the cells being simulated.

      BTW, I’m still waiting for an answer for how the planet distinguishes Joules from the next W/m^2 of forcing from the average Joules of solar forcing, so that the next Joule can do so much more work to maintain the surface temperature than the average Joule. To refresh your memory, the nominal ECS requires surface emissions to increase by 4.4 W/m^2 in response to the next W/m^2 of forcing, while the average W/m^2 of solar forcing contributes only about 1.6 W/m^2 to the surface emissions. You do understand that Joules are the units of work and that the work required to sustain 4.4 W/m^2 of incremental emissions is far larger than the work required to sustain 1.6 W/m^2 of emissions, right?

      What’s the origin of all these extra Joules, beside the implicit power supply that the feedback model presumes exists, but is not actually a part of the climate system?
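The two ratios quoted in this comment (4.4 and 1.6) can be reproduced with Stefan-Boltzmann arithmetic: the surface-emission increase implied by a nominal 3 °C ECS per 3.7 W/m² of forcing, versus the average ratio of surface emission to absorbed solar power. A sketch using the usual textbook values:

```python
# Reproduce the 4.4 and 1.6 W/m^2-per-W/m^2 ratios from the comment above
# using Stefan-Boltzmann arithmetic and standard textbook values.

SIGMA = 5.67e-8      # W/m^2/K^4, Stefan-Boltzmann constant
T_SURF = 288.0       # K, mean surface temperature
F_2XCO2 = 3.7        # W/m^2, forcing per CO2 doubling
ECS = 3.0            # °C, nominal equilibrium sensitivity
SOLAR_ABS = 240.0    # W/m^2, globally averaged absorbed solar power

# d(emission)/dT at the mean surface temperature: 4*sigma*T^3 ~ 5.4 W/m^2 per K
demis_dT = 4 * SIGMA * T_SURF ** 3

incremental = demis_dT * ECS / F_2XCO2        # emission rise per W/m^2 of forcing
average = SIGMA * T_SURF ** 4 / SOLAR_ABS     # surface emission per W/m^2 of solar

print(round(incremental, 1), round(average, 1))  # 4.4 and 1.6
```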

      • Since the science is settled, we should only need 1 climate model that predicts the future climate.
        Also since it is settled, all the fudge factors, whoops key parameters, can be published, verified and agreed on.

      • “The file RADIATION.F”

        I just looked it over. You’ve got to be amazed how well they know the 1850 levels. /sarc

        Also love to see a single source file having over 10000 lines, being full of both code and hardwired data with cryptic names. Looks like software experts wrote it. NOT.

        • Adrian,

          When I first saw this, I was horrified that trillions of dollars in waste, fraud and abuse are being based on the results of code like this.

          Among other things I’ve been responsible for was the development of over a million lines of modeling code for a variety of integrated circuits, where a broken model could cost many millions of dollars and months of delay. If anyone who worked for me wrote code as sloppy as ModelE, they definitely wouldn’t have lasted very long.

          The cryptic names are likely the result of the version of Fortran that the model was originally written in, where only the first 6 characters of variable names were significant …

    • Nick Stokes
      A small correction needed
      “The accuracy of the failed models improved when they adjusted them to fit the observations”
      Yes, because the “observations” are of the GHGs that were actually emitted. Like many people, Nick Stokes doesn’t really get this, but climate models do predict what emissions people will choose to make. They predict the consequences of the emissions that are made. That is the scenario.

        • Individuals have free will. The larger the number of people, the more predictable their collective actions will be.
          That was a pretty pathetic attempt at mis-direction.

          • “The larger the number of people, the more predictable their collective actions will be.”
            So can you predict election outcomes?

            Scientists know how the atmosphere works. They can tell you what happens if you add a certain amount of GHGs. They have no special knowledge of whether you, or society in general, will do that. Society might even listen to the scientists.

            Hansen, in 1988, had no special knowledge of whether the Montreal agreement would be ratified by governments, let alone whether it would effectively reduce CFCs. Only his scenario C was optimistic about this.

          • Stokes
            And yet, Hansen assumed two volcanic eruptions putting aerosols into the air: eruptions whose timing, likelihood, and magnitude he had no way of knowing. Where is the quantitative science in guessing about events that may not happen?

          • “Scientists know how the atmosphere works.”

            Absolutely not and the proof is undeniable, for if this were true, climate alarmism would have died decades ago.

          • “Scientists know how the atmosphere works. ”

            They also know how a double pendulum works.
            That does not mean they can predict such a system over the long term.

            We also know pretty well how an atom works. Put a bunch of atoms together, though, and we cannot explain how some things work, because of emergence.

            But I reckon if I tell you how an atom works, you would be able to explain high-temperature superconductivity; after all, the samples have far fewer atoms than the Earth you claim to ‘know’. And you have to simulate much shorter time periods than the climastrological models have to, too.

            The claim of ‘knowledge’ is very funny indeed, especially after experts have been verified as having strong negative knowledge, often worse than total ignorance:

          • Nick Stokes “Scientists know how the atmosphere works. They can tell you what happens if you add a certain amount of GHGs.”
            A favourite topic of warmists is that we only have one world and we cannot run experiments on it.
            So no, scientists do not know what happens if you add a certain amount of GHGs.
            They theorise, Nick.
            And they cannot get the weather right beyond 5 days, nor predict droughts, floods, and ENSOs.

          • “Scientists know how the atmosphere works. They can tell you what happens if you add a certain amount of GHGs.”

            Yeah, sure they can! What a ridiculous statement !!!

            Haven’t you been paying attention? This is what the argument has been about. The fact is scientists DO NOT know how the atmosphere works, and they certainly CANNOT tell us what happens when a certain amount of GHGs is added.

            Neither you nor any climate scientist you want to name can tell us what CO2’s TCR/ECS number really is. You and all of them are just guessing.

            Please tell us what happens when CO2 goes from 280ppm to 415ppm. What happens when we add that much CO2? You claim to know. Why haven’t you narrowed down that 1.5C to 4.5C ECS estimate? You insinuate you know this figure. What is it?

            And if you can’t give that figure, then I guess your statement above is wrong. Wouldn’t you agree?

          • Tom,

            Yes, the 3C +/- 1.5C nominal ECS is a joke. In what Universe is a metric with +/- 50% error bars ‘settled’, especially when the lower bound of 1.5C is larger than anything asserted by skeptics which implies that the already excessive uncertainty isn’t even enough! Denigrating repeatable science that doesn’t fit within the accepted error bars isn’t a legitimate way to decrease the uncertainty, yet this is the primary defense the alarmists use against the scientific truth.

            I doubt that Nick will address your point, as he seems to have trouble with questions whose answers undermine his beliefs. He’s just another alarmist who can’t defend his position with anything but circular logic, self righteous indignation and hearsay and he surely must know that such tactics will not work in this forum. I think he’s afraid of accepting the truth because it’s too disruptive to his politics.

          • Scientists know how the atmosphere works? Really? Other than the climate alarmists, there are no actual scientists who make that claim.

          • Clyde
            “Eruptions that he had no way of knowing when”
            Yes, and well acknowledged. That is part of the scenario – things that scientists can’t predict, like society’s actions on GHG emissions. You check afterwards to see which scenario was followed. If you want to treat the modelling as a prediction, you have to make your own decision on which scenario you think is likely. Or even, which you will try to make come true.

          • Stokes
            Acting as an apologist for Hansen, you said, “That is part of the scenario – things that scientists can’t predict, like society’s actions on GHG emissions.” The variable of interest, CO2, formed the basis for the core scenarios, and to cover a range of possibilities, different values were used. We are confident that major volcanic eruptions cause cooling. However, we have little ability to predict them; the track record is even worse than for temperature predictions. He had no business putting in hypothetical eruptions because they just become confounding variables. They provide no insight into how CO2 alone impacts temperatures! As it was, his hypothetical impacted the “Draconian Reductions” scenario more than any of the others, making it look like extreme actions are justified. My point being, had Hansen not included any fictitious eruptions, NONE of his predictions for the various CO2 scenarios would have been close to reality, even the “Draconian Reductions.” Once again, you have contorted yourself into a pretzel to try to rationalize unsupportable actions. You have no shame!

          • “I doubt that Nick will address your point, as he seems to have trouble with questions whose answers undermine his beliefs.”

            You’re right, he didn’t. But what can he say? He definitely does not know the critical number even though he claims he does. And he definitely doesn’t know how the atmosphere works since climate science hasn’t even worked out the feedbacks that may be present.

            Alarmists are real good at making assertions, but actually backing them up with facts is beyond them. Mainly because they don’t have any facts to back up their assertions. We tell them to “prove it” every day, and all we ever get is “crickets” from them. Just goes to show they don’t really have any viable arguments to make.

        • Stokes,
          Yes, but population dynamics can be predicted pretty well. Maybe it’s that pesky Law of Large Numbers that makes random events more tractable.
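
That Law of Large Numbers intuition is easy to check numerically. A minimal sketch with simulated coin flips (nothing climate-specific is assumed):

```python
import random
import statistics

def sample_mean(n_flips, seed):
    """Mean of n fair coin flips (1 = heads)."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(n_flips)) / n_flips

# Repeat the experiment many times at two sample sizes.
small = [sample_mean(10, s) for s in range(500)]
large = [sample_mean(1000, s) for s in range(500)]

# Individual flips stay unpredictable, but the spread of the
# collective mean shrinks roughly as 1/sqrt(n).
spread_small = statistics.stdev(small)  # ~0.16
spread_large = statistics.stdev(large)  # ~0.016
```

Any single flip is unpredictable, but the collective average becomes sharply predictable as n grows, which is the sense in which aggregate behavior can be more tractable than individual choices.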

    • Nick, “They predict the consequences of the emissions that are made.

      They predict nothing of the kind.

      The worst of it, Nick, is that you defend climate models knowing they are unreliable.

      Long ago, in a careless moment, you allowed that climate models are engineering models. You know very well that engineering models are non-predictive outside their parameter calibration bounds.

      But for reasons of your own, none having to do with professional integrity, you defend them.

          • Christina, “Stokes is right, you are wrong.

            How would you know Christina?

            Nick Stokes has no training in science. He shows no understanding of physical data evaluation, of calibration, or of resolution, and takes no notice of the fundamental distinction between precision and accuracy.

            Climate modeling has been a playground for Nick and Ken Rice and their numerological ilk because it’s been no more than a statistical construct ever since Jim Hansen torpedoed it in 1988.

            Climate modeling is no longer a branch of physics. It’s entered the subjectivist regime of critical theory, in which assumptions have the weight of evidence and every study is confirmatory.

        • Stokes
          So, you are claiming that all models can be relied upon to give reasonable results when predicting outputs that exceed the range of the observations they were calibrated against? Gibberish!

        • Seems to me that anyone with some historical data, graph paper, a pencil, a ruler, and the ability to draw a nearly straight line would have just as accurately predicted the temperature as all the climate scientists and their climate models.

        • I love the way Nick goes out of his way to actually prove his points.
          Note to the clueless, that was extreme sarcasm.

          • Let’s see someone actually produce a CFD-like engineering model that tells you you can’t go beyond some “parameter calibration bounds”. Nastran? Ansys, Fluent? Anyone?

          • Stokes
            Fundamentally, if you are operating in a regime for which there is no data to verify with, you are extrapolating. Extrapolating for non-linear systems is always more risky than interpolating.

            As I recollect from my reading, some of the earliest attempts at exceeding the sound barrier resulted in crashes because the airplane became uncontrollable. I think (not certain) that the problem was that the response to the control surfaces reversed at the speed of sound. That was not predicted.

          • “…Nastran? Ansys, Fluent? Anyone?…”

            Ansys makes Fluent. WTF is “Ansys, Fluent?” Is that like, “Microsoft, Excel?”

        • Widmann
          Pat has provided a link to his scholarly article defending his claim. Can you do as much? Do you think so highly of yourself that you feel that your ‘vote’ over-rides all other opinions and facts? How about trying to justify your opinion?

      • “The worst of it, Nick, is that you defend climate models knowing they are unreliable”

        Yes, that *is* the worst of it. Nick is someone who ought to know better, but for some reason he doesn’t.

    • Not scientific, if the number of “correct” models is not reduced accordingly, when adjusted for “GHGs that were actually emitted”. Probably that adjustment included more tuning/fudging knobs.

      And weren’t CO2 emissions for a long time higher than assumed, hence wouldn’t an “adjustment for GHGs” have made the predictions even hotter?

      • Thus the population control schemes normalized by catastrophists, not limited to planned parenthood (e.g. selective-child). An old atheist philosophy where humans are characterized as parasitical, and are ordered to defer to mortal gods. Progress.

    • It’s this kind of thinking that leads to results like “ice cream causes murders”. If you fit the model to just the CO2 parameter, then you’ll invariably get a CO2 causes warming result.

      What the heck, I’ll give the hypothesis a fair hearing:

      Over the last 50 years:

      My income (relative to inflation) has increased with CO2 levels.

      The incidence of Red Sox World Series wins has increased with CO2 levels.

      So far, I don’t see a problem.

      My alopecia has increased with increased CO2 levels.


      Can someone direct me to the nearest chapel of St. Greta?

      • Global warming correlates extremely well with maritime pirate depletion. “The Pause” was clearly driven by the rise of Somali pirates… 😎

  8. “Climate Models Have Not Improved in 50 Years”
    The paper says they got it right 50 years ago. “Not improved” is not a criticism. You can’t do better than get it right.

    “However, most recent observation-based estimates place the TCR below 1.79 °C. Christy & McNider, 2017 concluded that the TCR was only about 1.1 °C, less than half of the model-derived value.”
    1 paper, C&M, is not “most recent observation-based”. But it is also not observing the same thing. This is a paper about surface warming. Christy’s observations are about the lower troposphere, which indeed, in the UAH measure, has been observed to warm more slowly than surface. RSS says otherwise. Whatever, the observations about surface warming confirm predictions of surface warming.

    “Hansen’s own temperature data, GISTEMP, tracked Scenario C (the one in which we undiscovered fire) “
    No, it’s the one where the Montreal protocol was agreed and put into effect. And it was.

    “The models haven’t improved. RSS V4.0 MSU/AMSU atmospheric temperature dataset vs. CMIP-5 climate models”
    Again, different things. Zeke is talking about surface temperature predictions, and their match to surface temperature outcomes.

    • “Climate Models Have Not Improved in 50 Years”
      The paper says they got it right 50 years ago. “Not improved” is not a criticism. You can’t do better than get it right.

      They didn’t get it right 50 years ago. Less than 1 in 5 got it right…

      There was no predictive skill demonstrated at all, apart from 2 models.

      There’s also Otto et al., Lewis & Crok, Lindzen & Choi, Libardoni & Forest (2011/2013), and Schwartz (2012).

      • David,
        “Less than 1 in 5 got it right”
        You say “off the chart” but present a chart with them all on it. In fact, all but 2 predicted TCRs were between 1.4 and 2.7 (both outliers on the cool side), compared with an observed 1.79. Since it was a priori not given that there would be any warming at all, and the periods for measuring a trend are short, that is pretty good.

        • The fact that the periods for measuring a trend are short is not working against the models; the shortness of the analyzed period is precisely why the estimates fail to reject almost all models. See the lower panel of figure 2: for several of the periods, the “accurate” TCR ranges from less than 1ºC to well over 2ºC.

          Obviously, if prediction A says temperatures will increase by 0.9ºC over the next century and prediction B says they will rise 2.4ºC, they can’t both be accurate. “Plausible” is a better fit.

          The problem is that, in order to estimate “observed” TCR and compare it with a model’s implied TCR, the authors use only the years for which that model made predictions. While this may be statistically correct, it is physically absurd; all measurements we have of TCR suggest it’s a stable property, and to estimate it you should use as much data as possible (i.e. as long a period as possible). If a model’s implied TCR is 0.9ºC it’s almost certainly wrong, and the same goes if it’s 2.4ºC – but of course it’s impossible to prove it wrong over a period of a couple decades.

          I actually believe the authors’ point about using the models’ implied TCR, and not just their temperature projections, is exactly right; I made the same point a few months ago.
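
The short-period point can be illustrated with a toy Monte Carlo (my own sketch, not from the paper): fit a linear trend to synthetic annual anomalies with a fixed underlying warming rate, and watch how the spread of the fitted trend depends on the record length.

```python
import random
import statistics

def ols_slope(ts, ys):
    """Ordinary least-squares slope of ys against ts."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

def trend_spread(n_years, true_trend=0.018, noise_sd=0.1, n_sims=2000):
    """Standard deviation of the fitted decadal trend for a record of
    n_years annual anomalies. Toy assumptions: 0.018 C/yr underlying
    trend, white noise of 0.1 C (real interannual variability is
    larger and autocorrelated, which widens the spread further)."""
    rng = random.Random(42)
    ts = list(range(n_years))
    slopes = []
    for _ in range(n_sims):
        ys = [true_trend * t + rng.gauss(0.0, noise_sd) for t in ts]
        slopes.append(ols_slope(ts, ys) * 10.0)  # convert to C/decade
    return statistics.stdev(slopes)

# The spread of the fitted trend falls roughly as period**(-3/2):
spread_20 = trend_spread(20)  # ~0.039 C/decade
spread_50 = trend_spread(50)  # ~0.010 C/decade
```

With these toy numbers the spread of the fitted decadal trend is roughly four times larger for a 20-year record than for a 50-year one, so a short window is consistent with (fails to reject) a very wide range of implied TCRs.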

        • Nick, what I miss in the Hausfather paper is a discussion of the different “observed” TCRs. Otto et al. (2013) and L/C (2014; 2018) mention a TCR of about 1.3, which is well below the “observed” TCR in the paper, which one can estimate at around 1.8. The reason, IMO, is that the time spans of the “observations” in the paper are too short (1970s…2000s) to estimate the TCR correctly when one includes some kind of IV. Therefore it is no wonder that the “observed” TCR values indeed match the modelled values to some degree. It doesn’t say anything about the reliability of the “obs. TCR”, and the model-obs. comparison is of limited value. One doesn’t find this kind of discussion in the paper, which is a pity, IMO.

      • None of them got it right. Correspondence of a climate model projection with the observed trend is purely happenstance.

        Even Jim Hansen admitted that truth. I can post a reference to his admission, later.

        Climate models are completely unable to resolve the effect of a 0.035 W/m^2 annual increase in tropospheric thermal flux.

        The only reason the positive idea is entertained, is because climate modelers (and Nick Stokes, most likely) have no idea about, or understanding of, limits of resolution.

        • Spot on regarding resolution, although I think that’s more relevant to multi-proxy reconstructions than the models.

          • David, model annual average resolution of tropospheric thermal flux — the main determinant of air temperature — is no better than (+/-)4 W/m^2 in a regime where the annual increase in forcing is 0.035 W/m^2.

            The effect of CO2 on the climate, if any, is invisible to the models. That is the unavoidable message of my error propagation paper.
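
The arithmetic behind that comparison, as the comment frames it, is simple: a per-year signal accumulates linearly while independent per-year errors accumulate in quadrature. A sketch using only the two numbers quoted above:

```python
import math

SIGNAL_PER_YEAR = 0.035  # W/m^2, the cited annual increase in forcing
RESOLUTION = 4.0         # W/m^2, the cited annual resolution limit

def accumulated_signal(years):
    """A steady forcing signal grows linearly with time."""
    return SIGNAL_PER_YEAR * years

def propagated_uncertainty(years):
    """Independent per-step errors accumulate in quadrature
    (the root-sum-square rule the argument assumes)."""
    return RESOLUTION * math.sqrt(years)

# After a century: ~3.5 W/m^2 of signal against ~40 W/m^2 of
# propagated uncertainty, which is the comparison being made.
signal_100 = accumulated_signal(100)
uncertainty_100 = propagated_uncertainty(100)
```

Whether climate-model error actually propagates this way is the contested question; the sketch only reproduces the magnitudes being compared.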

            On the other hand, the multi-proxy air temperature reconstructions have no known physical connection to temperature.

            I discussed that at WUWT here, in detail, and also published an analysis demonstrating that fact here.

            Proxy temperature reconstructions represent an obvious descent into pseudo-science. They make a mockery of the real hard-won science done by such high-integrity people as Paul Dennis at UEA. Paul is working hard to derive a real and reliable proxy, while all those folks like Michael Mann, Rosanne D’Arrigo, Phil Jones, Gabrielle Hegerl, and the rest fake their way to fame and fortune.

          • “The effect of CO2 on the climate, if any, is invisible to the models. That is the unavoidable message of my error propagation paper.”

            Except to Nick Stokes and his climate science buddies. Nick claims he can see CO2’s effect on the atmosphere. Greta can see CO2 in the air, according to her, and Nick can see what CO2 does in the atmosphere, according to him.

        • Here’s the reference to Jim Hansen repudiating the physical meaning of his own scenario B: Hansen, J. E. (2005). Michael Crichton’s “Scientific Method.” Available at:

          Hansen wrote, “Curiously, the scenario [B] that we described as most realistic is so far turning out to be almost dead on the money. Such close agreement is fortuitous.”

          • Totally at variance with what you said, which was:
            “Correspondence of a climate model projection with the observed trend is purely happenstance.”
            Hansen is not talking about projections or trends. He is talking about a scenario of future GHG levels. This is explicitly not something that they claim to predict; that is why several scenarios are chosen to cover the range of possibilities. It happens that one scenario (B) included CO2 concentrations that turned out almost exactly right. As Hansen says, that is fortuitous.

          • Nick, “ It happens that one scenario (B) included CO2 concentrations that turned out almost exactly right.

            You submerged yourself in deep water with that comment, Nick.

            Here’s Hansen’s complete thought: “Curiously, the scenario that we described as most realistic is so far turning out to be almost dead on the money. Such close agreement is fortuitous. For example, the model used in 1988 had a sensitivity of 4.2°C for doubled CO2, but our best estimate for true climate sensitivity is closer to 3°C for doubled CO2. There are various other uncertain factors that can make the warming larger or smaller. But it is becoming clear that our prediction was in the right ballpark. (my bold)”

            Hansen says they got the “right” answer with an incorrect model.

            Then you come along and say they got the CO2 concentration just right. Implying the model is correct.

            But the right CO2 concentration would have produced the wrong trend. Because the model sensitivity is 40% too high (over the calibration region, anyway).

            So in actuality, the “right” trend was produced using an incorrect model and the wrong CO2 concentration.

            I’ve compared plots of the scenario B forcing and the Myhre 1998 forcing of the actual trend in CO2. They’re not at all alike.

            Guess that means your answer is made-up gibberish, Nick.

          • Nick Stokes
            Another favourite warmist meme.
            Pretend that a scenario is not a projection or a prediction.
            When it clearly is.
            Gibberish from Nick
            “Hansen is not talking about projections or trends. He is talking about a scenario of future GHG levels. This is explicitly not something that they claim to predict; that is why several scenarios are chosen to cover the range of possibilities.”
            Of course a scenario is a prediction, Nick.
            Giving 4 different predictions for 4 different CO2 levels gives you 4 bites at the apple.
            If you cannot even predict the right scenario for the actual level of future CO2 how can you hope to predict the future at all?
            One has to laugh when the gurus of the future are too scared to call a prediction a prediction, because they know that if they are specific they will be caught out much more quickly in their Ponzi scheme.
            The Ponzi Climate scheme.
            Wish that would catch on.

        • Pat Frank Sums it up
          “Climate models are completely unable to resolve the effect of a 0.035 W/m^2 annual increase in tropospheric thermal flux.”
          “The only reason the positive idea is entertained, is because climate modelers (and Nick Stokes, most likely) have no idea about, or understanding of, limits of resolution.”

          Sadly, Nick Stokes, and I would imagine most modellers, have a very good understanding of the limits of resolution.
          The fact that they ignore it in their models and argue about it when they know they are wrong is because they have an agenda to push.
          Nick is very clear.
          He will come on and argue deliberately to stir people up, knowing where he is wrong but also to defend the indefensible. It is both risible and sad that he and friends like Zeke do this but the agenda must be pushed. Since they do not do it for the money they traduce their scientific bona fides for the noble cause.

      • Actually, they use models to adjust past observations. Then, they compare climate models to modelled past observations. In other words, models are being compared to models. There is nothing empirical. Phil

    • “The paper says they got it right 50 years ago. “Not improved” is not a criticism. You can’t do better than get it right.”

      So you get the climate models to match up with the bogus, bastardized Hockey Stick charts and that’s what you call “getting it right”.

      The official temperature record is a fraud. Matching that up with your atmospheric computer models tells you nothing about the atmosphere. Try matching those models up to Hansen 1999, the REAL temperature profile of the Earth. They won’t match up because the computer models are bogus, as is the official temperature record.

      You’re doing science based on lies. What an absurd fiasco climate science has become! Making things up is the new climate science.

  9. I would be surprised if any model even came close. These are all open-loop, bottom-up models that hope to converge to proper top-down behavior. I would bet anything that if you plotted monthly averages for slices of latitude of the modeled surface emissions vs. the emissions at TOA, clouds as a function of temperature, or even temperature as a function of solar energy, you would not come close to the observations, which are readily extracted from decades of continuous weather satellite measurements covering the entire surface of the planet.

    It would be so easy to sanity check the models against actual behavior, rather than attempt to fit tiny temperature trends that are the result of many possible causes, nearly all of which are natural variability. Of course, if they did this and made the models conform to reality, they would be useless for predicting a climate catastrophe.

  10. They didn’t get the amount of warming right.
    They didn’t get the timing of the warming right.
    They didn’t get the distribution of the warming right.

    But other than that, they are right on.

    • MarkW
      They were correct in that the slope of the warming was, like the actual temperatures, positive. That’s good enough for government work.

      • Yeah… Kind of like describing an exploration well as successful because the older rocks were underneath the younger rocks… 😎

        • David
          Yes, that is part of the problem. There is no real definition of what a successful prediction is. In most engineering situations one has a design goal of an accuracy of something like 5 or 10%, or much tighter where human injury is possible. Climastrologists do a lot of hand waving and say that the old models performed as well as the new ones, without citing such things as an absolute temperature delta, or standard percentage error. It’s easy to think that individuals like Greta may not yet have even learned how to calculate the percentage error. However, those who cast a large shadow in the field cannot use such an excuse. One has to attribute a more sinister reason for shying away from quantitative evaluations.

  11. David, you opened your post with, “Guest ‘how can he write this with straight face?’ by David Middleton”

    That’s an assumption on your part. He probably smiled as much writing it as you smiled writing your post.

    Have fun, David. I surely have fun reading your posts, so thank you.


    • Ralph,

      Water vapor is predictably causal to temperature as you can see in the accompanying scatter diagram. Each little dot is a 1 month average for a 2.5 degree slice of latitude. The larger blue and green dots are the long term averages per slice. Note the same response per hemisphere (blue vs. green long term averages). Fit a curve to the long term averages and you will never do any better, although it’s definitely not a linear relationship.

      On the other hand, the relationship between cloud coverage and temperature is not only non linear, but is also not even monotonic and is unique per hemisphere.

      Clouds modulate the planet’s emissions by reducing the emissions at TOA relative to the emissions of the surface below; thus the ratio between the surface emissions and planet emissions is dynamically adjusted based on the amount of clouds. If we plot the emissions of the surface vs. the emissions at TOA, an interesting result arises.

      The resulting average relationship is nearly linear! There are small bumps in the response corresponding to the cloud features at 273K and 300K, but they are significantly reduced relative to the size of the cloud response. More importantly, a bizarre non linear cloud response to temperature that’s unique per hemisphere combined with a non linear relationship between water vapor and temperature results in the same, mostly linear response of planet emissions to surface emissions for both hemispheres!

      This is not a coincidence and the clouds are adapting to the relative difference between land and ocean in order to meet the goal of a mostly linear average response in the power domain, as required when COE is applied to a passive linear system like the Earth’s climate.

      If you consider that the goal of the climate system is a linear response in the power domain, which is EQUIVALENT to a gray body with an emissivity of 0.62, and calculate the amount of clouds required to best fit this constraint based on the measurements of other attributes, the bizarre measured average relationship between cloud coverage and temperature will emerge, and it is about as good as any model can get.
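
As a sketch of the gray-body relation described above (the 0.62 emissivity is the comment’s figure; 288 K is the standard global-mean surface temperature; everything else is illustrative):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.62  # effective gray-body emissivity cited above

def surface_emissions(t_kelvin):
    """Ideal black-body emissions of the surface (W/m^2)."""
    return SIGMA * t_kelvin ** 4

def planet_emissions(t_kelvin, emissivity=EMISSIVITY):
    """TOA emissions under the gray-body assumption: a fixed fraction
    of the surface emissions escapes, so the TOA-vs-surface relation
    is exactly linear through the origin."""
    return emissivity * surface_emissions(t_kelvin)

# At a ~288 K global-mean surface temperature:
surf = surface_emissions(288.0)  # ~390 W/m^2
toa = planet_emissions(288.0)    # ~242 W/m^2
```

By construction the TOA-vs-surface relation here is exactly linear through the origin; whether real clouds actually adjust to hold the effective emissivity near 0.62 is the comment’s claim, not something this sketch demonstrates.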

      • The relationship between heat and clouds, or perhaps energy and matter, reminds me of an analogous relationship between the sexes: male and female, equal and complementary.

  12. When I read these model results, and the claimed hundredths-of-a-degree Celsius data values that are their input basis, I find it very hard to resist going full heretic on the whole global average temps construct – averages of averages of averages claimed to represent accuracy & precision to thousandths of a degree in some cases.

    The layman in me accepts that you can get, say, a calibrated micrometer to give you an accurate measurement of a piece of material fixed in place on a workbench, using a magnifying glass to see the tenths-of-a-millimetre graduations; but an average diurnal temperature value over 5(?) different planetary climate zones over 4(?) seasons sounds to me like a very flaccid data basis to be using for decadal temps comparisons.

    But what would I know, as a lowly retired auditor?

  13. Figure 9 (“Hansen’s very epic fail”) apparently shows a very recent GISTEMP opus. Here’s a comparison of what GISTEMP said in 1997 with 2018

    So far this year NASA’s GISTEMP has made the following number changes each month:

    Jan  Feb  Mar  Apr   May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
    843  370  481  633  1359  566  281  400  674  284    –    –

    • Hello steve case,

      I tried to load the archived version of the GISTEMP data from 1997. Sadly, I receive the following message:

      You don’t have permission to access /Data/GISTEMP/GLB.Ts+dSST.txt on this server.

      By the looks of it, climate change alarmists are attempting to scrub archived versions to rewrite history. They know they’re in deep trouble. I can only hope that somebody has managed to save all the archived versions of GISTEMP to continue exposing this fraud in the future.

    • NASA’s GISTEMP is Climate Fantasy Land. Bogus, bastardized global surface temperature charts created to promote the Human-caused Climate Change agenda.

      See: Climategate.

      Any temperature chart that does not show the 1930’s as being just as warm as today is a bogus, bastardized temperature chart. In other words: A Great Big Lie, meant to fool you into believing something that is not true: That humans burning fossil fuels is overheating the planet.

      Unmodified charts from all over the world show the 1930’s to be as warm as today.

      Here are some examples:

      Tmax charts

      US chart:

      China chart:

      India chart:

      Norway chart:

      Australia chart:

      So if you see a chart that shows the 1930’s as being cooler than today then know that you are looking at a lie. A deliberate lie.

      • Tom Abbott:

        NASA GISS Land-Ocean temperature charts are of AVERAGE global temperatures.

        During the period 1929-1939 there were 5 VEI4 and 2 VEI5 volcanic eruptions, all of which had the effect of cooling global temperatures. There were La Ninas between JJA 1933 and MAM 1934, and between MJJ 1938 and FMA 1939 because of the cooling from the eruptions. The 1933-1934 La Nina followed the 2 VEI5 eruptions and 2 VEI4’s and was one of the strongest ever recorded.

        I agree that the 1930’s in parts of the world were at least as hot as now, but when all global temperature measurements are averaged together, the NASA GISS values are as would be expected.

      • Tom Abbott:

        Here is a comparison of the Jan-Dec average anomalous global temperatures for the 1930’s as reported by the British Met. Office Hadcrut4.600 data set, and those of NASA GISS.

        Hadcrut4: 1930 -0.14, 1931 -0.09, 1932 -0.14, 1933 -0.27, 1934 -0.13, 1935 -0.18, 1936 -0.15, 1937 -0.03, 1938 -0.01, 1939 -0.05

        NASA GISS: 1930 -0.14, 1931 -0.10, 1932 -0.17, 1933 -0.30, 1934 -0.14, 1935 -0.21, 1936 -0.16, 1937 -0.04, 1938 -0.03, 1939 -0.03

        The two data sets are essentially identical, with no evidence of any tampering by NASA GISS.

        Up until 2014, NASA GISS also reported anomalous temperatures for the Northern Hemisphere, and they do show higher temperatures for the 1930’s, although because of averaging, the highest anomalous temperature recorded for the 1930’s was only (+)0.6 deg. C., in 1938.

  14. It seems to be accepted at least by “deniers” that the small CO2 increase does not cause warming because its greenhouse effect is minimal compared to water vapour. The increase in CO2 though appears to be causing the increased greening in the world. Will the greening cause increased cloud cover and could this start a greenhouse effect? I suspect that the blanketing effect will balance the reflection effect and will be effectively a thermostat.

    • Assuming accurate observation and interpretation then and now, this implies greater temperature changes then, than now, and, absent an identifiable anthropogenic source, a greater forcing then, undetected, which may persist and explain temperature anomalies today.

  15. The article is either unclear or I just may have missed it, but are the “historic observations” used for comparison the raw or the ‘adjusted’ data?

    • ScienceABC123 … at 5:14 pm
      The article is either unclear or I just may have missed it, but are the “historic observations” used for comparison the raw or the ‘adjusted’ data?

      See my post above.

    • They are comparing the atmospheric computer models to their bogus, bastardized surface temperature charts. Then they get a “match” and think they have done something. They have matched a false reality with their computer model false reality. GIGO.

      Any surface temperature chart that doesn’t show the 1930’s as being just as warm as today is a bogus, bastardized temperature chart. Just look at where the 1930’s is placed on the chart and you will know whether you are looking at a lie or not.

      This whole human-caused climate change scam is predicated on these bogus, bastardized charts. Without them, the Alarmists have NOTHING.

  16. I don’t know what you are saying was not accurate.

    They predicted there would be temperatures and there were.

  17. We should embrace this revelation and use it to justify stopping all climate warming activity, since it is obvious the last 50 years of model improvements have all been a wasted effort.

  18. The main point here is that the comparison stops with CMIP3. This is actually a paper from the genre of papers, “gee, Hansen and those old guys got it right.”

    That’s not the issue, because CMIP3 still stayed in the range of reasonableness. Still ran too hot, but not as “too hot” as CMIP5. And soon we will be looking at the CMIP6 models, which are running even hotter than CMIP5.

    This is a paper about ancient history climate models. Who cares? The models driving the SPM in CMIP6 bear no relation to these models. Those models run way hotter, and are way more wrong. But it will take a few years for that to show up, meantime the alarmist flame is fanned.

  19. Some more comments on this paper. The most intriguing fact of the paper is that the “observed” TCR is about 1.8ºC. In the energy-budget papers the figure for TCR is normally 1.3ºC or 1.4ºC; in Lewis&Curry 2018 it’s 1.2ºC using HadCRUT and 1.33ºC using Cowtan&Way. There was an update to HadCRUT (around May this year, iirc) which increased warming, and the update was then implemented by Cowtan&Way, but it’s still hard to see how the observed or estimated TCR could get even to 1.5ºC following the energy-budget method.

    I believe much or most of the difference between the energy-budget estimates and Hausfather’s is that the latter has smaller forcings, by 15-20% (at least comparing it to LC18). This would make Hausfather’s TCR 20-25% higher.

    Lewis&Curry forcings are here:

    Hausfather et al forcings, I believe, are these:

    Shameless plug: my app to calculate forcings for a given period (using Lewis&Curry):

    Another point I haven’t really dived into is how exactly they calculate TCR using trends. The energy-budget papers normally use the difference method: comparing the average of one period (say 1930-1950) with the average of a second period (say 2001-2016). They simply subtract one period’s figures from the other’s, in order to calculate how much forcing and temperature increased between them.

    In Hausfather’s paper this is impossible as the periods analyzed are too short to be broken into “starting” and “ending” sections. Instead they use the whole period (1970-2000, 2001-2017, etc). It may seem ridiculous but just using trends instead of subtraction can have significant effects on the sensitivity one calculates.

    This issue is obviously far too complex to be dealt with in a comment on a blog post, but the above links will hopefully be useful to those trying to figure out why there are differences in estimated TCR.
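    The difference between the two approaches can be sketched in a few lines. This is a toy illustration with made-up numbers, not data or code from any of the papers above:

```python
# Toy comparison of the "difference" and "trend" methods for estimating TCR.
# All series here are synthetic and illustrative; F2X is a common convention.
import numpy as np

F2X = 3.71  # W/m^2 forcing per doubling of CO2

years = np.arange(1970, 2018)
forcing = 1.0 + 0.035 * (years - 1970)   # assumed linear forcing, W/m^2
temp = 1.8 * forcing / F2X               # built so the "true" TCR is 1.8 K

# Difference method: subtract a starting-period average from an ending-period average
start, end = slice(0, 11), slice(-11, None)   # 1970-1980 vs 2007-2017
tcr_diff = F2X * (temp[end].mean() - temp[start].mean()) \
               / (forcing[end].mean() - forcing[start].mean())

# Trend method: ratio of the linear-regression slopes over the whole period
tcr_trend = F2X * np.polyfit(years, temp, 1)[0] / np.polyfit(years, forcing, 1)[0]

print(round(tcr_diff, 2), round(tcr_trend, 2))  # 1.8 1.8 on noise-free data
```

    On this noise-free series the two methods agree exactly; with realistic noise and short periods, as the comment notes, they can diverge noticeably.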

  20. A nit-pick: in your “Figure 4. Implied TCR (°C/2xCO2), observations vs models,” it seems that you’re testing if each model’s mean prediction fits in the σ uncertainty/variation in the *observations*. That’s irrelevant — it’s the reverse comparison that matters: whether the observed data fits in the prediction range (uncertainty) of the models.

  21. David,

    If you will send me your address, I’ll send you a copy of my new book “The solar-magnet cause of climate changes and origin of the Ice Ages.” It contains data for all climate changes (for which there are data) over the past 800,000 years. The data are rather astonishing.


  22. Even IF the models did a good job with global temperature – and they DIDN’T and DON’T – they are garbage with temperature at smaller scales, garbage with cloud cover, garbage with precipitation (especially on scales smaller than global), etc.

    But if you take care with your parameter tuning, you can take a bunch of inaccuracies all over the globe and add them up and get a fair representation of global temperature anomalies. Have to use that g-word again…garbage. Not much different than saying, “Well I derived an answer of 9.8 meters per second squared for acceleration due to gravity. I came up with 7.8 for half the globe and 11.8 for the other half, so it averages to 9.8. Perfection.”

  23. As a retired computer modeller, can I ask why the models are even considered worthy of debate? The day it became climate change rather than global warming, they ceased to be the way to determine the cause or causes of changes.

    With one world, you have to model the difference between fossil fuel use and no fossil fuel use. With regional changes it is a pure heat-generation and heat-transfer problem, immeasurably simpler to deal with by comparing high fossil-fuel-use areas with low fossil-fuel-use areas.
    Followers of the climate cult will now say emissions move to the high-anomaly areas, but emissions are highest at the source and disperse; they do not concentrate at the North Pole. Even if they did, we need only examine the rate of heat changes to see whether a greenhouse effect is plausible, given the temperature rises and the fact that the Arctic is hardly a sunshine-and-beachwear destination of choice.
    Can anyone point me to the flaw in this argument please?

  24. Am I understanding correctly here?

    The proponents of climate models claim that the models are reasonably accurate, when the models can miss by amounts that, in other contextual discussions, are alarming?

    So, if a model is off by a few tenths of a degree, this is close to reality, but if reality shows an increase of a few tenths of a degree (within a historic range where this has happened before), then the few tenths of a degree signal catastrophe ahead?

    Models are not alarmingly inaccurate by those amounts, yet climate is alarmingly warmer by those amounts?

    Or am I missing something?

  25. “Climate Models Have Not Improved in 50 Years”

    – and scepticism won’t help due to new censorship aka “why giv’em a platform, a tribune”

    By Guardian, BBC, ABC, CBC, NBC … Command output:

    5 Things Millennials Want Everyone To Know About Political Correctness (That Older Generations Don’t Understand)

    Being PC isn’t about restricting free speech, it’s about creating space for meaningful conversations

    The heated debate about political correctness is often misunderstood. While many individuals across generations dislike the pejorative use of political correctness to represent censorship, a closer investigation reveals generational differences in the desire to use inclusive language.

    Millennials know that using appropriate language invites rather than restricts productive conversation. Creating a supportive environment makes space for all individuals to feel welcome in sharing their opinions, rather than fearing that people will demonize their personhood and attack their character based on their identities. Thanks to the internet, Millennials are citizens of the globe and ambassadors of social justice. Unfortunately, not all generations understand how using certain words or phrases prohibits dialogue and hurts other people.

    To discover five things that all millennials want older generations to know about political correctness that they don’t understand, read the list below.

    1. There is a major difference between “being honest” and spewing prejudice.

    2. Political correctness is not about censorship, it’s about showing respect.

    3. Millennials feel more connected to global citizenship and human rights than nationalism.

    4. Inclusive language creates space for meaningful conversations to take place, offensive language makes people feel unsafe.

    5. Millennials are not being sensitive, they’re being morally minded and ethically informed global citizens.

    :: and He that believeth shall be saved; but he that believeth not shall be damned.

    :: damned!

  26. OMG: Down there’s America:

    Figure 13. The models haven’t improved. RSS V4.0 MSU/AMSU atmosperhic temperature dataset vs. CMIP-5 climate models. → Figure 13. The models haven’t improved. RSS V4.0 MSU/AMSU atmospheric temperature dataset vs. CMIP-5 climate models.

    Down there’s America:

    Fourth Grade Science projects: Atmosperhic Layers. Every night before you go to sleep, you probably cover up with a blanket.

  28. Zeke Hausfather’s study is nothing but deliberately deceptive “spin.” To excuse the extreme inaccuracies of modeled projections that failed to anticipate that negative feedbacks would mitigate GHG emissions, he substituted GHG level increases (i.e., after the effects of negative feedbacks) in place of emissions.

    I fired off a 16-part tweetstorm about it, starting here:

    it’s “unrolled” here:
    or here:

    As part of the effort to build support for creation of the IPCC, on a sweltering June 23, 1988, James Hansen testified to Congress about the temperature projections from
    GISS’s state-of-the-art “GCM Model II” climate model. He described three “scenarios,” dubbed A, B and C. He told Congress that “scenario C” represented “draconian emission cuts,” “scenario A” represented “business as usual,” and “scenario B” was in-between. Here’s the transcript:

    Their paper on the same topic, Hansen et al 1988, Global climate changes as forecast by the GISS 3-D model, J. Geophys. Res., was
    published a couple of months later. It filled in the details.

    The three scenarios, it said, “represented the response of a 3D global climate model to realistic rates of change of radiative forcing mechanisms.” Its discussion focused mostly on scenario A, which they said “goes approximately through the middle of the range of likely climate forcing estimated for the year 2030 by Ramanathan et al. [1985],” though they acknowledged that, “Scenario A, since it is exponential, must eventually be on the high side of reality in view of finite resource constraints and environmental concerns, even though the growth of emissions in scenario A (≈1.5% yr⁻¹) is less than the rate typical of the past century (≈4% yr⁻¹)… [so] Scenario B is perhaps the most plausible of the three cases.”

    Hansen, in both his testimony and the paper, strongly conveyed the impression that “scenario A” was the realistic one, except in the very long term, when “finite resource constraints” must “eventually” limit emissions, making scenario B more plausible. But Hausfather deliberately distorted Hansen’s meaning, by omitting Hansen’s “eventually” qualifier, to make it appear that Hansen had presented scenario B as the most realistic, even in the near term.

    There were many problems with GISS’s “scenario A.” Perhaps most obviously, by 1988, CFC emissions were already slated to decline, because of the 1985 Vienna Convention for the Protection of the Ozone Layer and 1987 Montreal Protocol. So, building an exponential increase of those emissions into any of their scenarios was shockingly dishonest. But they did it anyhow.

    Other than that, Scenario A’s emission projections actually were conservative. In fact, they under-projected CO2 emissions: over the next 26 years CO2 emissions actually increased by an average of 1.97%/yr, rather than scenario A’s 1.5%.

    Yet scenario A still projected 200% to 300% too much warming.
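    The arithmetic in the two paragraphs above is easy to check by compounding the two growth rates over 26 years (pure arithmetic, no climate data involved):

```python
# Compound emissions growth over 26 years at the two rates quoted above.
growth_scenario_a = 1.015 ** 26    # scenario A's assumed 1.5 %/yr
growth_observed = 1.0197 ** 26     # the ~1.97 %/yr average quoted above
print(round(growth_scenario_a, 2), round(growth_observed, 2))  # 1.47 1.66
```

    So actual cumulative emissions growth over the period (about +66%) exceeded scenario A’s assumed path (about +47%), which is the sense in which scenario A’s emissions were, if anything, conservative.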



    There were two main reasons for the extreme inaccuracy of their “business as usual” scenario A:

    One reason was that it inexcusably projected exponential growth in CFC emissions, which were already slated to decline.

    The other reason was that they completely failed to anticipate that powerful negative feedbacks, like “greening” and ocean processes, would remove CO2 from the atmosphere at an accelerating rate, thus mitigating CO2 emissions. That’s why CO2 levels have increased so slowly, while CO2 emissions increased so rapidly. In fact, their paper conflates “emissions” with GHG level increases, using the terms interchangeably, because they didn’t realize that higher GHG levels would accelerate the natural processes which remove those GHGs from the atmosphere.

    That’s why the climate models did such a terrible job of projecting future temperatures.

Comments are closed.