Hansen’s 1988 Predictions Redux

Guest Post by Willis Eschenbach

Over in the Tweeterverse, someone sent me the link to the revered climate scientist James Hansen’s 1988 Senate testimony and told me “Here’s what we were told 30 years ago by NASA scientist James Hansen. It has proven accurate.”

I thought … huh? Can that be right?

Here is a photo of His Most Righteousness, Dr. James “Death Train” Hansen, getting arrested for civil disobedience in support of climate alarmism …

I have to confess, I find myself guilty of schadenfreude in noting that he’s being arrested by … Officer Green …

In any case, let me take as my text for this sermon the aforementioned 1988 Epistle of St. James To The Senators, available here. I show the relevant part below, his temperature forecast.

ORIGINAL CAPTION: Fig. 3. Annual mean global surface air temperature computed for trace gas scenarios A, B, and C described in reference 1. [Scenario A assumes continued growth rates of trace gas emissions typical of the past 20 years, i.e., about 1.5% yr^-1 emission growth; scenario B has emission rates approximately fixed at current rates; scenario C drastically reduces trace gas emissions between 1990 and 2000.] The shaded range is an estimate of global temperature during the peak of the current and previous interglacial periods, about 6,000 and 120,000 years before present, respectively. The zero point for observations is the 1951-1980 mean (reference 6); the zero point for the model is the control run mean.

I was interested in “Scenario A”, which Hansen defined as what would happen assuming “continued growth rates of trace gas emissions typical of the past 20 years, i.e., about 1.5% yr^-1”.

To see how well Scenario A fits the period after 1987, which is when Hansen’s observational data ends, I took a look at the rate of growth of CO2 emissions since 1987. Figure 2 shows that graph.

Figure 2. Annual increase in CO2 emissions, percent.

This shows that Hansen’s estimate of future CO2 emissions was quite close, although in reality annual emissions grew about 25% faster than Hansen estimated. As a result, his computer estimate for Scenario A should, if anything, have shown a bit more warming than we see in Figure 1 above.
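For anyone who wants to check such growth-rate claims themselves, the compound annual growth rate is a one-liner. Here is a sketch in Python; the two emission totals are made-up placeholder numbers, not my digitized data:

```python
# Sketch: compound annual growth rate (CAGR) of CO2 emissions.
# The two emission totals below are illustrative placeholders,
# not values digitized from any dataset.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, as a fraction."""
    return (end / start) ** (1.0 / years) - 1.0

# Hypothetical example: emissions rising from 100 to 180 units over 30 years
rate = cagr(100.0, 180.0, 30)
print(f"{rate:.2%} per year")  # roughly 2% per year, vs. Hansen's assumed 1.5%
```

With those placeholder numbers the growth comes out near 2% per year, which is the kind of gap (actual growth a quarter or so above the assumed 1.5%) described above.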

Next, I digitized Hansen’s graph to compare it to reality. To start with, here is what is listed as “Observations” in Hansen’s graph. I’ve compared Hansen’s observations to the Goddard Institute for Space Studies Land-Ocean Temperature Index (GISS LOTI) and the HadCRUT global surface temperature datasets.

Figure 3. The line marked “Observations” in Hansen’s graph shown as Figure 1 above, along with modern temperature estimates. All data is expressed as anomalies about the 1951-1980 mean temperature.

OK, so now we have established that:

• Hansen’s “Scenario A” estimate of future growth in CO2 emissions was close, albeit a bit low, and

• Hansen’s historical temperature observations agree reasonably well with modern estimates.

Given that he was pretty accurate in all of that, albeit a bit low on CO2 emissions growth … how did his Scenario A prediction work out?

Well … not so well …

Figure 4. The line marked “Observations” in Hansen’s graph shown as Figure 1 above, along with his Scenario A, and modern temperature estimates. All observational data is expressed as anomalies about the 1951-1980 mean temperature.

So I mentioned this rather substantial miss (the predicted warming was roughly twice the actual warming) to the man on the Twitter-Totter, the one who’d said that Hansen’s prediction had been “proven accurate”.

His reply?

He said that Dr. Hansen’s prediction was indeed proven accurate—he’d merely used the wrong value for the climate sensitivity, viz: “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”
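It’s worth spelling out the arithmetic behind that move. If the modeled warming scales roughly linearly with climate sensitivity, then “re-plotting” Scenario A with a lower sensitivity just means rescaling it, something like this (all the numbers here are illustrative assumptions, not digitized values):

```python
# Sketch of the "rescale with a new sensitivity" move.
# Assumes the modeled warming scales roughly linearly with ECS,
# and uses illustrative numbers, not digitized values.

ECS_1988 = 4.2   # Hansen's 1988 model sensitivity, deg C per CO2 doubling
ECS_NEW  = 2.1   # a hypothetical "best current estimate"

scenario_a_warming = 1.0  # hypothetical Scenario A warming to date, deg C
rescaled = scenario_a_warming * (ECS_NEW / ECS_1988)
print(rescaled)  # 0.5 -- halving the sensitivity halves the predicted warming
```

Of course, rescaling a 30-year-old prediction with numbers chosen after the fact no longer tests the prediction; it just fits a curve to what already happened.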

I loved the part about “best current estimates” of climate sensitivity … here are the current estimates, from my post on The Picasso Problem:

Figure 5. Changes over time in the estimate of the climate sensitivity parameter “lambda”. “∆T2x(°C)” is the expected temperature change in degrees Celsius resulting from a doubling of atmospheric CO2, which is assumed to increase the forcing by 3.7 watts per square metre. FAR, SAR, TAR, AR4, and AR5 are the first through fifth UN IPCC Assessment Reports, each giving an assessment of the state of climate science as of its publication date. Red dots show recent individual estimates of the climate sensitivity.
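As the caption says, ∆T2x follows from lambda by simple multiplication: ∆T2x = lambda × 3.7 W/m². A quick check in Python; the lambda values below are examples only (the second one is just 4.2 / 3.7, the value implied by the 4.2°C sensitivity of Hansen’s 1988 model quoted in the comments below):

```python
# Convert a climate sensitivity parameter lambda (deg C per W/m^2)
# into the expected warming for a doubling of CO2, using the
# 3.7 W/m^2 doubling forcing stated in the figure caption.

F_2X = 3.7  # forcing from doubled CO2, W/m^2

def delta_t2x(lam: float) -> float:
    """Expected warming (deg C) for doubled CO2, given lambda."""
    return lam * F_2X

print(round(delta_t2x(0.8), 2))        # 2.96 deg C, a mid-range example
print(round(delta_t2x(4.2 / 3.7), 2))  # 4.2 deg C, Hansen's 1988 model value
```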

While giving the Tweeterman zero points for accuracy, I did have to applaud him for sheer effrontery and imaginuity. It’s a perfect example of why it is so hard to convince climate alarmists of anything—because to them, everything is a confirmation of their ideas. Whether it is too hot, too cold, too much snow, too little snow, warm winters, brutal winters, or disproven predictions—to the alarmists all of these are clear and obvious signs of the impending Thermageddon, as foretold in the Revelations of St. James of Hansen.

My best to you all, the beat goes on, keep fighting the good fight.

w.

207 thoughts on “Hansen’s 1988 Predictions Redux”

    • Let’s see if I have the technique down.

      It doesn’t matter how much warming Hansen predicted.
      He predicted that it would warm.
      It did.
      Therefore if we don’t stop producing CO2 we are all going to die.

        • Willis, remember the old adage that goes something like “best not to argue with idiots as they will drag you down to their level and beat you with experience”.

          I am amazed you have the patience for Twitter; it is an aptly named platform.

          • Really? It is misspelled constantly! The actual name is TWATTER, little Jackie Dorsey has been running from the real name since 2006 and has failed to escape it yet. That is why he looks so constipated all the time.

          • bit chilly, I find Twitter has lots of interesting stuff, and I spend very little time there arguing with idiots.

            Go figure,

            w.

      • Sorry to have to disagree, but no I don’t think you have “Got it in one”.
        The Tweeterverser said “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”.
        So temperature doesn’t even have to go in the same direction as the original prediction – it didn’t even have to warm.
        It’s difficult to put the Tweeterverser’s idea into simple words in a way that doesn’t make it sound as potty as it really is, but maybe something like this:

        The forecast was made 30 years ago, so only an idiot would test it without first bringing it up to date using the known conditions over those 30 years. When you do this, the forecast is proven to be completely accurate.

        We need that thinking program for schools, and we need it yesterday.

        • Basically they are saying that if you take what we know now, and apply it to the predictions from back then, the predictions from back then would have matched what we know now. Therefore Hansen’s predictions back then were spot on.

          Circular math at its best.

        • Everyone is 100% correct when you can change your prediction based on today’s knowledge. His hypothesis was falsified and he needs to start over. You are definitely right: someone needs to take a course in logic and the scientific process.

        • My standard operating principle is “you first”; then I will write the account of you going first, and I am pretty sure “globall warmining” will not be the listed cause of death.

    • When will science finally admit lambda = 0? Lambda = 0 makes perfect sense, is consistent with every scrap of information we have, and exactly explains the warming to date. The only downside of admitting lambda = 0 is that there suddenly is no crisis.

    • Oh yes.

      Of all the climate models, one is fairly close to the observed trend. Therefore, we have to believe that the most extreme models are credible as well. link

    • At this point, Hansen’s Scenario A prediction is only off by 2 standard deviations (and drifting further and further off every day).
      So he’s only off by 95%+.

      • Technically, that’s not exactly correct.
        What I should have said is that we can be 95%+ sure that Hansen’s Scenario A prediction is a boat load of crap.
        Fixed.

    • “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”

      All that this really demonstrates is that the alarmists do not understand the skeptical point of view at all. They live in a bubble created by failing public schools and a mainstream media, both only capable of presenting one point of view. I’m so cynical about the future of our culture…

  1. “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity.”

    Oh, is that all? Well, we all know climate sensitivity isn’t that important.

    • Roy,
      And scenarios B and C showed the effects of a postulated volcanic eruption in 2014 (which didn’t happen), and that eruption was the only reason those two scenarios were reduced enough to come close to the historical record. That is, even with extreme mitigation of CO2 emissions, the model would have run much hotter than what actually happened, had it not been for the cooling from the eruption that never roared.

  2. “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”

    That response is just beautiful!

    • He was wrong back then; however, if we adjust his predictions to match what actually happened, then his predictions are correct now. And that is all that matters.

  3. The real world data seems to better match somewhere between Hansen’s scenarios B and C, even though neither matches the trajectory along the way particularly well. If global temperatures continue to fall back to the trend after the El Nino peak of 2016, then Hansen’s scenario C will be closest.

    But clearly there was no “drastic reduction of trace gasses between 1990 and 2000” as required under Hansen’s scenario C, so something is dramatically wrong in his modelling.

    • Are there any climate alarmist models that have survived the test of time?

      It’s been a while since I casually looked for a single model that was ‘close’ for longer than 5 years. I couldn’t find any.

      • Wouldn’t that give you two uncles?
        Something else has to change as well, not just your aunty.

        • No – we’re talking about a parent’s sibling here. I think you’re assuming that the “aunt” is an “aunt” by marriage to an “uncle” by blood. If my mother is an only child and my father only has one female sibling, the statement “if my aunt had balls, she’d be my uncle” is perfectly logical and requires no “second uncle.”

          • Yea, it took a while, but I finally got to the same conclusion. I somehow mixed up uncle and father. I’m a bit dizzy today. Monday…

  4. I still think he got the choice of hat about right though. I lost a similar one in Seattle back in 1996, but I expect they’re not related.

    • You mean like the Russian model’s sensitivity, or perhaps Lindzen, or Curry, or Lewis, or Dr Spencer, or Dr Michaels, etc., etc.?

      • I think Willis ought to do a comparison between Hansen’s predictions and the Russian climate model.

        And with UAH.

        What kind of scientific results can one get from using bastardized data like GISS and HadCRUT? I would say bastardized results.

        • That’s it Tom. At this point in time with data being corrupted like it is, methinks UAH should be the main resource point.

          • I would like to see Willis do a comparison of all these charts with the Tulsa, Oklahoma surface temperature chart.

            It won’t look like a Hockey Stick chart with the warming of the 1930’s removed, instead it will show 1936 as being the hottest year on record in Tulsa, warmer than subsequent years. Tulsa, Oklahoma has been in a temperature downtrend since 1936.

            Let me tell you about the summer of 1936 in Tulsa. In the summer of 1936, Tulsa had about 60 days of over 100F, and 20 of those days were over 110F, and four of those days were 120F. And the surrounding states were just as hot. If we had weather like that today, the Alarmists would go crazy with fear. But we don’t have weather like that today, instead we have some of the most benign weather in memory. The Alarmist are not describing this reality.

            The decades of the satellite era (1979 to present) don’t even come close to the extreme weather and temperatures of Tulsa, Oklahoma in the 1930’s. And the rest of the United States shows the same high temperatures versus today’s temperatures, if you go by city and state temperature charts. NASA/NOAA have managed to bastardize even the US surface temperature chart now, but the individual city and state temperature charts tell a different story than what NASA and NOAA are telling us, and the individual charts are the real reality.

            Compare those Global Climate Models and Hockey Sticks to some real data, local data that hasn’t been tampered with.

            New Orleans has a nice, long surface temperature record. Let’s see if it looks anything like a Hockey Stick.

    • The problem wouldn’t exist if they used the standard approach from the start. But they wanted to prove CO2 was the problem, and they were successful, judging by the bad policies implemented and the trillions of dollars wasted. They wanted and got a predetermined result for a political agenda; in that, Hansen and the gang were successful. Accurate climate forecast? Not so much.

      • They have taken and politicized an idea of NATURE and use it, a Nature dictatorship, to dominate Man and capitalism. Their real aim is a radical change of society.

      • It’s not so much that the predictions were wrong, it’s that there were bad assumptions built into the model.
        Replace the bad assumptions and the previous model does better.
        Ergo, you never have to admit that you were wrong; you just disavow all previous work.
        I wonder if the trillions of dollars wasted because of the now disavowed predictions can be clawed back as easily?

    • Quite so. Re-run the model with the new (lower) sensitivity.

      WHOOPS! Now the model agrees (within reason) with more recent observations – but the modeled temperatures are way out of line with earlier observations; the model is far COLDER.

      No problemo! Just “adjust” (the NewSpeak word for “fake”) the earlier observations so that they are colder than real history said they were. The year 1984 is a good place to start…

    • Steven Mosher: The statement “standard approach is to re run the model with the new sensitivity value” is seriously deranged. I suppose it comes from the same school of post-modern science as the need to protect a ruling paradigm: “The failure of a result to conform to the paradigm is seen not as refuting the paradigm, but as the mistake of the researcher.”.
      Look, this thing is really simple. To test a prediction, you compare results with predicted results. Period. If you update using new values, you are making a new prediction, and it does absolutely nothing to the old prediction. The old prediction is still “out there” for testing.
      Regrettably, many in science don’t seem to understand simple basics. Science, or at least some of it, would appear to be in a very sorry state.

      • Richard Feynman summed it up brilliantly in his talk to some students that you can see on YouTube, saying that if the observations do not match the hypothesis then the hypothesis is WRONG.
        No matter how clever you are, how clever the hypothesis is, it is WRONG.

      • “The failure of nature to conform to the General Circulation Models is seen not as refuting the models, but as errors of reality and mistakes of the researchers.”
        Generic IPCC Climate Scientist

    • If my latest prediction is correct, that proves that all my previous, bad predictions, are also correct.
      That may be the standard approach in climate science, however in actual science, scientists own up to their mistakes and move forward with new data and new knowledge.

    • And change another value and re-run the model and then re-run again.

      How many models do we have? Why do have so many models?

      Settled science indeed

      • My model is correct, and so are all the others! It’s that bloody reality that’s the problem! See my most-recent grant-funded paper!/(sarc)

    • Let’s see. I have a model that predicts the yearly-average temperature for the next 30 years. My “model” contains beaucoup degrees of freedom (independent variables). I set a numerical value for each independent variable and plot the model output temperature for the next 30 years. Over that time, measurements are made of the yearly-average temperature. The measurements don’t come close to agreeing with my model predictions; but using the “standard approach” I adjust a few of the independent-variable values, and voila, my model matches measurement quite well. From this I conclude that I had a good model.

      Give me a break. My model wasn’t just the selection of a set of independent variables; my model also contained a numerical value for each independent variable. To claim that I had a good model all along is a joke. With enough degrees of freedom, I can make a model that will fit any finite set of measurements. By this “standard approach” line of reasoning, there is no such thing as a “bad model,” only models with too few degrees of freedom.
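      The degrees-of-freedom point above can be made concrete: a polynomial with as many coefficients as there are data points passes through any finite set of measurements exactly, so a perfect fit obtained by re-tuning proves nothing about predictive skill. A minimal pure-Python sketch using Lagrange interpolation (the data points are arbitrary, made-up numbers):

```python
# With enough free parameters, any finite set of measurements can be
# fit exactly. Lagrange interpolation builds the degree-(n-1) polynomial
# through n arbitrary points -- a "perfect" fit with zero predictive value.

def lagrange_fit(points):
    """Return a function interpolating the given (x, y) points exactly."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Any "measurements" at all, however jagged:
data = [(0, 1.0), (1, -3.0), (2, 7.0), (3, 0.5)]
model = lagrange_fit(data)
for x, y in data:
    assert abs(model(x) - y) < 1e-9  # the fit is exact at every point
```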

      • As an example, imagine the following conversation between a broker and an investor he advises.

        Investor: “What happened? In addition to the 10 grand I paid you because you convinced me you had a can’t fail stock-price-prediction model, I lost my shirt making investments in line with your model.”

        Broker: “My model is actually very good. I just used the wrong values for a few of the model parameters. I fixed that problem by inserting new values into my model. My model with the new values with 100% certainty tells you how you should have invested to make a fortune. So obviously my model was and is good. It just didn’t accurately predict the future, but it will now. Given that, when can I expect another $10,000 check from you for my advice?”

        Investor: “Just a sec, I’ll have to check my model to predict what your worth is to me. It doesn’t look good. My model says you can expect a check from me sometime between when hell freezes over and when pigs fly. See, I too have a pretty good model.”

    • Mr. Mosher, apparently you didn’t get the consensus memo; climate sensitivity is an emergent property of the models, not an input to the models. Just ask Gavin or any of the rest of the gang.

      This is all so much BS for inquiring (susceptible) minds.

    • Steven Mosher: standard approach is to re run the model with the new sensivity value.

      Which new sensitivity value?

      Of possible sensitivity values, should we regard the one that best corrects the model-data fit as a new estimate?

      • Again, the model calculates the climate sensitivity. One must fundamentally change the model to get a different ECS.

        • Dave Fair: Again, the model calculates the climate sensitivity. One must fundamentally change the model to get a different ECS.

          Steven Mosher: standard approach is to re run the model with the new sensivity value.

          What Steven Mosher recommended (or says is a standard approach) is something that can not be done. When a bunch of parameter estimates and other model details are changed, then there isn’t an entity “the model” to which you could amend one sensitivity value.

    • **standard approach is to re run the model with the new sensivity value.**
      I thought it was “standard approach is to re FUDGE the model with the new sensivity value.”

      • Again, sensitivity values come out of models; they are not put in. Modelers dick around with math and parameters until they get something that “seems right” to them. At the time, Hansen liked his then-model because it gave him a sensitivity value greater than 4.

        He was hoping for 5, but couldn’t dick around too much because of those darned historical values. It took NOAA’s Karl to get around a lot of history.

        People got tired of model-seances for sensitivity values and went about using empirical methods to come in with values somewhat less. See Lewis, especially.

    • Yay, Moshpit comes to the rescue with another obtuse, obfuscating drive by comment!

      Moshpit means re run the model with whichever sensitivity value necessary to make Hansen’s boat load of crap prediction match reality.

      That’s how climate “scientists” roll.

      • I still get flashbacks of the Ozzy Osborne mosh pit experience. I wouldn’t have missed it for the world, but man, it comes at a cost.

      • The all-knowing Mr. Mosher forgot that climate sensitivity is an emergent phenomenon of the models, not an input. But the modelers do dick around with the math and parameters to get the sensitivity that “sounds about right.” Hansen was shooting for 5, but just couldn’t tweak fast enough for Big Al’s climate circus.

    • But his prediction was for terrible things to happen due to *less*CO2 than was actually emitted.
      He was wrong.

    • Willis, did you try to recalculate the TCR from the observations? In your figure 5 you showed essentially the ECS bandwidth; however, the TCR (which is smaller than the ECS) is more appropriate in this case IMO. I tried it elsewhere and found about 1.3 °C/doubling of CO2, which is also the result of Lewis/Curry 2018. Can you confirm this value?
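      The back-of-envelope version of that calculation follows from the logarithmic forcing relation: TCR ≈ ∆T / log2(C/C0). A sketch with illustrative numbers (the temperature change and CO2 concentrations below are assumptions chosen for the example, not a claim about the actual record):

```python
import math

# Back-of-envelope transient climate response (TCR) from observations:
# TCR ~= delta_T / log2(C1 / C0). All input numbers here are illustrative
# assumptions, not values taken from any particular dataset.

def tcr_estimate(delta_t: float, c0: float, c1: float) -> float:
    """TCR in deg C per CO2 doubling, from warming and CO2 change."""
    return delta_t / math.log2(c1 / c0)

# Hypothetical: 0.3 deg C of warming while CO2 rose from 349 to 407 ppm
print(round(tcr_estimate(0.3, 349.0, 407.0), 2))  # ~1.35 deg C per doubling
```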

    • If you feed in values founded on actual data, it is no longer a forecast or prediction; it’s a hindcast. Hardly the same thing, is it?

    • standard approach is to re run the model with the new sensivity value.

      That may be the standard approach for what passes as science where you come from, Mosh, but in real science you don’t get do-overs on your predictions. Real science is predictive, and your predictions stand or fall on what they originally predicted. If your predictions fail, then you start over, as the hypothesis that made those predictions is FALSIFIED. You don’t get to rejig your predictions and then claim your predictions were actually accurate.

    • But that doesn’t make the original prediction right, does it? Hansen said that given scenario A, X would happen … given scenario C, Z would. We have had scenario A and Z happened!

      Yes, by all means redo the figures but you then have to wait 30 years before that is proved right or wrong and meanwhile we have lives to lead and no reason to suppose Hansen’s second attempt at glorified guesswork would be any better than his first.

      Not forgetting that “the climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.” Or put another way 3 metres of snow in 24 hours in the Bavarian Alps may be freak weather or some side-effect of global warming or the first sign of an impending (little) ice age. And we don’t know which.

    • “standard approach is to re run the model with the new sensivity value.”

      … following which all the “tipping points” and “end-of-the-world” scenarios fade away to nothing (on business as usual growth).

    • What if the value of climate sensitivity that gives the result closest to observations is zero? I. e. carbon dioxide has no effect on climate?

  5. Willis, why is the Russian model the closest to observations over the last 31 years? Were they just lucky, or perhaps not hampered by the competitive need to alarm us all?

  6. Isn’t this a method to determine climate sensitivity? Compare Hansen’s projections under the various emissions amounts, and the one that matches observations gives Hansen’s implied estimate of sensitivity.

    thanks
    JK

    • You are trying to determine Radiative Transfer sensitivity with classical physics measurements … good luck with that; we can’t even do that in a lab setting, because you are dealing with EM waves/particles.

      Try the most basic experiment, called the coloured cup experiment. This is what you do with the kids:

      Gather five coffee cups, identical except for color. Run hot water from a tap for a minute or two, until it reaches its maximum temperature. Fill the cups with hot water and move them to a dark, cool room. Place a thermometer in each one and wait 20 minutes. Read the thermometer in each cup and compare the temperatures and colors. The darkest colors should read the coolest … now explain why?

      Classical physics says all the cups cool at the same rate, and that is just one of the QM effects at play that you are trying to cover with a single sensitivity number for the Earth.

  7. Come on…its perfectly simple…he had the sheet of paper tilted at the wrong angle…otherwise perfect!
    Much in the same way Obama and the head of NOAA accidentally tilted the entirety of the twentieth century in the writing of the infamous “Pausebuster Paper.” A perfectly understandable misunderstanding. In taking the Pausebuster Paper to the UN Paris blatherfest and representing it as fact when it was complete bullshit…was a simple miscalculation. Not the greatest fraud in human history…costing the world trillions of dollars ongoing and being responsible for hundreds of thousands of deaths due to pneumonia by power bill hikes etc. They just accidentally tilted the page…a simple mistake anyone could make, surely?
    By the way, I’m guessing you all know about that? Whistleblowers from within NOAA went to the Senate and blew… NOAA belatedly apologized and were awfully sorry that “unfortunately” their computer broke and they couldn’t show how they came to these findings, so changed them; and the publication Nature changed their criteria for accepting papers, saying they wouldn’t publish papers in the future that couldn’t be replicated; and then absolutely nothing happened!? WTFH is that? Does anyone know what is going on with the investigation into the greatest fraud in human history? My best guess is… zip. Evidently Trump’s tax returns from decades ago are a far more compelling way to spend investigative resources than the current ongoing greatest fraud in human history. Meanwhile the countries that are paying vast sums of money based on the greatest fraud in human history… are still paying. Even though Obama’s fraudulent lies were actually exposed in front of the Senate and proven to be lies.
    If I were running the joint things would be very different.

        • Depends on how the digital display “stopped”. If it’s still getting power but just not updating the display (i.e., a “frozen” display), then the “right time twice a day” phenomenon holds. If it’s lost power and showing a blank display, then you are correct.

          That has been your daily dose of pedanticism, you are welcome..

          • John Endicott

            Unless it’s showing a 24 hour clock. Once a day then.

            pedanticism cubed………you are also welcome. 🙂

  8. Dr Curry expects the AMO to change to cool phase sometime in the not too distant. So what will be their excuse if/when this happens and temps start to drop or pause in the NH?
    The NH and Arctic temp increase has been so important for them to drive the debate about their so called CAGW. Not so much the SH or Antarctica.
    Perhaps the see-saw effect will help them out as we’ve seen in the past? Who knows, but their CO2 must act like a sort of pixie dust if that is the case.

    • Dr Curry expects the AMO to change to cool phase sometime in the not too distant. So what will be their excuse if/when this happens and temps start to drop or pause in the NH?

      About 10 years ago we were being told on this very blog that the world was about to cool rapidly (in fact, had already started to do so) because of PDO/AMO fluctuations and below average sunspot numbers in solar cycle 23 (Don Easterbrook, David Archibald). Instead what followed was the warmest decade on record according to every data set we have, including UAH satellite.

      So perhaps the question to ask is what will be the excuse if/when temps *don’t* start to drop, again.

      • DWR54. the AMO has remained positive. I have no idea who expected it to go negative 10 years ago. I suspect you are making that up.

        BTW, the temperature did follow most expectations (flat) for several years until the super El Nino. You really aren’t still one of those who thinks that is climate, are you? You must be extremely disappointed with the cooling over the past 3 years.

      • Richard M

        DWR54. the AMO has remained positive. I have no idea who expected it to go negative 10 years ago. I suspect you are making that up.

        I didn’t say anyone expected it to go negative. I said that “fluctuations” in AMO, PDO and solar activity were being used to produce future cooling projections. From Don Easterbrook’s WUWT post from Dec 2008:-

        Comparisons of historic global climate warming and cooling, glacial fluctuations, changes in warm/cool mode of the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO), and sun spot activity over the past century show strong correlations and provide a solid data base for future climate change projections.

        https://wattsupwiththat.com/2008/12/29/don-easterbrooks-agu-paper-on-potential-global-cooling/

        Easterbrook forecast cooling, beginning by 2007 (± 3-5 yrs), of about 0.3-0.5°C until ~2035. How come no one here ever compares ‘that’ prediction against Hansen’s?

        BTW, the temperature did follow most expectations (flat) for several years until the super El Nino.

        The forecast wasn’t for “(flat) for several years”; it was for cooling of about 0.3-0.5°C which “seems to have already begun” (Easterbrook, 2008, linked to above).

        You really aren’t still one of those who thinks that is climate, are you? You must be extremely disappointed with the cooling over the past 3 years.

        On the contrary, the “cooling” over the past few years was entirely to be expected and was widely predicted. The 2015/16 el Nino pushed temperatures above the long term warming rate, as el Ninos tend to do. This so-called “cooling” is simply ‘reversion to the mean’; the ‘mean’ being the long term warming trend of ~ +0.2C/dec.

  9. Willis, Can you tell us what climate sensitivity Hansen used to make his graph back then? Knowing that allows comparison with the “best current estimates”. I don’t see a red dot for 1988 in fig.5.

    • I would like to see Hansen’s estimate for climate sensitivity as well.

      And for those us who don’t remember, what was scenario B and C and how do they compare?

      • Peter, for settled science, why are there 3 scenarios in the first place?

        A climate “sensitivity factor” seems to me like a sign that someone doesn’t understand their variables.

    • Hansen’s 1988 testimony says he used the model given in Hansen, Fung, Rind, Lebedeff, Ruedy and Russell (1988), “Global Climate Changes as Forecast by Goddard Institute for Space Studies Three-Dimensional Model.” Journal of Geophysical Research, 93 D8: 9341-9364, August 20, 1988. At the time of his testimony, the paper was in press. The paper is paywalled and some pay access does not go back that far. However, in case you can’t get access, the relevant text is on page 9342 under section 2, Climate Model. There they say:

      “The equilibrium sensitivity of this model for doubled CO2 (315 ppmv -> 630 ppmv) is 4.2 degC for global mean surface air temperature (Hansen et al. 1984). This is within, but near the upper end of, the range 3 degC +- 1.5 degC estimated for climate sensitivity by National Academy of Sciences committees (Charney, 1979; Smagorinsky, 1982), where their range is a subjective estimate of the uncertainty based on climate-modeling studies and the empirical evidence for climate sensitivity. The sensitivity of our model is near the middle of the range obtained in recent studies with general circulation models (GCMs) (Washington and Meehl, 1984; Hansen et al. 1984; Manabe and Wetherald, 1987; Wilson and Mitchell, 1987).”

      • thank you fah

        The equilibrium sensitivity of this model for doubled CO2 (315 ppmv -> 630 ppmv) is 4.2 degC for

        Pronouns are the bane of science and engineering. I can’t tell whether “this” applies to scenario A, B, or C.

        • It looks, from Section 4 of the paper, like scenarios A, B, and C were designed to refer only to variations in the trace gas composition of the atmosphere and not other aspects of the models, particularly any hydro- or thermodynamics.

          Further down, in Section 6.1 of the paper, they say
          “The climate model we employ has a global mean surface air equilibrium sensitivity of 4.2°C for doubled CO2. Other recent GCMs yield equilibrium sensitivities of 2.5°-5.5°C, and we have presented empirical evidence favoring the range 2.5°-5°C (paper 2). Reviews by the National Academy of Sciences [Charney, 1979; Smagorinsky, 1982] recommended the range 1.5°-4.5°C, while a more recent review by Dickinson [1986] recommended 1.5°-5.5°C.
          Forecast temperature trends for time scales of a few decades or less are not very sensitive to the model’s equilibrium climate sensitivity [Hansen et al., 1985]. Therefore climate sensitivity would have to be much smaller than 4.2°C, say 1.5°-2°C, in order to modify our conclusions significantly. Although we have argued [paper 2] that such a small sensitivity is unlikely, it would be useful for the sake of comparison to have GCM simulations analogous to the ones we presented here, but with a low climate sensitivity. Until such a study is completed, we can only state that the observed global temperature trend is consistent with the “high” climate sensitivity of the present model. However, extraction of the potential empirical information on climate sensitivity will require observations to reduce other uncertainties, as described below. The needed observations include other climate forcings and key climate processes such as the rate of heat storage in the ocean.”

          When they say [paper 2] they refer to Hansen et al. 1984.

      • Since climate sensitivity to CO2 is an emergent property of the model, one cannot change the sensitivity of the model without going in and fundamentally changing the model’s guts. Show me a quote that contradicts that; ECS is an emergent property of the models, according to the modeling experts.

        • You are 100% correct Dave Fair. Unfortunately many people on this site will not understand your point.

          • Not only that, but the climate sensitivity as defined would NOT be constant over time, given the nature of the myriad modes of energy transport (see non-linearity, capacitance).

            There is absolutely no basis upon which to make that assumption, or even that its phase angle would not change with time (relative to some putative “independent” variable). Except maybe averaged over geologic time periods. And we know how that relationship looks: atmospheric CO2 LAGS temperature.

            Some of us understand.

        • This is not my field, but I do have access to most journals, and it looks like you are quite correct. I have been mistaken for quite some time on that point. One of the references in Hansen et al. 1988 had a nice explanation (at least to me) of the origins of climate modeling. They discuss climate sensitivity and from whence it comes at some length. I was particularly interested in the derivation by analogy to electrical engineering rather than thermodynamics, since it accounts for, and makes more understandable (to me), the peculiar terminology used, from the perspective of physics. The reference is available scanned on some Brit’s web site:

          http://www.350.me.uk/TR/Hansen/Hansenetal84-climatesensitivityScan.pdf

      • fah, I join with others in expressing great thanks to you for digging up this 30-year-old information in such detail.

        I take particular note of this portion of the quoted statements “. . . the range 3 degC +- 1.5 degC estimated for climate sensitivity by National Academy of Sciences committees (Charney, 1979; Smagorinsky, 1982), where their range is a subjective estimate of the uncertainty based on climate-modeling studies of the empirical evidence for climate sensitivity.”

        So, decoded, it says NAS committees (note plural) reviewed mathematical models that incorporated some undefined amount of “empirical evidence for climate sensitivity”, and based on that process they SUBJECTIVELY ESTIMATED the “range” (was it one-sigma? two sigma ? 0.1 sigma?) of ECS uncertainty to 0.1 degC precision. Yeah, right.

        I seriously doubt any of the “empirical evidence” used in the mentioned climate model “studies” had data accuracies and consistencies/repeatabilities of even +/- 0.5 degC, let alone +/- 0.1 degC. Then you run that empirical data through some climate models (or maybe 10-plus different climate models . . . how many sophisticated global climate models did they have in the mid-1980’s?) and expect the data accuracy to improve? And then you run those model outputs through the admittedly “subjective” minds of those serving on the NAS committees (How many? What were their qualifications to evaluate the model output results?) and expect whatever “data” accuracy to not be further degraded? Finally, what process did they use to reach a “consensus” that the ECS uncertainty range was no greater than +/- 1.5 degC, all things considered?

        The stupidity in above-quoted NAS ECS uncertainty assertion . . . it burns! And of course, this BS went unchallenged at the time by a great number of so-called “climate scientists” because it served so well to demonstrate the accuracy they had in determining ECS back then.

        • The later NAS document on CO2 and climate is available online at

          https://www.nap.edu/catalog/18524/carbon-dioxide-and-climate-a-second-assessment

          This is the later reference Hansen et al. 1988 use for climate sensitivity. It has a good bit of discussion of their thoughts on it at that time. Hansen was a participant in the assessment. Those who are knowledgeable in this area might find it interesting reading. I only skimmed it a bit and spent a little time on the sensitivity section. One thing that jumped out at me was how much time and effort they spent attacking estimates made by S. B. Idso. It looks like they felt a fair amount of effort was warranted in doing so. I vaguely recall seeing Idso mentioned off and on in the blogs. A quick look at the DeSmog Blog indicates he is not in favor in those circles.

  10. Is the West Side Highway underwater yet? That was supposed to happen by last Monday. Or maybe he meant by 31 Dec 2019?

    • Well, it did rain on December 31st, 2018 – so it was underwater. Just not the water most people assumed…

    • His West Side Highway scario (Hey—autocorrect finally did something right!) depended on large chunks of Antarctic ICE falling into the ocean, and he’s said his timeline for that extends to 2040.

  11. Hansen later argued that his model was demonstrated to be reasonably accurate with regard to climate sensitivity because what his model missed on was not emissions, or climate sensitivity, but the amount of greenhouse gases remaining in the atmosphere. Essentially he was saying that real-world CO2 concentrations followed his Scenario B, as did temperatures. This Twitter guy you were talking to was just winging his response, and doesn’t know what he’s talking about.

    The problem with Hansen’s defense was that, in that revisionist history, he was ignoring the difference between correlation and causation. The only way to demonstrate your understanding of causal relationships in a system via a prediction is to accurately predict the result of causing a change in an input; simply predicting corresponding values of two variables only shows correlation between the two, not causation. In 1988 Hansen was clearly using his model to show the causal effect of emissions on temperatures – that’s why he laid out three “emissions” scenarios, testified under oath that Scenario A was the “business as usual scenario” and tried to advocate for emissions reductions.

    But when the world kept emitting as usual, and his doomsday Scenario A was way off, Hansen judged his model in retrospect by cheating; he pretended that his model only set out three different possible scenarios of future CO2 concentrations and argued that the scenario with the CO2 concentration closest to reality also had temperatures closest to reality. But if that was the original purpose of his model it would have only been designed to test the correlation between temperatures and atmospheric CO2 level – not at all testing whether temperatures follow changes in CO2 concentration, or whether CO2 concentration follows changes in temperature, or some combination of the two.

    This is one of the reasons I think Hansen and his myriad sycophants are less scientists than they are propagandists that believe global warming as a matter of dogma and just rig the procedures and mathematics of their published papers so that they conform to the dogma.

    • Are you saying that Hansen predicted future CO2 concentration in the atmosphere based on an estimate of future emissions but overestimated the fraction that would remain in the atmosphere? (In which case, he was far off the mark as to the harmful impact of continued use of fossil fuels which was his main point?)

      Are you also arguing, as I would, that seeing CO2 concentration be a function of temperature is exactly what we would expect if the climate is warming naturally and CO2 has little effect on temperature?

      For the sake of argument, if ECS were zero (which I’m not claiming), and therefore temperature would not even be weakly a function of CO2 concentration, we would still expect CO2 concentration to rise as temperature rises.

      If ECS is about 1.3-1.5K, then the observed rise in temperature is partly due to CO2, but must also be largely due to other factors.

      By contrast, Hansen and the CAGW believers say that temperature is strongly a function of CO2 concentration and virtually all temperature rise has been caused by CO2.

      Willis asserts that Hansen underestimated the emissions as well. So that implies that if he had accurately predicted emissions, he would have predicted even worse consequences.

      • “Are you saying that Hansen predicted future CO2 concentration in the atmosphere based on an estimate of future emissions but overestimated the fraction that would remain in the atmosphere? ”

        That’s essentially what Hansen was trying to claim in 2006 when he testified to Congress and wrote a follow-up research paper evaluating the accuracy of his original 1988 paper. Hansen’s follow-up testimony and corresponding 2006 paper tried to gloss over the fact that his original paper and original testimony presented his emissions scenario A as the “business as usual scenario.” Instead, he tried to pretend in 2006 that his original forecast was accurate because both CO2 concentration and temperatures in the real world were close to the Scenario B curve, even though actual emissions more closely followed scenario A.

        When making this presentation, Hansen deceitfully mischaracterized his earlier paper as presenting a “worst case scenario,” saying that his original paper described the scenario as being on the “high side of reality.” That quote was taken grossly out of context; scenario A was described in the original paper as the consequence of “continued exponential growth” in GHG emissions and was qualified by the caveat that, since it assumed exponential growth, it “MUST EVENTUALLY be on the high side of reality” since fossil fuel supplies must at some point start to run out.

        My post above merely points out that this historical revisionism is silly; Hansen originally used his model forecast as a causal prediction of the consequences of three fossil fuel EMISSIONS scenarios (not concentration scenarios) in an effort to convince Congress to curtail fossil fuel use. But in 2006 he was trying to pretend that he could validate that causal prediction simply by showing that both the actual temperatures and actual GHG concentrations somewhat matched the temperatures and GHG concentrations predicted for Scenario B, even though emissions followed Scenario A. But that kind of prediction would be useless for policy purposes because even if it did hold true, it would not show a causal relationship that rising CO2 concentration causes temperature to rise – it would only prove a correlation.

        I don’t know whether naturally rising temperatures would cause an increase in CO2. I don’t have any idea about how you would even begin to scientifically test that. But I do know that if you want to test whether increasing CO2 concentrations cause an increase in temperatures by making a prediction, you have to predict future temperatures as a function of how much CO2 is emitted, and not just show some relationship between CO2 concentration and temperatures.

        • CO2 concentration in sea water is a function of temperature easily measured in the laboratory. Outside the laboratory you can easily observe this basic fact if you have a bottle of cola sitting in the hot sun. If you open it while it is hot, the CO2 will rapidly come out of solution and the mass of bubbles will cause the bottle to overflow. The same bottle allowed to cool down in the refrigerator will only release a small amount of effervescence when opened. This is due to the fact that CO2 solubility in water is dependent on temperature in an inverse relationship.

          Paleoclimate evidence shows that in fact CO2 concentration lags temperature change over all time periods.
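The inverse solubility relationship described above can be sketched numerically with Henry's law. The constants used here (a solubility of ~0.034 mol/(L·atm) at 25 °C and a van 't Hoff temperature coefficient of ~2400 K) are typical tabulated values for CO2 in water, assumed here purely for illustration:

```python
import math

def co2_solubility(temp_c, k_h_298=0.034, van_t_hoff_k=2400.0):
    """Henry's-law solubility of CO2 in water, in mol/(L·atm),
    with a van 't Hoff correction for temperature.
    Constants are typical tabulated values, assumed for illustration."""
    t_kelvin = temp_c + 273.15
    return k_h_298 * math.exp(van_t_hoff_k * (1.0 / t_kelvin - 1.0 / 298.15))

# Warmer water holds less CO2 at the same CO2 partial pressure:
for t in (5, 15, 25, 35):
    print(f"{t:2d} °C: {co2_solubility(t):.4f} mol/(L·atm)")
```

With these assumed constants, CO2 comes out more than twice as soluble at 5 °C as at 35 °C, which is the cold-versus-warm cola effect the comment describes.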

          • Just because the capacity of sea water to hold CO2 is a function of temperature does not mean that an increase of temperature over interval X was a cause of a corresponding rise in CO2 concentrations in sea water over that interval.

            And to rely on the paleoclimate record to prove that changes in temperature caused a corresponding change in CO2 concentration you first have to assume that the proxy reconstruction methods can match the time scales of CO2 changes to temperature changes to an accuracy smaller than the time lag you are relying on. You also have to assume that there is no other common variable that is causing both temperatures and CO2 to rise in concert.

          • Yes, I agree that it is impossible to separate all the factors and I do not claim that we can. I am only saying that a warmer ocean, and for that matter, a warmer land mass, will outgas CO2, which can be empirically demonstrated.

            As for paleoclimate data having insufficient temporal resolution to prove conclusively that CO2 concentration changes lag temperature changes, you may be right. Certainly the point is in dispute. Yet the correlation between CO2 and temperature is trumpeted by no less than Al Gore (though he has causality reversed). While there is a strong correlation between CO2 concentration changes and temperature changes as must be the case if my hypothesis is to be valid, there is a very poor correlation between temperature and CO2 concentration. (Not changes, but actual temperatures and concentrations). If the causation is from CO2 concentration to temperature, then there should be a strong correlation. It should not be possible to enter a glaciation at 4000ppm and be in an interstadial at 300ppm. But if CO2 has little to no impact on the ultimate temperature of the climate system, then there is no contradiction that glaciation may occur at any CO2 concentration while CO2 nevertheless increases with increasing temperature and decreases with decreasing temperature.

            In other words, although we may not have sufficient data to prove my hypothesis, the warmist hypothesis should be considered falsified. As you rightly point out, this does not rule out a third factor causing both temperature and CO2 to rise and fall in concert. But actually, although CO2 lags temperature when there is a correlation, it is also the case that CO2 can change independently of temperature. When the total amount of CO2 in the atmosphere-ocean system is roughly unchanging, CO2 concentration in the atmosphere rises with rising temperature and falls with falling temperature. If there is a change in the total quantity of CO2 such as through extensive volcanism or long-term carbonate formation, then CO2 can vary independent of temperature. This is again consistent with my hypothesis and contradicts the idea that CO2 drives temperature.

    • IIRC, Hansen’s defenders on Skeptical Science claim that Hansen wasn’t talking about just CO2 emissions, but all GHG emissions, and that he over-estimated either the amount of them or their sensitivity, so if they are removed his predictions look better.

      • Current IPCC climate model estimates of sensitivity bunch around 3 and have been shown to run way too hot. Hansen’s was over 4. All the activists’ Hansen sophistry can’t change that.

      • Except that according to Willis, Hansen underestimated CO2 emissions, so how does that add up?

        Plus if the whole point was to predict the effect on temperature of continued fossil fuel burning, and his temperature estimate was way too high at the underestimated CO2 emission rate, it would have been an even bigger failure at an accurate CO2 emission rate.

        As Dave points out, the ultimate question is what is the observed apparent ECS compared to Hansen’s assumed ECS. No amount of “the dog ate my homework” is going to fix his mess if his ECS was three times as sensitive as reality.

  12. From Figure 5 it is clear that the global warming component flattens with time [which I stated more than ten years back], as seen from the climate sensitivity factor. If so, there is no chance of getting the 1.5-2.0 oC rise in global average temperature of the recent IPCC report.

    Dr. S. Jeevananda Reddy

  13. Don’t forget, Willis, global warming is a very flexible discipline where scientists get to retrofit their predictions to observations while drawing wild future scenarios.

    • That’s exactly right. It’s what happens when you define the physical system you are studying as an open-ended set of statistics that can be mined for whatever type of curve you want to show. “Climate” is itself defined as an average of something over a sufficient number of years. An average of what? Whatever strikes the fancy of the researcher – temperature, maximum daily temperature, minimum daily temperature, precipitation, daily precipitation relative to annual precipitation – the list is endless, limited only by the imagination of the “researcher.”

      An average over how many years? Well, that again depends on what the researcher wants. If it’s establishing a base period for showing temperature anomalies in a graph, it’s 30 or more years. But if it’s showing how “climate” changes over time relative to the base period in that same graph, thirty-year averages are way, way too long and just won’t do. Maybe a five-year running average is good enough. Maybe it’s ten years. Maybe there is no running average shown and they just put a linear-fit trend line through annual average temperatures. The fact that, when they need to, they say that five-year and ten-year averages are just “noise” in comparison to the longer-term “climate” should never be taken as any kind of inconsistency at all. Maybe for global temperatures “climate” is defined as a thirty-year average while for precipitation it’s a fifty-year average. Again, the “researcher” can just wing it and use whatever metrics are needed to get the right “look” for the pictures they want to show the politicians.

      The idea of setting a uniform set of metrics to define climate and measure the change in those metrics, like how the rest of the scientific world defines a single standard for what a kilogram is, or what a volt is, just never occurred to the climate science community – because I suspect – they want the flexibility to influence their results so that they fit the story they want to tell.

      • The World Meteorological Organization [WMO] of the United Nations [UN] defined the 30-year period [this was decided by experts from national meteorological departments] — 1931-60, 1961-90 — and these serve the general climate condition: averages, extremes, etc.

        To get the trend and natural variability, the WMO in 1966 brought out a manual on “Climate Change”. This presents methods to determine whether the data series of a meteorological parameter follows random variation or a cyclic pattern, and then gives the trend. Methods were proposed to get the periodicity, amplitude, and phase angles. At IITM, Pune, the late Dr. B. Parthasarathy applied these to precipitation data series [in 1995 he presented meteorological sub-division-wise yearly, monthly, and seasonal rainfall series for 1871 to 1994]. He prepared programmes in the Fortran IV language, compiled via punched cards on an IBM 1600.

        I learned these from him, as well as from my boss, who was a co-author of the “Climate Change” manual of the WMO (1966).

        At that time we didn’t even have a simple calculator.

        Dr. S. Jeevananda Reddy

        • Climate scientists do not limit themselves to the WMO standard, which defines climate only for meteorological purposes. The IPCC defined it as “in a narrow sense . . . ‘average weather,’ or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period RANGING FROM MONTHS TO THOUSANDS OR MILLIONS OF YEARS” and said that the 30-year WMO definition is only the “classical” interval to average your statistic over. (And note the use of the vague phrase “relevant quantities.”) NASA’s website states that “scientists define climate as the average weather for a particular region and time period, USUALLY taken over 30-years.” Another source states that “changes in weather patterns that persist OVER A DECADE OR MORE is defined as climate change.” I’ve seen sites of scientific organizations say that the relevant interval that separates climate from weather will change based on what region you’re measuring the climate of, or what variable or statistic you’re measuring.

          The way that climate researchers quantify climate, in order to determine the amount by which it is changing, is an amorphous mess that belies any claim that it is scientific at all. They just analyze data in whatever ad hoc procedure they want to adopt for the particular paper they are writing at the time.

            In my above observation I mentioned two aspects. The second one is with reference to climate change — natural variability: it may be the 11-year sunspot cycle and multiples, or rainfall cycles with different periods. This is different from the climate normal of 30 years. By eliminating the cyclic part, we get the trend. This trend is practically zero in rainfall, except local rainfall with drastic changes in the climate system as defined by IPCC. But temperature presents a trend associated with several components. The 1880 to 2010 global average temperature anomaly presents a 60-year cycle varying between -0.3 and +0.3 oC with a trend of 0.91 oC from 1951 to 2100. This is not global warming. Also, if the data starts at 1850 this is 0.80 oC. Truncated data of a natural cycle present different conclusions. Also, the trend may not be linear; it may be non-linear, which is what I said in the first comment above.

            Dr. S. Jeevananda Reddy

        • If I have a graph of temperature for some place over the last 31 years that follows a linear trend perfectly, then the 31-year average temperature will be the temperature that existed in the 16th year (and in no other year). The “climate normal” will therefore be what existed for one out of those 31 years. Averages from nonstationary time series are not terribly useful benchmarks and the concept of a “temperature anomaly” for that site is essentially useless.
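The arithmetic behind this point is easy to verify with a toy series (the base temperature and trend below are arbitrary, chosen only for illustration):

```python
# A perfectly linear 31-year temperature series: the 31-year average
# ("climate normal") equals the 16th year's value, and no other year's.
base, trend = 10.0, 0.02          # arbitrary start (°C) and warming (°C/yr)
temps = [base + trend * year for year in range(31)]
normal = sum(temps) / len(temps)

# Which years match the normal? Only index 15, the 16th year.
matching_years = [y for y, t in enumerate(temps) if abs(t - normal) < 1e-12]
print(normal, temps[15], matching_years)
```

The mean of a linear ramp is its midpoint, so the "normal" describes exactly one year of the 31.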

      • Kurt, just so. Look at how NASA presently defines “climate change” at its most basic level:

        “Climate change is a change in the usual weather found in a place. This could be a change in how much rain a place usually gets in a year. Or it could be a change in a place’s usual temperature for a month or season. Climate change is also a change in Earth’s climate. This could be a change in Earth’s usual temperature. Or it could be a change in where rain and snow usually fall on Earth.
        Weather can change in just a few hours. Climate takes hundreds or even millions of years to change.”
        — source: https://www.nasa.gov/audience/forstudents/k-4/stories/nasa-knows/what-is-climate-change-k4.html

  14. Give me a call when they start growing pineapples in Labrador City. Then I might start thinking about the effects of climate change.

  15. Hansen should go to Las Vegas and try to collect on his bet. He can tell us how that went after his knees heal.

  16. At least Dr Hansen told us the truth about the Paris COP 21 mitigation BS and fr-ud. Dr Hansen’s words not mine, but I fully agree with his very accurate description in this Guardian interview.

    Just a pity that Pelosi and other Dem donkeys can’t add up simple sums and understand simple logic and reason. Their virtue signaling allows them to seek to waste endless trillions of $ into the future with no return on the investment at all. OH but they will stuff up the US electricity grid and hurt the poor because of their belief in pixie dust science.

    Here’s Dr Hansen’s BS and fra-d interview in the Guardian. Nancy are you listening ? Apparently not and China and India etc are laughing all the way to their banks.

    https://www.theguardian.com/environment/2015/dec/12/james-hansen-climate-change-paris-talks-fraud

  17. Don’t see how 1.5% per year can be considered close to 1.9% a year. Over 30 years, 1.5%/yr gives a 56% cumulative increase, 1.9%/year gives almost a 76% increase. Not an order of magnitude difference, but not close.
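The compounding figures in this comment check out; a minimal sketch:

```python
# Cumulative increase from compound annual growth over 30 years.
def cumulative_increase(annual_rate, years=30):
    """Total fractional increase produced by compound annual growth."""
    return (1 + annual_rate) ** years - 1

print(f"1.5%/yr for 30 yr: {cumulative_increase(0.015):.0%} total")  # 56%
print(f"1.9%/yr for 30 yr: {cumulative_increase(0.019):.0%} total")  # 76%
```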

  18. “perfect example of why it is so hard to convince climate alarmists of anything—because to them, everything is a confirmation of their ideas”

    Yes sir. Much of climate science, and specifically the so called Event Attribution Science is driven by a combination of activism needs and confirmation bias. The science is thus confounded and corrupted by activism.

    https://tambonthongchai.com/2018/08/03/confirmationbias/

  19. In a related story … N. CA is getting DRENCHED in rain this weekend … standing water in my front yard … after Uuuge early season Nov. and Dec. rains. Thus confirming Jerry Brown’s “accurate assertion” of a “Neverending drought” in CA due to runaway global warming. I believe this is the 4th season in a row of substantial rainfall and snowfall in CA.

    Here’s the TRUTH- CA has always had, and will continue to have periods of abundant precipitation … and periods of sparse precipitation. Same as it always was … same as it … always was. Despite what a mouth-foaming hysteric Jerry Brown has to say.

    • “N. CA is getting DRENCHED in rain this weekend … standing water in my front yard … after Uuuge early season Nov. and Dec. rains.”

      Which proves how well and how fast California’s carbon tax is working…

      • Does that mean they’ll have to back the carbon tax off to overcome the next drought?

        Speed of response might be problematic.

  20. Here’s the 2006 Vinther et al. study of Greenland’s long instrumental temp record. This is a very long study, covering over 200 years, and the co-authors include prominent alarmist UK scientists Dr Jones and Dr Briffa.

    Looking at temps over this long period we find that much earlier decades are warmer than the last few decades and they even hold up well against some of the decades over one hundred years ago, back in the 1800s.

    So what will be their excuse when the AMO changes to the cool phase, perhaps sometime in the 2020s? Or has it started already? Who knows?

    https://crudata.uea.ac.uk/cru/data/greenland/vintheretal2006.pdf See TABLE 8 from the study comparing decades

  21. Willis:

    The CO2 change per year doesn’t integrate linearly, so taking the mean of each year’s change doesn’t add up. Or else I’m being stupid, which is entirely possible.

    To see how well Scenario A fits the period after 1987, which is when Hansen’s observational data ends, I took a look at the rate of growth of CO2 emissions since 1987. Figure 2 shows that graph.

    You have the actual as 1.9% and Hansen’s as 1.5%. For an annual increase of 1.9% over (2018-1988) = 30 years, I would expect the 2018 value to be 1.019^30 ≈ 1.76 times the 1988 measured value.

    However, the 1988 concentration was 352 and 2018 was 408, which is an increase of 408/352 = 1.16x. Taking ln(1.16)/30 yields an annual increase of 0.5%.

    What am I missing?

    references:
    ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_annmean_mlo.txt
    https://www.co2.earth/
    https://sciencing.com/calculate-exponential-growth-8143625.html
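The continuously compounded rate in the comment above can be reproduced from the two Mauna Loa endpoints it quotes:

```python
import math

# Annual growth rate of atmospheric CO2 concentration, ln(ratio)/period,
# from the 1988 and 2018 Mauna Loa annual means quoted above (ppm).
c_1988, c_2018, years = 352.0, 408.0, 30
rate = math.log(c_2018 / c_1988) / years
print(f"{rate:.2%} per year")   # about 0.49%/yr, i.e. the ~0.5% in the comment
```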

    • Ooops, you are referring to emissions, not concentrations!

      However the numbers still don’t add up:

      Global CO2 Emissions Millions of Metric Tons:
      1967: 3393
      1987: 5725
      2014: 9855

      Applying the same analyses, (ln(ratio)/period), I get a percentage increase of 2.6% for 1967-1987, and 2.0% for 1987-2014. (I note the rate of emission increase is dropping over time).

      I’m amused that the CO2 concentration is only going up 0.5% per year when the emissions rate is 2.0%. I wonder where all that CO2 went.

      references:
      https://cdiac.ess-dive.lbl.gov/ftp/ndp030/global.1751_2014.ems
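The same ln(ratio)/period method applied to the CDIAC totals quoted above reproduces the stated rates:

```python
import math

# CDIAC global CO2 emission totals (millions of metric tons), as quoted above.
emissions = {1967: 3393, 1987: 5725, 2014: 9855}

def growth_rate(y0, y1):
    """Continuously compounded annual growth rate between two years."""
    return math.log(emissions[y1] / emissions[y0]) / (y1 - y0)

print(f"1967-1987: {growth_rate(1967, 1987):.1%}/yr")  # 2.6%
print(f"1987-2014: {growth_rate(1987, 2014):.1%}/yr")  # 2.0%
```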

      • To use an engineering term — the regular, rhythmic seasonal behavior of [CO2] swings in the NH demonstrates quite clearly that the global CO2 sinks are far from a saturated state.

  22. “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”

    Too bad they didn’t let us know in 1988 that their estimate of climate sensitivity was wrong.

  23. Here’s an attempt at calculating the climate sensitivity vs. Hansen’s prediction with the wildly wrong assumption that the entire temperature change from 1988 to 2018 is due to CO2.

    Temperature anomaly in 1988: 0.2degC
    Temperature anomaly in 2018: 0.8degC
    CO2 concentration in 1988: 352
    CO2 concentration in 2018: 408

    climate sensitivity constant = deltaT*ln(2)/ln(Cn/Co) = 0.6*ln(2)/ln(1.16) = 2.8degC

    Which is far less than the 4.2 degC that Hansen used in his model per fah (above).

    Peter

    References:
    http://www.roperld.com/science/globalwarmingmathematics.htm
    ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_annmean_mlo.txt
    https://www.co2.earth/

    and the temperatures estimated from the plots above.
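The back-of-envelope result above can be reproduced directly (using the comment's own stated, deliberately extreme, assumption that the entire temperature change is due to CO2):

```python
import math

# Implied equilibrium climate sensitivity if ALL 1988-2018 warming were CO2-driven.
delta_t = 0.8 - 0.2            # change in temperature anomaly, °C
c_1988, c_2018 = 352.0, 408.0  # CO2 concentration, ppm
ecs = delta_t * math.log(2) / math.log(c_2018 / c_1988)
print(f"implied ECS ≈ {ecs:.1f} °C per doubling")  # ≈ 2.8 °C, vs Hansen's 4.2 °C
```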

  24. he’d merely used the wrong value for the climate sensitivity, viz: “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity.”

    The only relevant scientific question for climate science and any worry over climate change is the sensitivity value of the climate to [CO2]x2.

    Nothing else matters if that value used is grossly wrong. And it was grossly wrong for decades as claimed at > 2.5 K to 4.5 K by climateer-rentseekers. We now know it must be < 2 K/CO2x2, and probably closer to 1.5 K.
    So no problem, turn off the alarms, the +CO2 is net beneficial. That is the problem the carnival-barking climateer-rentseekers have; they have sold a bad bill of goods to the public and politicians for decades and now they can't backtrack to save their grants or their reputations.

    • “he’d merely used the wrong value for the climate sensitivity, viz: “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity.”

      Hansen did not select a climate sensitivity; his model calculated one. All of the crap assumptions he put into the model resulted in an ECS. Get real; he screwed around with different versions and parameters until he got the result he wanted.

    • Joel O’Bryan January 6, 2019 at 6:24 pm
      “now know it must be < 2 K/CO2x2, and probably closer to 1.5 K."
      Is that using adjusted data, or unadjusted data?
      Seeing as at least half the increase in temperatures is due to adjustments, how sure are you of the 1.5 K?

  25. Back in the summer this subject was addressed on Climate etc. by McKitrick and Christy and in a video by Yale Climate Connections. I wrote this note to my discussion group:
    “I have trouble with the dichotomy between these two reviews of the James Hansen model predictions at 30 yr.

    https://judithcurry.com/2018/07/03/the-hansen-forecasts-30-years-later/

    https://www.youtube.com/watch?v=UVz67cwmxTM

    The video seems to imply that he “got it just right” and we have only
    improved models since then. The Climate Etc. article says he missed
    radically and it stands as proof of the models not using settled
    science. Which one makes their point more believably for you?”

    The discussion group really wanted to accept the video and went to quite some lengths to downplay McKitrick to do it.
    I noted an interesting fact while reviewing Hansen’s paper. His model has a resolution of 8 degrees of latitude and 10 degrees of longitude. My rule of thumb is that a minute of lat or long in Montana is about 1 mile (1′ lat is about 1.2 mi and 1′ long is about 0.8 mi), so his resolution is a grid roughly 500 mi on a side: less than 2 grid cells for all of Montana. It is hard for me to imagine how that model handles convective mixing or the latent heat transfer of the phase changes of water.

    • DMA – He had to use a coarse grid of 8 x 10 degrees to make the model work on the best computers of the time.

      Now that supercomputers are that much more powerful, models can use a finer resolution (IIRC the CMIP5 models use a 1 x 1 degree grid, still far too coarse to model weather events). But, after tweaking to make them fit the past and averaging the multiple runs of multiple models, they give essentially the same predictions as Hansen 1988.

      Makes you wonder whether all that investment in computer hardware was worth it. A disinterested observer might conclude that the input parameters were wrong in 1988 and are still wrong now.
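
For what it's worth, the grid-cell arithmetic in the comment above checks out. A quick sketch (the 69-statute-miles-per-degree-of-latitude figure and Montana's roughly 47° N latitude are standard approximations I have supplied, not numbers from the original):

```python
import math

# Size of one 8 deg (lat) x 10 deg (lon) grid cell at ~47 N (Montana).
# One degree of latitude is ~69 statute miles everywhere; a degree of
# longitude shrinks with the cosine of the latitude.
MI_PER_DEG_LAT = 69.0
ns_miles = 8 * MI_PER_DEG_LAT
ew_miles = 10 * MI_PER_DEG_LAT * math.cos(math.radians(47.0))
print(f"cell is roughly {ns_miles:.0f} mi N-S by {ew_miles:.0f} mi E-W")
```

That is roughly the "500 mi square" figure quoted: Montana, at about 550 mi east to west, indeed spans barely more than one cell.
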

  28. With climate alarmists, when the future doesn’t turn out as predicted it’s not their fault; it’s the history that’s out of whack. Let’s adjust the past data so that it fits the narrative of our predictions, and then we will always be right.

    • “Where’s Nick?”
      On holiday, with limited internet access. But all this was rehashed just 6 months ago, on the 30-year anniversary. A full analysis of the scenarios is here. The outcome was between B and C. It is true that CO2 was not far from Scenario A. But A and B were virtually the same for CO2 over the period. Scenario B was actually closer, but that made little difference.

      The big effects on scenario were the slow rise of CH4 and the big restriction of CFC’s, neither anticipated by A (which was actually an older scenario). The other factor was that A made no provision for volcanoes at all. Scen B postulated big eruptions in 1995 and 2015. Pinatubo fulfilled the first, but the second hasn’t really happened yet.

      Incidentally, it isn’t just my assessment that the scenario that was followed is between B and C. Here is Steve McIntyre:
      “As to how Hansen’s model is faring, I need to do some more analysis. But it looks to me like forcings are coming in below even Scenario B projections.”

        • “Those are emission scenarios. The CO2 emissions have grown even more than scenario A!”
          No, not true. Oddly enough, Willis was calculating them correctly back in 2006 when he said
          “However, after 1988, all three scenarios show more CO2 than observations. C drops off the charts in when it goes flat in 2000, but A and B continue together, and they continue to be higher than observations. “

          That relation continued to be pretty much true.

          • Nick, it’s crazy that you deny this. The average annual growth in CO2 emissions since the prediction was 1.9%. Hansen’s scenario A (BAU) assumes 1.5% average annual growth. You cannot deny this.

          • Nick, the average annual growth in anthropogenic CO2 emissions since the prediction was 1.9%. Hansen’s scenario A assumes 1.5%. How can you D E N Y this?

          • No, Nick is probably right. Willis has taken the average of the estimated annual emissions for the period, which gives him the average annual linear growth of the accumulated total over the whole period. If I remember correctly, Hansen’s 1.5% was his estimate of the exponential growth rate, which is equivalent to a linear growth rate of ((1+1.5/100)^30-1)/30 = 0.0188, which rounds up to 0.019 as a ratio, or 1.9%. It is the same number as Willis gets for the actual growth of the annually calculated estimates that CDIAC puts out.

            ” It is the same number as Willis gets for the actual growth of the annually calculated estimates that CDIAC puts out.”
            The issue isn’t the exponential arithmetic. It’s the difference in physical quantities between that CDIAC figure for the amount of C burnt and the quantity Hansen identified with emission, which was the annual increment of CO2 in the air. It just happened that one increased at about 1.9% pa and the other at about 1.5% pa. The difference is mainly due to ocean absorption.

            Hansen used the annual increment for good reasons:
            1. It is the figure that the GCM and the real world respond to. If he had used the amount of C burnt, he would still have had to predict how much would remain in the air.
            2. He didn’t have reliable figures for the amount of C burnt anyway. Those need to be collected by governments via tax data etc., and that was an outcome of the UNFCCC.

          • The actual CO2 predictions of Hansen’s model from ’88 are archived; the values for 2018 are as follows:
            Scenario A: 410 ppm; Scenario B: 404 ppm
            Global measurement, September 2018: 405 ppm

            Predictions from 30 years ago were rather good.
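
The exponential-versus-linear conversion a few comments up is easy to verify numerically. A minimal sketch (the function name is mine):

```python
# An exponential (compound) growth rate r sustained for n years is
# equivalent, as an average *linear* rate over the period, to
# ((1 + r)**n - 1) / n -- the conversion used in the comment above.
def mean_linear_rate(r, years):
    return ((1 + r) ** years - 1) / years

rate = mean_linear_rate(0.015, 30)  # Hansen's 1.5%/yr over 30 years
print(round(100 * rate, 2))         # ~1.88%, i.e. the disputed 1.9% figure
```
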

      • “The big effects on scenario were the slow rise of CH4 and the big restriction of CFC’s, neither anticipated by A (which was actually an older scenario).”

        Yeah, that’s what I remember being said in my discussion of this over at SkS.

        • The website analysis you link to merely engages in post-hoc cheating. Like I said above, Hansen’s original paper presented “emissions” scenarios that posited a causal relationship where rising CO2 emissions would cause temperatures to increase. That was the whole thrust of his 1988 paper, and of his corresponding Congressional testimony, where he used that paper to try to persuade Congress that his research supported the need to reduce emissions. About 20 years after the fact, he (and the site you linked to) tried to re-cast his “scenarios” as being merely possible future atmospheric CO2 concentration increases.

        But that kind of forecast can’t be used to establish a causal relationship between CO2 and temperature. All it can do is confirm correlation. If all you’re doing is firing a shotgun and saying that if CO2 concentrations are A, temperatures will be X, if concentrations are B then temperatures will be Y, and so forth, then any later validation of one of those associations says absolutely nothing about whether the concentrations caused the temperature, or vice versa, or some mixture of the two.

        Hansen’s original 1988 paper claims that his “emissions scenarios” are “designed to yield sensitivity experiments for a broad range of future greenhouse forcings.” It can’t very well be a “sensitivity experiment” if the anticipated result isn’t going to tell you how much of a temperature increase would be caused by an increase in CO2. It’s not an “experiment” for “forcing” at all.

        • ” Like I said above, Hansen’s original paper presented “emissions” scenarios that posited a causal relationship”
          That is nonsense. The scenarios are just possible future paths of gas concentrations. The causal relationship, if any, will be established by the GCM, taking the scenarios as input. Of course the intention is to carry out sensitivity experiments to see if temperatures increase with GHG. But that isn’t part of the scenario.

          In Hansen’s papers, emissions are identified with the annual concentration increase. It was the only data he had, or at least the only data he used. The revisionism comes from people (like Willis above) who want to claim that he was talking about the emissions in Gtons that are calculated from government figures, collected by agreement as part of the UNFCCC process. That all postdates 1988. Hansen’s scenarios were of concentration, which is anyway the required input for the GCM. The formula that generates the scenario A he used was
          CO2 ppm = 235 + 117.1552*1.015^n (n = years)
          The numbers he used, in full detail, are in a file given by the link.

          Steve McIntyre got all that right here in 2008. He shows the graphs.

  27. If Hansen is going to revise his predictions downwards, due to current knowledge, he should also revise his apocalyptic hyperbole down.
    So we would have ‘ serious illness’ trains, instead of ‘death trains’
    ‘Say thank you and smile’ instead of ‘water by request only’
    ‘Nice new flower garden’ instead of ‘ different trees’

    And all the things that have been banned, shut down, wrecked, or subsidised should be half-unbanned, half-reopened, half-fixed, and they should pay us half our money back.

    Half baked predictions from a half baked theory spouted by a half baked man.

  28. Climate Sensitivity stems from the Planck Equation relating absorbed energy to a resultant temperature dF = K*dT where K is the coefficient and relates to Sensitivity.
    Where water is concerned, at phase change K is zero as the absorbed energy goes into the Latent Heat without change in temperature.
    IMHO most, if not all, models have ignored this thermodynamic fact and have thus concluded that water enhances the Greenhouse Effect when the opposite is true.
    I suggest this explains a great deal.

  29. As the math professor might say to Eric Morecambe/ Hansen.
    ” You are playing with all the wrong numbers!!”
    To which he replies.
    ” No sunshine, I am playing with all the right numbers, but not necessarily in the right order…”

  30. Dr Hansen’s 1988 testimonial to Congress
    “My last viewgraph shows global maps of temperature anomalies for a particular month, July, for several different years between 1986 and 2029, as computed with our global climate model for the intermediate trace gas scenario B. As shown by the graphs ……… at the present time in the 1980s the greenhouse warming is smaller than the natural variability of the local temperature. So, in any given month, there is almost as much area that is cooler than normal as there is area warmer than normal. A few decades in the future, as shown on the right, it is warm almost everywhere.” ie by 2018

    Hansen et al 1988
    (2) The greenhouse warming should be clearly identifiable in the 1990s; the global warming within the next several years is predicted to reach and maintain a level at least three standard deviations above the climatology of the 1950s. ie by 1995

    How is this working out?

    The models of median global temperature anomalies have certainly behaved as predicted. HADCRUT4.4 reached +3 Standard Deviations in 1990 and maintained >+3 Standard Deviations since 1997.

    However Dr Hansen predicted that by now it should be ‘warm almost everywhere’.

    Is this happening?

    Looking at the Mean Annual Central England Temperature (CET) record: by 2016 the Mean Annual CET had only TWICE (2006 & 2014) exceeded +3 Standard Deviations from the Climate Normal in the 1950s (1921 – 1959) as defined in Hansen et al 1988.

    In the UK Meteorological office Historical data 9 of the published 37 long term data records allow calculation of 1921-1950 Climate Normal. 5 out of 9 stations have exceeded 3 Standard Deviations since 1950, 4 for 1 year (3 in 2014, 1 in 1920), 1 in 2 years (2006 & 2014).

    So there is no anthropogenic signal at either the local or the regional level.

    In this small corner of the world natural variability rules.

    • “Looking at the Mean Annual Central England Temperature (CET) Record. By 2016 Mean Annual CET has only TWICE (2006 & 2014) exceeded +3 Standard Deviations”
      He didn’t say it would. He said the global average would exceed 3 sd (of the global average). Individual locations have a much higher sd, with no corresponding expected increase in mean.

      • Dr Stokes, there was an expected increase in regional means -“A few decades in the future, as shown on the right, it is warm almost everywhere.”

        The UK mean annual temperature remains within natural variability.

        • “The UK mean annual temperature remains within natural variability.”
          That may be. It doesn’t mean that the UK is showing less warming than you would expect. It just means that small-area averages are much more variable than large-area averages. That is why people go to the trouble of aggregating as large a sample as possible.

          • Dr Stokes, this is the weakness of geostatistics. The concept of climate is a regional (not global) emergent feature of weather measurement. Global models of temperature have only limited applicability, as their level of aggregation masks the variability in the real world. If the warming is global, through CO2, climates (regional aggregates) should show behaviour similar to that modelled for the globe, as Dr Hansen suggested.
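
The test the exchange above turns on can be sketched in a few lines. The numbers below are invented for illustration only; they are not CET or Met Office data.

```python
import statistics

# Count how often a recent series exceeds +3 standard deviations of a
# baseline ("climate normal") period -- the test applied in the
# comments above. All values here are made up for illustration.
baseline = [9.1, 9.4, 8.8, 9.0, 9.3, 9.2, 8.9, 9.1, 9.0, 9.2]
recent = [9.3, 9.6, 10.4, 9.5, 10.5, 9.4]

mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)   # sample standard deviation
threshold = mu + 3 * sigma
exceedances = [t for t in recent if t > threshold]
print(f"threshold {threshold:.2f}: {len(exceedances)} exceedance(s)")
```

The point at issue is that sigma for a single station is much larger than sigma for a global average, so exceedances are rarer locally even with the same underlying trend.
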

  31. Looking at the red dots in Willis’s Figure 5 makes me think of the “convergence” with time of the published values of the charge on the electron, following Millikan’s oil-drop experiment. The fact that the climate sensitivity estimates still appear to be “trending” downwards suggests, to this cynic, that there may still be some way to go before we have anything like an accurate value.

  32. Willis

    When you compare his prediction with the GISS and HADCRUT temperature reconstructions, showing the period running from the late 1950s, are you using the current versions of these reconstructions, or are you splicing the current versions onto the versions that were current in the late 1980s?

    Don’t forget that at the time Hansen made his predictions, the then-current versions of the temperature reconstructions showed considerable cooling into the 1970s that is absent from current versions.

    Don’t forget the considerable changes that have been made to both GISS and HADCRUT over the years.

    Hansen was working with versions as they stood in the late 1980s, and these versions should be used as the starting point at which his prediction is grounded.

  33. I also took a look at Hansen’s 1988 predictions six months ago.
    A trivial point I would pick up on in Willis’s analysis is his estimate of average annual emissions growth. Exponential CO2 emissions growth might be 1.9% per annum over the last 30 years, but GHG emissions growth is nearer 1.3%. The reason this is a trivial point is that actual emissions growth is closest to Scenario A whether CO2 or total GHG emissions are used. This is my chart, based on total GHG emissions in IPCC CO2-equivalent units.
    https://manicbeancounter.files.wordpress.com/2019/01/jul-18-fig-4.jpg
    Please note the assumptions behind the scenarios.
    Scenario A – Emissions keep on growing at current rates. The business as usual (BAU) scenario.
    Scenario B – Global emissions are stabilized at late 1980s levels. The zero emissions growth scenario.
    Scenario C – Global emissions are reduced to near zero by 2000. The emissions elimination scenario.
    The two policy scenarios B & C gave an important political message. What Hansen was effectively saying was

    It is no good just stopping global emissions growth, as that will have very little impact. To really stop global warming requires eliminating global emissions.

    Hansen, (and nearly all the alarmists since), missed out the global when advocating what we should do.

    • Just to emphasize the last point about climate alarmists missing out the “global” when advocating cutting emissions, I have produced a chart comparing CO2 emissions in 2017 with 1988.
      https://manicbeancounter.files.wordpress.com/2019/01/co2-emissions-1988-2017.jpg
      Global emissions increased by 69% – a compound growth rate of nearly 1.8% per annum. But the 30-year growth rates differ sharply between the 5 blocks:
      USA +8%
      EU -20%
      India +369%
      ROW +57%
      China +342%

      As a result of the different growth rates the combined US & EU share of CO2 emissions has reduced from 42% to 24% in the last 30 years. This is mostly due to CO2 emissions growing in the rest of the world.

        • Only a small part of it is from goods produced in developing countries and shipped elsewhere. There are three factors at play in the shifting shares of emissions since 1988.
          1. Higher economic growth in developing countries than developed countries. China & India v Japan & EU are the best examples
          2. Collapse of communism led to a 30-60% drop in emissions. Some of those countries still have lower emissions than in 1988 despite being a lot richer.
          3. Many countries reach peak emissions per capita. USA in 1973 and EU in 1980. This is partly due to transfer of low value manufacturing industries to developing countries (steel, shipbuilding, textiles etc).
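
The compound-rate arithmetic in the parent comment can be checked directly. A sketch (note the stated 69% rise over 30 years actually works out to just under 1.8% per year):

```python
# Compound annual growth rate implied by a 69% total rise over 30
# years, per the figures in the comment above.
def cagr(total_ratio, years):
    return total_ratio ** (1 / years) - 1

g = cagr(1.69, 30)
print(round(100 * g, 2))  # ~1.76% per year
```
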

  34. People are being distracted by the accuracy – or lack thereof – of Hansen’s predictions and do not question the untenable underlying assumption that a control knob controls the planet’s temperature. Any input parameter that has increased over the last 30 years, such as my waist line, would predict rising temperatures.

    • The only question is did global warming increase your waist line or did your waist line increasing cause global warming?

      • John perhaps a feedback mechanism …. Francois’s waist line increases which makes the earth hotter which causes Francois to drink more beer to cool off, increasing his waist line again ….

  35. Now we just need a list of all the legit science knowledge attained since that time on AMO, PDO, solar cycle, etc. to see just how flimsy that straight edge forecast was.

  36. Last week I placed a bet on team A winning 4-0 against team B. Now I know the actual score was Team A 1 – Team B 2. I went back to the betting shop to tell them that my bet was actually spot on correct, all I had to do was incorporate the latest data and alter 2 parameters in my model and hey presto I’m 100% right! Funnily enough they wouldn’t give me my money back or indeed the winnings that were so rightfully mine…

    The Gambling industry is very very scared

    The Gamblers are very very happy

  37. Twitter Guy: “Hansen was right!”
    Willis: (compares Hansen’s predictions to measurements) “It sure doesn’t look right”
    Twitter Guy: “Well, yeah, but if you take Hansen’s wrong predictions and change them, he becomes right!”

  38. He said that Dr. Hansen’s prediction was indeed proven accurate—he’d merely used the wrong value for the climate sensitivity

    Yeah, if I’m speeding down the highway and get pulled over, I doubt the police officer would accept that I accurately calculated that I was driving at a legal speed, only that I’d merely used the wrong value for the speed limit of that particular road. I’d have better luck waving my hand and suggesting “this isn’t the speeding car you are looking for” in my best Alec Guinness impression.

  39. Willis Eschenbach, the model is built on forcings. I looked at this some time back, and I found that you have to look at the gases other than CO2 to get a proper comparison of the forcings in the model vs observations.
    I think Scenario B comes out closer.

  40. My model is correct, and so are all the others! It’s that bloody reality that’s the problem! See my most-recent grant-funded paper!/(sarc)

  41. The logic is as bad as running 102 different models and reporting the AVERAGE, implying that it is accurate. In reality, only one can be closest.

  42. Hi Willis

    No need to digitise the data, it is available at the locations below.

    Temperature anomalies for Hansen 1988 Scenarios A, B, and C are available from: http://www.realclimate.org/data/scen_ABC_temp.data.

    Emissions Assumed for Hansen 1988 Scenarios A, B, and C are available from: http://www.realclimate.org/data/H88_scenarios.dat

    Forcings for Hansen 1988 Scenarios A, B, and C, in W/m² are available from: http://www.realclimate.org/data/H88_scenarios_eff.dat

  43. Hi Willis

    I think that you have been following Hansen 1988 longer than me. Nevertheless, it should be noted that Hansen’s scenarios are not controlled by CO2 alone – other greenhouse gases also affect the outcome.

    For example, Steve McIntyre explained in https://climateaudit.org/2008/01/24/hansen-1988-details-of-forcing-projections/ that the Scenario A increases are dominated by CFC greenhouse effect. In Scenario A, the CFC contribution to the Earth’s greenhouse effect becomes nearly double the CO2 contribution during the projection period. This is not mentioned in Hansen et al 1988.

    Interestingly, in June 1988 Hansen stated to the US Congressional Committee that Scenario A was “business as usual”. However, in the scientific literature in 1988 and 2005 he states that Scenario B is “the most plausible”. He certainly liked to heat it up for the politicians.

    My perception is that real-world emissions are near to Scenario B emissions but that actual GISS temperatures were tracking Scenario C, until the recent El Nino (see attached diagram).

    Furthermore, it is worth noting that the adjustments made through the various versions of LOTI (currently version 5) have added approximately 0.14 °C to the anomalies since the 2002 data set. If these adjustments were not added then the 2018 temperatures would be very near to Scenario C.

    It appears that Gavin is continuing to amend his temperatures upwards until they give the desired result.

    • Hi Willis

      I think that you have been following Hansen 1988 longer than me. Nevertheless, it should be noted that Hansen’s scenarios are not controlled by CO2 alone – other greenhouse gases also affect the outcome.

      For example, Steve McIntyre explained in https://climateaudit.org/2008/01/24/hansen-1988-details-of-forcing-projections/ that the Scenario A increases are dominated by CFC greenhouse effect. In Scenario A, the CFC contribution to the Earth’s greenhouse effect becomes nearly double the CO2 contribution during the projection period.
      After being ‘dragged kicking and screaming’ to that realization by me and others.
      McI post:
      ” Right now, based on the review of GHG concentrations, it’s hard to see exactly what is accounting for the difference in radiative forcing. Update: As noted in a subsequent post, the handling of Other CFCs and Other Trace Gases accounts for the near time difference.”

      This is not mentioned in Hansen et al 1988.

      Although reluctantly McI eventually had to acknowledge that it is mentioned in H ’88.
      McI: “In Hansen et al 1988, they stated that they dealt with other CFCs and trace gases by doubling the effect of CFC11 and CFC12, a point that I noted in my post yesterday. They said:
      “Potential effects of several other trace gases are approximated by multiplying the CFC11 and CFC12 amounts by 2.”
      In my post yesterday, I incorrectly surmised that this would not be a substantial effect.”

    • However, at least one comment on realclimate pointed out that temperatures were following Scenario C until the 2015-2016 El Nino, which is a completely natural event that is not modelled in Hansen’s model. I haven’t noticed any rebuttals to this comment.

      Gavin’s June 2018 chart is here:
      http://tinypic.com/m/kagth4/2

  44. Since the entire debate really IS about sensitivity… being wrong on sensitivity means being wrong on the issue.

    It’s like arguing about whether you’re going to be burned when you hop in the shower tomorrow morning… Hansen argues you will. You say “I don’t think so”, and so we spend a few billion dollars measuring the warm (not burning) water that comes from the tap…

    Verdict! Hansen was absolutely right… all he got wrong was where the valve was set to… if he had known it was going to be set to “comfortably warm”, he would have compensated, and the data would be perfect. Win for Hansen… forgetting what the original issue was… did you get burned?

Comments are closed.