Gavin’s Twitter Trick

Last week, Larry Kummer posted a very thoughtful article here on WUWT:

A climate science milestone: a successful 10-year forecast!

At first glance, this did look like “a successful 10-year forecast”:


Figure 1. A successful 10-year forecast?

The observations track closer to the model ensemble mean (P50) than they do for most other models, and the 2016 El Niño spikes at least a little bit above P50.  Noticing that this was a CMIP3 model, Larry and others asked whether CMIP5 (the current phase of the Coupled Model Intercomparison Project) yielded the same results, to which Dr. Schmidt replied:

Figure 2. A failed 10-year forecast.

The CMIP5 model looks a lot like the CMIP3 model… But the observations bounce between the bottom of the 95% band (P97.5) and just below P50… Then spike to P50 during the 2016 El Niño.  When asked about the “estimate of effect of misspecified forcings”, Dr. Schmidt replied:

Basically, the model would look better if it was adjusted to match what actually happened.

The only major difference between the CMIP3 and CMIP5 model outputs was the lower boundary of the 95% band (P97.5), which lowered the mean (P50).


Figure 3. Improving accuracy by increasing imprecision.

CMIP5 yielded a range of 0.4°C to 1.0°C in 2016, with a P50 of about 0.7°C. CMIP3 yielded a range of 0.2°C to 1.0°C in 2016, with a P50 of about 0.6°C.

They essentially went from 0.7 ±0.3°C to 0.6 ±0.4°C.

Progress shouldn’t consist of expanding the uncertainty… unless they are admitting that the uncertainty of the models has increased.
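The effect is easy to demonstrate with a toy calculation. The two envelopes below come from the figures above; the observation values are invented for illustration only:

```python
# Toy demonstration: widening a 95% envelope raises its "hit rate" on the
# observations, even when the central estimate (P50) is no better.
# The two envelopes match the article's figures; the obs values are invented.

def coverage(obs, p50, half_width):
    """Fraction of observations falling inside p50 +/- half_width."""
    inside = [p50 - half_width <= x <= p50 + half_width for x in obs]
    return sum(inside) / len(obs)

obs = [0.25, 0.30, 0.35, 0.45, 0.55, 0.90]   # hypothetical anomalies, °C

cmip5_like = coverage(obs, p50=0.7, half_width=0.3)   # 0.4–1.0 °C band
cmip3_like = coverage(obs, p50=0.6, half_width=0.4)   # 0.2–1.0 °C band
print(cmip5_like, cmip3_like)   # 0.5 1.0
```

The wider CMIP3-style band “captures” every point; the narrower CMIP5-style band captures only half of them… which is the whole trick.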

Larry then asked Dr. Schmidt about this:

Dr. Schmidt’s answer:

“Not sure”?  That instills confidence.  He seems to be saying that the CMIP5 model (the one that failed worse than CMIP3) may have had “more coherent forcing across the ensemble, more realistic ENSO variability, greater # of simulations.”

I’m not a Twitterer, but I do have a rarely used Twitter account and I just couldn’t resist joining in on the conversation.  Within the thread, there was a discussion of Hansen et al., 1988.  And Dr. Schmidt seemed to be defending that model as being successful because the  2016 El Niño spiked the observations to “business-as-usual.”

Figure 4.  Hansen et al., 1988 is still an epic fail.  The monster El Niño of 2016 is not “business-as-usual.”

I asked the following question:

No answer.  Dr. Schmidt is a very busy person and probably doesn’t have much time for Twitter and blogging.  So, I don’t really expect an answer.

In his post, Larry Kummer also mentioned a model by Zeke Hausfather posted on Carbon Brief:


Figure 5.  Another failed climate model.  The 2016 El Niño is not P50 weather.

El Niño events like 1998 and 2016 are not high probability events.  On the HadCRUT4 plot below, I have labeled several probability bands:

Standard Deviation   Probability Band   % of Samples w/ Higher Values
+2σ                  P02.5              2.5%
+1σ                  P16                ~16%
Mean                 P50                50.0%
-1σ                  P84                ~84%
-2σ                  P97.5              97.5%

Yes… I am assuming that HadCRUT4 is reasonably accurate and not totally a product of the Adjustocene.

I removed the linear trend, calculated a mean (P50) and two standard deviations (1σ & 2σ).  Then I added the linear trend back in to get the following:
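For readers who want to reproduce this, the procedure amounts to a few lines of numpy. Synthetic data stands in for the HadCRUT4 series here, and the function name is mine:

```python
import numpy as np

def probability_bands(years, temps, n_sigma=2):
    """Remove the linear trend, take the standard deviation of the
    residuals, then add the trend back in to get P50 and the sigma bands."""
    slope, intercept = np.polyfit(years, temps, 1)
    trend = slope * years + intercept
    sigma = np.std(temps - trend)        # scatter of the detrended anomalies
    p50 = trend                          # the trend line is the mean (P50)
    upper = trend + n_sigma * sigma      # +2σ ≈ P02.5
    lower = trend - n_sigma * sigma      # -2σ ≈ P97.5
    return p50, upper, lower

# Synthetic stand-in for the temperature series: a linear trend plus noise
years = np.arange(1850, 2017, dtype=float)
temps = 0.005 * (years - 1850) + np.random.default_rng(0).normal(0, 0.1, years.size)
p50, upper, lower = probability_bands(years, temps)
```

The bands are symmetric about the trend line by construction, so the only choices are the trend model (linear here) and the width in sigmas.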


Figure 6.  HadCRUT4 (Wood for Trees) with probability bands.

The 1998 El Niño spiked to P02.5.  The 2016 El Niño spiked pretty close to P0.01.  A strong El Niño should spike from P50 toward P02.5.
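Under a normal (Gaussian) assumption, the σ levels translate directly into the one-sided exceedance probabilities used in the band labels. A quick check, using only the standard library:

```python
from math import erfc, sqrt

def exceedance(z):
    """Probability that a normal variable exceeds z standard
    deviations above the mean (one-sided upper tail)."""
    return 0.5 * erfc(z / sqrt(2))

print(f"+1σ: {exceedance(1):.1%}")   # ~15.9%
print(f"+2σ: {exceedance(2):.1%}")   # ~2.3%  (≈ the P02.5 level)
print(f"+3σ: {exceedance(3):.2%}")   # ~0.13%
```

An anomaly spiking to the ~3σ level is roughly a 1-in-700 event… nothing like “P50 weather.”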

All of the models fail in this regard.  Even the Mears-ized RSS satellite data exhibit the same relationship to the CMIP5 models as the surface data do.

The RSS comparison was initialized to 1979-1984.  The 1998 El Niño spiked above P02.5.  The 2016 El Niño only spiked to just above P50… Just like the Schmidt and Hausfather models.  The Schmidt model was initialized in 2000.

This flurry of claims that the models don’t “run hot” because the 2016 El Niño pushed the observations toward P50 is being driven by an inconvenient paper that was recently published in Nature Geoscience (discussed here, here and here).

21 September 2017 0:27

Factcheck: Climate models have not ‘exaggerated’ global warming

ZEKE HAUSFATHER

A new study published in the Nature Geosciences journal this week by largely UK-based climate scientists has led to claims in the media that climate models are “wrong” and have significantly overestimated the observed warming of the planet.

Here Carbon Brief shows why such claims are a misrepresentation of the paper’s main results.

[…]

Carbon Brief

All (95%) of the models run hot, including “Gavin’s Twitter Trick”.  From Hansen et al., 1988 to CMIP5 in 2017, the 2016 El Niño spikes toward the model ensemble mean (P50)… despite the fact that it was an extremely low-probability weather event (<2.5%).  This happens irrespective of when the models are initialized.  Whether the models were initialized in 1979 or 2000, the observed temperatures all bounce from the bottom of the 95% band (P97.5) toward the ensemble mean (P50) from about 2000–2014 and then spike to P50 or slightly above during the 2016 El Niño.


304 thoughts on “Gavin’s Twitter Trick”

      • So the bottom line here is that Gavin Schmidt deliberately chose the out-of-date CMIP3 model output because the ensemble mean looks a lot closer to the observations than the more recent CMIP5.

        Scientific fraud, clear and simple.

        The rest about grey bands and percentiles is just smoke. This article fails to clearly make the most important point because it goes into too much detail.

        Good to bring this out but could have been a lot clearer and to the point.

      • CMIP5 version is similar (especially after estimate of effect of misspecified forcings)

        so their models run too hot and make poor forecasts…. until they do some post hoc parameter tweaking, at which point it is no longer a forecast but a new hindcast.

        The 2007 models look better than the 2013 models. Despite having had more time and additional observations from the post-Mt Pinatubo period, they get worse results. They are now going backwards.

        It is a very telling admission of the failure and misdirection of the modelling community that Gavin Schmidt chose the older models to misinform the public.

  1. Larry Kummer posted a very thoughtful article here ….

    Wrong. It was a piece of JUNK. (See comments on the thread accompanying that “thoughtful” (cough) piece.)

    • Yes Janice, my thoughts exactly! I thought the world was supposed to end on the 23rd but it just ended for me after reading this very, very stupid statement on WUWT:

      Last week, Larry Kummer posted a very thoughtful article here on WUWT:

      F%ck me, whatever happened to emotional intelligence. Have IQs dropped sharply since I’ve been away!

      “Middleman” – always political – has finally shown his true stripes! God help us indeed, because these ideologues will kill us all before they finally wake up! ;-(

      • There’s no logical inconsistency with describing a post as “thoughtful” and mostly disagreeing with it. While being rude and obnoxious can be fun, it isn’t always necessary.

        This was my last comment on Larry’s post:

        David Middleton September 25, 2017 at 4:31 am
        The personal attacks on Larry Kummer are totally uncalled for.

        While I strongly disagree with his characterization of “Gavin’s Twitter Trick” as a demonstration of predictive skill in a climate model and even more strongly disagree with half of his conclusions (1, 4 & 5), this was a very thoughtful essay.

        https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/#comment-2619709

        Larry’s conclusions:

        1. Boost funding for climate sciences. Many key aspects (e.g., global temperature data collection and analysis) are grossly underfunded.

        2. Run government-funded climate research with tighter standards (e.g., posting of data and methods, review by unaffiliated experts), as we do for biomedical research.

        3. Do a review of the climate forecasting models by a multidisciplinary team of relevant experts who have not been central players in this debate. Include a broader pool than those who have dominated the field, such as geologists, chemists, statisticians and software engineers.

        4. We should begin a well-funded conversion to non-carbon-based energy sources, for completion by the second half of the 21st century — justified by both environmental and economic reasons (see these posts for details).

        5. Begin more aggressive efforts to prepare for extreme climate. We’re not prepared for repeat of past extreme weather (e.g., a real hurricane hitting NYC), let alone predictable climate change (e.g., sea levels climbing, as they have for thousands of years).

        6. The most important one: break the gridlocked public policy by running a fair test of the climate models.

        https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/

        Let’s see…

        I disagreed with:

        His characterization of Gavin’s Twitter Trick as a demonstration of predictive skill in a climate model.

        Boosting funding for climate sciences.

        Beginning a well-funded conversion to non-carbon-based energy sources.

        And beginning more aggressive efforts to prepare for extreme climate.

        I politely disagreed with most of his post… without insulting him.

      • being rude and obnoxious

        Well, that would be a thought crime committed in your own head, an absurd continuum that logically leads to casting everything from “the other” as hate speech.

        Maybe you are right though, if the words “very, very stupid statement” fit the bill.

        What is going on with these really dumb articles about the skill of model ensembles?

        When you read the literature and all the approaches to this “problem” you wade straight into the absurdity of arguing the baseline for model runs, started today and hindcasting this week’s adjusted data of last century’s data, and “sliming” it laterally to find the best fit. And I’m not joking, this is the dead-set reality and very best received wisdom available today from these insane collectivists*.

        *I can describe them thusly because they make no pretence themselves of being anything other!

  2. The tortured data is starting to confess, just a couple more adjustments and it fits 100%.

    Sorry, but knowing how these ‘data’ lines evolved, I cannot take anything based on them seriously. Just a few examples here:


      • I am somewhat suspicious of the CRU figures to put it mildly – not least because the main London temperatures from Heathrow airport are routinely in the 2-4 deg C range above the rural temperatures outside – and sometimes even more.

        Winter minimum temperatures for London are routinely 4 deg C hotter than outside London, and sometimes more. The last week’s daily forecast figures showed Heathrow as 2 deg C above the adjacent towns – which themselves suffer from UHI.

        These are then adjusted for UHI – But not so far as I can make out by anything like the actual super-heated temps that Heathrow records. Those effects alone are enough to create overheating in the UK temperature records held by CRU.

    • Notice that if roughly the correct reading had been used from 1970 (NCAR 1974), the NASA 2017 rise in temperature would only have reached about the 1950s levels by now. They have adjusted past trends that much; different decades appear warmer, when probably the late 1930s were the warmest years. Reducing the cooling has conned many over the decades into thinking recently was warmer, when in fact the only thing that happened was that the planet had recovered from a cooling period until the late 1970s. This is why absolute temperatures should be used and not anomalies, so this fiddling would be a lot more difficult to hide.

    • Steven Goddard produces these plots, and they seem to circulate endlessly, with no attempt at fact-checking, or even sourcing. I try, but it’s wearing. The first GISS plot is not the usual land/ocean data; it’s a little used Met Stations only, essentially an update of a 1987 paper. I don’t know if it’s right. But the second shows what is claimed to be an NCAR plot, or at least based on NCAR data. That plot isn’t from any NCAR publication, and the data is not available anywhere. Instead it is from the famous Newsweek 1975 article on “global cooling”. And that just said it was a temperature plot; it didn’t even say what it was the temperature of. From the context, it seemed to be US, which figures. There was very little digitised global data in 1974.

      • Sunset,
        “NASA, NOT Met”
        Yes, that is exactly what I said about Newsweek. It’s their plot, not NCAR’s. And note that they don’t say what it is the temperature of. I think it’s US (lower 48). But it’s not the sort of detail that bothers SG or his fans.

        “NASA, NOT Met”
        As I said, and as it is clearly labelled, it is Met Stations. It is an index that Hansen and Lebedeff used in 1987, with met stations extrapolated to cover the whole Earth (since they had no SST). GISS has kept updating it, but it is rarely quoted, and no-one else calculates such an index.

      • “I don’t know if it’s right” – LOL.

        As for the NCAR data, so what if you can’t find the data in a publication 40-50 yrs after the fact? Do you think it was made-up that it came from NCAR to appear in Newsweek, and nobody at NCAR ever noticed? Here’s a supposed NAS representation of data around the same time… https://seeker401.files.wordpress.com/2013/02/screenhunter_123-jan-31-20-33.jpg

        It goes ahead and tells you that it is the northern hemisphere. Clearly the NCAR involves smoothing and doesn’t have annual data explicitly shown, but it’s not hard to see how the NAS one would look quite similar to the NCAR one with longer smoothing intervals.

        “…Instead it is from the famous Newsweek 1975 article on “global cooling”. And that just said it was a temperature plot; didn’t even say what it was temperature of. From the context, it seemed to be US, which figures…”

        The data is very similar, but it isn’t the graphic out of the Newsweek article. The headline reads “global.” The dashed line of the first 15 years or so says “northern hemisphere only.” So the context is clearly global. Are you saying Steven Goddard took the Newsweek graph, converted it to Celsius in a new plot made to look like it came from the 1970s, and added fake titles, 15 years of fake data, and a fake legend?

      • with no attempt at fact-checking, or even sourcing. I try, but it’s wearing.

        Nick, I’m calling you out on this BullSh1t…..All you had to do was ask Tony

        You didn’t try sh1t….and posted this crap

      • Nick

        It is good to receive your input, and I think that I can help clarify some of the points that you raise.

        You only have to look at Hansen’s own 1981 paper (Science Volume 213) figure 3 to see that he fully accepts the NCAR 74 plot (which is similar to the NAS 1975 plot, more on which later)

        This is the very top part of his figure 3 (ie., the NH 90 – 23.6 N temperature profile plot). Look at the paper and look at what Hansen says on page 961 about Fig 3, viz:

        Northern latitudes warmed ~0.8degC between 1880 and 1940, then cooled ~0.5degC between 1940 and 1970, in agreement with other analyses (9, 43)

        It is clear from Fig 3 itself, that Hansen as at 1981 considered the NH temperature as at 1980 to be some 0.3degC cooler than it was in 1940.

        Footnote 9 is the NAS 1975 report (the plot from which I set out below), and footnote 43 is the Brinkman 1976 paper, the Vinnikov 1976 paper, and the Jones & Wigley 1980 paper (I presume that you have read all 3 of those). So the Team were very much on board that this represented the temperature profile of the NH. You will recall that I have frequently pointed out to you what Jones and Wigley say about the inadequacies of SH temps in their 1980 paper, and you will note that Hansen states:

        Problems obtaining global temperature history are due to the uneven station distribution (40) with Southern Hemisphere and ocean areas poorly represented…


        Here is the NAS 1975 plot referenced in footnote 9 of Hansen’s 1981 paper.

        Of course, if you look at various Hansen papers you will see how he has altered the temperature profile of the US. That too has been substantially altered these past 25 years.

      • Slow down, Nick!

        You appear confused:

        “Sunset,
        “NASA, NOT Met”
        Yes, that is exactly what I said about Newsweek. It’s their plot, not NCAR’s. And note that they don’t say what it is the temperature of. I think it’s US (lower 48). But it’s not the sort of detail that bothers SG or hi fans.”

        No it was NOT from NASA, never said it was either.

        I gave you the link that shows it is FROM NASA!

        You say,

        “NASA, NOT Met”
        As I said, and is it is clearly labelled, it is Met Stations. It is an index that Hansen and Lebedeff used in 1987, with met stations extrapolated to cover the whole Earth (since they had no SST). GISS has kept updating, but it is rarely quoted, and no-one else calculates such an index.”

        The Wayback Machine shows it is NASA, not Met Office.

        https://web.archive.org/web/20011129113305/http://www.giss.nasa.gov:80/data/update/gistemp/graphs/Fig.A.ps

        Go ahead and see for yourself:

        2001 version : Fig.A.ps

        https://realclimatescience.com/nasa-doubling-warming-since-2001/

        You have yet to show a link to support your position.

      • Nick, I have asked Tony about the charts in question.

        You know he used to work at NCAR, doing charts and stuff?

      • Guys

        I have a comment that I have tried to post a couple of times that has disappeared into the ether. Hopefully it will get posted in due course in which I explain these plots.

        (Found and posted) MOD

      • Unfortunately, I am unable to cut and copy Figure 7.11 from Observed Climate Variation and Change on page 214. But I can confirm that this plot (endorsed by the IPCC) shows that the NH temperature as at 1989 was cooler than the temperature at 1940, and the temperature as at 1920. Not much cooler, but a little cooler.

        This is the IPCC Chapter 7

        Lead Authors: C.K. FOLLAND, T.R. KARL, K.YA. VINNIKOV
        Contributors: J.K. Angell; P. Arkin; R.G. Barry; R. Bradley; D.L. Cadet; M. Chelliah; M. Coughlan; B. Dahlstrom; H.F. Diaz; H. Flohn; C. Fu; P. Groisman; A. Gruber; S. Hastenrath; A. Henderson-Sellers; K. Higuchi; P.D. Jones; J. Knox; G. Kukla; S. Levitus; X. Lin; N. Nicholls; B.S. Nyenzi; J.S. Oguntoyinbo; G.B. Pant; D.E. Parker; B. Pittock; R. Reynolds; C.F. Ropelewski; C.D. Schonwiese; B. Sevruk; A. Solow; K.E. Trenberth; P. Wadhams; W.C. Wang; S. Woodruff; T. Yasunari; Z. Zeng; and X. Zhou

        Figure 7.11: Differences between land air and sea surface temperature anomalies, relative to 1951–80, for the Northern Hemisphere, 1861–1989. Land air temperatures from P.D. Jones. Sea surface temperatures are averages of UK Meteorological Office and Farmer et al. (1989) values.

      • I also have a copy of the official US Dept. of Energy report of 1985, Projecting the Effects of Increasing Carbon Dioxide, which contains a very similar plot on page 151.

        Figure 5.1. Annual mean surface air temperature anomalies from 1880-1981: (solid curve) Vinnikov et al. (1980); and (dashed curve) Jones et al. (1982). Figure from Weller et al. (1983), and includes points updated to 1981 by Jones

        This plot (figure 5.1) shows Vinnikov through to 1980 putting 1980 at some 0.4 degC cooler than 1940, and Jones through to 1981 putting 1981 about 0.1 to 0.15 degC cooler than 1940.

        This report, on page 152, also includes the Arctic Sea Ice reconstruction from the Vinnikov 1980 paper

      • An abridged attempt:

        You only have to look at Hansen’s own 1981 paper (Science Volume 213) to see that these are well accepted plots, the source of which is sound.

        It will be seen from figure 3 of Hansen’s paper that he fully accepts the NCAR 74 plot (which is similar to the NAS 1975 plot, more on which later).

        At 7.04 hrs, I have posted a part of Figure 3. Please note that this is the very top part of his figure 3 (ie., covering solely the NH 90 – 23.6 N temperature profile). If one reviews the paper, it will be noted what Hansen says on page 961 about Fig 3, viz:

        Northern latitudes warmed ~0.8degC between 1880 and 1940, then cooled ~0.5degC between 1940 and 1970, in agreement with other analyses (9, 43)

        It is clear from Fig 3 itself, that Hansen as at 1981 considered the NH temperature as at 1980 to be some 0.3degC cooler than it was in 1940.

        Footnote 9 is a reference to the NAS 1975 report/book (the plot from which I set out above at 7.02. This comes from page 148 of the report/book), and footnote 43 is the Brinkman 1976 paper, the Vinnikov 1976 paper, and the Jones & Wigley 1980 paper (I presume that you have read all 3 of those).

        So the Team were very much on board that this represented the temperature profile of the NH. I have frequently pointed out what Jones and Wigley say about the inadequacies of SH temps in their 1980 paper, and you will note that Hansen states:

        Problems obtaining global temperature history are due to the uneven station distribution (40) with Southern Hemisphere and ocean areas poorly represented…

      • “As for the NCAR data, so what if you can’t find the data in a publication”
        The reason for wanting to see the publication is that it would then explain what the plot really is. What kind of data is it based on? Newsweek wasn’t saying at all. Is it land only? Land/ocean? It looks to me as if the b&w plot above is supposed to be global but probably land only; there was no systematic SST or NMAT at the time. The NASA 2017 is probably land/ocean. These are things you need to know to ensure it is a fair comparison.

      • Here is the link to the IPCC website for Chapter 7. Have a look at page 214.

        https://www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_chapter_07.pdf

        Also have a look at page 202 for Figure 7.1 which shows the Medieval Warm period and the LIA. These are the plots that the IPCC would sooner that one forgets.

        As others have noted there appears to be a concerted effort to erase the past. How can anyone have any confidence in the data when it has undergone so much revision?

      • Richard,
        Yes, clearly there was a circulating newspaper story at the time with near identical text and graphs claiming to be global temperature, some of which attribute the writing to John Hamer and the artwork to one Terry Morse. What we don’t have is an NCAR source document saying what kind of measure it is – land only, land/ocean? How much data was available? How about some scepticism?

      • What we don’t have is an NCAR source document saying what kind of measure it is – land only, land/ocean? How much data was available?

        Already give this link before, but for some reason you have chosen to not read it. The paper looks at global temperatures, including NCAR, and they easily agree with the chart and actually show more cooling.

        The NCEP/NCAR shows more cooling than the 1974 chart between 1960 and 1976.

        http://onlinelibrary.wiley.com/doi/10.1029/2004JD005306/full

        The satellites show only a 0.2°C difference between land and ocean since 1979. Therefore whether it is land or land/ocean has little relevance. The bigger difference in the surface data with HADCRUT and GISS is because they miss a huge amount of the land, so temperatures vary more.

        The NH and North American temperatures also show they are very similar. The SH also shows cooling of around 0.5°C, but in the NH it is more.

      • “Already give this link before, but for some reason you have chosen to not read it. “
        I read it, but just couldn’t think of anything to say. What you have linked to is a modern paper on reanalysis. Totally off topic. Yes, one of those has NCAR in the name. But there is no mention of NCAR 1974, or anything even close. It’s just not about that. It does talk about some old NCAR data assimilated into reanalysis, but there is no corresponding reconstruction.

        The argument is about where this supposed NCAR plot of 1974, supposedly showing how GISS has altered the record, or some such, actually comes from. I noted a Newsweek source; Tony Heller seems to think something is busted because he found it in a different newspaper. But there is no provenance. Richard Verney mentioned some earlier papers, but they all had NH temperatures; The Goddard plot claims to be global.

        I’m sure you can find modern graphs that look something like NCAR 1974. But what doesn’t seem to bother people is the basic question – what are they plotting? And what was NCAR plotting then? It matters.

      • Nick Stokes September 26, 2017 at 10:02 pm

        To answer your questions. The graphs are of air temperature over land. No fiddling with totally bogus SSTs, as per Karlization.

        There were more well maintained and monitored reporting stations before independence for British colonies than after, so these data are hard and valid, unlike the made up, pretend “data” of HadCRU, NASA and NOAA today, from fewer stations.

        Note Lamb’s comments, from back when the Hadley Centre still practiced science rather than politics.

        NCAR’s graphs from the 1970s were the best possible stab at actual science. Now they just produce PC garbage, in cahoots with NASA GISS and HadCRU.

      • Nick,

        That c. 1945-77 (when the PDO flipped) was colder than 1912-44 is not a controversial conclusion. Without totally unwarranted “adjustments”, 1978-2010 was no warmer than 1912-44.

        The early 20th century warming and late 20th century warming were practically identical, with a dramatic cold interval in between, just as the early 20th century warming was preceded by a turn of the century cold spell, c. 1879-1911, which was preceded by the first warming cycle of the Current Warm Period, after the end of the LIA.

      • And need I point out that, during the dramatic postwar cooling, CO2 was increasing, just as it did during the late 20th century warming, but not during the early 20th century warming to anything like the same degree, and during the Depression and WWII, it fell.

      • “To answer your questions. The graphs are of air temperature over land.”
        It doesn’t answer the questions. What sort of land? Global? NH only? No-one seems to care.

        But what does this say for the honesty of the original plot? It just has NASA 2017 in big letters, and NCAR 1974. And we’re supposed to think that something nefarious has gone on, as is always being claimed at Goddard’s and parroted here. But no mention that one is land (of some sort) and one is global land/ocean. Which would be different even if NCAR 1974 (if the newspapers got it right) had had access to more than a few hundred stations.

      • The argument is about where this supposed NCAR plot of 1974, supposedly showing how GISS has altered the record…..

        But what doesn’t seem to bother people is the basic question – what are they plotting? And what was NCAR plotting then? It matters.

        With it being in deg F, it is not actually as big as it looks. I don’t know for certain what the plot was, but papers pre-1980s usually only had the NH because SH data was extremely limited. SSTs were limited too, so it was probably only from land observations. Although the actual source would be nice to find to replicate, other data sources confirm the same or a similar representation. It seems there was doubt about anything like this from back then, not just the source. It was easy to confirm that there were similar data sets that supported it.

        I have only been able to find the reanalysis version of NCEP/NCAR, although I wouldn’t say it was off topic because they support the previous newspaper graphs. They compared with corresponding gridded values from the CRUTEM2v data set produced directly from monthly station data, also including the processed CRU station values of monthly air temperatures. Therefore a data set that is not from reanalysis also supports it.

        “Monthly mean 2-m temperature anomalies from ERA-40 and NCEP/NCAR have been compared with corresponding gridded values from the CRUTEM2v data set produced directly from monthly station data [Jones and Moberg, 2003], referred to in subsequent sections simply as Climatic Research Unit (CRU) data. CRUTEM2v is based on anomalies computed for all stations that provide sufficient data to derive monthly climatic normals for the period 1961–1990. Station values are aggregated over 5° × 5° grid boxes and adjustments made for changes over time in station numbers within each box [Jones et al., 1997, 2001].”

        “In this paper, processed station values of monthly mean surface air temperature are compared with corresponding values derived from the products of two reanalyses, the 45-Year European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA-40) and the first National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis.”

      • Tony shows that those “met stations only” are part of GISS LAND only. YOU on the other hand claimed it was not land/ocean data, which YOU created out of thin air, since NOBODY said they were land/ocean; you did that.

        This is what I despise about you: creating deliberate misdirection, with crispy red herrings. You do this a lot; wonder why anyone thinks you are credible when you make dishonest, misleading comments over and over……… You spent a lot of time here with numerous red herrings of things no one else claimed.

        Tony writes,

        Nick Stokes : Busted Part 3

        “Nick Stoke’s final idiotic claim takes us right to the heart of this scam.

        The first GISS plot is not the usual land/ocean data; it’s a little used Met Stations only

        This was the GISS web page in 2005. Top plot was “Global Temperature (meteorological stations.) No ocean temperatures.

        (2005 webpage)

        “The 2001 GISS web page had the same thing.”

        (2001 webpage)

        https://realclimatescience.com/2017/09/nick-stokes-busted-part-3/#comments

        At the bottom of the linked page, Tony shows the 1999 GISS chart, which has 1934 hotter than 1998 for the USA, but in the 2017 GISS plot 1998 is now hotter.

        Don’t even try lying over this, Nickyboy! I remember SEEING that 1999 chart when it was originally posted at the NASA website. I know you and Mosher claim there was no such level of adjustments, but then again you two have been exposed over and over in Red Herrings, Lies and other shameful crap.

      • Lance,
        I’ve been told that for days now. But no-one ever says how. I don’t think they can figure it out.

        I said that there was no NCAR source, so we can’t work out just what is being plotted. I said that it came from a Newsweek story. Heller says that he got it instead from another old newspaper. So? Still no NCAR source.

        And he says indignantly, well of course it’s land only, it’s all they had. What a defence! He wasn’t telling you that on his graph. He’s claiming that the difference between a 1974 plot of land only (probably NH, despite the newspaper heading) and 2017 land/ocean is due to GISS fiddling. Never tells you that they are just quite different things being plotted.

        And sunsettommy still doesn’t even try to figure out what the GISS plots are. The GISS we have been following and discussing for years is GISS Land/Ocean. Those GISS plots marked Met Stations are something different – he still seems to have no idea what. They are a continuation of the Met Stations index of the paper Hansen and Lebedeff, 1987. They used met station data to extrapolate over the whole globe. It isn’t land/ocean, and it isn’t land only. Most people think it is no longer a good idea, no-one else does it, and it is rarely referenced. Again, Heller isn’t going to tell you any of this.

  3. I would think that if the models were doing a good job, then typical years should have temperatures fluctuating around the ensemble mean, and unusual warming events like an El Nino should be pushing the upper boundary of the envelope of uncertainty. I would also expect that if what are clearly outliers (El Nino) are removed, the trend of the remaining years should parallel the ensemble mean. That isn’t what we are seeing! I’d say that it is evident that the models are (still) running too warm.

    • “and unusual warming events like an El Nino should be pushing the upper boundary”
      Not really. There are unusual cooling events too (La Nina). And if they come first, as in 2008/9 and 2011/12, then the track goes down, then back to average.

      • NS,
        I don’t think you understand. Ninos and Ninas are anomalous events and they should appear as outliers against a background of predictable changes. If the uncertainty boundaries are doing their job properly, showing the extremes of natural variability, then the anomalous events should be pushing those boundaries.

      • I think Nick does understand. As reality edges ever further away from modeled “projections,” I think Nick, along with many others, will become more obtuse. Either that or admit they are wrong. Ten years from now the AMO will make many so-called experts look like the snake-oil salesmen they really are. I don’t doubt they believe their own hype, but belief never trumps reality.

      • And La Niña events don’t blast through the bottom of the temperature record like a few El Niño events have blasted through the top.

        1911 and 1976 tap P97.5… But they were also the inflection points of the ~65-yr climate cycle.

      • Nick is very knowledgeable and, from my experience, very honest. Even when I disagree with him, I appreciate his informed disagreement. Personal attacks on him don’t really contribute to the debate.

      • Nick may be wrong in his interpretation and he might get the facts wrong on occasion… but I really haven’t seen anything that makes me question his honesty.

        When people like Nick and Steve Mosher disagree with me, they generally force me to improve my arguments.

      • David Middleton says:
        September 27, 2017 at 1:40 am

        Nick may be wrong in his interpretation and he might get the facts wrong on occasion… but I really haven’t seen anything that makes me question his honesty.
        When people like Nick and Steve Mosher disagree with me, they generally force me to improve my arguments.

        Sorry, I disagree with you. Reread his reply to my post. In my view he is trying to obfuscate and project doubt without a founded argument. He does not produce a similar graph, or point to a warmist-produced similar graph, that would debunk Tony’s graphs. Yes, he does word his posts carefully enough not to appear directly wrong, but the tens of replies in the post below show how wrong he is.
        He does not accept any correction but doubles down and tries to evade. I do not see that as honest debate.

        I have no trust in the kind of science where historical data changes. Especially when it is not a case of “oh, we found an error and corrected it,” but a continuous, year-by-year, sneaky adaptation, always in the same direction.

        There are many other problems with the temperature data as it is anyhow, even without these sneaky adjustments.
        Everybody knows that a city has a higher temperature than its environment (UHI). One needs only a car with a thermometer and to drive in and out of the city. The bigger the city, the greater the difference.
        So does a growing city not cause a growing temperature to be measured by a station that was at the periphery of the city 100 years ago? Moving the station to the periphery and having it engulfed by the city a second time would not cause another warming?
        Yet Berkeley finds no UHI effect? On the contrary, a cooling-city effect? If the consequences would not cost billions upon billions, it would be funny.

        Tony Heller points to the gradual data adjustment towards the continuous warming shown in models. He does great work.

      • Well, David, I’ve never found too much wrong with your arguments, or your science.

        Mosher and Stokes on the other hand push the warmist disinformation irrespective of the facts.

        That’s how their propaganda works.

        You’ll never convince them that they are mistaken, because they are “believers” not scientists.

        Carry on.

      • “Mosher and Stokes on the other hand push the warmist disinformation irrespective of the facts.”

        The word you’re looking for is “disingenuous”, I think.

      • Thanks Cat,

        Yep; that’s the one I was looking for.

        Somehow or other it went missing from my vocab, but you found it for me.

        Was it at the bottom of the ocean with all that missing heat?

        I need to know. There might be more of them down there cooking away.

      • Sceptical Sam,
        I agree with David about Stokes. I think Stokes is actually quite knowledgeable, and I haven’t caught him in an outright lie. However, I think that both he and Mosher (If you can get him to write a coherent paragraph) often indulge in sophistry to support their idealogy, and I have never known Stokes to admit that he was wrong. So, I wouldn’t characterize him as being ‘open minded’ or objective. But, as David points out, he makes a good foil to sharpen one’s arguments.

      • It will take far more than a major La Nina to make temperatures from NOAA/GISS (and their companions in mal-adjustment) drop back down to reality.

      • I agree, Andy. The downside of the AMO should do the trick though. The recent El Niño released a lot of heat over a long period of time (it never bested the peak of the ’98 event; I don’t care what anyone says they recorded, natural responses like fish movements do not support that position), and I think the coming La Niña will be a long-period event, relatively speaking, without bottoming out as low as previous events. Note this is a belief, not a projection.

    • “big La Nina”?
      Not really, says the BoM:
      “Five of the eight models suggest SSTs will cool to La Niña thresholds by December 2017, but only four maintain these values for long enough to be classified as a La Niña event.”

  4. The answer is simple.

    If Zeke, Gavin and the rest of the phoneys believe their models are right, let them forecast temperatures over the next 20 years.

    We’ll then come back and see how close they were. In the meantime, let’s stop obsessing about CO2 and get back to the things that should really be concerning us.

    (Sorry – I forgot – Zeke, Gavin and co rely on this nonsense to earn a living.)

  5. The real trick is that the models that they are using for these comparisons are not the models that they are showing the politicians. The mid-range of the models in this comparison warm at less than 2 degrees C per century.

    By contrast, a few years ago the CSIRO in Australia presented a model to Prime Minister Julia Gillard with six or seven degrees warming by 2070. She used that to justify introduction of a carbon tax.

  6. Here is what is really going on. See Figs. 1–12 at
    https://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html
    Here is Fig. 4

    “The RSS cooling trend in Fig. 4 and the Hadcrut4gl cooling in Fig. 5 were truncated at 2015.3 and 2014.2, respectively, because it makes no sense to start or end the analysis of a time series in the middle of major ENSO events which create ephemeral deviations from the longer term trends. By the end of August 2016, the strong El Nino temperature anomaly had declined rapidly. The cooling trend is likely to be fully restored by the end of 2019.”
    and Fig 12

    Fig. 12. Comparative Temperature Forecasts to 2100.

    “Fig. 12 compares the IPCC forecast with the Akasofu (31) forecast (red harmonic) and with the simple and most reasonable working hypothesis of this paper (green line) that the “Golden Spike” temperature peak at about 2003 is the most recent peak in the millennial cycle. Akasofu forecasts a further temperature increase to 2100 to be 0.5°C ± 0.2°C, rather than the 4.0°C ± 2.0°C predicted by the IPCC, but this interpretation ignores the Millennial inflexion point at 2004. Fig. 12 shows that the well documented 60-year temperature cycle coincidentally also peaks at about 2003. Looking at the shorter 60 ± year wavelength modulation of the millennial trend, the most straightforward hypothesis is that the cooling trends from 2003 forward will simply be a mirror image of the recent rising trends. This is illustrated by the green curve in Fig. 12, which shows cooling until 2038, slight warming to 2073 and then cooling to the end of the century, by which time almost all of the 20th century warming will have been reversed.”
    Schmidt continues to make the same egregious error of scientific judgement as the majority of academic climate scientists, lukewarmers, the MSM, the GWPF and the ecoleft chattering classes in general, by projecting temperatures forward in a straight line beyond a peak and inversion point in the millennial temperature cycle.
    Climate is controlled by natural cycles. Earth is just past the ~2003 peak of a millennial cycle, and the current cooling trend will likely continue until the next Little Ice Age minimum at about 2650.

  7. Using statistical malfeasance to pretend that their statistically mal-adjusted temperature series are somewhere near their non-validated models.

    It’s a travesty!!

    • All the data confirms a long term temp rise caused by AGW. You are all distracting yourselves with nitpicking when you should be publishing your evidence in the real world like any good scientist does.
      Here, it means nothing.

      • “All the data confirms a long term temp rise caused by AGW”

        WRONG !

        There is basically ZERO data showing any anthropogenic global warming

        A highly beneficial and natural rise out of the coldest period in 10,000 years.

        Local warming from UHI effects, which feeds through into the calculated global temperature.

        Nearly all of the recent (since 1979) REAL warming has come from ocean events and oscillations.

        There is a large amount of “fabricated” temperature rise on top of that.

        Those “adjustments” are the only form of “anthropogenic warming” in the last 40+ years.

        There is ZERO signal of any CO2 based warming in the whole of the satellite era.

      • WTF: Nonsense. The Figs., data and interpretations linked at 2:44 above are a blog version of the following peer-reviewed paper:

        The coming cooling: usefully accurate climate forecasting for policy makers.
        Norman J. Page, Houston, Texas. Email: norpag@att.net
        Energy & Environment 0(0) 1–18. © The Author(s) 2017.
        DOI: 10.1177/0958305X16686488
        journals.sagepub.com/home/eae
        ABSTRACT
        This paper argues that the methods used by the establishment climate science community are not fit for purpose and that a new forecasting paradigm should be adopted. Earth’s climate is the result of resonances and beats between various quasi-cyclic processes of varying wavelengths. It is not possible to forecast the future unless we have a good understanding of where the earth is in time in relation to the current phases of those different interacting natural quasi periodicities. Evidence is presented specifying the timing and amplitude of the natural 60+/- year and, more importantly, 1,000 year periodicities (observed emergent behaviors) that are so obvious in the temperature record. Data related to the solar climate driver is discussed and the solar cycle 22 low in the neutron count (high solar activity) in 1991 is identified as a solar activity millennial peak and correlated with the millennial peak -inversion point – in the UAH temperature trend in about 2003. The cyclic trends are projected forward and predict a probable general temperature decline in the coming decades and centuries. Estimates of the timing and amplitude of the coming cooling are made. If the real climate outcomes follow a trend which approaches the near term forecasts of this working hypothesis, the divergence between the IPCC forecasts and those projected by this paper will be so large by 2021 as to make the current, supposedly actionable, level of confidence in the IPCC forecasts untenable.

      • 1) The only “AGW” the surface data shows is humans adjusting it to show warming that didn’t exist.
        2) The only warming over the last 20 years has been from the strong El Niño, but the humans adjusting the data have made the difference from before look bigger.
        3) Any warming is no different from noise and almost as big as the errors involved.
        4) The AMO and ENSO explain most of the warming during the long-term temp rise.
        5) Taking those into account in 4), there is little room for anything else.

      • WTF,

        here is real evidence that AGW is not supported: the IPCC’s failed per-decade warming trend prediction of 0.3°C, which has been in place since 2007; they said much the same in the 1990 report, which was wrong too:

        “For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected.”

        https://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html

        The satellite data shows the trend is LESS than 50% of the predicted rate, BASED on the AGW conjecture:

        http://www.woodfortrees.org/graph/rss/from:1990/mean:12/plot/rss/from:1990/trend

        Not even close!
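        As an aside on method: the “per decade” numbers being traded here are just least-squares slopes fitted to monthly anomaly series. A minimal sketch of that computation, using synthetic stand-in data (the actual RSS series isn’t reproduced in this thread; the 0.013 °C/yr trend and 0.1 °C noise level below are invented illustration values):

        ```python
        # Sketch: fit a linear trend to a monthly anomaly series and express it
        # per decade. The series here is synthetic -- an assumed 0.013 C/yr trend
        # plus noise -- NOT the real RSS data.
        import numpy as np

        months = np.arange(1990, 2017, 1 / 12)   # decimal years, monthly steps
        rng = np.random.default_rng(0)
        anoms = 0.013 * (months - 1990) + rng.normal(0, 0.1, months.size)

        slope_per_year = np.polyfit(months, anoms, 1)[0]   # least-squares slope
        print(f"trend: {slope_per_year * 10:.2f} C/decade")
        ```

        Fed a real anomaly file instead of the synthetic series, the same two lines of polyfit arithmetic produce the trend figures that sites like woodfortrees plot.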

      • WTF,
        Now why would someone like yourself waste time sharing your thoughts at a place where “it means nothing?”

      • Clyde,
        Because I like seeing the excuses you guys come up with for your glaring failures within the scientific community, hence the existence of sites like this.

        (You are trolling, debate the comments instead) MOD

      • WTF, if you can point me to one paper that identifies the human causal signal in the recent small amount of warming the globe has experienced, I will immediately become a warmist.

      • MOD,
        Pointing out the skeptics’ failure to engage with the scientific community and get any counter-evidence to stick is not trolling, just basic observation.

        (This is your second warning, stop trolling and debate the replies) MOD

      • Creatively preposterous claim, ‘WTF’. The common orthodox warmist demand is for only “peer reviewed” criticism of the orthodoxy, in publications controlled by the guardians of the orthodoxy. You really should climb out from under your metaphoric bridge and realize just how preciously irrational that is.
        WTF, if you want to be labeled as anything other than a cliché-ridden troll, try something really difficult, like defending the temperature records in HADCRUT and GISTEMP from the conclusion that they have been adjusted and infilled to the point of bearing no resemblance to reality.

      • WTF,
        You are long on opinion and short on substantiation. You accuse skeptics of not making any significant contributions here. However, I view my submissions as a form of informal peer review, and the process depends on people like yourself actually submitting comments with logic, or counter-citations that are germane, not just vitriolic opinions.

        Your very first statement, “All the data confirms [sic] a long term temp rise caused by AGW”, is asserted, not supported. It is generally accepted that the Earth has been warming since at least the end of the Little Ice Age. Logically, the second half of your statement cannot be supported: “AGW” is a presumed effect of anthropogenic activities, not the actual process causing the warming. Overlooking your limited ability to even state your beliefs, there is considerable controversy over just what the extent of anthropogenic contributions is.

        So, you really aren’t making any contribution here either for or against your belief system. It would seem that your primary purpose is to be an annoyance to people who you think are not as smart as yourself. That is a decent definition of a “troll.”

  8. The only way these demonstrated-incorrect models could have been induced to create an accurate hindcast is if the parameters were “tuned” to give that result regardless of reality. Using these faked models to claim that they will give an accurate forecast is ludicrous. Dr Schmidt is being disingenuous perhaps? Or maybe climate science is as difficult for him as was mathematics.

    Models are computer programs. They can be written to produce whatever result you want. That doesn’t mean they are correct. Or, if correct, correct for the right reasons.

    • Although alarmists don’t like it, the models are tuned, at least to the extent that quantities beyond the available computing power are parameterized, and the values of those parameters are where the tuning occurs.

      Whether there is more curve fitting going on is an open question, but what researcher would put forward a model that deviates too far from history?

      • With enough knobs, dials and degrees of freedom, any past behavior can be reproduced by any model. The problem is always predicting the future and the more wrong the un-tuned model was to begin with, the further from reality predictions of the model ‘tuned’ to the past will be.
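        The “knobs and dials” point can be made concrete with a toy curve fit; everything below is invented illustration (a made-up 20-point “history”), not a claim about any actual GCM:

        ```python
        # Toy illustration: a 16-parameter fit reproduces a short noisy "history"
        # almost exactly, yet says nothing reliable beyond it. All numbers here
        # are invented for illustration only.
        import numpy as np

        x_past = np.arange(20.0)
        y_past = 0.02 * x_past + 0.1 * np.sin(x_past)   # stand-in "observed" record

        many_knobs = np.polynomial.Polynomial.fit(x_past, y_past, deg=15)
        few_knobs = np.polynomial.Polynomial.fit(x_past, y_past, deg=1)

        # The high-degree fit always hindcasts at least as well in least squares...
        hind_many = np.sum((many_knobs(x_past) - y_past) ** 2)
        hind_few = np.sum((few_knobs(x_past) - y_past) ** 2)
        print(hind_many < hind_few)   # True: more knobs, better hindcast

        # ...but a good hindcast says nothing about skill outside the fitted period.
        print(many_knobs(25.0), few_knobs(25.0))
        ```

        The nested least-squares property guarantees the first result; the extrapolated values in the last line are where the two fits part company.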

  9. “CMIP-5 yielded a range of 0.4° to 1.0° C in 2016, with a P50 of about 0.7° C. CMIP-3 yielded a range of 0.2° to 1.0° C in 2016, with a P50 of about 0.6° C.

    They essentially went from 0.7 +/- 0.3 to 0.6 +/- 0.4.

    Progress shouldn’t consist of expanding the uncertainty…”

    Isn’t that diminishing the uncertainty?
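      For what it’s worth, the midpoint/half-width arithmetic behind the two quoted bands is trivial to check (a minimal sketch; the 0.2–1.0 and 0.4–1.0 °C ranges are approximate readings off the plots, not published values):

      ```python
      # Minimal check of the midpoint / half-width arithmetic for the two
      # quoted 95% bands (approximate values read off the plots).

      def band(low, high):
          """Midpoint and half-width of a low..high interval."""
          return (low + high) / 2, (high - low) / 2

      print("CMIP3: %.1f +/- %.1f" % band(0.2, 1.0))   # 0.6 +/- 0.4
      print("CMIP5: %.1f +/- %.1f" % band(0.4, 1.0))   # 0.7 +/- 0.3
      ```

      Which makes the direction of the change explicit: the older CMIP3 band is the wider one.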

      • CMIP5 is supposed to be the progress. It is Phase 5 of the Coupled Model Intercomparison Project whereas CMIP3 was Phase 3. I think Fabius Maximus has things backwards and Gavin doesn’t know enough to tell the difference.

    • Nick, I made the same point in David’s identical comments on a previous post and he never replied. I thought he was just going to quietly never mention it again. That reaction at least would have been rational.

      • There were 500+ comments in that thread. I didn’t read them all.

        Schmidt’s CMIP3 model was presented as “progress” over CMIP5. CMIP-5 yielded a range of 0.4° to 1.0° C in 2016, with a P50 of about 0.7° C. CMIP-3 yielded a range of 0.2° to 1.0° C in 2016, with a P50 of about 0.6° C.

        They essentially went from 0.7 +/- 0.3 to 0.6 +/- 0.4.

        Progress shouldn’t consist of expanding the uncertainty… unless they are admitting that the uncertainty of the models has increased.

        Larry then asked Dr. Schmidt about this oddity and Schmidt didn’t have a coherent answer.

      • I’m pretty sure I wrote my post in English. If you have a problem with something I posted, quote the exact phrase, in context.

      • David Middleton: I asked about your comment claiming that Dr. Schmidt said CMIP3 was progress over CMIP5. Nothing in what you wrote supports that. So you just plain lied, and continue to lie.

      • David Middleton: You wrote “Schmidt’s CMIP3 model was presented as “progress” over CMIP5.” That was flatly untrue.

      • I was referring to the WUWT post by Larry Kummer in this comment…

        David Middleton September 26, 2017 at 5:27 pm
        There were 500+ comments in that thread. I didn’t read them all.

        Schmidt’s CMIP3 model was presented as “progress” over CMIP5. CMIP-5 yielded a range of 0.4° to 1.0° C in 2016, with a P50 of about 0.7° C. CMIP-3 yielded a range of 0.2° to 1.0° C in 2016, with a P50 of about 0.6° C.

        They essentially went from 0.7 +/- 0.3 to 0.6 +/- 0.4.

        Progress shouldn’t consist of expanding the uncertainty… unless they are admitting that the uncertainty of the models has increased.

        Larry then asked Dr. Schmidt about this oddity and Schmidt didn’t have a coherent answer.

        My comment was in reply to this comment…

        Tom Dayton September 26, 2017 at 5:25 pm
        Nick, I made the same point in David’s identical comments on a previous post and he never replied. I thought he was just going to quietly never mention it again. That reaction at least would have been rational.

        The CMIP3 model was referred to as “A climate science milestone: a successful 10-year forecast!” Hence, progress over previous models.

        I never said that Dr. Schmidt called it progress. Although he is the one who posted it as a more successful model than the CMIP5 model he subsequently posted.

      • No, Larry Kummer did not refer to CMIP3 as progress over CMIP5. He referred to progress in the existence of a successful 10-year projection. Emphasis on 10 years. Obviously that had to be CMIP3 because it was published 10 years ago. It could not be CMIP5 because not enough time has passed. So you lied about Kummer’s statement, and continued by implying Schmidt agreed with that false statement. Never anywhere was there any basis in fact for anyone but you having claimed that CMIP3 was progress over CMIP5. My guess is that you got confused but then, instead of simply admitting that, you have continued to dig yourself deeper. Pathetic.

      • When I’m quoting people, I put these things ” ” around the quote or use the blockquote tags.

        I didn’t say Larry referred to it as progress. I said he presented it as progress (a milestone in climate science).

      • Learn to read…

        The below graph was tweeted yesterday by Gavin Schmidt, Director of NASA’s Goddard Institute of Space Sciences (click to enlarge). (Yesterday Zeke Hausfather at Carbon Brief posted a similar graph.) It shows another step forward in the public policy debate about climate change, in two ways.

        (1) This graph shows a climate model’s demonstration of predictive skill over a short time horizon of roughly ten years. CMIP3 was prepared in 2006-7 for the IPCC’s AR4 report. That’s progress, a milestone — a successful decade-long forecast!

        (2) The graph uses basic statistics, something too rarely seen today in meteorology and climate science. For example, the descriptions of Hurricanes Harvey and Irma were very 19th C, as if modern statistics had not been invented. Compare Schmidt’s graph with Climate Lab Book’s updated version of the signature “spaghetti” graph — Figure 11.25a — from the IPCC’s AR5 Working Group I report (click to enlarge). Edward Tufte (The Visual Display of Quantitative Information) weeps in Heaven every time someone posts a spaghetti graph.

        Note how the graphs differ in the display of the difference between observations and CMIP3 model output during 2005-2010. Schmidt’s graph shows that observations are near the ensemble mean. The updated Figure 11.25a shows observations near the bottom of the range of CMIP5 model outputs (Schmidt also provides his graph using CMIP5 model outputs).

        https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/

        Ed Hawkins’ CMIP5 ensemble has over 10 years in forecast mode. Schmidt’s CMIP3 model was clearly presented as progress over CMIP5 models.

  10. Gavin Schmidt’s reproduction of Hansen’s 1988 paper is a lie. Hansen’s Scenario A was presented at the time as the “business as usual” scenario, with continued exponential growth of CO2 emissions. When asked by Congress, under oath, Hansen said that Scenario A was “business as usual.” In the 1988 paper his only caveat on that scenario was that, since it had exponential input, it “must eventually be on the high side of reality” as we start running into resource shortages – an event that has not yet happened.

    Since 1988, at least through about 2009, CO2 emissions actually grew at the numerical rate of exponential growth assumed in the 1988 study. But after Pat Michaels, Michael Crichton and others started lambasting this prediction, since it was so far off, Hansen and Schmidt took the “high side of reality” quote from the original paper out of context and pretended that Scenario A was only a “worst case” scenario and that Scenario B was the benchmark prediction for “business as usual.” But in the original model run, Scenario B assumed sufficient emissions controls to attain a constant rate of growth in greenhouse gas emissions, a scenario that never occurred except perhaps immediately following the 2007–09 recession.

    In other words, our CO2 emissions have actually grown at the exponential rate predicted in Scenario A since 1988. Hansen testified at the time that Scenario A was the “business as usual” scenario, but 30 years later, when that forecast was wildly off base, Hansen and Schmidt lie and say that Scenario B was the “business as usual” scenario.

    • I used to think that it was. A vs B is kind of subjective. Steve McIntyre demonstrated that B was closest to “business-as-usual.” Nick will probably post a link to McIntyre’s analysis.

    • “A vs B is kind of subjective.”
      The term “business as usual” is subjective. And what Hansen actually said in his paper was: “Scenario B is perhaps the most plausible.”

      But basically none of this matters. The whole point of scenarios is that they cover things outside the science. What decisions people will make in the future. They are used for matters that scientists don’t claim to be able to predict. So there isn’t any use fussing about which they thought most likely. They calculated all of them, and the only thing that matters is, what actually happened?

      I discuss that here. It has extracts from the data, and links to Steve McIntyre’s threads, including some of his plots. Basically we agree that the scenario that happened is B; I would give it a B- (SM was writing in 2008).

      • Nick, you are good at historical curiosities. Did Hansen actually publish a chart showing CO2 levels under each of his scenarios or just a temperature chart?

        That would seem to be a good way to figure out which of his scenario temperature charts should be discarded.

      • @David

        Thanks. Do the narratives indicate which of his temperature scenarios can be safely discarded? It would be helpful not to have to play a continual pea and thimble game flipping between his temperature scenarios. Eliminating two of the thimbles would make the game much more informative.

      • “The term ‘business as usual’ is subjective. And what Hansen actually said in his paper was:
        ‘Scenario B is perhaps the most plausible”
        But basically none of this matters. The whole point of scenarios is that they cover things outside the science.”

        First, you are also taking quotes out of context. His comment about Scenario B immediately followed his comment that Scenario A couldn’t last indefinitely and that Scenario C likely wouldn’t be implemented, i.e. this quote on Scenario B was for the long term – not the first 20-30 years of projections (for which scenario A went out about 100 or so years). And ask yourself why Hansen would have told Congress under oath that Scenario A was the “business as usual scenario” if he really thought that Scenario B was the most likely case? Was he lying to Congress? And in later Congressional testimony when discussing scenario B as being plausible, he was discussing that scenario over much longer time frames than the initial 30 years of the forecast.

        Nor is the task of validating any of these projections outside science. This is NASA’s model. All they have to do is dust it off the shelf today, put in the actual greenhouse emissions and volcanic eruptions that have occurred since 1988, and see just how well the model performs with the emissions path we now know occurred. Hansen didn’t do that when he tried to defend his paper, but instead simply made a useless comparison of actual temperatures against his three scenarios, and only in hindsight said that Scenario B was the target to hit. (Texas sharpshooter, anyone?)

        Here’s a link to a graph that charts annual CO2 emissions globally and by country from 1990 to 2015 and 1960 to 2015, respectively:

        Look at those charts and try to say that greenhouse gas emissions followed the same linear trend that existed from 1960 to 1988 onward out to 2010, which is what Scenario B entailed. Also in Hansen’s paper you will find the numerical value of the growth rate assumed for the exponential increase (a modest 1.5% per year), and if you apply that growth rate to the 1988 emissions you get to at least what our current emissions are.

        What Hansen and Schmidt are doing is a bait and switch. The prediction in 1998 was to estimate temperature as a function of emission scenarios, and the 1988 paper expressed “forcing” as a simple function of atmospheric CO2 concentration (rotely converted to temperature by a theoretical no-feedback estimate of equilibrium temperature at that concentration). Look at the y-axis of FIG. 2 of Hansen ’88 you cite and you’ll see that it’s in temperature, not watts/meter squared, and again only actually reflects GHG concentrations since the temperature values were just a theoretical conversion to equilibrium temperature.

        The real-world emissions followed Scenario A, as apparently did actual CO2 concentrations in the air (since the alarmist line, even today, is that the cumulative CO2 in the atmosphere from industrial activities has doubled since 1988, and a 1.5% growth rate wouldn’t even get past a factor of 1.6). This is a huge problem for Hansen and Schmidt, since it means that the world did indeed match or even exceed Hansen’s “Scenario A” exactly as he described it in his paper. Their response was a pathetically obvious piece of historical revisionism: pretending that the GHG-concentration “forcing” was no longer GHG concentrations but instead W/m^2 (power), then ginning up some theoretical calculations, and . . . “oh, hey, will you look at that? Forcing seems to look a lot like scenario C.”

        This is so transparent that only the terminally gullible would fall for it. In 1988 they splattered a few lines on a wall and told Congress that the sky was falling because the world’s trajectory was along the top, scary line, and said “you have to listen to us.” Congress didn’t listen to them, and the sky never fell. When they got called on their silliness, they went back to their original paper, gave words and graphs new meanings that were never intended, and came up with entirely new post-hoc calculations to try to pretend that the prediction was right all along: that war is peace, freedom is slavery, and ignorance is strength.

        If the goal is to demonstrate your scientific understanding of a system by accurate predictions, then once a prediction is made, you later test whether it’s right or wrong. There’s nothing in between, and you don’t shift the prediction into something it wasn’t. You don’t get to revise history and retroactively readjust the conditions on which the prediction was made. If Hansen in 1988 didn’t understand the Earth’s climate system well enough to get the simple stuff right (how much added heat input results from CO2 emissions of constant growth), it’s a farce to think that he understood the system well enough to get the hard stuff right (what temperature results).

      • Lovely little clipped picture without context, Nick.

        Hansen’s full statement preparatory to your tiny snippet:

        “These scenarios are designed to yield sensitivity experiments for a broad range of future greenhouse forcings. Scenario A, since it is exponential, must eventually be on the high side of reality in view of finite resource constraints and environmental concerns, even though the growth of emissions in scenario A (≈1.5% yr⁻¹) is less than the rate typical of the past century (≈4% yr⁻¹).

        Scenario C is a more drastic curtailment of emissions than has generally been imagined; it represents elimination of chlorofluorocarbon (CFC) emissions by 2000 and reduction of CO₂ and other trace gas emissions to a level such that the annual growth rates are zero (i.e., the sources just balance the sinks) by the year 2000.

        Scenario B is perhaps the most plausible of the three cases.”

        Hansen explains his “model” a tad bit more later in the paragraph.

        “Scenario A reaches a climate forcing equivalent to doubled CO₂ in about 2030,
        scenario B reaches that level in about 2060,
        and scenario C never approaches that level.

        Note that our scenario A goes approximately through the middle of the range of likely climate forcing estimated for the year 2030 by Ramanathan et al. [1985], and scenario B is near the lower limit of their estimated range.”

        WUWT, Climate Audit and Lucia’s Blackboard also “investigated” Hansen’s always changing forecasts.

        Note the difference between the “observations” added to Hansen’s forecasts around 2008 and Gavy’s twitter trick as shown above.

        Or the temperatures applied by Lucia:

        Or Steve McIntyre’s version of Willis’s digitized data with Steve adding in the then current data?

        N.B.
        Steve McIntyre’s “observations” show observed anomalies ranging from 0.0°C in 1960 to 2008’s +0.55°C (GISS)
        or RSS’s +0.16°C in 1979 to +0.45°C in 2008
        (Both eyeball estimates.)

        Where Gavy’s twitter silliness shows temperature observation anomalies in 1976 as -0.45°C while 2008 reaches +0.3°C.

        Yes, the various charts were “re-centered” to align chart basics:
        e.g. “In order to put the three Scenarios apples and apples to the GISS GLB temperature series (basis 1951-1980), I re-centered the three scenarios slightly so that they were also zero over 1958-1967.”

        Hansen had several other views on his forecasts in the document, including:

        “With hot, normal and cold summers defined by 1950-1979 observations as described earlier, the climatological probability of a hot summer could be represented by two faces (say painted red) of a six-faced die.
        Judging from our model, by the 1990s three or four of the six die faces will be red. It seems to us that this is a sufficient “loading” of the dice that it will be noticeable to the man in the street.”

        Three of six red faces represents a maximum “B scenario”; four faces represent a lower “A scenario”.

        “Specifically, in scenario A CO₂ increases as observed by Keeling for the interval 1958-1981 [Keeling et al., 1982] and subsequently with 1.5% yr⁻¹ growth of the annual increment. CCl₃F (F-11) and CCl₂F₂ (F-12) emissions are from reported rates (Chemical Manufacturers Association (CMA), 1982) and assume 3% yr⁻¹ increased emission in the future, with atmospheric lifetimes for the gases of 75 and 150 years, respectively.

        CH₄, based on estimates given by Lacis et al. [1981], increases from 1.4 ppmv in 1958 at a rate of 0.6% yr⁻¹ until 1970, 1% yr⁻¹ in the 1970s and 1.5% yr⁻¹ thereafter.
        N₂O increases according to the semiempirical formula of Weiss [1981], the rate being 0.1% yr⁻¹ in 1958, 0.2% yr⁻¹ in 1980, 0.4% yr⁻¹ in 2000, and 0.9% yr⁻¹ in 2030.

        Potential effects of several other trace gases (such as O₃, stratospheric H₂O, and chlorine and fluorine compounds other than CCl₃F and CCl₂F₂) are approximated by multiplying the CCl₃F and CCl₂F₂ amounts by 2.

        In scenario B the growth of the annual increment of CO₂ is reduced from 1.5% yr⁻¹ today to 1% yr⁻¹ in 1990, 0.5% yr⁻¹ in 2000, and 0 in 2010; thus after 2010 the annual increment in CO₂ is constant, 1.9 ppmv yr⁻¹.”

        The CO₂ increase from 2016 to 2017 was roughly 2.42 ppm, which works out to a 0.604% increase in concentration. A lack of catastrophic warming is not a surprise.
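        The 0.6% figure is just the annual increment divided by the concentration. A quick check in Python, assuming the 402.8 ppm 2016 global annual average quoted elsewhere in this thread as the base (the commenter’s base may differ slightly):

```python
# The ~0.6% figure: observed annual CO2 increment as a fraction of the
# atmospheric concentration.  402.8 ppm is the 2016 global annual
# average quoted elsewhere in this thread (an assumption; the original
# commenter's base may have been slightly different).
increment_ppm = 2.42
conc_ppm = 402.8
print(f"{increment_ppm / conc_ppm:.2%}")   # 0.60%
```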

      • Kurt September 26, 2017 at 7:47 pm
        “The real world emissions followed scenario A, as apparently did actual CO2 concentrations in the air (since the alarmist line, even today, is that the cumulative CO2 in the atmosphere from industrial activities has doubled since 1988, and a 1.5% growth rate wouldn’t even get past 1.6). This is a huge problem for Hansen and Schmidt since it means that the world did indeed match or even exceed Hansen’s “Scenario A” exactly as he described it in his paper.”

        Hansen’s projections for CO2 were as follows for 2016:
        Scenario A: 405.5
        Scenario B: 400.5
        Scenario C: 367.8

        Actual global annual average: 402.8

        So the CO2 projection fell between A and B, not bad agreement after 30 years.
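        For what it’s worth, the 405.5 ppm Scenario A figure can be roughly reproduced from the Appendix B rule quoted elsewhere in this thread (the annual increment growing 1.5% per year after 1981). A sketch; the ~340 ppm 1981 concentration and 1.4 ppm/yr starting increment are assumptions taken from the discussion, not from Hansen’s released file:

```python
# Rough reconstruction of the Scenario A CO2 concentration for 2016,
# using the Appendix B rule quoted in this thread: after 1981 the
# annual increment grows at 1.5% per year.  The starting values are
# assumptions (approx. Keeling's 1981 mean), not Hansen's actual file.
conc = 340.0       # assumed 1981 annual-mean CO2, ppm
increment = 1.4    # assumed 1981 annual increment, ppm/yr
for year in range(1982, 2017):
    increment *= 1.015     # the increment itself compounds at 1.5%/yr
    conc += increment      # concentration accumulates each year's increment
print(round(conc, 1))      # ~405 ppm, close to the 405.5 quoted above
```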

      • David Middleton September 26, 2017 at 4:47 pm “Just temperature charts. The CO2 scenarios were laid out in narratives.”

        In the submission to the congressional record, the scenarios were clearly described as emission scenarios. The plots are of the scenarios.

        As Kurt points out, the actual emissions were in excess of A, mainly because of China.

      • Phil. — that’s a common point of confusion, Hansen was pretty close on CO2 concentrations (although A was too pessimistic about the relationship of emissions to concentrations) but way, way off on methane, which flattened in the 1990s for reasons that are still the subject of some debate.

        Naturally this leads AGW enthusiasts to promote the idea that Hansen’s model was awesome, because no one could have predicted what methane would do (actual argument from SkS). The implications for predictability of temperature trends seems largely to escape them.

      • Steve’s 2008 post is as excellent as his work usually is — he notes that in 1998 Hansen claimed the forcings were more like B. This was probably true, but emissions looked more like A.

        Remember, Congress cannot pass a law to change concentrations, legislation can only affect human emissions, and that policy was the subject of the testimony. So when judging his 1988 predictions, Scenario A is the correct choice.

        If people want to go back and run Hansen’s model with updated assumptions, they should feel free, but that cannot affect his 1988 predictions one way or another.

      • Apart from the strong El Niño events of 1998 and 2015-16, GISTEMP has tracked Scenario C, in which CO2 levels stopped rising in 2000, holding at 368 ppm.

        The utter failure of this model is most apparent on the more climate-relevant 5-yr running mean:

      • “Steve’s 2008 post is as excellent as his work usually is — he notes that in 1998 Hansen claimed the forcings were more like B. This was probably true, but emissions looked more like A.”

        This isn’t accurate. In 1998 Hansen ridiculously alleged that actual forcing approximated Scenario C, even though his data really indicated that actual forcing exceeded Scenario A. His deceit in the paper was in ignoring the conservative assumption, made in all three scenarios of his original forecast, that the growth rate in emissions starting in 1988 would be only 1.5% (the three scenarios differed in what the growth rate would do after that), even though the historical growth rate was more like 4%. The link to his original paper is here; look at p. 9345 and Appendix B to see that all three scenarios assumed an unrealistic, instantaneous cut in the growth rate of GHG forcing to 1.5% starting in 1988:

        https://pubs.giss.nasa.gov/abs/ha02700w.html

        So in 1998, a decade later, in a lame attempt to show that he got his predictions right, Hansen presented a graph indicating that the year-over-year percentage growth rate of the combined forcing of all the GHGs considered in his first paper had started plummeting (only due to methane and some CFCs), but the growth rate in net forcing started dropping from the 4% rate that was much higher than what all three scenarios presumed. Hansen superimposed his three scenarios over this curve without accounting for the discontinuity that should have appeared in each of the three scenarios at 1988 on the x-axis; i.e., Hansen’s graph inappropriately lined up the curves on the vertical axis to pretend that the actual growth rates were following scenario C, when really scenarios A, B, and C all should have dropped way below that curve starting in 1988.

        Here’s a link to the paper – the offending graph is FIG. 5B (look carefully at the legend on the vertical axis):

        https://pubs.giss.nasa.gov/abs/ha01100t.html

        Also, look at FIG. 5A. You’ll note that in his original scenario A, in 2005 the combined forcing of all the gases considered (CO2, methane, CFCs, nitrous oxide, and other trace gases) was forecast to be about 2.2 W/m^2. Here is an IPCC chart indicating the actual relative forcing of these gases in 2005:

        You’ll notice that the sum of the forcing of CO2, methane, N2O, and CFCs is right on track with Hansen’s scenario A shown in FIG. 5A of his 1998 paper, and this doesn’t even count the other trace gases Hansen threw into scenario A. The very notion that GHG forcing followed Hansen’s admittedly unrealistic scenario C beginning in 1988, given all the global growth throughout the 1990s and 2000s, is so absurd that, in order to be suckered by that argument, you have to want to believe it so badly that all rational thought goes out of your mind.

      • Kurt September 27, 2017 at 3:17 pm
        “Hansen’s projections for CO2 were as follows for 2016: Scenario A: 405.5; Scenario B: 400.5; Scenario C: 367.8 – Actual global annual average: 402.8

        So the CO2 projection fell between A and B, not bad agreement after 30 years.”

        “There is no agreement to be had; Hansen’s prediction, even on how much CO2 would be in the atmosphere, was way off.”

        No it was not; as shown above, A was about 3 ppm above actual and B was about 2 ppm below actual.

        “I don’t know where you came up with those numbers – they certainly weren’t in Hansen’s 1988 paper, so I suspect that they were put out post hoc as some sort of deceitful statistical trick.”

        Clearly you don’t know, so you make up your own conspiracy theory. They are from the file of projected concentrations released by Hansen in 1989, the same source McIntyre used when he plotted the data in 2008.

        “But let’s assume for a second that you have accurately correlated each of Hansen’s emissions scenarios with a 1988 prediction of CO2 atmospheric concentrations in 2016 at each of those scenarios.”

        Let’s not, because that’s not what I or McIntyre did!

        “With respect to CO2 emissions, Hansen’s scenario A assumed that CO2 emissions would only grow at a 1.5% rate.”

        No it did not. It assumed that from 1981 onwards (after the end of Keeling’s observations) the ‘annual increment’ in CO2 would grow at 1.5% yr-1; i.e., it is the second derivative, not the first as you have erroneously supposed. Since the annual increment in 1981 was 1.4 ppm, 36 years of 1.5% growth in that increment would mean that it would now be growing at 2.3 ppm yr-1.

        “The world blew through that rate throughout the 1990s and 2000s, and even after the stall in CO2 emissions growth over the last several years, growth since 1998 still averaged around 2% a year – well above Hansen’s 1.5% growth rate for emissions scenario A (33% off), and his scenario B assumed that growth rates would drop to 1.0% in 1990, 0.5% in 2000 and 0% in 2005. Needless to say, scenario B didn’t happen.”

        Wrong, of course, because of your mistake: scenario B assumed that the second derivative would be 0 in 2010 and explicitly stated that the growth rate in CO2 at that time would be a constant 1.9 ppm yr-1. The actual values in 2009, 2010 and 2011 were 1.62, 2.44 and 1.68.

        “So what you’re trying to say is that Hansen should be viewed as some kind of a prophet because the actual CO2 concentrations in 2016 correspond to something between Hansen’s emissions scenario A and emissions scenario B, when in the real world emissions exceeded scenario A by a large margin and made scenario B look ridiculously optimistic.”

        No, I’m saying that the scenarios he chose were a good description of what happened over the next 30 years (CO2 just over B, CH4 ~C, CFCs below C), and that you made an error in your interpretation of Hansen’s paper.
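        Phil.’s “second derivative” reading is easy to check numerically. A rough sketch, using the 1.4 ppm/yr 1981 increment and the Scenario B figures quoted above (these inputs are the thread’s, not independently verified):

```python
# Scenario A under the "growth of the annual increment" reading:
# the ppm-per-year increment itself compounds at 1.5%/yr from an
# assumed 1.4 ppm/yr in 1981 (Phil.'s figure, taken on trust here).
inc_1981 = 1.4
years = 2017 - 1981                    # 36 years
inc_now = inc_1981 * 1.015 ** years
print(round(inc_now, 2))               # ~2.4 ppm/yr, in line with the 2.3 quoted

# Scenario B, on the same reading: increment growth tapers to zero by
# 2010, leaving a constant 1.9 ppm/yr, versus observed increments of
# 1.62, 2.44 and 1.68 ppm in 2009-2011 (figures quoted above).
scenario_b_increment = 1.9
```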

      • Forrest Gardener September 26, 2017 at 4:51 pm
        @David

        Thanks. Do the narratives indicate which of his temperature scenarios can be safely discarded? It would be helpful not to have to play a continual pea and thimble game flipping between his temperature scenarios. Eliminating two of the thimbles would make the game much more informative.

        Basically A is the outlier, CO2 concentration has been slightly over B, N2O close to B, CH4 about C and the Freons below C, so the overall effect on temperature is below B.

      • Phil:

        I think you’re missing my point. Hansen’s 1988 paper made predictions of what temperatures would be under three different scenarios – A, B, and C – where those scenarios were explicitly described as alternative paths of “emissions.” Hansen stated, verbatim, that “Scenario A assumes that growth rates of trace gas emissions . . . will continue indefinitely.” Note that he didn’t say “concentrations.” He further states:

        “Scenario A, since it is exponential, must eventually be on the high side of reality in view of finite resource constraints and environmental concerns, even though the growth of emissions in Scenario A (~1.5% per year) is less than the rate typical of the past century (~4% per year). Scenario C is a more drastic curtailment of emissions than has generally been imagined; it represents elimination of chlorofluorocarbon (CFC) emissions by 2000 and a reduction of CO2 and other trace gas emissions to a level such that the annual growth rates are zero (i.e. the sources just balance the sinks) by the year 2000. Scenario B is perhaps the most plausible of the three cases.”

        This latter quote is about as clear as it can be – Hansen is using his model to make a prediction of what temperatures would result from three different hypothesized courses of action, each specifying how much greenhouse gas we choose to pump into the atmosphere. Thus, when Hansen says that “Scenario B is perhaps the most plausible,” he means that Scenario B best represents the emissions reductions the world might be willing to implement. Hansen then used this paper and his model to go on a very public crusade to advocate for emissions reductions based on the scary results of Scenario A, which he indicated were his predicted consequences of simply doing nothing.

        Eliminating all doubt on this subject, Hansen brought this paper up in a Congressional hearing specifically investigating the issue of what kind of emissions reductions would be desired in view of the global warming hypothesis, and Hansen characterized scenario A as “business as usual” which plainly expresses the assertion that the temperatures shown in scenario A are what Hansen is predicting will occur if we keep increasing our emissions, even at the conservative exponential rate assumed. After all, the climate system doesn’t have a “business.” People do.

        When, decades later, after the real-world results came in and he got called on that failed forecast, Hansen felt the need to dissemble. He not only took his “high side of reality” quote entirely out of context but also falsely said that Scenario A was presented as a “worst case” scenario and that it assumed “rapid” exponential growth of GHGs, even though his actual paper specifically noted that the 1.5% growth rate was much lower than the recent historical rate of growth in emissions. If, as you say, Hansen’s predictions were right on target, why did Hansen have to lie like this?

        Hansen assumed, without any data or reasoning to back it up, that there was a 1-to-1 relationship between emissions and the measured increase in GHG concentrations. In other words, his prediction was premised on an understanding of the climate system in which that system would not react to remove any portion of the emissions we sent up into the air; hence, to Hansen in 1988, emissions and forcing were for practical purposes the same thing. Accordingly, he assumed that past incremental increases of CO2 measured in the atmosphere were a measurement of past annual GHG emissions. In some cases, however, he used industrial data of emissions (see Appendix B as it pertains to CFCs), and assumed those emissions would continue, to construct his emissions scenarios.

        But I don’t care about what kinds of assumptions went into how Hansen got from point 1 (emission scenarios) to point 2 (temperature response to emissions scenarios), or even whether his numerical model differentiated emissions from concentrations. I’m judging Hansen’s predictions the same way my engineering professors judged my answers to the problems they gave me on an exam with specified starting conditions. If my answer was wrong, they weren’t interested in any after-the-fact excuses as to what information I didn’t know, what parts of my analysis were sound, or sophistries about what my numbers really represented, after I had already learned the right answer and how to approach the problem.

        Hansen’s predicted relationship between emissions and temperature was wrong, and it was wrong because Hansen didn’t understand the climate system well enough to make that prediction. Having failed, his recourse was to start over with a new prediction about the future and see if it is right, not to retroactively adjust the old prediction by taking only those subparts of his prediction that in hindsight turned out to have not been invalidated (perhaps by sheer coincidence; since his model has a feedback factor of over 3, as long as the forcing is low, any variations between the three projected temperature curves are going to be impossible to distinguish over background variation for a really long time).

        I know full well what an exponential rate of growth is. It’s just that I’m observing that our actual emissions into the air exceeded Hansen’s 1.5% growth rate, because that is the relevant metric to his actual prediction. You want to be able to remove Hansen’s assumptions about the relationship between emissions growth and trace gas concentration growth in the atmosphere from his prediction, and pretend that in 1988 he was just forecasting bland scenarios on different, arbitrarily selected atmospheric GHG concentrations, as if he had picked them by throwing darts at a board.

        That increases in GHG concentrations didn’t follow in lockstep with emissions took Hansen by surprise.
        We know this because in 1998, when temperatures weren’t rising fast enough over the previous decade, and his preferred policy changes were accordingly not being adopted, he put out a new paper with new data assembled from “personal communications” to try to explain why temperatures weren’t rising: https://pubs.giss.nasa.gov/abs/ha01100t.html. Only after temperatures didn’t follow scenario A did Hansen see the need to investigate what portion of emissions remains in the atmosphere, and after finding that there really wasn’t much of a relationship, he decided to retroactively pretend that the scenarios of his 1988 paper should be judged against W/m^2 forcing calculated from atmospheric GHG concentrations, and not against emissions.

        And even in that paper he couldn’t bring himself to actually show the graph of changes in emissions against changes in GHG concentrations, instead hiding that information by converting it to a useless (and vaguely described) chart of the “fraction” of CO2 in the atmosphere relative to emissions, smoothing the curve, and serenely remarking that the smoothed fraction “remained at about 0.6” even though the annual variations ranged between 0.3 and 0.98, the smoothed curve ranged between 0.4 and 0.7 and the 30-year trend was downwards.

      • Kurt,
        “Hansen stated, verbatim, that “Scenario A assumes that growth rates of trace gas emissions . . . will continue indefinitely.” Note that he didn’t say “concentrations.””
        He did say concentrations. He spelt it out in the quote I gave in Appendix B, where he gave the basis for the arithmetic for scenario A. Here it is for C:

        Note the unit for emissions, ppmv/year. With Scen A he gave his source, Keeling’s CO2 concentration measurements. What you won’t find in the paper is a single figure quoted in Gtons C/year. And there is no reference to any such data source. He didn’t have one, and he made no use of any such data. Hansen measures emissions, as for CH4, by concentration growth.

      • That still misses the point. I understand full well that with CO2 and methane (but not CFCs) Hansen is using measurements to determine what future emissions to assume as his scenarios. And I also understand that Hansen is likely taking those assumed emission scenarios (not all of which are based on measurements) and using them directly to increment future GHG concentrations, and having the model use his k-means clustering method on those concentrations to produce his delta T (called forcing) and then applying feedback to get his final temperature graph.

        But the prediction being offered was of what would happen under various “emission” scenarios. That 1.5% figure was arbitrary. Given that past emissions rates were disclosed to be about 4%, Hansen could have picked that value for post-1981 emissions. He could have picked 2%. All of those would have been based on measured CO2 atmospheric concentrations. But whatever value he picked was stated to represent an “emissions” scenario, because the point of the paper was to present choices to politicians showing what consequences were associated with what policies they might adopt.

      • Kurt,
        “That 1.5% figure was arbitrary.”
        He gives his basis. It was the typical value for the ’70s and ’80s, so the first thing to check is what happens if that continues. That is his scenario A. Remember, he’s talking about the rate of increase of the increments. The increase in the 70’s was 12.8 ppmv, and in the 80’s 15.6 ppmv. That’s about 2% per year compounded, but declining, which he rounded down to 1.5% going forward.

        The test (afterwards) of a range of scenarios is whether they cover the range, and here they clearly did.
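        The compounded rate implied by those two decade increments can be checked directly; this is just arithmetic on the figures quoted above, and it comes out nearer 2% than 1.5% (consistent with the observation elsewhere in the thread that the rate was about 2% and declining):

```python
# Compounded annual growth rate implied by decadal CO2 increments of
# 12.8 ppmv (1970s) and 15.6 ppmv (1980s), the figures quoted above.
rate = (15.6 / 12.8) ** (1 / 10) - 1
print(f"{rate:.1%}")                   # 2.0% per year
```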

      • Kurt: “‘Steve’s 2008 post is as excellent as his work usually is — he notes that in 1998 Hansen claimed the forcings were more like B. This was probably true, but emissions looked more like A.’

        This isn’t accurate.”

        You seem to be arguing against the forcings in C. I’m saying the forcings don’t matter: Hansen predicted relationships between forcing and emissions, where emissions were the independent variable that Congress could affect.

        It’s really important to understand this — if concentrations aren’t a function of emissions, there are no policy implications for Hansen’s work. It was absolutely critical to establish that link, and that was the main purpose of his testimony.

    • “Hansen’s projections for CO2 were as follows for 2016: Scenario A: 405.5; Scenario B: 400.5; Scenario C: 367.8 – Actual global annual average: 402.8

      So the CO2 projection fell between A and B, not bad agreement after 30 years.”

      There is no agreement to be had; Hansen’s prediction, even on how much CO2 would be in the atmosphere, was way off. I don’t know where you came up with those numbers – they certainly weren’t in Hansen’s 1988 paper, so I suspect that they were put out post hoc as some sort of deceitful statistical trick. But let’s assume for a second that you have accurately correlated each of Hansen’s emissions scenarios with a 1988 prediction of CO2 atmospheric concentrations in 2016 at each of those scenarios.

      With respect to CO2 emissions, Hansen’s scenario A assumed that CO2 emissions would only grow at a 1.5% rate. The world blew through that rate throughout the 1990s and 2000s, and even after the stall in CO2 emissions growth over the last several years, growth since 1998 still averaged around 2% a year – well above Hansen’s 1.5% growth rate for emissions scenario A (33% off) – and his scenario B assumed that growth rates would drop to 1.0% in 1990, 0.5% in 2000 and 0% in 2005. Needless to say, scenario B didn’t happen.

      So what you’re trying to say is that Hansen should be viewed as some kind of a prophet because the actual CO2 concentrations in 2016 correspond to something between Hansen’s emission scenario A and emissions scenario B, when in the real world emissions exceeded scenario A by a large margin and made scenario B look ridiculously optimistic.

      • Yes. I have no idea where Nick and Phil get this: “Remember, he’s talking about rate of increase of the increments.” Hansen clearly states he is talking about actual emissions at a 1.5 percent annual increase. (The clear goal being to get Congress to reduce emissions.)

        The charts show annual emissions of 22.5 gigatonnes in 1990, and a 1.5 percent annual increase (scenario A) compounded annually would give 32.65 in 2015. Actual results were 36 gigatonnes, well above the scenario.

      • By the way, the actual annual increase in emissions was 1.9 percent compounded annually. This is well above Hansen’s scenario A.

        The ppm content of CO2 in the atmosphere is not annual emissions, any more than the land-movement guess of an ocean-floor-sinking adjustment means sea level is rising more than stable tide gauges show.

        Hansen said “emissions,” clearly referring to anthropogenic GHG emissions. He did not say “the annual increase in atmospheric concentration of CO2 will increase by 1.5 percent each year.” In all the scenarios Hansen referred to what humans must do to reduce annual emissions.
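        Both of David A’s figures check out arithmetically. A quick sketch, using the 22.5 Gt (1990) and 36 Gt (2015) endpoints he cites from the charts:

```python
# Check of the emissions arithmetic above: 1990 emissions compounded
# at Scenario A's 1.5%/yr, versus the actual rate implied by the 1990
# and 2015 endpoints cited (figures taken from the thread's charts).
e1990, e2015, years = 22.5, 36.0, 25   # Gt CO2 per year
projected = e1990 * 1.015 ** years
actual_rate = (e2015 / e1990) ** (1 / years) - 1
print(round(projected, 2))             # 32.65 Gt, matching the figure above
print(f"{actual_rate:.1%}")            # 1.9% per year, matching the figure above
```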

      • David A
        “When clearly Hansen states he is talking about actual emissions at 1.5 percent increase annually.”
        No, he’s quite clear, and it isn’t that. As I said, nowhere in the paper does he quote an emission in Gton/year. He didn’t have that data. For Scenario A he spelt out the arithmetic thus

        He is using only Keeling’s CO2 concentration data, and the scenario is a 1.5% increase in the annual increment (in ppmv). He sets out numbers in Fig. B2: in the 1970s the increment was 12.8 ppm and in the 1980s 15.6 ppm for the decade (he must have extrapolated a bit). That is actually an average growth in the increment of about 2% per year, but it is declining, so he decided to project it forward at 1.5%. That is, if the increment was 1 ppmv/yr at the start, the next year the increment would be 1.015 ppmv/yr, and so forth.

  11. What has been demonstrated is that the GISS land-ocean index is completely fabricated, and that its sitting 0.4°C higher than the 1997/98 El Niño is only due to human adjustments. Without those adjustments the observed levels reach Scenario C, rather than being adjusted up to match Scenario B. Dare them to show how the models compare with the next strong La Niña? Of course they won’t; this is their cherry-pick moment. Without adjustments the recent strong El Niño of 2015/16 would look like the 1997/98 one and would match the satellite data. The warming from 2014 in the index looks false, a bigger change than all the previous ones.

    http://www.woodfortrees.org/plot/gistemp/from:2014/plot/hadcrut4gl/from:2014/plot/rss/from:2014/plot/uah6/from:2014

    The surface temperature series behave nothing like they did shortly after those events, compared with what they show now after being tortured. The rise and fall across the two El Niños look nothing like each other in the surface temperatures. It is clear that we can’t trust these datasets. The satellites, on the other hand, retain the warming and cooling behaviour, and it looks natural.

    http://www.woodfortrees.org/plot/gistemp/from:1996/to:2000/plot/hadcrut4gl/from:1996/to:2000/plot/rss/from:1996/to:2000/plot/uah6/from:1996/to:2000

  12. The problem with these analyses is that they include 2014 data and later. Don’t you know that all data used on this site must end with 2012 so that recent warming can’t be compared with the models and we can pretend the “pause” at 0.2 above the 30 year average continues. (/sarcasm)

    • Hey Slip. I’d be happy if the data fiddlers just stopped torturing the historical data.

      Include what data you want, but just don’t call a set of adjusted figures data because it isn’t. Oh, and don’t call the output of computer models data either because it isn’t.

  13. Apart from the very good analysis in the article, two points which interest me do not seem to have been addressed.

    Do the much-vaunted climate models use as input the historical fact of the Pinatubo eruption, or not? What other historical facts do the models use or exclude?

    Why does the grey area on the top graph not have a narrow waist at the time the models were run? How does the model output not reflect the known temperature at the time the models were run?

    • Funny that the alarmists call them that because they are rich (nonsense). Yet the rich and famous are actually the ones supporting the alarmist agenda. The alarmist can’t get anything right, even when it is not science.

  14. Here is the data without the trickery.

    These are all of the important climate model forecasts, starting at the time each "forecast period" began, versus the observations of Hadcrut4 and UAH.

    These are all done on the baseline of 1961-1990 (and I have been producing this chart for many years now and know the numbers). There is no fancy playing around here.

    Now that the impact of the Super El Niño is wearing off, Hadcrut4 (July 2017) and UAH (August 2017) are basically at the same point on the 1961-1990 baseline, at +0.65 °C, while all the climate models are higher, ranging from +0.8 °C to +1.1 °C.

    Anyone can use this chart without attribution.

      • In every instance, there is a rule from the IPCC or a statement in the paper/report saying when historical observations end and the forecast period begins.

        When you tune your models to historical observations that is clearly subject to bias so I am not going to accept it.

        The volcanoes, for example, were originally predicted to have twice the impact that eventually appeared and the modellers just went back and changed their numbers and tried to pretend “look, the models worked”. Hansen should be famous for that.

        The lines above start when each prediction period began, and it is interesting how they mostly start higher than the current observational data say temperatures were at the time. The simple fact is that 2001, 2003 and 2008 are now "colder" than they were when the modellers were running their models.

    • And in 1880 and 1940 the temperature anomaly was said to have been +0.4degC, meaning that today, it is only about 0.25 degC warmer than it was in 1880, and if we get a La Nina late 2017/early 2018, by April/May 2018, we may well see the temperature anomaly at around 0.1 degC warmer than 1880. Within error margins, we will be unable to say whether in that scenario May 2018 is any warmer than the temperatures observed in 1880!!

    • Great chart. Many thanks. But for comparing forecasts with reality, a forecast including the millennial peak and inversion point at about 2004 should be included. See above:
      https://wattsupwiththat.com/2017/09/26/gavins-twitter-trick/comment-page-1/#comment-2621239

      “Schmidt continues to make the same egregious error of scientific judgement as the majority of academic climate scientists, lukewarmers, the MSM, the GWPF and the ecoleft chattering classes in general, by projecting temperatures forward in a straight line beyond a peak and inversion point in the millennial temperature cycle.
      Climate is controlled by natural cycles. Earth is just past the 2003 +/- peak of a millennial cycle and the current cooling trend will likely continue until the next Little Ice Age minimum at about 2650.”
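The common-baseline step underlying the chart in comment 14 is mechanical: shift each anomaly series so its 1961-1990 mean is zero, after which different datasets can be overlaid directly. A minimal Python sketch, using made-up illustrative numbers rather than actual Hadcrut4, UAH or model output:

```python
def rebaseline(years, anoms, ref_start=1961, ref_end=1990):
    """Shift an anomaly series so its mean over the reference
    period (default 1961-1990) is zero."""
    ref = [a for y, a in zip(years, anoms) if ref_start <= y <= ref_end]
    if not ref:
        raise ValueError("series does not cover the reference period")
    offset = sum(ref) / len(ref)
    return [a - offset for a in anoms]

# Illustrative series expressed against some other (arbitrary) zero point
years = list(range(1961, 2018))
anoms = [0.3 + 0.012 * (y - 1961) for y in years]  # invented values
shifted = rebaseline(years, anoms)
# After the shift, the 1961-1990 mean of `shifted` is zero, so any
# two series treated this way are directly comparable.
```

Done consistently to both models and observations, this removes the choice-of-baseline degree of freedom that several comments above argue about.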

  15. Typo: change “then” to “when” in:
    “This happens irrespective of then the models are initialized. “

  16. CMIP3 was released in 2010. It is supposed to be showing 10 yrs of quality projections through 2016. Gavin’s chart says it started forecasting in 2000. Really?

      • Exactly my point. Model developed in 2005/06 and shown for the purpose of comparing 2006-2016 real world results to forecast…2000-2006 should be hindcast as well. If the model starts forecasting in 2000 then it is a 16 yr comparison not 10.

      • Yep… And it’s simply bizarre that Schmidt would trot out a CMIP3 "storyline" model to support his errant claim that the models don’t exaggerate the warming.

      • The design document for the CMIP3 forecasts is here. The basic sequence is #3 20C3M, a 20th C run. The design explicitly says data runs to end 2000. The forecast sequences basically say run 20C3M, then forecast.

      • Nick — it doesn’t matter when the data ends, it only matters when the forecast was published.

        This is another example of where climate modelling veers into pseudoscience — the data may end five years earlier, but any known data at the time of publication could have been used to tune the model. It’s a hindcast.

        Consider a model that successfully models a complex trend from 1900 to 2000, using data that ends in 1950. Is this more impressive if the model is published in 1950 or 2000? The difference is pretty important.

  17. Mr. Middleton, CMIP5 was a later release than CMIP3 and is supposed to be the improvement. You have it backwards. Fabius Maximus seems to be confused on that, and Gavin’s inability to notice is pretty clueless.

  18. I wish they would show temperature charts the way that a standard alcohol thermometer shows degrees. They would all be essentially flat lines and more accurately express the size of the problem.

    • I think you have succinctly stated why they do not use standard thermometer displays. The last thing alarmists, and those whose salaries depend on that alarm, want to do is to display flat lines that accurately express that there is no problem.

  19. Amid the battle of the Graphs, if you want to know what the weather is, stick your head out of the window. If you want to know what it will be tomorrow, consult the ants.

  20. Mods

    I have tried a couple of times to post a response to NS at September 26, 2017 at 3:53 pm

    it has simply disappeared. Please look out for it, and please post the 2nd versions since I corrected a small formatting issue.

    Many thanks

    (It has been found and approved) MOD

  21. Yes… I am assuming that HadCRUT4 is reasonably accurate and not totally a product of the Adjustocene.

    No, it is not reasonably accurate, just a less awful dataset; they are making it up as they go along. The last time it was reasonably accurate was pre-1990s, around the beginning of the global warming scare.

    Adding over 400 stations just in the NH and comparing with previous years that lacked them is not a like-for-like comparison. On its own this changes about 6% of the data from the point it was introduced. It is worse than that, because the stations were added to try to show more warming in the Arctic, so they affect larger grid areas.

    One example is the grid data for December 2010 covering the UK. It was the coldest December in the UK since records began, and the 2nd coldest in the CET since the 17th century (missing by only 0.1 °C). Sea ice was forming around the island, with fairly thick ice in harbours. SSTs around the UK cooled sharply from mid-November to the end of December.

    The grid for December 2010, as shown in 2011, had the UK warmer than average. What chance does a month have of getting below average when the coldest on record doesn't even reach the average? After complaints showing the grids were worthless, since they don't generally reflect regional temperatures, they were later changed, this time showing up to around 0.7 °C cooler than average. Land temperatures over the UK were widely below average by 4 °C for the month, with some areas even colder.

  22. Was “WomannBearPig” really fighting with Gavin on Twitter? That’s hilarious all by itself.

    Model verifications are best done with straight numbers, not deceptive graphs. The arithmetic is not hard.

    I’ll use HadCrut for verification. Alarmists want GISS and skeptics want UAH so this makes neither happy.

    Linear fit was used to determine trends.

    1988-2017
    HadCrut +0.50
    Hansen +0.96
    Bias 1.92 to one

    CMIP5 2005-2017
    HadCrut +0.20
    CMIP5 +0.31
    Bias 1.55 to one

    Disclaimer… There were no corrections done for “misspecification” of post-2000 volcanoes, wildebeasts, ManBearPigs, or carbon offsets.

    • Thank you Bill. Regarding your graphic, please consider that the HadCrut4 surface observations, per IPCC GHG physics, should be adjusted down 20 percent below UAH lower-troposphere levels to show the portion of surface warming that can be attributed to GHGs, according to the IPCC.
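For anyone wanting to reproduce Bill's arithmetic above, the trends are ordinary least-squares slopes over each period and the "bias" is the ratio of the model trend to the observed trend. A Python sketch; the synthetic slopes below are chosen only to mimic the CMIP5-era figures quoted above (+0.20 vs +0.31 over 2005-2017), not computed from the real datasets:

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

years = list(range(2005, 2018))
obs   = [0.017 * (y - 2005) for y in years]  # ~ +0.20 total rise (invented)
model = [0.026 * (y - 2005) for y in years]  # ~ +0.31 total rise (invented)

bias_ratio = ols_slope(years, model) / ols_slope(years, obs)
# With these made-up slopes the ratio comes out near 1.5, i.e. the
# model series warms about 1.5 times as fast as the observed one.
```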

  23. It was suggested once before here that the best way to analyse the worth of the models is to look at 10-year trends of individual runs. You have a grey area of spaghetti, but do any runs show a 15-year trend between El Niño events that is a tenth of the 3°C/century warming, or less?
    http://www.woodfortrees.org/graph/uah6/from:1988/to:1998/mean:12/plot/uah6/from:1999/to:2014/mean:12/plot/uah6/from:1999/to:2014/trend
    HadCrut4 is not so damning but also far from what you would call scientific data.
    http://www.woodfortrees.org/graph/hadcrut4gl/from:1988/to:1998/mean:12/plot/hadcrut4gl/from:1999/to:2014/mean:12/plot/hadcrut4gl/from:2001/to:2014/trend

    • The reason for running ensembles of multiple models is to produce a probability distribution. Due to the stochastic nature of weather and climate variabilities, individual model runs are about as useful as mammary glands on a bull.

      • They might be, but so is a comparison using one starting point and one finishing point in the spaghetti.
        What's the probability of this variation neutralising the CO2 warming effect for 10-15 years?

      • Our forecast modelling group discovered a long time ago that ensembles don’t actually give you a probability distribution. It looks and feels like a probability distribution but it’s not.

        To get an actual probability distribution, we do statistics on the historical ensemble results.

        If 20 ensemble members are run and two of them bring snow, the actual probability may be quite a bit higher or lower than 10%. Many weather forecasters and climate modelers erroneously assume 10%.

        In climate, there aren't enough case studies to statistically improve the ensemble results.

      • Yes David. Yet only a disingenuous piece of something would take the mean of some 70 models that all consistently run warm, and then fund hundreds of disaster studies based on that well-above-observations mean.
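Mary's calibration point above can be made concrete: rather than reading the raw ensemble fraction as a probability, you tabulate, over a history of past forecasts, how often each forecast fraction actually verified. A hedged sketch with invented toy data (no real forecast system is represented here):

```python
from collections import defaultdict

def calibrate(forecast_fracs, outcomes, n_bins=10):
    """Bin a history of (raw ensemble fraction, observed 0/1 outcome)
    pairs and return the observed relative frequency per bin."""
    hits, counts = defaultdict(int), defaultdict(int)
    for f, o in zip(forecast_fracs, outcomes):
        b = min(int(f * n_bins), n_bins - 1)
        counts[b] += 1
        hits[b] += 1 if o else 0
    return {b: hits[b] / counts[b] for b in counts}

# Toy history: forecasts where 10% of members brought snow actually
# verified 25% of the time -- the raw fraction was miscalibrated.
fracs    = [0.1] * 20 + [0.5] * 20
outcomes = [1] * 5 + [0] * 15 + [1] * 10 + [0] * 10
table = calibrate(fracs, outcomes)
# table[1] is 0.25 for the "10%" forecasts; table[5] is 0.5.
```

In climate forecasting there are not enough independent cases to build such a table, which is the point being made above.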

  24. Have emissions really been in line with Hansen’s 1988 Scenario B?

    I’ve been under the impression that emissions from China meant globally we were exceeding even Scenario A.

    • “Scenario B” is one of the great deceptions of the Hansen apologists. Make no mistake… scenario A is what happened. Actually Scenario A+ since emissions have been far greater than imagined

      This “Scenario B” nonsense comes
      from the fact that there is far less CO2 in the atmosphere now than Hansen thought given the emissions. The atmosphere is far better at mitigating the gas and returning it to earth than earlier assumed. This is another source of modelling error which doesn’t happen on backfit data.

      So they fudge on what the forecast was and change the obs and put results on a deceptive graph and… SHAZAAM… the forecast is only half terrible instead of totally terrible

      • Mary Brown September 27, 2017 at 5:50 am
        “Scenario B” is one of the great deceptions of the Hansen apologists. Make no mistake… scenario A is what happened. Actually Scenario A+ since emissions have been far greater than imagined

        This “Scenario B” nonsense comes
        from the fact that there is far less CO2 in the atmosphere now than Hansen thought given the emissions. The atmosphere is far better at mitigating the gas and returning it to earth than earlier assumed.

        Sorry Mary but you’re completely wrong.
        Hansen’s projections for CO2 were as follows for 2016:
        Scenario A: 405.5
        Scenario B: 400.5
        Scenario C: 367.8

        Actual global annual average: 402.8

        So the CO2 projection fell between A and B.
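The structural difference between the three scenarios (spelled out in the passages of the paper quoted further down this thread) lies in how the annual concentration increments evolve: growing about 1.5% per year (A), held constant (B), or ceasing after 2000 (C). A rough Python sketch; the ~351 ppm start and 1.5 ppm/yr increment are round illustrative numbers, not Hansen's actual Appendix B parameters, so the 2016 endpoints only loosely approximate the 405.5 / 400.5 / 367.8 ppm figures above:

```python
def scenario_path(c0, inc0, growth, start, end, freeze_after=None):
    """Build a concentration path from annual increments. The increment
    grows by `growth` per year; if `freeze_after` is set, increments
    stop after that year (scenario-C-style stabilization)."""
    c, inc = c0, inc0
    path = {start: c}
    for year in range(start + 1, end + 1):
        if freeze_after is None or year <= freeze_after:
            c += inc
            inc *= 1 + growth
        path[year] = c
    return path

# Illustrative parameters only (NOT Hansen's Appendix B values):
a = scenario_path(351.0, 1.5, 0.015, 1988, 2016)                   # A: increments grow 1.5%/yr
b = scenario_path(351.0, 1.5, 0.0, 1988, 2016)                     # B: constant increments
c = scenario_path(351.0, 1.5, 0.0, 1988, 2016, freeze_after=2000)  # C: no growth after 2000

# The structural point: A pulls away exponentially, B rises linearly,
# and C flattens after 2000, which is why the temperature curves diverge.
```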

      • Phill, you completely missed my point.

        You don’t get to change the forecast scenario based on the CO2 verification. That’s part of the forecast.

        Business as usual means business as usual… which is what happened…and then some. If you gave Hansen the exact future emissions, he would have assumed CO2 much higher than 405 and forecast even hotter. But the atmospheric CO2 has been much less than anticipated given the emissions. That is good news for climate change.

        And how on earth does 5ppm make such a huge difference in temp fcst from A to B? By 2019, the difference is ~0.5 K

        Imagine if the same emissions 1988-2017 only produced CO2 of 367ppm today due to totally unexpected reasons. Would we then say that Scenario C verified? That’s your argument.

        If that happened, we would say that record emissions occurred but the atmospheric CO2 verified at dramatically reduced emission levels.

        Hansen got lucky… the atmosphere shed much more CO2 than he assumed.

        You can argue this stuff forever but the bottom line is that a 3rd grade kid with a ruler in 1988 could draw a trend line on the temp data and do as well as the climate models. There are not enough forecast cases to determine any skill level in the models. The 3rd grader’s trend line would have been more accurate but that doesn’t mean he had any skill either. Skill must be proven which requires many cases.

      • Yet the temperature doesn’t reflect what Hansen proposed for that level of CO2 in the atmosphere, Phil. Go figure.

      • talldave2 September 27, 2017 at 7:17 am
        Phil. — wrong, Hansen’s testimony is very clear, he was making a prediction based on emissions, not concentrations. The fact he got the relationship between emission and concentrations completely wrong doesn’t make his model more accurate.

        This was what I was replying to:
        This “Scenario B” nonsense comes from the fact that there is far less CO2 in the atmosphere now than Hansen thought given the emissions.

        As I showed there is not “far less CO2 in the atmosphere”, it in fact lies between A and B. Hansen projects concentrations based on the observed history and calculates temperature based on the concentrations.

      • Phil said…
        “As I showed there is not “far less CO2 in the atmosphere”, it in fact lies between A and B. Hansen projects concentrations based on the observed history and calculates temperature based on the concentrations.”

        Valid points… but I said “far less CO2 in the atmosphere GIVEN the emissions”.

        Emissions 1988-2017 were MUCH higher than anyone anticipated. Hansen got the CO2 levels roughly correct by being wrong on emissions (higher) and wrong on atmospheric CO2 mitigation (less CO2 remains).

        Two wrongs sometimes make a right. Two Wrights sometimes make an airplane.

        So, in this case, two wrongs almost made his CO2 assumptions right. But those are serious forecast errors, the kind that will rear their ugly head and ruin future forecasts.

        Back to my earlier point… forecasts can be right or wrong for a lot of reasons. In weather forecasting, we know almost exactly what the skill and errors bars are because we have millions of cases. In climate forecasting, we have very few cases and no clue what the skill level or errors bars will be. Betting our future on such unproven techniques is very poor policy.

      • Phil — yes, Mary’s claim would be slightly more accurate if she said “there is far less GHG in the atmosphere than predicted” or “there is somewhat less CO2 than predicted” (CO2 emissions exceeded A but concentrations are lower than predicted) because the main source of error was methane.

        But either way, Hansen’s independent variable is clearly emissions — concentrations are a function of emissions (not just “based on the observed history”), and temperatures a function of concentrations, so emissions -> concentrations -> temps.

      • Mary Brown September 27, 2017 at 7:26 am
        Phill, you completely missed my point.

        You don’t get to change the forecast scenario based on the CO2 verification. That’s part of the forecast.

        I suggest you actually read the paper, because it’s clear that you don’t understand what was done!
        “We define three trace gas scenarios to provide an indication of how the predicted climate trend depends upon trace gas growth rates. Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth averages about 1.5% of current emissions, so the net greenhouse forcing increases exponentially. Scenario B has decreasing trace gas growth rates, such that the annual increase of the greenhouse forcing remains approximately constant at the present level. Scenario C drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000. The range of climate forcings covered by the three scenarios is further increased by the fact that scenario A includes the effect of several hypothetical or crudely estimated trace gas trends (ozone, stratospheric water vapor, and minor chlorine and fluorine compounds) which are not included in scenarios B and C.
        These scenarios are designed to yield sensitivity experiments for a broad range of future greenhouse forcings. Scenario A, since it is exponential, must eventually be on the high side of reality in view of finite resource constraints and environmental concerns, even though the growth of emissions in scenario A (~1.5% yr-1) is less than the rate typical of the past century. Scenario C is a more drastic curtailment of emissions than has generally been imagined; it represents elimination of chlorofluorocarbon (CFC) emissions by 2000 and reduction of CO2 and other trace gas emissions to a level such that the annual growth rates are zero (i.e., the sources just balance the sinks) by the year 2000. Scenario B is perhaps the most plausible of the three cases.”

        Business as usual means business as usual… which is what happened…and then some. If you gave Hansen the exact future emissions, he would have assumed CO2 much higher than 405 and forecast even hotter. But the atmospheric CO2 has been much less than anticipated given the emissions. That is good news for climate change.

        ‘Business as usual’ meant carrying on emitting as before, in fact as anticipated by Hansen this did not occur.

        And how on earth does 5ppm make such a huge difference in temp fcst from A to B? By 2019, the difference is ~0.5 K

        No it isn’t, look at fig 2, the difference between A and B due to CO2 is about 0.05ºC by 2020.
        The total difference between A and B is due to the reduced growth of ‘all the trace gases’ and the presence in A of ‘ozone, stratospheric water vapor, and minor chlorine and fluorine compounds’. The effect on A over and above CO2 alone, of these gases is about 0.6ºC by 2020 (Fig 2). In fact the trace gases other than CO2 followed scenario C most closely (2016 values):
        CH4: 2.8A, 1.9C, ~1.8(2016)
        N2O: 0.34A, 0.31C, ~0.33(2016)
        CFC12: 1.44A, 0.50C, ~0.5(2016)

        Hansen got lucky… the atmosphere shed much more CO2 than he assumed.

        No CO2 did about what he expected, the other trace gases exceeded his expectations for reduction, and the ‘several hypothetical’ gases didn’t materialize (as he expected).

      • Well said Phil with good information. I will consider.

        The difference from A to B is fairly close in 2017 but looks to be about 0.4 to 0.5deg in 2019 or 2020. I am just eyeballing off graphs… but A really spikes

        But no matter how you slice it, the bias is upwards of 1.8 to one and this is just one case study. I am a long way from trusting GCMs to plan humanity’s future.

      • “Phil:

        I suggest you actually read the paper, because it’s clear that you don’t understand what was done! . .. . CO2 did about what he expected, the other trace gases exceeded his expectations for reduction, and the ‘several hypothetical’ gases didn’t materialize (as he expected).”

        I think you may be the one not reading Hansen’s original paper carefully enough. His approach was to simplistically equate emissions scenario growth with forcing growth via the equations set forth in appendix B to that paper. Trying to draw a distinction between Hansen’s emission scenarios and some mythical “forcing” scenario doesn’t get you anywhere. There is no practical distinction between predicted emissions and predicted forcing in that paper, because Hansen assumed that every particle of GHG emitted by mankind would cause a 1-1 increase in GHG concentrations in the atmosphere. We know this because his paper said that “the net greenhouse forcing [delta T0] for these [emissions] scenarios is illustrated in FIG. 2. [Delta T0] is the computed temperature change at equilibrium for the given change in trace gas abundances.”

        Hansen obviously got the relationship between CO2 emissions and CO2 concentrations wrong because we wound up with CO2 concentrations associated with significant cuts in emissions, which did not occur. It wasn’t until his “oops” moment a decade later that he learned that not everything emitted stayed in the air and started applying a 0.6 multiplier.

        Mary was absolutely right. Had Hansen known that emissions would not have dropped to 1.5% or lower growth like he assumed for all three scenarios, he would have used his equations to get more forcing, and a higher transient temperature response, thereby exacerbating the discrepancies between his prediction and the reality that ensued.

        Hansen also got the relationship between forcing and temperature wrong. Actual forcing (not just emissions) followed scenario A set forth in FIG. 2, or even higher, but temperatures followed the track between the predictions for scenarios B and C.

        In short, Hansen got nothing right.

      • Mary Brown September 27, 2017 at 12:21 pm
        Well said Phil with good information. I will consider.

        The difference from A to B is fairly close in 2017 but looks to be about 0.4 to 0.5deg in 2019 or 2020. I am just eyeballing off graphs… but A really spikes

        Are you looking at Fig 2? That shows scenario A at just over 1.0ºC and Scenario B at ~0.7ºC in 2020, only slightly less in 2017. Most of that difference is due to the ‘other trace gases’, not CO2; as I pointed out, the difference due to CO2 is more like 0.05ºC.

      • Kurt,
        “There is no practical distinction between predicted emissions and predicted forcing in that paper, because Hansen assumed that every particle of GHG emitted by mankind would cause a 1-1 increase in GHG concentrations in the atmosphere”
        No, that’s not true. He didn’t assume that. And even if he had, it would have been of no use, because he had no measure of “every particle of GHG emitted by mankind”.

        For the most part, we still don’t. No-one measures the amount of CH4 or N2O emitted. We can’t. The emission is inferred from the concentration increase. Hansen’s language re CO2 and CH4 is identical, and confuses us today. The reason is that for about 25 years we’ve been talking about CO2 emissions as calculated by governments measuring FF mining and consumption. But not for 30 years. The reason we can do that is that governments, through the UNFCCC, assembled that information. Hansen didn’t have it. So to him, emissions of CO2 meant just the same as for CH4: an observed concentration increase.

        “We know this because his paper said that “the net greenhouse forcing [delta T0] for these [emissions] scenarios is illustrated in FIG. 2. [Delta T0] is the computed temperature change at equilibrium for the given change in trace gas abundances.””
        That’s your proof of the supposed assumption. But read it carefully, especially the last sentence. It says the opposite. It relates ΔT0 and abundances.

        “Hansen obviously got the relationship between CO2 emissions and CO2 concentrations wrong because we wound up with CO2 concentrations associated with significant cuts in emissions, which did not occur.”
        No. Hansen only dealt with CO2 concentrations, and got them right. Again, he had no other quantification of emissions. He set it out in detail in Appendix B:

        Note that the 1.5% he referred to earlier as increase of emissions, is an increase in concentrations. Scens B and C are described in the same way.

      • talldave2 September 27, 2017 at 8:23 am
        Phil — yes, Mary’s claim would be slightly more accurate if she said “there is far less GHG in the atmosphere than predicted” or “there is somewhat less CO2 than predicted” (CO2 emissions exceeded A but concentrations are lower than predicted) because the main source of error was methane.

        But either way, Hansen’s independent variable is clearly emissions — concentrations are a function of emissions (not just “based on the observed history”), and temperatures a function of concentrations, so emissions -> concentrations -> temps.

        No, Hansen based his projection on the observations of CO2 concentration from 1958-81 by Keeling, not on emission measurements.
        Keeling et al. ‘Carbon Dioxide Review: 1982’, ed. W C Clark, pp.377-385, Oxford University Press, NY, 1982.

      • Nick – In one respect I think we agree on a fact, but just interpret it in completely different ways. Hansen in 1988 certainly equated emissions with annual increments in GHG concentrations – they were treated as interchangeable, both ways. When he had industrial data on CFC emissions he used those directly to construct his “emission” scenarios. When he didn’t have the industrial data, he constructed his “emission” scenarios based on past, measured differences in trace gas concentrations. And he took his emission scenarios, plainly stated as the conditions for his projections, and used them as direct increments to GHG concentrations, and had his model calculate the delta T based on those concentrations predicted from the emission scenarios. So as I said, there is no practical distinction between emissions into the air, and forcing – one determined the other. But based upon what he said in that paper, and his contemporaneous statements about his scenarios, the public prediction he made tied temperature increases to various emission scenarios.

        And incidentally, the data on CO2 emissions was available from at least 1958 onward; Hansen just didn’t look for it until 1998, when his prediction went wrong. Then he went out and got it because he needed it to explain why temperatures weren’t rising as fast as he led the world to believe they would.

        If I have a prediction that says: If 1 then 2, if 3 then 4, if 5 then 6 – and reality shows the result of 1 and 6, I can’t save my prediction by arguing that my analysis assumed that 5 would lead to A and A would lead to 6, and that reality showed not just 1 and 6, but 1, A, and 6. And as long as the same system controls the relationship between all these variables, I can’t even use the post-hoc correlation between A and 6 as evidence of my understanding of the system. That’s cherry picking.

        You, and Phil, and many others are too willing to let Hansen off the hook for not understanding the climate system well enough to nail the relationship between emissions and changes in GHG concentrations. For all Hansen knows, there is a huge natural control mechanism that regulates the CO2 and methane in the air as a function of temperature, and it’s temperature driving these atmospheric changes, not vice versa. If Hansen whiffed on the relationship between emissions and changes in GHG concentrations, it’s certainly plausible that he whiffed on the amount of water vapor feedback via temperature throwing the correlation between forcing and temperature off.

        I’ve got a long reply to Phil above, which I won’t repeat here, but suffice it to say that there is no question at all that Hansen’s 1988 paper intended to show the predicted consequences of three hypothesized policy choices, not three hypothesized future atmospheric trace gas compositions.

      • “No, Hansen based his projection on the observations of CO2 concentration from 1958-81 by Keeling, not on emission measurements.
        Keeling et al. ‘Carbon Dioxide Review: 1982’, ed. W C Clark, pp.377-385, Oxford University Press, NY, 1982.”

        Again, the methodology of his prediction is irrelevant. Hansen was advising Congress on emissions policy. It doesn’t matter if Hansen bases the predictions on astrological signs, Congress can still pass laws to affect human emissions, but can no more command concentrations than King Canute could order the tides around.

    • So to him, emissions of CO2 meant just the same as for CH4 – an observed concentration increase.

      With one really, really important difference — Congressional policy and legislation can target actual GHG emissions in sundry ways, which was the point of his testimony, but can no more directly affect GHG concentrations than the force of gravity or the Planck constant. Remember, his scenarios are labelled as emissions policy scenarios — “business as usual” vs draconian cuts.

      The inability for proponents and skeptics to look at a simple prediction like Hansen 1988 and agree on what it meant is a common issue in pseudoscience. With no firm predictions, there can be no falsification.

      • “With no firm predictions, there can be no falsification.”

        Well said. Drives me crazy. I have worked in quantitative forecasting for a long time. I hate fuzzy forecasts and fuzzy verification. We score everything ruthlessly and carefully. That is the only way to be sure it works.

        I stand behind my assertion that Hansen got the CO2 forecast correct by making two big mistakes. He extrapolated the trend. The reality is that emissions dramatically increased but much of it did not stay in the atmosphere.

        Kind of like hitting a 7 iron when you should have hit a wedge. Then you hit it really fat and it went on the green. Doesn’t mean it was a good shot.

        I will update your saying…

        “With no firm predictions, there can be no verification.”

        Climate science has produced one 30 year forecast that didn’t do very well and can’t be accurately assessed.

        So, the proven statistical significance of forecast skill remains very, very close to zero. Unfortunately, I don’t see how that can change in the next few decades.
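The "third grader with a ruler" comparison made earlier in this thread can be scored formally with a standard skill score: one minus the ratio of the forecast's mean squared error to that of a reference baseline (here, a straight-line extrapolation). A Python sketch with invented verification numbers; none of these are real observations or model values:

```python
def mse(pred, obs):
    """Mean squared error of predictions against observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(forecast, baseline, obs):
    """1 = perfect, 0 = no better than the baseline, negative = worse."""
    return 1.0 - mse(forecast, obs) / mse(baseline, obs)

# Toy verification set: observations, a warm-running model forecast,
# and the "ruler" baseline (a straight-line extrapolation).
obs   = [0.10, 0.15, 0.12, 0.20, 0.22]
model = [0.20, 0.28, 0.30, 0.38, 0.42]
ruler = [0.10, 0.14, 0.18, 0.22, 0.26]

ss = skill_score(model, ruler, obs)
# Negative here: with these invented numbers the model scores worse
# than the straight-line baseline.
```

With only a handful of independent multi-decade cases, such a score carries no statistical significance either way, which is the commenter's closing point.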

  25. See how Nick fogs up the discussion with misleading concerns?

    “Nick Stokes
    September 26, 2017 at 10:02 pm

    Richard,
    Yes, clearly there was a circulating newspaper story at the time with near identical text and graphs claiming to be global temperature, some of which attribute the writing to John Hamer and the artwork to one Terry Morse. What we don’t have is an NCAR source document saying what kind of measure it is – land only, land/ocean? How much data was available? How about some scepticism?”

    It was 1974, Nick! There was NO ocean temperature data to speak of in those days. Stop with your misleading-questions baloney! You KNOW this, so why all the crap about whether it was land only or land/ocean?

    In those days it was NORTHERN Hemisphere only, because at that time that was about all they had to work with. The Southern Hemisphere was scanty, data-wise. Dr. Jones himself admitted this back in 1989.

    I read that article back in 1975, seeing how it matched up with other charts of the day. The few that were published in the public arena were mostly from NCAR and NAS, and little else, since NASA didn't get into the picture until 1981. How scientists of the day were echoing concerns about the sharp temperature decline, you seem to gloss over.

    Richard Verney showed that Hansen in 1981, the IPCC, NAS and other sources basically agree with that 1974 NCAR chart, so what is your problem?

    You need to stop with the misleading comments, as if it is all a mystery, when the only mystery is you being the way you are here. NCAR probably no longer has the data for the chart, which is why it is not posted on their website; after all, it was 43 years ago!

    • “It is 1974,Nick! There was NO Ocean temperature data to speak of in those days. Stop with your misleading questions baloney!”
      Indeed so. How does it help to say that we’re comparing modern GISS land/ocean to 1974 land only, and that somehow proves the data has been tortured (original claim) or GISS has fudged it or whatever? They are just different things. And it’s no use saying that they are using Land only because they couldn’t get marine data. This sort of fudging is just not honest.

      But you could take that 1974 aspect further. There was very little usable land data either. People forget two big issues – digitization and line speeds. Most data was on paper. If you wanted to do a study, you had to type it yourself (or get data entry staff). And if there was anything digitised, there were no systematic WANs, and what they had would have been at 300 baud. So it isn’t a wonder if a 1974 analysis deviates from a modern one. It’s just a miracle they got as close as they did (if you find something genuinely comparable).

      As for what Richard Verney has shown, again not a single one matches. There was no IPCC in 1981. NAS was NH data. Hansen’s is even more restricted – just NH extra-tropical. They do look somewhat similar, and that is because NH extratropical was pretty much all they had. That still doesn’t make it comparable to GISS land/ocean.

      • Nick

        The fundamental issue in Climate Science is that the data is not fit for purpose. The reason why we cannot measure Climate Sensitivity, i.e., the warming signal due to CO2 (if any at all), is either that the signal is very small, or that the accumulated error bandwidth of our data sets (and the way it is presented) is so wide that the signal cannot be seen above the noise of natural variability. This is a data issue, and we need high quality data, presented in an unbiased way, if we are to answer what impact, if any, CO2 has had on this planet’s temperature.

        It is very difficult to know where one should start with all of this, since it is obviously correct that, as far as possible, one should always make a like-for-like comparison if one wishes to draw meaningful conclusions.

        But this point is equally applicable to all of the time series thermometer reconstructions (whether they be GISS or HadCRUT etc.). On the basis of your reasoning, you should throw out all the time series thermometer data sets, since they never make a like-for-like comparison over time: the sample set (and its composition) is continually varying (and this is before the continuing historic adjustments/reanalysis, an ongoing process in which the past is constantly altered/rewritten).

        Given the comings and goings of stations and station drop-outs, the change in distribution from high latitude to mid latitude, the change in distribution from rural to urban, the change in distribution towards airport stations, the changes in airports themselves over time (many airports in the 1930s/1940s had just a grass runway, all but no passenger and cargo terminals, and certainly no jet planes with high-velocity hot exhaust gas), the change in instrumentation with its different sensitivity and thermal lag, the change in the size, volume and nature of equipment enclosures (which in itself impacts upon thermal lag), etc., the sample set involved in these time series reconstructions is a constantly moving feast, and there is never a like-with-like comparison being made over time.

        These time series thermometer data sets are not fit for scientific purpose, and it is impossible from these to say whether the globe (or some part thereof) is any warmer than it was at the highs of the late 1930s/early 1940s, or for that matter around 1880.

        I consider that, due to their composition and the way they purport to present data, they are not quantitative. Maybe they are qualitative, in the sense that they can say it was probably warming during this period, but the amount and rate thereof cannot be concluded; or it was probably cooling during this period, but the amount and rate thereof cannot be concluded; or that during this period there does not appear to have been much happening.

        But they should really be given a very wide berth. The claim that the error bound is in the region of 0.1 degC is farcical. Realistically it is closer to 1 degC than 0.1 degC.

        You may be correct that in the mid 1970s, or early 1980s, or at the end of the 1980s (the date of the IPCC First Assessment Report) there was limited data, but the important point is that this data (limited as it was) was showing the same trend. Whatever it consisted of, it was showing that the NH (or some significant part thereof) was warmer in 1940 than it was in the early 1970s, or in 1980. The IPCC First Assessment Report shows the globe to be slightly cooler in 1989 than it was in 1940.

        There is a consistent trend here, and for qualitative purposes one can get an impression of what was going on.

        You will know that Hansen, at various dates in his career, has made various assessments of US temperatures. Now this data set is far more certain, and yet, over time, there are very significant changes in his temperature reconstructions.

        The upshot of all of this is that one cannot have reasonable confidence in the various temperature data sets (and I consider that the satellite data set also has issues). They are not fit for the scientific inquiry and scrutiny that we are trying to use them for. I consider that an objective and reasonable scientist would hold that view. This is why I have so often suggested that we require an entirely different approach to the assessment of temperature, and to the collection, handling and presentation of data. It is why BEST was a missed opportunity, especially since BEST was well aware of the poor station siting issue, and the potential issues arising therefrom.

        If we are genuinely concerned about CO2 emissions, we only need to look at the period around 1934 to 1944 and see whether there has been any change in temperature since the highs of the 1930s/1940s. Some 95% of all manmade CO2 emissions have occurred subsequent to that period. Since CO2 is a well mixed gas, the warming signal (if any at all) can be found without the need to sample the globe. We only need a reasonable number of pinpoint assessments to see what, if any, temperature change has occurred at each of the pinpoint locations. We only need a reasonable number of exact (or as nearly exact as possible) like-for-like comparisons to tell us all we need to know.

        I consider that we should ditch the thermometer temperature reconstructions and start from scratch, by assessing say the best 200 sited stations across the Northern Hemisphere, which are definitely free of all manmade change, and then retrofit these stations with the same type of LIG (liquid-in-glass) thermometers as were used in the 1930s/1940s, calibrated in the same way as used at each station (on a station by station basis), put in the same type of Stevenson screen painted with the same type of paint as used in the 1930s/1940s, and then observe using the same observational practices and procedures as used in the 1930s/1940s at the station in question. We could then obtain modern day RAW data that can be compared directly with historic RAW data (for the period say 1933 to 1945) with no need for any adjustment whatsoever.

        There would be no sophisticated statistical play, no attempt to make a hemisphere-wide set, no area weighting, no kriging etc. Simply compare each station with itself, and then simply list how many stations show say 0.3 deg C cooling, 0.2 deg C cooling, 0.1 deg C cooling, zero change, 0.1 deg C warming, 0.2 deg C warming, 0.3 deg C warming etc.

        This type of like for like comparison would very quickly tell us whether there has been any significant warming, and if so its probable extent.

        This will not tell us definitively the cause of any change (correlation does not mean causation), but it will tell us definitively whether there has been any change during the period when man has emitted some 95% of his CO2 emissions. To that extent, the approach that I propose would be very useful.
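        The paired comparison proposed above (compare each station only with itself between the two windows, then tally the per-station deltas into 0.1 deg C bins) can be sketched in a few lines. The station names and temperatures below are made up purely for illustration:

```python
from collections import Counter

def like_for_like(baseline, modern):
    """Compare each station only with itself: modern mean minus the
    baseline-period mean, binned to the nearest 0.1 deg C.
    No area weighting, no kriging, no hemispheric averaging."""
    histogram = Counter()
    for station, base_temp in baseline.items():
        delta = modern[station] - base_temp
        histogram[round(delta, 1)] += 1
    return histogram

# Hypothetical per-station mean temperatures (deg C), for illustration only.
baseline = {"A": 11.2, "B": 9.8, "C": 14.1}  # e.g. 1933-1945 means
modern   = {"A": 11.4, "B": 9.8, "C": 14.0}  # present-day means

print(like_for_like(baseline, modern))  # e.g. Counter({0.2: 1, 0.0: 1, -0.1: 1})
```

        Because each station is only ever compared with itself, no gridding, infilling or area weighting enters the tally; the histogram of deltas is the entire result.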

      • Nick, this is my complaint about you: you try to make it appear that I argue that the 1974 data is comparable to modern data, that it is superior or complete. You basically talk too much about stuff I didn’t bring up. You must be hungry for fish, since YOU posted a number of Red Herrings against me.

        Here is my FIRST comment about the chart,

        “Nick, the 1974 NCAR line is real, as it was in an old Newsweek magazine back in 1974:”

        (Posted the chart)

        You later say,

        “Indeed so. How does it help to say that we’re comparing modern GISS land/ocean to 1974 land only, and that somehow proves the data has been tortured (original claim) or GISS has fudged it or whatever?”

        My position all along has been that the 1974 chart is real and the data for it is real, which YOU have never once disproved with evidence. You talked all around it a lot, but provided no evidence that it isn’t real. I NEVER said it was comparable to modern data at all; you try to put words into my mouth…., again! STOP IT!

        Here I give you evidence that it was Murray Mitchell who provided the data for that chart that you whine so much about.

        Then Nick writes,

        “But the second shows what is claimed to be a NCAR plot, or at least based on NCAR data. That plot isn’t from any NCAR publication, and the data is not available anywhere. Instead it is from the famous Newsweek 1975 article on “global cooling”.

        It has been answered at Tony’s site, where you avoid commenting.

        Douglas Hoyt, at Tony’s blog, writes, https://realclimatescience.com/2017/09/nick-stokes-busted/#comment-66381

        “The NCAR plot is based largely on the work of J. Murray Mitchell, which was confirmed by Vinnikov and by Spirina. Spirina’s 5 year running mean looks a lot like the NCAR plot.

        See Spirina, L. P. 1971. Meteorol. Gidrol. Vol. 10, pp. 38-45. His work was reproduced by Budyko in his book Climatic Changes on page 73, published in 1977”

        Richard Verney showed the NAS chart, showing a striking similarity with the NCAR chart: https://wattsupwiththat.com/2017/09/26/gavins-twitter-trick/#comment-2621480

        Gee, it might be based on the same 1974 Northern Hemisphere data………..

        Nick goes on with his manufactured whining,

        “They are just different things. And it’s no use saying that they are using Land only because they couldn’t get marine data. This sort of fudging is just not honest.”

        NEVER said they were the same thing; why do you try so hard to make a Red Herring of this? Besides, that was over the first chart (NOT NCAR), when you whined about the land only or land/ocean database babble.

        You write,

        “The first GISS plot is not the usual land/ocean data; it’s a little used Met Stations only, essentially an update of a 1987 paper. I don’t know if it’s right.”

        ==============================

        You go on with this, which I NEVER argued for:

        “But you could take that 1974 aspect further. There was very little usable land data either. People forget two big issues – digitization and line speeds. Most data was on paper. If you wanted to do a study, you had to type it yourself (or get data entry staff). And if there was anything digitised, there were no systematic WANs, and what they had would have been at 300 baud.”

        Here you are showing your tendency to OVER ANALYZE the issue, since I NEVER argued that the 1974 chart was robust. All I kept trying to point out to you was that the chart is real and was based on real data. That was all I was doing, but YOU keep dragging in a lot of other stuff that I never talked about.

        Last but not least, since you try hard to put words into my mouth.

        Nick writes,

        “As for what Richard Verney has shown, again not a single one matches. There was no IPCC in 1981. NAS was NH data. Hansen’s is even more restricted – just NH extra-tropical. They do look somewhat similar, and that is because NH extratropical was pretty much all they had. That still doesn’t make it comparable to GISS land/ocean.”

        Sigh, neither Richard nor I said the IPCC existed in 1981. Here is what Richard stated about the IPCC:

        “Unfortunately, I am unable to cut and copy Figure 7.11 from Observed Climate Variation and Change on page 214. But I can confirm that this plot (endorsed by the IPCC) shows that the NH temperature as at 1989 was cooler than the temperature at 1940, and the temperature as at 1920. Not much cooler, but a little cooler.

        This is the IPCC Chapter 7

        Lead Authors: C.K. FOLLAND, T.R. KARL, K.YA. VINNIKOV
        Contributors: J.K. Angell; P. Arkin; R.G. Barry; R. Bradley; D.L. Cadet; M. Chelliah; M. Coughlan; B. Dahlstrom; H.F. Diaz; H. Flohn; C. Fu; P. Groisman; A. Gruber; S. Hastenrath; A. Henderson-Sellers; K. Higuchi; P.D. Jones; J. Knox; G. Kukla; S. Levitus; X. Lin; N. Nicholls; B.S. Nyenzi; J.S. Oguntoyinbo; G.B. Pant; D.E. Parker; B. Pittock; R. Reynolds; C.F. Ropelewski; C.D. Schonwiese; B. Sevruk; A. Solow; K.E. Trenberth; P. Wadhams; W.C. Wang; S. Woodruff; T. Yasunari; Z. Zeng; and X. Zhou

        Figure 7.11: Differences between land air and sea surface temperature anomalies, relative to 1951-80, for the Northern Hemisphere, 1861-1989. Land air temperatures from P.D. Jones; sea surface temperatures are averages of UK Meteorological Office and Farmer et al. (1989) values.”

        It was one of two times he brought up the IPCC in the thread; both times it was about Chapter 7. Again you fog it up with crap; you do this a lot.

        Never disputed that NCAR and NAS were based on Northern Hemisphere data. Never said they were comparable with GISS land/ocean.

        Stop with the Red Herrings!

        ========================
        Nick, you are obviously a smart man, but you have a bad habit of Red Herrings and of over-analyzing the topic in your replies to me.

      • Sunsettommy

        I agree with your analysis.

        I referred to the IPCC FAR because:
        (i) their data plot generally corroborated the NCAR and NAS plots, which in any event were corroborated by Jones and Wigley (1980 paper) and by Hansen (1981 paper). Jones and Hansen extended the NH plot out to 1980 and confirmed that as at 1980 the NH was still cooler than 1940.
        (ii) the IPCC plot extends the position out to 1989, and confirmed that as at 1989, the NH was still cooler than 1940.
        (iii) I set out details of the Authors of the IPCC paper who endorsed the plot because these were major Team players, e.g., Karl, Vinnikov, Bradley, Jones, Trenberth, Wadhams etc. All these guys were quite satisfied that the data suggested that as at 1989 the NH was cooler than it was in 1940. The recovery from the substantial post-1940 cooling was still not complete.

        Note the importance of the recovery from the 1940-early 1970s cooling still not being complete by 1989. This was of course why M@nn in MBH98 had to perform his nature trick. The tree ring data went through to 1995 (it might have been 1996), and it too showed that as at 1995 (or 1996) the NH was still no warmer than it was in 1940!!!

        But it was in the late 1980s/early 1990s that the data sets underwent their revisionary rewriting, which meant that the adjusted thermometer record now showed warming where previously there had not been a complete recovery to 1940s levels. This was the real reason for the cut and slice. M@nn had identified that the tree rings no longer tracked the adjusted thermometer record. The tree rings did track the unadjusted historic record, at least in qualitative terms, i.e., they showed no net warming between 1940 and the mid 1990s, which would have been the position had the late-1980s-onwards adjustments not been made to the thermometer data sets!!

        This is how it all ties together. The nature trick is only required because of the revisionary adjustments made to thermometer time series set.

      • Whilst the below plot (prepared by Tony Heller, not checked) is just the US, one should bear in mind that the US makes up a large percentage of the GISS temperature set.

        There are adjustments being made in the region of up to around 1 degF, and it is clear how the 1930s and earlier data has been cooled. Unadjusted, the 1930s are the warmest period, whereas in the adjusted plot, by 1990 temperatures had fully recovered, and thereafter temperatures are reported as warmer than the 1930s.

        I am not saying that the adjustments are wrong (they could be valid), but that type of adjustment should set off alarm bells, and is a genuine reason to be extremely skeptical as to the legitimacy and scientific value of the thermometer time series reconstruction.

      • RV, the adjustments are being made in a non-blind process, and there is a long history of bias creeping into results in those circumstances. Medical research, especially with drugs, routinely uses a “double blind” procedure because of that effect.
        Notably, I am not alleging conscious deception, but it still works out to the same result.

      • richard verney September 27, 2017 at 6:30 pm

        Agree, and there’s another, bigger problem with the adjustments as well: even if we concede, ad argumentum, that the adjustments are accurate, the claimed error bar for the temps (0.1°C according to Gavin) is much smaller than the adjustments made to past data since, say, the year 2000. So the claimed error bar was wrong. And there will probably be more changes which will also exceed the claimed error bar, so it’s very likely wrong now too. Maybe it’s gotten better, but how could anyone know that?

        The upshot of this is that no one really knows what the temperatures were in the pre-satellite era to within a degree at 95% confidence, and it’s not even all that clear how well we know the satellite-era temps. Combine that error bar with the error bars on model predictions and you quickly realize the overlap is so large that few models can even be falsified, even if we could agree to stop using baseline anomalies and predict real temps.

    • Seems to me that the majority of the land in S. Hemisphere still has pretty scanty coverage of surface stations that report regularly. Massive areas are filled in by interpolation every year. As are huge areas of Asia in the N. Hemisphere. So why should anyone trust the claims of a global temperature average based on surface station data?

      • “So why should anyone trust the claims of a global temperature average based on surface station data?”

        Simple.. THEY SHOULDN’T

      • “Trust” is the wrong word. The myriad adjustments are problematic… but all of the temperature records more or less depict the same thing since 1979.

        http://www.woodfortrees.org/graph/gistemp/from:1979/offset:-0.43/mean:12/plot/hadcrut4gl/from:1979/offset:-0.26/mean:12/plot/rss/mean:12/offset:-0.10/plot/uah/mean:12

        Personally, I think the early 20th century warming has been suppressed by the adjustments… But I can’t prove this.

        As it pertains to validation of the models, we really only need the post-1979 data.

      • David Middleton September 27, 2017 at 1:35 am
        ““Trust” is the wrong word”
        ———————————————————
        Trust is the right word for the layman, IMO. Which is what I am, with no credentials nor formal education in science or statistics above the college sophomore level. The difference is that I’m interested, so I read and watch videos about this stuff and frequent several climate blogs. But the average layman doesn’t do that, and relies only on faith in the institutions which authoritatively publish their reports, which are then reported, with little detail and no questioning of content or context, in the general press as scientific fact.

        As for the “early 20th century” record: I presume you’re referring to the massive heat waves of the 1930s?

      • Yep. In a detrended temperature series, the 1930’s should be as “hot” as today, or even hotter. They aren’t… which makes me think that the adjustments are suppressing the 1930’s.

        That said, the temperature stations do have to be adjusted for factors like time of observation.

      • There can be no doubt it was much hotter in 1934-36 in the US than in any period since.
        In the US in 1934-36, the spring and summer highs were just outlandish, incomprehensible to most Americans today for the areas they lived in. Tony Heller has done a great job of showing this.
        Here is just one of the many day by day examples he has posted from that period: https://realclimatescience.com/2017/07/july-19-1934-every-state-over-90-degrees/
        The maps he uses really get the point across:

        People argue that this just covers the US. But the fact is that nowhere else has as pristine or comprehensive record over such a broad area as the US does for this period.

        Considering the more human aspect of that period, one has to remember that even the White House didn’t get AC until 1933, and then just in the living quarters. Central air for the whole of the living and working quarters was not installed until the Truman era.

        So the only place the average city dweller had to go to beat the heat was the local theater. Most of them had AC installed in the late 20s or early 30s because it made a huge difference in their bottom lines.

        And don’t forget it is not simply areas being infilled where there are no stations; it appears that even where there are stations, some 40 to 60% are not reporting data (or not consistently reporting data), so these stations also get infilled.

        It is a complete mess. Of course, we should not be trying to deal with thousands of stations, which inevitably causes problems. We should just be dealing with the cream, say the 200 best sited, managed, maintained and run stations, and then retrofit those with as nearly as possible identical equipment as used in the 1930s/1940s, and then observe using the same procedures and practices as used in the 1930s/1940s. We could then just compare observational RAW data obtained today with the observationally obtained RAW data of the 1930s/1940s, without the need to make any adjustment whatsoever to the data. This would be far more informative.

      • richard verney says:
        “……..It is a complete mess. Of course, we should not be trying to deal with thousands of stations, which inevitably causes problems. We should just be dealing with the cream, say the 200 best sited, managed, maintained and run stations, and then retrofit those with as nearly as possible identical equipment as used in the 1930s/1940s, and then observe using the same procedures and practices as used in the 1930s/1940s. We could then just compare observational RAW data obtained today with the observationally obtained RAW data of the 1930s/1940s, without the need to make any adjustment whatsoever to the data. This would be far more informative.”

        Then people would actually have to go to the stations and read the mercury thermometer and faithfully record and report the data as they did back then.

      • Rah,

        Glad you brought this up, showing that in 2017 land only data still has a lot of holes in it. I am sure Nick Stokes will come along and pontificate on this awesome reality!

        No wonder he avoids land only in modern GISTEMP data! He knows it isn’t that much better than the old 1974 level of NH data coverage.

        He constantly says something like this,

        “That still doesn’t make it comparable to GISS land/ocean.”

        Snicker………

  26. The late 1990’s was a period of little volcanic activity, but the 2000’s saw a series of moderate volcanoes. About the same time, we developed the ability to detect stratospheric volcanic aerosols with much greater sensitivity, and to detect changes in aerosols that were in the noise before the 2000s. That allowed many people to claim that the Pause was partly due to the continuous presence of slightly higher than normal aerosols for more than a decade. In reality, we didn’t really know how much higher than normal those aerosols were, because we couldn’t accurately measure “normal” before about 2000. So they wanted to correct the CMIP5 projections because volcanic cooling was stronger than anticipated. Also, N2O didn’t increase as fast as projected. Whether these were valid excuses isn’t clear.

    IMO, we should be focused on the longest possible sensible period for comparing observations to projections. Observations 1977-present: +0.17 K/decade. Projection trends provided above by Bill Illis: FAR and Hansen, about 0.3 K/decade; AR3, AR4, AR5, about 0.2+ K/decade; Mann, 0.37 K/decade. (Consider only trends, not y-intercept and trend; estimating y-intercepts adds greater uncertainty.)
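    The trend comparison above can be tabulated directly. The figures (in K/decade) are those quoted in this comment and are not independently verified here:

```python
# Trend figures (K/decade) quoted in the comment above; illustrative only.
observed = 0.17
projections = {
    "FAR": 0.30, "Hansen": 0.30,
    "AR3": 0.20, "AR4": 0.20, "AR5": 0.20,
    "Mann": 0.37,
}

# Rank projections by how far each trend sits from the observed trend.
ranked = sorted(projections.items(), key=lambda kv: abs(kv[1] - observed))
for name, trend in ranked:
    print(f"{name}: {trend:.2f} K/decade (off by {abs(trend - observed):.2f})")
```

    Ranking by absolute trend error alone sidesteps the y-intercept issue noted above: only the slopes are compared.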

    • During the 2000’s the SAOT showed no trend, so the volcanoes had no influence on the stratosphere. Higher sulphate aerosols in the lower atmosphere make no difference to global temperatures and even China still warmed as usual with them.

      • Matt G: Some scientists claim that there was an increase in volcanic aerosols in the 2000’s vs the late 1990s and that this made an important difference.

        http://onlinelibrary.wiley.com/doi/10.1002/2014GL061541/full

        Abstract: Understanding the cooling effect of recent volcanoes is of particular interest in the context of the post-2000 slowing of the rate of global warming. Satellite observations of aerosol optical depth above 15 km have demonstrated that small-magnitude volcanic eruptions substantially perturb incoming solar radiation. Here we use lidar, Aerosol Robotic Network, and balloon-borne observations to provide evidence that currently available satellite databases neglect substantial amounts of volcanic aerosol between the tropopause and 15 km at middle to high latitudes and therefore underestimate total radiative forcing resulting from the recent eruptions. Incorporating these estimates into a simple climate model, we determine the global volcanic aerosol forcing since 2000 to be −0.19 ± 0.09 Wm−2. This translates into an estimated global cooling of 0.05 to 0.12°C. We conclude that recent volcanic events are responsible for more post-2000 cooling than is implied by satellite databases that neglect volcanic aerosol effects below 15 km.

        I’ll try to paste the key Figure below, otherwise see Figure 1 in the paper. I personally believe that the evidence for a change in stratospheric aerosols is marginal and is highly uncertain quantitatively.

        http://onlinelibrary.wiley.com/store/10.1002/2014GL061541/asset/image_n/grl52300-fig-0001.png?v=1&s=a8c4d782c05732014ee3d7389b813d82cf9caaaa

        The Vernier reference below shows some of the same information in different ways, but doesn’t quantify it as a forcing.

        http://onlinelibrary.wiley.com/doi/10.1029/2011GL047563/full

        The trend below shows little difference from background noise. The linked papers show very little difference between the late 1990’s and the 2000’s. An SAOD (or SAOT) value of around 0.01 is a tiny amount, 17 times smaller than Pinatubo 1991. The difference between the late 1990’s and the early-to-mid 2000’s is less than an SAOD of 0.001, a value 170 times smaller than Pinatubo, which had an estimated cooling of 0.35°C.

        The estimated global cooling of 0.05 to 0.12°C is unrealistic and totally far-fetched; it would lead to Pinatubo 1991 having an estimated cooling of 0.85 to 2.04°C.

        An SAOT of 0.01 is 17 times smaller and represents global cooling of about 0.02°C, far less than the range just above. The difference between the late 1990’s and early-to-mid 2000’s represents cooling of about 0.002°C. This just confirms that the trend shows little difference from background noise, and the signal, being far too small, won’t be observed in global temperatures.
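        The back-of-envelope scaling above assumes cooling is roughly linear in stratospheric aerosol optical depth; that linearity is the commenter’s premise, not an established result. A sketch reproducing the arithmetic:

```python
# Back-of-envelope check of the linear-in-SAOD scaling used above.
# All figures come from the comment itself; strict linearity between
# aerosol optical depth and cooling is the commenter's assumption.
pinatubo_saod = 0.17       # approximate Pinatubo 1991 SAOD
background_saod = 0.01     # post-2000 "background" value cited above
pinatubo_cooling = 0.35    # deg C, estimated Pinatubo cooling

ratio = pinatubo_saod / background_saod      # ~17x
implied_background_cooling = pinatubo_cooling / ratio
print(f"{implied_background_cooling:.3f}")   # about 0.02 deg C

# Conversely, if the 0.05-0.12 deg C claim really came from the 0.01
# background, the same linear scaling would make Pinatubo's cooling:
low, high = 0.05 * ratio, 0.12 * ratio       # roughly 0.85 to 2.04 deg C
print(f"{low:.2f} to {high:.2f}")
```

        Either direction of the scaling gives the same conclusion: the claimed post-2000 cooling and the accepted Pinatubo cooling cannot both be right if the response is linear in SAOD.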

    • “Valid excuses”

      ???

      Once the forecast is made, there is no excuse. You can’t go back and change the forecast just because things happened that you didn’t anticipate or understand.

      Climate Forecast apologists want to change the forecast because volcanoes turned out differently or CO2 was mitigated by the atmosphere more rapidly than anticipated. Then they try to use GISS to verify, because that’s the data they were able to manipulate the most.

      Fudge a little here, fudge a little there, fudge, fudge everywhere.

      • Mary wrote: “Once the forecast is made, there is no excuse.”

        Technically-speaking, the IPCC makes “projections”, not “forecasts”. They “project” what will happen IF the forcing from aerosols and rising GHGs changes in a particular way. If the observed change in forcing is inconsistent with the forcing changes used in a projection, then that projection has a legitimate excuse for being wrong*. Ideally, one would go back and re-run the same climate model using the observed change in forcing agents and see what the model predicts, but modelers don’t invest their resources re-running obsolete models that are decades old. Instead they estimate how much the projection would have changed.

        This process is scientifically reasonable. The fudging comes in when several marginally significant perturbations from expected forcing are added together to explain a failed projection.

        The most important use of projections is to show policymakers how much cooler it will be if they restrict CO2 emissions.

        * Suppose an economist made a projection for economic growth over the next two years based on the expectation that Congress would pass Trump’s tax cut and infrastructure spending plans. Would you say the economist’s projection was wrong if Congress failed to pass either program? What if the economist had warned that his projection would not be valid if this legislation did not pass?

      • The most important use of projections is to show policymakers how much cooler it will be if they restrict CO2 emissions.

        If the models were grounded in reality, they could show policymakers how much cooler it would be if they just left us the Hell alone.

        Almost every catastrophic prediction is based on climate models using the RCP 8.5 scenario (RCP = Representative Concentration Pathway) and a far-too-high climate sensitivity. RCP 8.5 is not even suitable for bad science fiction. Actual emissions are tracking closer to RCP 6.0. When a realistic transient climate response is applied to RCP 6.0 emissions, the warming tracks RCP 4.5… a scenario which stays below the “2° C limit,” possibly even below 1.5° C.

        https://www.carbonbrief.org/factcheck-climate-models-have-not-exaggerated-global-warming

        Note that the 2σ (95%) range in 2100 is 2° C (± 1° C)… And the model is running a little hot relative to the observations. The 2016 El Niño should spike toward the top of the 2σ range, not toward the model mean.

      • Frank…

        Your points about “projections” are valid in a scientific sense of discovery and understanding

        But when you are demanding massive changes to society based on the models, then you are making a forecast.

        If i make a bet on a stock based on Congress passing a law, I don’t get my money back if the law doesn’t pass.

        Real world forecasting is full of booby traps. I’ve done it for a long time on many issues. When you are wrong, you can’t go back and change the forecast. You can go back, re-develop your models, and issue a new forecast from that point going forward.

      • Yep. Using the models as heuristic tools is fine for science projects. Using them as weapons in the mother-of-all armed robberies is a whole different story.

      • Frank, I think your economic analogy is incomplete. Climate and economic models include multiple steps and (net) positive feedbacks. Climate models say that increased emissions => increased concentrations => higher temps => even higher emissions (CO2, water vapor, methane) => even higher concentrations, etc (repeat with diminishing returns).

        The Bush tax cut would be a better analogy because the proponents said that cutting taxes at the high end => more money to the wealthy => more investment => more jobs => more spending => even more jobs, (repeat), ultimately leading to an increase in tax revenue.

        In both cases, the initial flows, increased emissions and more cash to the wealthy (or less taken from them), were as large as promised, but the models drastically over-predicted the positive feedbacks, and, therefore, the end result.

      • Mary,
        “But when you are demanding massive changes to society based on the models, then you are making a forecast.”
        This is really silly stuff. Scientists don’t forecast future GHG levels, exactly because they depend on policy decisions. They say – if you do this, then this will happen. That’s all they can say. You’re saying – well if you can’t tell us what we’ll decide, then why should we listen to you before deciding?

      • Nick Stokes:

        Is that really you, or did a watermelon spring up under your bed?

        As you’ve so forcefully demonstrated above, this party line of yours only lasts until the thing that was supposed to happen (Scenario A temperatures), doesn’t, even when the associated policy decision the scientists warned against (business as usual) continues, in which case the party line literally does switch to “our model was a forecast of different future GHG levels.”

      • Frank:

        “Technically-speaking, the IPCC makes “projections”, not “forecasts”. . . The most important use of projections is to show policymakers how much cooler it will be if they restrict CO2 emissions.”

        That should have read “the most important abuse of projections . . .”

        Until the scientists have the courage to commit to definitive forecasts, and accept the verification and/or failure of those forecasts, they haven’t demonstrated that they know what they are talking about, and asking anyone to rely on their opinions is arrogant. But they want it both ways. They want society to have blind faith in these mere “projections” without first having to put their credibility on the line.

      • Kurt,
        “only lasts until the thing that was supposed to happen (Scenario A temperatures)”
        No-one said Scenario A is supposed to happen. I’m sure lots of people hope it doesn’t.

        Most scientific prediction is based on scenarios. It doesn’t make absolute predictions. If you drop a ball from the Empire State building, science predicts how long it will take to reach the ground. It doesn’t say that you will drop that ball, or that you should. That’s policy. Given a scenario, there is stuff you can predict. Without it, science can’t.

  27. I don’t understand this bit: “They essentially went from 0.7 +/- 0.3 [CMIP5] to 0.6 +/- 0.4 [CMIP3]. Progress shouldn’t consist of expanding the uncertainty… unless they are admitting that the uncertainty of the models has increased.” Assuming your estimates are correct, you have the direction of progress backwards. CMIP5 is more recent than CMIP3. What they’ve done is gone from a less precise but more accurate model to a more precise but less accurate model. That is semiprogress.

    • The CMIP3 model run was characterized as a successful 10-yr forecast… Hence progress over the unsuccessful CMIP5 forecast.

      I fully realize that the actual model progression is from CMIP3 to CMIP5.

      I often trade clarity for sarcasm.

    • The more you know, the more you realize how much you don’t know.

      The less you know, the more you think you know.

      https://tse2.mm.bing.net/th?id=OIP.DtoEmLWDPY-oGJfXP5rhTAEsEs&pid=15.1&P=0&w=300&h=300

      Uncertainty has increased in the models because, for a start, they don’t know what caused the pause. Cherry-picking ideas about what may have happened, and adjusting model data over individual short periods using hindcasts against already-known observed temperatures, doesn’t actually improve forecasts. An adjustment would only improve forecasts if it fit the entire timeline, but they mainly adjust short periods individually because they don’t know. Every time the models are adjusted, it takes decades of unchanged output afterward to confirm whether they were any good.

  28. It’s silly to claim a successful forecast when the forecast range is 0.2° to 1.0° C, a spread of 0.8° C in a single year. That’s almost the actual temperature change since 1850, a period of 166 years. Even dart-throwing monkeys can hit that gigantic bull’s-eye. Anybody with a bit of common sense will just say the range is huge because we can’t forecast temperatures.

  29. For the millionth time, stop using anomalies in temperature forecasts! Forecast a real number (or range) so everyone can agree whether you got it right.

    • That Hansen 1988 graph is a perfect example of why this matters. Remember Ira’s graph? Now compare it to Gavin’s above. Where you set that baseline matters: in Ira’s graph there’s no way that El Nino spike pushes GISTEMP up to Scenario B. But it shouldn’t matter; we’re talking about real physical phenomena, which have real physical values.
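      The baseline effect can be sketched with a toy example. The trend, years and baseline periods below are all invented for illustration; the only point demonstrated is that switching baselines shifts every anomaly by the same constant, which moves the whole observation curve up or down relative to any scenario line.

```python
import numpy as np

# Invented absolute annual temperatures (deg C), 1988-2017, with a linear trend
years = np.arange(1988, 2018)
temps = 14.0 + 0.018 * (years - 1988)

# The same data expressed as anomalies against two different baseline periods
base_early = temps[(years >= 1988) & (years <= 1997)].mean()
base_late = temps[(years >= 2001) & (years <= 2010)].mean()
anom_early = temps - base_early
anom_late = temps - base_late

# Every point shifts by the same constant offset; which scenario line the
# observations appear to "hit" depends on that choice, not on the physics
offset = base_late - base_early
print(f"baseline shift: {offset:.3f} deg C")
```

      Reporting the absolute values (or at least stating the baseline period) removes the ambiguity, which is the commenter’s point.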

    • Forecasters should issue a new forecast every year. Very specific forecasts…what they are forecasting and how it will be verified.

      Then we can evaluate easily and have a new sample every year.

      Without that, it’s just dart throwing monkeys.

  30. I often think climate science would be a great field for a young person to get into if it weren’t already so politicized. So much to learn. Of course that means there are prominent people who haven’t learned as much as they think they have, or pretend they have.

    • Part of the trouble with the science is that young people coming through the establishments that provide them with the knowledge to enter the field have their minds corrupted to the point that there is no notion the currently accepted climate-warming line might be wrong. Progressing a field requires inquisitive, open minds.

    • The surface data sets, increasingly “non-data,” should be scrapped as an observation tool for global warming and for validating climate models.

      Why?

      The huge problems with them are obvious, but the main point is that the theory involves warming the atmosphere, not the surface. The models are trying to resolve how much the atmosphere will warm, not the surface. This con trick is happening now, comparing models with surface data.

      Naturally, short-wave radiation (SWR) warms the surface, because it passes through the atmosphere almost as if through a vacuum and is absorbed by the ground and oceans. Warming at the surface more than in the atmosphere only indicates that the source of warming is the ground and ocean, warming the air above. That is not evidence of a greenhouse effect from increasing CO2, and it is why no signal has been verified out of the noise. This points to SWR being the reason for the increased warming: with the AMO / ENSO, declining global cloud levels, decreasing RH levels and stations recording increased sunshine hours, this becomes obvious.

      Altering data and infilling, producing more warming at the surface than there actually was, actually falsifies the theory further, since there should be more warming in the atmosphere.

      • This was supposed to be a general post, not a reply to one.

        “…stations recording increased sunshine hours…”

  31. Richard Verney, you might find this interesting:

    Data Tampering At USHCN/GISS

    “The measured USHCN daily temperature data shows a decline in US temperatures since the 1930s. But before they release it to the public, they put it through a series of adjustments which change it from a cooling trend to a warming trend.”

    https://stevengoddard.wordpress.com/data-tampering-at-ushcngiss/

    Here he has GHCN software you can download to make better use of the NOAA temperature data:

    “I have released a new version of my GHCN software. It is much faster at obtaining NOAA data and should work on Mac, Linux or Cygwin on Windows. Download the software here.”

    https://realclimatescience.com/ghcn-software/

  32. David, I’ve brought this criticism up many times before and never seen an answer: why is it that the climate science community accepts the idea of an average (P50, I think you call it) behavior of these models? It’s a very basic mistake to average two or more distinctly different things, and these models are different.

    If you had hundreds of runs using the same model, taking the average might have some real meaning, but these are different models. Why does the climate science community continue to tolerate this practice?

    • It’s how probability distributions are run. Before we drill wells, we build a probability distribution by inputting the minimum and maximum cases for a variety of reservoir parameters (porosity, permeability, area, thickness, drive mechanism, etc.). The computer then runs a couple of thousand Monte Carlo simulations and we get a probability distribution from P90 (minimum) to P50 (mean, or most likely) to P10 (maximum) of the resource potential (bbl oil, mcf gas).
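      The P90/P50/P10 workflow can be sketched in a few lines. This is a minimal illustration, not real reservoir engineering: the parameter ranges, the uniform sampling, and the stripped-down volumetric formula are all assumptions invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # "a couple of thousand Monte Carlo simulations"

# Hypothetical min/max ranges for a few reservoir parameters
area_acres = rng.uniform(500.0, 1500.0, n)   # productive area
net_pay_ft = rng.uniform(20.0, 80.0, n)      # reservoir thickness
porosity = rng.uniform(0.10, 0.25, n)        # pore-space fraction

# Simplified volumetric oil-in-place estimate: 7758 bbl per acre-foot of
# pore space, ignoring water saturation, formation volume and recovery factors
oil_in_place = 7758.0 * area_acres * net_pay_ft * porosity  # barrels

# Industry convention: P90 is the low case (90% chance of at least this much),
# P10 the high case -- so P90 is the 10th percentile of the distribution
p90, p50, p10 = np.percentile(oil_in_place, [10, 50, 90])
print(f"P90 {p90:,.0f}  P50 {p50:,.0f}  P10 {p10:,.0f} bbl")
```

      The same machinery applies to any product of uncertain inputs; only the percentile labeling convention (P90 as the low case) is industry-specific.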

    • With the climate models, sometimes they run an ensemble of the same RCP scenario and sometimes they run multiple RCP scenarios, like this:

      While there isn’t a lot of divergence among the RCPs yet, the temperature observations are clearly tracking closer to RCP 2.6 than RCP 8.5.

    • Mr. Layman here.
      (For this comment maybe I should have put that in all caps?)
      All those different models, whatever the method used, are trying to model the same thing. Reality. Future “reality” at that.
      Comparing real observations of present reality with what models said observations should be speaks to the validity of the models.
      If a model got it wrong, learn and adjust the model. Don’t adjust the data to cover your behindcast. 8-)

  33. Now Nick is wading into lies, since it was HE who whined about the land/ocean argument that no one had raised or disputed. He does it to deflect from his false claims about the NCAR chart, which was based on J. Murray Hamilton's work. He did it to deflect from the animated GISS charts Lars posted.

    The NCAR chart exists, it was labeled as being from NCAR, you were repeatedly shown it, and that NAS chart is very similar to it. You have yet to prove otherwise.

    Nick writes,

    “I said that there was no NCAR source, so we can’t work out just what is being plotted. I said that it came from a Newsweek story. Heller says that he got it instead from another old newspaper. So? Still no NCAR source.”

    Funny that many newspapers posted that chart in 1974, long before Newsweek did, with the NCAR attribution right at the bottom of the chart. The person who produced the data for it was named to you, which you COMPLETELY ignored, since that destroyed your entire babble about whether the NCAR-based chart exists.

    You drone on with more misleading stuff,

    “And he says indignantly, well of course it’s land only, it’s all they had. What a defence! He wasn’t telling you that on his graph. He’s claiming that the difference between a 1974 plot of land only (probably NH, despite the newspaper heading) and 2017 land/ocean is due to GISS fiddling. Never tells you that they are just quite different things being plotted.”

    You originally stated differently,

    “Steven Goddard produces these plots, and they seem to circulate endlessly, with no attempt at fact-checking, or even sourcing. I try, but it’s wearing. The first GISS plot is not the usual land/ocean data; it’s a little used Met Stations only.”

    Tony upon reading your dishonest red herring replied,

    “The amount of misinformation in that claim is breathtaking. Nobody attempted to do land/ocean plots in 1974, because they weren’t willing to make up fake temperature data like modern climate fraudsters.”

    It was YOU who brought up something that didn’t exist in 1974; no one HERE besides you said anything about land/ocean data. Tony NEVER said the 1974, 1981 or 2001 charts were land/ocean data.

    The 2001 chart Tony posted shows only land data on it.

    Here is the animated chart you originally responded to,

    BOTH charts, for the years 2001 and 2015, are straight off the GISS website; Tony simply created the animation to show the obvious changes. They are LAND DATA ONLY for those two years. It is ALL GISS.

    NOTHING about land/ocean appears on those two charts Lars P. posted on September 26, 2017 at 12:43 pm.

    When are you going to stop LYING?

    Then Nick tries to lie about what I said, since I NEVER once claimed they were land/ocean-based charts for NCAR, NAS or any pre-2001 chart in this thread.

    Nick writes,

    “And sunsettommy still doesn’t even try to figure out what the GISS plots are. The GISS we have been following and discussing for years is GISS Land/Ocean.”

    Lars P posted the animated chart; you replied to it with the land/ocean drivel that only YOU brought up over the chart Lars posted.

    Nick’s first comment about Tony’s charts (which really came from GISS):

    “The first GISS plot is not the usual land/ocean data; it’s a little used Met Stations only, essentially an update of a 1987 paper.”

    The GISS charts NEVER say “land/ocean” at all, just “Meteorological Stations”; it says so right on the FREAKING charts!

    “Those GISS plots marked Met Stations are something different – he still seems to have no idea what. They are a continuation of the Met Stations index of the paper Hansen and Lebedeff, 1987”.

    No, the two charts came from the GISS webpages, as shown to you by Lars P. I posted the 2001 chart, which is from GISS themselves and identical to what Tony used for the animation.

    You brought up the Hansen 1987 paper as if you had something relevant to say, but no one cares about it; it is IRRELEVANT, since Tony got the charts from the GISS website.

    “They used Met stations data to extrapolate over the whole globe. It isn’t land/ocean, and it isn’t land only. And most people think it is no longer a good idea, and no-one else does it. It is rarely referenced. Again, Heller isn’t going to tell you any of this.”

    Tony used the GISS-data-based charts, which you NEVER refuted a single time! The ones Tony used were always LAND only.

    Again I quote Tony,

    “This was the GISS web page in 2005. Top plot was “Global Temperature (meteorological stations.) No ocean temperatures. The 2001 GISS web page had the same thing.”

    You need to stop LYING!
