UAH and ENSO (Now Includes July and August Data)

Guest Post by Werner Brozek, Edited by Just The Facts:

WoodForTrees.org and NOAA Climate Prediction Center – Click the pic to view at source

First of all, I wish to draw your attention to the graph of UAH6.0beta5 with a 12-month mean. (Yes, WoodForTrees.org has finally been updated, at least for UAH6.0beta5, Hadcrut4.4 and WTI!)

On the graph, it can be seen that the mean 2016 values are roughly even with the 1998 values to this point. The average for the first 8 months of 1998 was 0.574 and the average for the first 8 months of 2016 was 0.566. The final average for 1998, with all 12 months, was 0.484, so further drops are needed in 2016 if the 1998 record is to stand. In 1998, the January-to-December period was the highest 12-month period to 3 decimal places. That 0.484 average was narrowly beaten by the 12-month period from September 2015 to August 2016, whose average was 0.496. However, since the margin of error for a 12-month UAH6.0beta5 average is 0.1, the extra 0.012 is well within the margin of error, so it would be more accurate to say that the latest average is statistically tied with 1998. Whatever happens for the rest of the year, I believe it is safe to say that 2016 will be statistically tied with 1998, since the difference will not exceed 0.1.
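The "statistical tie" arithmetic can be sketched in a few lines (a minimal illustration; the averages and the 0.1 margin of error are the values quoted above, not derived here):

```python
# Averages quoted in the post (UAH6.0beta5 anomalies, deg C).
avg_1998 = 0.484    # Jan-Dec 1998, the previous record
avg_latest = 0.496  # Sep 2015 - Aug 2016
margin = 0.1        # stated margin of error for a 12-month average

def statistically_tied(a, b, margin):
    """Two averages are 'tied' when their difference is within the margin."""
    return abs(a - b) <= margin

# The 0.012 gap is well inside the 0.1 margin, so the periods are tied.
print(statistically_tied(avg_1998, avg_latest, margin))  # True
```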

Now for some comments on the ENSO numbers, which are from NOAA’s Climate Prediction Center. Around 1998, El Niño conditions lasted 13 months; in the present case they lasted 19 months. The highest average value was 2.3 in both cases. For every month from January to August, the 1998 numbers were lower than the 2016 numbers. The difference was small most of the time, with the exception of the July average: in 1998 it was in La Niña territory at -0.7, while this time it is in neutral territory at -0.3, which is 0.4 warmer. There was much speculation as to whether we would soon have a strong La Niña. At the moment, the drop in the ENSO numbers has greatly slowed: they fell only from -0.6 to -0.7 over about 6 weeks. I still expect the average UAH6.0beta5 anomalies to drop over the rest of 2016, although they rose over the last 2 months. This contest between 1998 and 2016 may go right down to the wire.

In the sections below, we will present you with the latest facts. The information will be presented in two sections and an appendix. The first section will show for how long there has been no statistically significant warming on several data sets. The second section will show how 2016 so far compares with 2015 and the warmest years and months on record so far. For three of the data sets, 2015 also happens to be the warmest year. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data. All data sets except Hadcrut4.4 go to August.

Section 1

For this analysis, data was retrieved from Nick Stokes’ Trendviewer available on his website. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 0 and 23 years according to Nick’s criteria. Cl stands for the confidence limits at the 95% level.

The details for several sets are below.

For UAH6.0: Since August 1993: Cl from -0.006 to 1.810 This is 23 years and 1 month.
For RSS: Since December 1993: Cl from -0.008 to 1.746 This is 22 years and 9 months.
For Hadcrut4.4: The warming is statistically significant for all periods above three years.
For Hadsst3: Since December 1996: Cl from -0.022 to 2.162 This is 19 years and 9 months.
For GISS: The warming is statistically significant for all periods above three years.
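As a rough illustration of what such a calculation involves, here is a sketch of a plain least-squares trend with a naive 95% confidence interval, applied to a synthetic series. Nick’s criteria also correct for autocorrelation in the monthly data, which widens the interval considerably, so this toy version should not be expected to reproduce the numbers above:

```python
import numpy as np

def trend_ci(anoms, months_per_year=12):
    """Least-squares trend in deg C per century with a naive 95% CI.

    A simplified sketch only: Nick Stokes' Trendviewer also adjusts the
    interval for autocorrelation, so this naive version declares
    'significant warming' far more readily than his criteria do.
    """
    y = np.asarray(anoms, dtype=float)
    t = np.arange(len(y)) / months_per_year      # time in years
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(np.sum(resid**2) / (len(y) - 2) / np.sum((t - t.mean()) ** 2))
    half = 1.96 * se
    return 100 * slope, 100 * (slope - half), 100 * (slope + half)

# Hypothetical series: a 2 C/century trend plus an annual cycle.
months = np.arange(240)                          # 20 years of monthly data
series = 0.02 * months / 12 + 0.1 * np.sin(2 * np.pi * months / 12)
slope, lo, hi = trend_ci(series)
print(f"trend {slope:.2f} C/century, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Since the lower bound here is positive, this synthetic series would count as significant warming; the data sets above with a negative lower bound would not.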

Section 2

This section shows data about 2016 and other information in the form of a table. The table lists the five data sources along the top and again along the bottom, so they should be visible at all times. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.

Down the columns are the following:
1. 15ra: This is the final ranking for 2015 on each data set.
2. 15a: Here I give the average anomaly for 2015.
3. year: This indicates the warmest year on record so far for that particular data set. Note that the satellite data sets have 1998 as the warmest year and the others have 2015 as the warmest year.
4. ano: This is the average of the monthly anomalies of the warmest year just above.
5. mon: This is the month where that particular data set showed the highest anomaly prior to 2016. The months are identified by the first three letters of the month and the last two numbers of the year.
6. ano: This is the anomaly of the month just above.
7. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.
8. sy/m: This is the number of years and months for row 7.
9. Jan: This is the January 2016 anomaly for that particular data set.
10. Feb: This is the February 2016 anomaly for that particular data set, etc.
17. ave: This is the average anomaly of all months to date.
18. rnk: This is the rank that each particular data set would have for 2016, without regard to error bars and assuming no changes to the current average anomaly. Think of it as an update 35 minutes into a game.

Source UAH RSS Had4 Sst3 GISS
1.15ra 3rd 3rd 1st 1st 1st
2.15a 0.261 0.358 0.746 0.592 0.86
3.year 1998 1998 2015 2015 2015
4.ano 0.484 0.550 0.746 0.592 0.86
5.mon Apr98 Apr98 Dec15 Sep15 Dec15
6.ano 0.743 0.857 1.010 0.725 1.10
7.sig Aug93 Dec93 n/a Dec96 n/a
8.sy/m 23/1 22/9 n/a 19/9 n/a
9.Jan 0.540 0.665 0.909 0.732 1.15
10.Feb 0.832 0.977 1.074 0.611 1.32
11.Mar 0.734 0.841 1.070 0.690 1.28
12.Apr 0.715 0.756 0.918 0.654 1.08
13.May 0.545 0.524 0.690 0.595 0.93
14.Jun 0.339 0.467 0.736 0.622 0.80
15.Jul 0.389 0.469 0.736 0.670 0.85
16.Aug 0.435 0.458 n/a 0.661 0.98
17.ave 0.566 0.645 0.874 0.651 1.05
18.rnk 1st 1st 1st 1st 1st
Source UAH RSS Had4 Sst3 GISS
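Row 17 is just the mean of the monthly values in rows 9 to 16, which is easy to check. In this sketch (values copied from the table), the recomputed UAH, RSS and GISS means match the "ave" row, while Hadcrut4 and Hadsst3 come out a few thousandths higher, presumably because the source data carry more precision than the rounded table entries:

```python
# Recompute row 17 ("ave") from the monthly anomalies in rows 9-16.
# Values copied from the table above; Hadcrut4 has no August value yet.
months_2016 = {
    "UAH":  [0.540, 0.832, 0.734, 0.715, 0.545, 0.339, 0.389, 0.435],
    "RSS":  [0.665, 0.977, 0.841, 0.756, 0.524, 0.467, 0.469, 0.458],
    "Had4": [0.909, 1.074, 1.070, 0.918, 0.690, 0.736, 0.736],  # Jan-Jul only
    "Sst3": [0.732, 0.611, 0.690, 0.654, 0.595, 0.622, 0.670, 0.661],
    "GISS": [1.15, 1.32, 1.28, 1.08, 0.93, 0.80, 0.85, 0.98],
}
for name, vals in months_2016.items():
    print(f"{name}: {sum(vals) / len(vals):.3f}")
```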

If you wish to verify all of the latest anomalies, go to the following:
For UAH, version 6.0beta5 was used.
http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/tltglhmam_6.0beta5.txt
For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt
For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.4.0.0.monthly_ns_avg.txt
For Hadsst3, see: https://crudata.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat
For GISS, see:
http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2016 in the form of a graph, see the WFT graph below.

WoodForTrees.org – Paul Clark – Click the pic to view at source

As you can see, all lines have been offset so they all start at the same place in January 2016. This makes it easy to compare January 2016 with the latest anomaly.
I am very happy to note that WFT has been updated for the latest Hadcrut4 version as well as allowing UAH6.0beta5 to be seen. The thick double line is the WTI which shows the average of RSS, UAH6.0beta5, Hadcrut4.4 and GISS.

Appendix

In this appendix, we summarize the data for each set separately.

UAH6.0beta5

For UAH: There is no statistically significant warming since August 1993: Cl from -0.006 to 1.810. (This is using version 6.0 according to Nick’s program.)
The UAH average anomaly so far for 2016 is 0.566. This would set a record if it stayed this way. 1998 was the warmest at 0.484. Prior to 2016, the highest ever monthly anomaly was in April of 1998 when it reached 0.743. The average anomaly in 2015 was 0.261 and it was ranked 3rd.

RSS

For RSS: There is no statistically significant warming since December 1993: Cl from -0.008 to 1.746.
The RSS average anomaly so far for 2016 is 0.645. This would set a record if it stayed this way. 1998 was the warmest at 0.550. Prior to 2016, the highest ever monthly anomaly was in April of 1998 when it reached 0.857. The average anomaly in 2015 was 0.358 and it was ranked 3rd.

Hadcrut4.4

For Hadcrut4: The warming is significant for all periods above three years.
The Hadcrut4 average anomaly so far is 0.874. This would set a record if it stayed this way. Prior to 2016, the highest ever monthly anomaly was in December of 2015 when it reached 1.010. The average anomaly in 2015 was 0.746 and this set a new record.

Hadsst3

For Hadsst3: There is no statistically significant warming since December 1996: Cl from -0.022 to 2.162.
The Hadsst3 average anomaly so far for 2016 is 0.651. This would set a record if it stayed this way. Prior to 2016, the highest ever monthly anomaly was in September of 2015 when it reached 0.725. The average anomaly in 2015 was 0.592 and this set a new record.

GISS

For GISS: The warming is significant for all periods above three years.
The GISS average anomaly so far for 2016 is 1.05. This would set a record if it stayed this way. Prior to 2016, the highest ever monthly anomaly was in December of 2015 when it reached 1.10. The average anomaly in 2015 was 0.86 and it set a new record.

Conclusion

Considering the ENSO numbers and the length of time they stayed high, it would appear that 2016 should be significantly higher than 1998. In addition, there is significantly more carbon dioxide in the air now than in 1998. For example, according to The Economist: “The world added roughly 100 billion tonnes of carbon to the atmosphere between 2000 and 2010. That is about a quarter of all the CO₂ put there by humanity since 1750.” Thus, if 2016 edges out 1998 on UAH6.0beta5, how much of the reason should be attributed to the length and strength of the El Niño, and how much can be attributed to additional carbon dioxide in the atmosphere?


156 thoughts on “UAH and ENSO (Now Includes July and August Data)”

      • Why does the (whatever is plotted) suddenly have a region of infinite curvature every now and then ??

        No real Physical function ever has infinite curvature.

        Yes it can do something that may take as long as one atto-second to do, but it never does anything in zero time.

        And then why also does it have long stretches of time when the curvature is zero ??

        No real physical function consists of periods of time with zero curvature, and periods of time with infinite curvature and NO other values for the graph curvature.

        G

      • Werner Brozek “Gasundheight” probably

        Actually “gesundheit” is the correct German spelling for the word wishing good health to someone who sneezed.

        On a global warming site? How else would you measure the climate impact of the extra CO2 emitted by a sneeze?

  1. “….how much of the reason should be attributed to the length and strength of the El Niño (99.9%) and how much can be attributed to additional carbon dioxide in the atmosphere (0.1%)?”

    And what about minor changes in the 90 W/m^2 irradiance aphelion to perihelion & 638 W/m^2 fluctuation on 40 N horizontal surface solstice to solstice?

    • And what about minor changes in the 90 W/m^2 irradiance aphelion to perihelion & 638 W/m^2 fluctuation on 40 N horizontal surface solstice to solstice?

      Could you please elaborate on “changes in the 90 W/m^2”? What I mean is what was the exact value in 1998 and what was the exact value in 2016?

  2. A very good read. As you say, it is looking close at the moment, but when you take into account the bigger El Nino event and just how much extra CO2 has been added in the past 18 years, how anyone could say this is CO2 driven is beyond me.

    IF 2016 does break the record, I can imagine the headlines already. Somehow I don’t think they will mention by how much it beat the record, what weather events happened, or just what happened over the previous 18 years. It will be “2016 shatters temperature record!”

    • It will be “2016 shatters temperature record!”

      If that statement is made, it will be made with reference to GISS which could “shatter” its 2015 record by more than 0.1. However neither UAH nor RSS will shatter their last record by more than 0.1 unless an adjusted version comes out for RSS between now and the end of the year.

      • Alas, they won’t discuss the different measurements, the differences and margin of errors or even how much it was “shattered” by. You know it :-)

    • It’s close enough that the ‘climate team’ wrote the articles six months ago. Every month since then, the climate team rehashes what their news announcement will announce.

      The Arctic is recovering, what else do they have to jump around for? Hottest temperatures, after cooling the past, ignoring temperature errors, jacking up their SST, sacrificing a weather event on their CO2 altar.

    • Mostly it makes me wonder about GISS and HADCRUT and how much they have been “corrected”.

      Good question!
      I have five different anomalies for 2012. The first from Hadcrut3, the next from Hadcrut4, the next from Hadcrut4.2, the next from Hadcrut4.3, and the last from Hadcrut4.4. At a certain point in time, their numbers were, respectively, 0.403, 0.433, 0.448, 0.467 and finally 0.470. Note that the later one is always higher than the earlier one. This begs the question as to what HadCRUT knew in 2014 and 2015 about the 2012 anomaly that they did not know a year earlier.

      • Are we able to date each of these revisions?

        Major changes seem to come once a year. Is the date important or is verification of the numbers important? The first and last numbers can be verified in these two places now:

        https://crudata.uea.ac.uk/cru/data/temperature/HadCRUT3-gl.dat
        https://crudata.uea.ac.uk/cru/data/temperature/HadCRUT4-gl.dat

        Over the years, I have a number of posts on the changes.
        For the 0.433 value, see:
        https://wattsupwiththat.com/2013/03/05/has-global-warming-stalled-now-includes-january-data/

        On this post,
        https://wattsupwiththat.com/2014/10/05/is-wti-dead-and-hadcrut-adjusts-up-again-now-includes-august-data-except-for-hadcrut4-2-and-hadsst3/#comment

        I have this comment:
        “And how do you suppose the last 16 years went prior to this latest revision? Here are the last 16 years counting back from 2013. The first number is the anomaly in Hadcrut4.2 and the second number is the anomaly in Hadcrut4.3: 2013(0.487, 0.492), 2012 (0.448, 0.467), 2011 (0.406, 0.421), 2010(0.547, 0.555), 2009 (0.494, 0.504), 2008 (0.388, 0.394), 2007(0.483, 0.493), 2006 (0.495, 0.505), 2005 (0.539, 0.543), 2004(0.445, 0.448), 2003 (0.503, 0.507), 2002 (0.492, 0.495), 2001(0.437, 0.439), 2000 (0.294, 0.294), 1999 (0.301, 0.307), and 1998(0.531, 0.535). Do you notice something odd? There is one tie in 2000. All the other 15 are larger. So in 32 different comparisons, there is not a single cooling.”

      • “This begs the question as to what HadCRUT knew in 2014 and 2015 about the 2012 anomaly that they did not know a year earlier.”
        Adjustment rarely makes changes to recent years. What caused those numbers to change is a revision of what happened in the anomaly base period of 1961-90 relative to now. It is the accumulation of adjustment information in the years during and since that period that changes those numbers.

      • “This begs the question as to what HadCRUT knew in 2014 and 2015 about the 2012 anomaly that they did not know a year earlier.”

        This is trivial, and it’s funny that so-called smart skeptics can’t figure it out.

        First, note that very few skeptics actually doubt their “analysis” that something must be wrong with these changes. It’s easy to fool yourself into believing that something strange must be going on.

        But, a little knowledge can correct this.

        1. Prior to Climategate, CRU applied what they termed “value added” adjustments. In fact, the reason why McIntyre asked for the data was to SHOW that CRU wasn’t making that many changes.

        2. In Climategate we asked for the raw data, about 95% of which was taken from GHCN and the rest from national weather services (NWS). Jones refused to release the NWS data because it was covered by agreements, so we FOIAed the agreements.

        3. After Climategate, Jones released the source data, with a couple of exceptions, for example Poland, which refused to allow him to post their data.

        4. In addition, Jones moved to using only NWS-adjusted data. That is, they ingest data that has been adjusted by individual countries. Canada has its own series, as do France, Italy, Croatia, and so on, so instead of adjusting data themselves, CRU relies on national weather service adjusted data.

        The upshot is this: if an NWS changes its official record, the data CRU reads will change, and their average will change. They have no control over this. So country X builds a national series. If one year from now they add or drop stations from that average, the effect will show up in CRU, because CRU merely averages the data output by others. The same goes for GISS: they import adjusted data from NOAA.

        AND the same thing happens to us at Berkeley. Month in and month out the station count varies. In the beginning we had something like 36K stations. As countries add their raw data to repositories, the number of stations changes. This can happen because of data recovery, or because a national weather service decides to stop reporting raw data to GHCN Daily, or because they fix metadata and stations change.

        The code for CRU and GISS is pretty clear on what data is read in and what is done to it.

      • What caused those numbers to change is a revision of what happened in the anomaly base period of 1961-90 relative to now.

        In this article:
        https://wattsupwiththat.com/2014/10/05/is-wti-dead-and-hadcrut-adjusts-up-again-now-includes-august-data-except-for-hadcrut4-2-and-hadsst3/

        I have this quote:
        “From 1997 to 2012 is 16 years. Here are the changes in thousandths of a degree with the new version of HadCRUT4 being higher than the old version in all cases. So starting with 1997, the numbers are 2, 8, 3, 3, 4, 7, 7, 7, 5, 4, 5, 5, 5, 7, 8, and 15. The 0.015 was for 2012.”

        If the base period change was the only cause for a change, then I would think that all numbers would be the same. Perhaps some change in base period numbers was responsible for some of the change, but there had to be much more to it than that in order for the increase to vary from 2/1000 to 15/1000.

      • Nick Stokes: . What caused those numbers to change is a revision of what happened in the anomaly base period of 1961-90 relative to now. It is the accumulation of adjustment information in the years during and since that period that changes those numbers.

        Nick, if I understand you correctly, the implication is that the 1961-90 data are being systematically adjusted downward. Which leaves Werner’s concerns essentially unchanged; only the time periods are changed, to confuse the innocent. : > )

      • “Nick, if I understand you correctly, the implication is that the 1961-90 data are being systematically adjusted downward.”
        In fact, when Werner raised this previously, as in his links, Tim Osborn commented. And he pointed out that most of the change is not due to adjustments at all, but to the inclusion of new data. There had been criticism of HADCRUT’s coverage of Arctic regions particularly, and Cowtan and Way showed that it had substantial effects. Recent versions have included a lot of new stations.

      • If one year
        from now, they add or drop stations from that average.. The effect will show up in CRU
        because CRU merely average the data output by others.

        Fair enough from Stephen Mosher.

        There had been criticism of HADCRUT’s coverage of Arctic regions particularly

        Fair enough from Nick Stokes.

        So they found out a bunch of new things about 2012 in 2014. But in 2015, more new things are apparently discovered or changed that were not discovered or changed in 2014. Can we expect additional polar temperatures to be discovered each year over the next 10 years that will change the 2012 anomaly each year and in an upward direction?

      • It is the accumulation of adjustment information in the years during and since that period that changes those numbers…..

        How long will we have to wait to know what the temperature is now?

    • “Fair enough from Nick Stokes.
      So they found out a bunch of new things about 2012 in 2014. But in 2015, more new things are apparently discovered or changed that were not discovered or changed in 2014. Can we expect additional polar temperatures to be discovered each year over the next 10 years that will change the 2012 anomaly each year and in an upward direction?”

      Actually, I misspoke there. Tim Osborn talked about Arctic temperatures, but the changes were mostly elsewhere:

      Principal changes are the addition of new station data (e.g. 10 new series from Spain, 250 from China, etc.), extension of existing station data to include more recent values (e.g. 518 series from Russia were updated, etc.), or the replacement of station data with series that have been the subject of homogeneity analysis by other initiatives.

      And there were stations added in each new version – detailed here, with dates. But generally, yes. If new data becomes available, should they not use it? What would you say if they didn’t?

      Though I think this rush was a bit of catch-up. They hadn’t updated for a while, and gaps were showing. So change will probably slow.

      • If new data becomes available, should they not use it? What would you say if they didn’t?

        If the new data ALWAYS shows the most recent years getting warmer than before, perhaps they should not use it to avoid perceptions that they are just looking for things to “bust pauses”.

      • Werner Brozek said:

        “If the new data ALWAYS shows the most recent years getting warmer than before, perhaps they should not use it to avoid perceptions that they are just looking for things to “bust pauses”.”

        Are you serious? That’s how we should do science? Since when was how something might be perceived a part of the scientific method?

      • Since when was how something might be perceived a part of the scientific method?

        Since climategate, and specifically for climate science. Their emails showed that they were not following the scientific method. Specifically, what did McIntyre have to go through to access their raw data to verify things? They have to earn back a lot of trust before suggesting that all results really always point in the direction that Hansen is pushing so hard that he would be willing to get arrested for it.

      • They can use, of course. However, they need to disclose more information when adding or adjusting. A comment at https://wattsupwiththat.com/2014/10/05/is-wti-dead-and-hadcrut-adjusts-up-again-now-includes-august-data-except-for-hadcrut4-2-and-hadsst3/#comment-1757103 shows something interesting about the China dataset added, as an example.

        The sad fact is that the hadcrut graphs above, not overly far away from the satellites now, will be revised up for 2016 over the next 4 years to show it even hotter. There is no prospect of them correcting down.

        As these land “measurements” depart further and further from satellites, people will just become more and more suspicious.

      • Nick, what do you think the chances are that in 3-4 years HADCRUT has added more data (we can discuss what data and the methods used to smooth it at a later stage) to show that 2016 was hotter than they show it now?

        It has already been demonstrated what happened to 2012 in just 3 years.

      • “we can discuss what data and the methods used to smooth it at a later stage”
        HADCRUT doesn’t smooth or homogenise data, though some of the data is homogenised before they get it. I think there just isn’t that much data of sufficient quality still available.

      • “However, they need to disclose more information when adding or adjusting.”
        They do provide a great deal of detail. The China adjustments to HAD 4.3.0 are described thus:

        China
        The two subsets of homogenized series came from different sources. The 18 long homogenized monthly mean series (Cao et al., 2013, DOI: 10.1002/jgrd.50615) have been acquired through personal communication and added to the CRUTEM archive. Due to the extensive work that has gone into the construction and homogenization of the 18 series, priority was given to these over any matching series in the larger 380-subset (below,) when the merger with CRUTEM was conducted. The longest series is that for Shanghai which begins in 1873.
        The second, much larger subset of 380 homogenized series, most of which cover the period 1950s to 2012 (none start earlier than 1951), have been prepared by Xu et al. (2013, doi:10.1002/jgrd.50791). The daily series, acquired through personal communication, were used to produce monthly-mean series. Before any merger between the two new subsets (combined) and the CRUTEM archive was actually undertaken, checks were made to see if any of the new series received had data in the existing CRUTEM archive under different ID codes. As a result of these checks, six of the existing CRUTEM series were combined (merged) with the new matching series (under the new ID codes). The final merger with CRUTEM archive, using blanket overwrite, resulted in there being 419 Chinese series in CRUTEM. This is a significant increase of 250 series from the previous update in 2013.
        Checks were made on any post-merger series that had retained original CRUTEM (pre-merger subset) blocks of data – that is, before the start of the homogenized series being added. As a result of these checks, inhomogeneity was apparent in five series. Three of these were corrected via the use of the same adjustments used by the creators of the homogenized series and two series received FRY labels that effectively inactivate their pre-homogenized series section.
        During the preceding operations, it was noted that the Station series for Hong Kong (one of the 18 long series), under the ID code 450050, no longer receives routine updates from monthly circulations. The ID code for Hong Kong is now 450040. The CRUTEM station ID for Hong Kong was thus changed to the new value.

        And that is just part of it.

        But I see Tim in a follow-up also explained why the additions generally increase the trend. They are land data, and land is warming faster than the globe as a whole. In HADCRUT, an empty grid cell is left out of the average, which has the effect of assigning it the average of the rest, which is mostly ocean. I don’t think that is a good way of doing it, but sceptics often do. Whenever an empty cell is replaced by genuine land obs, the trend goes up. HADCRUT’s trend had been lower than other surface measures for a long time, probably because of this treatment of missing cells.
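The empty-cell point can be seen with a toy average: leaving a cell out of the mean is arithmetically the same as filling it with the mean of the cells that are covered.

```python
# Toy illustration: omitting an empty grid cell from a simple average is
# equivalent to filling that cell with the mean of the covered cells.
# (Hypothetical anomaly values; real HADCRUT cells are area-weighted.)
covered = [0.2, 0.3, 0.9]          # covered cells, mostly ocean
mean_covered = sum(covered) / len(covered)

# 'Fill' one empty (land) cell with the covered-cell mean, then average.
filled = covered + [mean_covered]
mean_filled = sum(filled) / len(filled)

print(abs(mean_covered - mean_filled) < 1e-12)  # the two means agree
```

So a missing cell implicitly inherits the mean of whatever is covered, and if the missing cells are mostly fast-warming land, replacing them with real observations raises the average.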

      • Thanks, Nick. You clarified for me that HADCRUT is effectively just given information and therefore can’t be responsible for the methods used to obtain data. So, I shouldn’t be too hard on them :-)

        However, does HADCRUT have any control over if they use said data or not? For example, the comments at https://wattsupwiththat.com/2014/10/05/is-wti-dead-and-hadcrut-adjusts-up-again-now-includes-august-data-except-for-hadcrut4-2-and-hadsst3/#comment-1757103 raise some interesting questions and concerns about the Chinese dataset. What are your thoughts on that?

      • “raise some interesting questions and concerns about the Chinese dataset”
        So what are they? JTF is pretty unclear. Do you know?

        He seems to complain that, of 18 stations, 28 adjustments were made. 24 were for known reasons: station move and instrument change. 4 adjustments (out of 18 stations) were for unknown reasons. Before 1950, only 5 adjustments in total were made. Is this supposed to be unreasonable? What else were they supposed to do?

        You’re right that UEA/Hadley hasn’t been out there for 150 years reading thermometers around the world. They have to rely on other people.

      • Hi Nick,

        Last reply till tonight, as I have to work ;-)

        However, as per your initial comment and as Tim Osborn commented, among principal changes were 250 new series station data from China. However, as justthefactswuwt commented, the referenced papers were about reconstructing existing data, rather than adding a new data series. So, the implication that 250 new data series from China was new data added seemed to be at odds with the paper mentioned. As Tim Osborn never replied, it remains open, I guess? Until you have some insights, or perhaps someone else?

      • “However, as justthefactswuwt commented, the referenced papers were about reconstructing existing data”
        Yes, the papers did a reconstruction. But HADCRUT used the station data. Read the part about how they cross-checked station IDs, processed the daily data etc.

      • John,
        HADCRUT in fact give very complete information for each of their versions. There are not only the change notes, but you can also download a readable set of data for each recent version (at least 4.2 to 4.5). Each station has its own file with header information, including a code to indicate where it came from. I’ve written a post here which sets all this out, with links to a zip file of station inventories for versions 4.2, 4.3, 4.4 and 4.5.

      • Thanks for the education, Nick. I always like to be enlightened.

        Ok. I was too harsh :-) It stems from the fact that when I originally read the notes it sounded like new data had been added from records, but when I looked into it more it was a pure reconstruction and then went through a smoothing. I was particularly alarmed that records from Western China would be reconstructed using observational data (no idea what that is really, is that a guy looking out the window?) and data from neighbouring countries. Pretty big place to rely on a neighbour for.

        I’m not an overly big fan of looking for temperature record gaps and the adjustments that then take place; I understand there is a requirement to do it, but it isn’t real data. I’m even less of a fan of trying to fill in wider geographical gaps based on neighbouring data. Maybe it is intelligent guesswork with some logic to back it, but it is still a guess.

      • “It stems from the fact that when I originally read the notes it sounded like new data had been added from records, but when I looked into it more it was a pure reconstruction and then went through a smoothing”
        No, that simply isn’t true. They are describing the process in filling gaps in real station records. The paper is here. Most of the gaps relate to WW2 and revolution. Harbin, for example, had a record from 1909, but was missing 65 months, in 7 separate fragments in the 1940’s.

        The second paper just describes standard homogenisation of existing station records in some detail.

  3. Very interesting stuff! Yes of course CO2 is still a greenhouse gas so it does contribute to warming, and there is more of it now (although at 400 ppm that’s still 4/100 of 1% of the atmosphere, a trace gas). Regardless, it will have contributed somewhat.

    The fact remains however, that Earth has been warming since the end of the Little Ice Age, so any given year has a reasonable chance of being the hottest on record no matter what CO2 is up to. Let’s hope that warming continues.

    • Regardless, it will have contributed somewhat.

      Very few people will deny that. The big question is whether feedbacks are positive or negative and by how much. If feedbacks were positive, we would not be here to talk about it.

      • Werner,
        Yes, CO2 has some effect, but an obsession about the sign of the feedback means a limited understanding of how feedback works. The sign of the feedback can be made to be whatever you want it to be depending on the reference chosen (the open loop gain).

        Climate science assumes an open loop gain of 1 and claims that the no-feedback effect is that 239 W/m^2 of incident energy will produce a surface temperature of 255K. Based on the assumed open loop gain, the net feedback must be positive, since the surface is warmer than 255K.

        The actual average input from the Sun is 341.5 W/m^2, and since most of what is reflected (by clouds and ice) is considered the result of albedo feedback, the surface temperature, assuming an open loop gain of 1, would be 279K and not 255K. The net increase in temperature from ‘feedback’ is then only 9C and not the 33K often claimed, since that 33K of warming comes with an unavoidable 24C of cooling! Even if we assume an intrinsic albedo the same as the Moon’s, there’s still about 18C of unavoidable and unaccounted-for cooling.

        The IPCC specifically chose its metric of forcing so that it could hide the cooling (apparent negative feedback) from albedo in the ‘reference’ established by Hansen’s and Schlesinger’s flawed application of Bode’s analysis of linear amplifiers. Of course, this isn’t even the biggest error: the referenced feedback model assumes a linear amplifier with an unlimited power supply to provide all of its output power, and neither assumption is applicable to the climate system.
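        The two reference temperatures in dispute above are straightforward Stefan-Boltzmann calculations, and anyone can check the arithmetic; a quick sketch using the 239 and 341.5 W/m^2 figures quoted in the comment:

```python
# Stefan-Boltzmann: an ideal blackbody emitting F W/m^2 sits at T = (F / sigma)^0.25
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def blackbody_temp(flux_wm2):
    """Equilibrium temperature (K) of a blackbody emitting flux_wm2 W/m^2."""
    return (flux_wm2 / SIGMA) ** 0.25

print(round(blackbody_temp(239.0)))   # 255 K -- the usual "no feedback" reference
print(round(blackbody_temp(341.5)))   # 279 K -- full average solar input, no albedo
```

        Whether 255 K or 279 K is the right reference is exactly what the comment disputes; the arithmetic itself is not in question.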

      • Yes, CO2 has some effect, but an obsession about the sign of the feedback means a limited understanding of how feedback works.

        Thank you. Many recent articles have gone into this in great detail. I have no intention of doing so.

  4. I’m glad we are still reporting to the thousandths of a degree.

    I’m not really sure what we did when we only had mercury thermometers and an old guy trudging out to read it every few hours. Now we can measure the entire global surface area with our magical instruments and use gigaflop processors to spit it out in beautiful graphics to incredible precision. Maybe our next gen satellites can move the decimal point one more point left and go to ten thousandths.

    What a wonderful world we live in.

    • Maybe our next gen satellites can move the decimal point one more point left and go to ten thousandths.

      Even Dr. Spencer says that the yearly average is only accurate to the nearest 0.1. Suppose all measurements were taken to the nearest degree and I reported everything to the nearest degree. Then my whole table would be just ones and zeroes and it would be meaningless.
      I am well aware of the limitations of accuracy which I made clear here:

      However since the margin of error for a 12 month period for UAH6.0beta5 is 0.1, the extra 0.012 is within the margin of error so it would be more accurate to say that the latest average is statistically tied with 1998.
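      For what it’s worth, the arithmetic behind that statement is easy to verify (all numbers as quoted in the post):

```python
avg_1998 = 0.484    # UAH6.0beta5 average, Jan-Dec 1998
avg_latest = 0.496  # UAH6.0beta5 average, Sep 2015 - Aug 2016
margin = 0.1        # stated margin of error for a 12-month average

diff = avg_latest - avg_1998  # the "extra 0.012"
# Well inside +/-0.1, hence "statistically tied":
print(round(diff, 3), abs(diff) < margin)  # 0.012 True
```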

      • sorry .. I suppose I should have used the /s tag but really, for all of us that have come up with engineering/science training, this is getting ridiculous.

        Maybe we can get the UN, Congress and all other legislative bodies to require all future values be reported to the half degree. /s

        [When on-line, tis always best to assure the other readers can “see” your voice and body language. On-line, words are forever (until deleted by Hillary’s email processor) but inflections are as fleeting as her memories of classified training. .mod]

      • No, twice: AM and PM, preferably first thing, like 7 AM, and early evening, 7 PM. (Hey, it is tough to be a volunteer, and at 64 I am not “some old guy trudging”; I hop, skip and jump. :) )

    • While I’m sure rbabcock was being sarcastic, he is making a good point. I believe the satellites have the accuracy to deliver measurements with a precision of 0.1 degree, but not so for the ground networks. I think they would be lucky to get it down to 1 degree, given that most of the instruments used don’t have even that degree of accuracy. And no, averaging together a bunch of different types of instruments with different error models does not help.
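      The error-model point can be illustrated with a toy simulation (the numbers are purely illustrative, not the real station error structure): averaging many readings beats down random read error, but a calibration bias shared by the instruments passes straight through to the mean:

```python
import random

random.seed(42)

TRUE_TEMP = 15.0   # hypothetical true temperature (deg C)
BIAS = 0.5         # hypothetical shared calibration bias (deg C)
NOISE_SD = 1.0     # hypothetical random read error (deg C)
N = 10_000

readings = [TRUE_TEMP + BIAS + random.gauss(0, NOISE_SD) for _ in range(N)]
avg_error = sum(readings) / N - TRUE_TEMP

# The random error has averaged toward zero; the bias has not:
print(round(avg_error, 2))  # close to 0.5 (the bias), not 0
```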

  5. Notanist

    “The fact remains however, that Earth has been warming since the end of the Little Ice Age…”
    _______________

    This claim is frequently made, yet according to HadCRUT4, global surface temperatures remained flat (actually cooled very slightly) for the 80-odd years between 1850 and 1930: http://www.woodfortrees.org/graph/hadcrut4gl/to:1930/plot/hadcrut4gl/to:1930/trend

    I guess that was an 80-year hiatus in the ‘recovery from the Little Ice Age?’

    • How many thermometers were there at this time in Africa and around the South and North poles?

      (just saying that this trend between 1850 and 1930 is mostly fantasy EOM)

      • Well, yes, largely agree, but even then a tad irrelevant, since the end of the Little Ice Age still causes some debate. Some say it was 1850; some actually put it into the early 1900s. So I’m not sure that posting a temperature set which only goes back to when some claim it ends shows anything. Add in the limited data set and just how many times it has been “corrected”, and I’m not sure he raises a point at all.

      • janus100

        “(just saying that this trend between 1850 and 1930 is mostly fantasy EOM)”
        ______________

        In that case how do we know that temperatures have been warming since the Little Ice Age?

      • Somewhat stands to reason. If you believe the mini ice age has ended and that temperatures have increased since then, you logically conclude that temperatures have risen since the little ice age. Of course, considering that is 100-160 years ago, depending on when you say it ended, that is an entirely meaningless statement in itself, as you are merely pointing out something happened, without adding any kind of context….

    • DWR54 (quoting Notanist)
      “The fact remains however, that Earth has been warming since the end of the Little Ice Age…”

      Technically, that is only “almost” right.

      “The fact remains however, that Earth has been warming since the LOWEST POINT of the Little Ice Age (in 1650).”

      You see, the positive and negative feedbacks of natural climate change are NEVER at zero.
      Nor do they EVER stop changing the climate.
      Nor is climate EVER static, nor is the earth EVER “at equilibrium”, with the continuous loss of energy to space balanced by equal amounts of short wave radiation “in” and long wave radiation “out”. Other than in an Einsteinian “paper exercise” of a theoretical “flat earth at average orbit with an average albedo equally distributed from pole to pole”, that is.

      So, the positive feedbacks, AND the negative feedbacks are always present, but the average global temperature cycles between “too hot until losses exceed the stored energy gains” at which time it begins cooling until it is “too cool and stored energy gains exceed losses”.

      Like a swing set: gravity does not “stop” acting when the potential energy is lowest at the bottom, nor does it suddenly begin acting at the top of the swing when potential energy is highest. The point of lowest temperature of the Little Ice Age in 1650 marks the BEGINNING of today’s Modern Warming Period, and it will likely peak either at today’s 2000-2020 Modern Warming maximum or (more likely) at the next 60-70 year short-term cycle’s peak in 2070. Then down again as we enter 450 years of cooling, perhaps towards the ultimate evils of the next Ice Age. Which is due to start, if not already overdue.

      • Agree with everything you said. I believe the mid-point of the LIA may have been a little earlier. The Maunder Minimum probably created a secondary low point which is why we tend to think of the 1600s as the bottom of the cooling. If we use the 1500s as the more general low point then it is likely we are at the top of the cycle right now.

        This idea is based on the oceans (MOC) being the driver of what is sometimes called the Millennial Cycle and solar variations affect this longer term cycle on a shorter term basis.

  6. Well, what dates do you class as the Little Ice Age? Most would put it at around 1300 till 1900 or so. There are some differences of opinion on the subject, but picking a start date that falls at one of the main colder periods (1850 or so being the last one) wouldn’t seem to be overly relevant.

    • The LIA span is delimited differently by different scientists. Mary Hill uses 1900 as the end in the Sierra Nevada in California, where it is referred to as the Matthes Advance. Cooling seems to start about AD 1400, peaks in the 17th century and warms into the 18th century, cools mildly in the early 19th century before heading toward the present. One problem you run into with “definers” is that there are those who call the turn from cooling to warming the end of a glacial, versus those who define the end as when temperature warms to some arbitrary “normal” temperature. The depths (or peaks) of a glacial are easy to define, since the glacier is way down the valley and the entire village has to move. But the “end” is a much fuzzier concept.

    • And you know full well that some say into the early 1900s as well. Since you are posting a temperature record that started in 1850 as a basis of showing a cooling trend, one would think you sit in the camp that it lasted longer. Otherwise, what you are saying is “mini ice age ended in 1850, then we had more cooling”. Doesn’t make much sense…..

    • Many would also argue that the LIA started later, too.

      It depends on whether you attach the Wolf Minimum, c. AD 1280-1350, to the Medieval WP or the LIA. Another warm cycle, arguably the last blast of the MWP, followed the Wolf, so the LIA is often reckoned not to start until c. 1400, which was around the peak before descent into the Spörer Minimum.

      • ‘what “drives” chaos?’

        The overwhelming requirement to seek equilibrium under the influence of opposing forces, in this case, hot and cold, whose boundaries are where much of the chaos is observed.

        The chaos we observe in the climate system is in the path from one state to another and is the result of the local reaction to change acting faster than the global state can change as constrained by the available energy.

      • “Are we really dealing with chaos”

        The acceptance of any kind of trend analysis is predicated on the idea that chaos averages out, but it actually averages out to a finite RMS value and not zero.
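        The mean-versus-RMS distinction is easy to demonstrate with any zero-mean fluctuation (a toy random signal here, standing in for the chaotic residual):

```python
import math
import random

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mean = sum(x) / len(x)
rms = math.sqrt(sum(v * v for v in x) / len(x))

# The mean averages out toward zero, but the RMS stays near 1:
print(abs(mean) < 0.02, abs(rms - 1.0) < 0.02)  # True True
```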

      • That’s a good question. But the data presented here sure points to CO2 having very minimal influence on the air temperature, and the driver of increasing temperatures is the ocean. Why are they getting warmer? Can CO2 possibly drive that?

      • Because I have a hard time believing re-radiated IR that can only penetrate a few mm of the ocean surface can do what we’re seeing.

      • Chaos is a fundamental property of the universe. It is a type of dynamic that emerges spontaneously under certain well understood circumstances – for instance a dissipative system which is open and is characterised by both excitability (positive feedbacks) and friction (negative feedbacks). It even emerges mathematically in certain equations as shown by Mandelbrot, Feigenbaum, Lorenz and many others.

        The discipline of Physics has evolved as a mechanism to focus selectively on the tiny minority of phenomena in the universe which can be meaningfully analysed quantitatively in a linear paradigm, without involvement of chaos, and to build a robust architecture of denial of the majority of the universe’s phenomena and processes, which are nonlinear / chaotic. Astronomy is one example, where a single parameter, gravity, often explains object behaviour and thus linear analysis is adequate. But even in astronomy chaos sometimes appears, e.g. in the collapse of a supernova.

        So when people approach climate and say things like “and then there’s physics” what this translates to is “and then there’s denial”. Physics is denial. Of chaos. Climate science in its current form, being dominated by physics, is essentially an institution of denial of chaos.

        Nowhere is the absurdity of denial of chaos more evident than in the attempt to make linear fits to climate data. It was elegantly shown by Lorenz in 1963 (Deterministic Nonperiodic Flow) that climate will naturally meander chaotically between short term apparent means, which are not really means but an artefact, and will be always changing. Naturally. Sinlessly. Climate science has ignored Lorenz’s insight and in doing so made itself meaningless.

  7. John

    Would you prefer that producers didn’t correct their data sets for known errors and biases? For instance, do you prefer UAH 5.6 over UAH 6 (beta)?

  8. The Little Ice Age started with a bang in 1258 AD, a year after the Rinjani eruption on Lombok (Indonesia), the biggest volcanic eruption since writing was invented. And it ended with another bang in 1837, two years after the Cosigüina eruption in Nicaragua, when the effects of its aerosols started to wane: 580 years’ duration. The bottom is disputed between the 1430s (the minimum of the 15th century) and the 1630s (the minimum of the 17th century). Recovery started around 1700, after the prolonged second bottom, but got derailed by the intense volcanic activity of the early 1800s. Glacier retreat and sea level rise started to pick up speed after 1840.

    Global warming is about 350 years old, but it has been most active for the past 180 years.

    Camenisch, C., et al.: The early Spörer Minimum – a period of extraordinary climate and socio-economic changes in Western and Central Europe, Clim. Past Discuss., doi:10.5194/cp-2016-7, in review, 2016.
    http://www.clim-past-discuss.net/cp-2016-7/cp-2016-7.pdf

    This study is an outcome of the workshop “The Coldest Decade of the Millennium? The Spörer Minimum, the Climate during the 1430s, and its Economic, Social and Cultural Impact“.

    • Javier on September 19, 2016 at 10:42 am

      Wow! Javier, this is I guess the very first time I read here on WUWT a hint on Rinjani / Samalas on Lombok island having been LIA’s true detonator fuse.

      Source of the great A.D. 1257 mystery eruption unveiled, Samalas volcano, Rinjani Volcanic Complex, Indonesia:
      http://www.pnas.org/content/110/42/16742.full

      Until now, only solar irradiance minima (Maunder, Spörer etc.) were mentioned to explain it, which experienced commenters have clearly debunked as inappropriate, because even their simultaneous activity wouldn’t cool the Globe by more than 0.3 °C / century.

      As so often in such discussions, the best-known counterargument is that eruptions lack a sustained effect on the climate over periods longer than, e.g., a decade.

      But firstly, Rinjani was not at all alone, as is shown by a list of 7 eruptions with VEI 6 following it within 400 years:

      Ecuador Quilotoa Andes 1280 AD
      Vanuatu Kuwae 1452-53 AD
      Iceland Bárðarbunga 1477 AD
      Papua New Guinea 1580 AD
      Peru Huaynaputina Andes 1600 AD
      Greece Kolumbo, Santorini 1650 AD
      Papua New Guinea Long Island 1660 AD

      Moreover, there is an interesting article linking huge volcano activity to drastic ocean cooling.

      Abrupt onset of the Little Ice Age triggered by volcanism
      and sustained by sea-ice/ocean feedbacks:
      http://onlinelibrary.wiley.com/doi/10.1029/2011GL050168/full

      • Well, volcanic eruptions definitely contributed to the LIA. Temperatures had already been going down since about 1100 AD, well before solar activity declined around 1250 for the Wolf minimum.

        “the most known counterargument is that eruptions lack sustainable effect on the climate over periods longer than e.g. a decade.”

        A big eruption like Pinatubo has an effect over 2-3 years.
        A very big eruption like the probable Kuwae eruption of 1458 has an effect over 1-2 decades.
        A series of very big eruptions like the ones between 1257-80 and 1809-35 have an effect over 4-5 decades.

        In every case, after the effect of the eruptions (aerosols) has passed, there is a rebound in temperatures, and whatever the temperature trend was before the eruptions (whether warming or cooling), it continues afterwards. The evidence available is very clear in this respect.

        So volcanic eruptions significantly contributed to the LIA and marked its start and end, but volcanic eruptions did not cause the LIA.

      • … but volcanic eruptions did not cause the LIA

        That stands in clear contradiction to the work of Miller, Geirsdóttir et al. cited above.
        Here is an excerpt:

        A transient climate model simulation shows that explosive volcanism produces abrupt summer cooling at these times, and that cold summers can be maintained by sea-ice/ocean feedbacks long after volcanic aerosols are removed. Our results suggest that the onset of the LIA can be linked to an unusual 50-year-long episode with four large sulfur-rich explosive eruptions, each with global sulfate loading >60 Tg. The persistence of cold summers is best explained by consequent sea-ice/ocean feedbacks during a hemispheric summer insolation minimum; large changes in solar irradiance are not required.

        Could you present us some publication underpinning what seems so evident to you?

  9. So the process starts with somewhat unreliable data, which is adjusted and then used to calculate what the temperature might have been if it had been measured in places where there are no thermometers; the daily rise and fall of temperatures is used to assign a temperature for the day; everything is converted to anomalies from a baseline for each location, real or imagined; and then out pops a figure to three decimal places. Over the years, more is then discovered about why previous calculations, and even the data itself, were in error. And with each calculation no precision is lost but in fact gained.

    Ok, perhaps someone can tell me how the climate of my home town of Cairns has changed over the last 30 years. What was the climate 30 years ago and what is it now?

    • Ok, perhaps someone can tell me how the climate of my home town of Cairns has changed over the last 30 years. What was the climate 30 years ago and what is it now?

      Sure someone can tell! Here is your Cairns:

      42574778002 31.2700 -85.7200 91.0 CAIRNS FIELD/FORT RUCKER 62R -9HIxxno-9A-9WARM CONIFER B

      All you need is to extract, out of the monthly GHCN V3 record, all data belonging to the station above, and to plot it somewhere for the last 30 years. I’m a bit too tired to do that right now, it’s 1 am here :-)

      Interesting! Your home town was in earlier times a rather rural corner, but nightlight evaluation has moved it up to the suburban level.

    • Never work when tired, I was told. Yesterday I of course selected the wrong Cairns (the one in the USA).

      While your home town didn’t warm over the whole time (0.5 °C / century), it warms a bit more since 1979, at the same rate of 1.5 °C / century as that measured by UAH in the troposphere for the whole Downunder (whose surface warms a bigger bit more since then, at 2.7 °C / century).

      • No B. You have failed to answer the question. I asked what the climate in Cairns was 30 years ago and what it is now.

        Hint: start with actual recorded temperatures, cloud cover, wind, humidity and rainfall. Those are the elements of climate.

        You then commit the basic error of faith in imagining that temperatures have risen uniformly across the entire continent of Australia. Your blind faith in the results of such calculations is quite touching. If you actually believe what you write, name a single location in Australia where temperatures have risen at 2.7 C per century.

        Go on. Have a go. Describe the climate of Cairns and how it has changed. Name a single location where temperatures have risen at 2.7 C per century. It shouldn’t be hard for someone with your unquestioning faith. That is if you actually understand what the word climate means.

      • “If you actually believe what you write, name a single location in Australia where temperatures have risen at 2.7 C per century.”
        Well, you can make it easier using this interactive map. From there is the shaded plot of GHCN stations, trend (of unadjusted GHCN station readings) since 1984:

        Blue doesn’t mean cooling; it means trend less than average. You can click to query.
        Canberra 3.74 °C/Cen
        Cobar 3.85 °C/Cen
        Cairns 0.4 °C/Cen
        and for a cold spot
        Halls Creek -2.69 °C/Cen
        Darwin -1.27 °C/Cen
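        Trends like those are presumably ordinary least-squares slopes fitted to the monthly readings and rescaled to a century; a generic sketch of that calculation, checked here on synthetic data rather than the GHCN series themselves:

```python
def trend_per_century(monthly):
    """Least-squares slope of a monthly series, in deg C per century."""
    n = len(monthly)
    t = [i / 12.0 for i in range(n)]  # time in years
    tbar = sum(t) / n
    ybar = sum(monthly) / n
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, monthly))
             / sum((ti - tbar) ** 2 for ti in t))  # deg C per year
    return slope * 100.0

# Sanity check: 30 years of data warming at exactly 0.02 deg C/yr
series = [0.02 * (i / 12.0) for i in range(12 * 30)]
print(round(trend_per_century(series), 6))  # 2.0
```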

      • Nick, graphs with pretty colours smeared all over the nation merely indicate the usual calculations on calculations. Pick a location and present the actual data. Go ahead and tell me what the climate of Cairns was 30 years ago and what it is now.

        You remember climate, temperature ranges at various times of the year, rainfall at various times of the year, humidity and cloud cover at various times of the year. Once you start you will soon find your calculations on calculations are utterly meaningless.

      • Forrest Gardener on September 20, 2016 at 10:25 am

        1. You have failed to answer the question. I asked what the climate in Cairns was 30 years ago and what it is now.

        Why are you so unnecessarily arrogant, FG? I didn’t fail at all.

        In the comment https://wattsupwiththat.com/2016/09/19/uah-and-enso-now-includes-july-and-august-data/comment-page-1/#comment-2302999

        I wrote

        While your home town didn’t warm over the whole time (0.5 °C / century), it warms a bit more since 1979, at the same rate of 1.5 °C / century as that measured by UAH in the troposphere for the whole Downunder …

        Extract out of the entire GHCN record the data belonging to station 50194287000 (Cairns Airport), and plot a chart of the anomalies i.e. the differences between raw temperatures recorded by the station and their average for e.g. jan 1981 till dec 2010. You will obtain the same result.
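        The anomaly recipe described there, sketched in code (the data layout is hypothetical; a real GHCN extract would need parsing into this shape first):

```python
def monthly_baseline(temps, start=1981, end=2010):
    """Per-calendar-month averages over the baseline years.

    temps: dict mapping year -> list of 12 monthly means (deg C).
    """
    base = []
    for m in range(12):
        vals = [temps[y][m] for y in range(start, end + 1) if y in temps]
        base.append(sum(vals) / len(vals))
    return base

def anomalies(temps, base):
    """Raw temperature minus the per-month baseline, keyed by (year, month)."""
    return {(y, m + 1): temps[y][m] - base[m]
            for y in sorted(temps) for m in range(12)}
```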

        2. If you actually believe what you write, name a single location in Australia where temperatures have risen at 2.7 C per century.

        Why don’t you do that search instead? There are fewer than 600 stations in Australia (country id 501, as you well know). It would be peanuts for you to do such a simple job using free SQL database software.

        You then will discover that the trend for all the temperatures you extracted for Downunder for the period between jan 1979 and aug 2016 simply is… 2.7 °C / century.

        Which clearly means that there will be quite a lot of stations whose century trend is below that average value, and quite a lot of other stations whose century trend lies above it.

      • “Once you start you will soon find your calculations on calculations”
        That’s what you asked for – how much have temperatures risen in 30 years. How would you measure it?

        And why have sceptics only questions – no answers?

      • Nick, what I asked for was for B to describe the climate of Cairns and how it has changed. Name a single location where temperatures have risen at 2.7 C per century.

        Your calculations on calculations on dubious data show nothing. To make it easier, tell me what the climate of Cairns is now. Not what your calculations on calculations on dubious data say it should be, but what the climate is (including rainfall, hours of daylight, humidity, daily temperature variations and so on).

        As for your rhetoric about me asking questions, why shouldn’t I? You are part of the tribe wanting the world to accept your calculations. Real scientists understand the concept of the null hypothesis and would not pose such fatuous questions of those who dare to ask for evidence. Why don’t you understand the null hypothesis?

      • Forrest Gardener on September 20, 2016 at 8:36 pm

        Name a single location where temperatures have risen at 2.7 C per century

        Nick’s answer:

        Canberra 3.74 °C/Cen
        Cobar 3.85 °C/Cen

        And there are many many more, of course all computed out of… dubious data.

        It seems to me that you simply do not want any answers which do not perfectly fit into your way of thinking.

        What is of interest for so many people is not how the weather behaves in Cairns or elsewhere, be it today or 30 years ago. Simply because only averaging over greater distance provides us with the information needed.

        So it’s nice to know the temperature at Cairns Airport Station for today, yesterday or the last 12 months:

        2015 9 23.7
        2015 10 24.5
        2015 11 26.8
        2015 12 27.3
        2016 1 28.0
        2016 2 28.8
        2016 3 27.9
        2016 4 26.7
        2016 5 26.0
        2016 6 24.0
        2016 7 23.3
        2016 8 22.7

        and for the corresponding 12 months exactly 30 years ago:

        1985 9 22.7
        1985 10 25.3
        1985 11 27.1
        1985 12 28.5
        1986 1 28.1
        1986 2 27.9
        1986 3 26.6
        1986 4 25.6
        1986 5 24.2
        1986 6 22.3
        1986 7 23.1
        1986 8 22.6
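        Taking the two 12-month tables above at face value, the crude two-window comparison works out as follows (a simple difference of averages, nothing more):

```python
# Cairns Airport monthly means quoted above (deg C)
recent = [23.7, 24.5, 26.8, 27.3, 28.0, 28.8,
          27.9, 26.7, 26.0, 24.0, 23.3, 22.7]   # Sep 2015 - Aug 2016
earlier = [22.7, 25.3, 27.1, 28.5, 28.1, 27.9,
           26.6, 25.6, 24.2, 22.3, 23.1, 22.6]  # Sep 1985 - Aug 1986

diff = sum(recent) / 12 - sum(earlier) / 12     # ~0.475 deg C in 30 years
print(round(diff / 30 * 100, 1))                # ~1.6 deg C / century
```

        Which, for what a two-point comparison is worth, lands near the 1.5 °C / century figure mentioned earlier in the thread.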

        But what should it be for?

        Here is a graph which probably won’t interest you at all… but others.

        Even meteorologists perform such averaging and interpolations to have best data to supply for people asking for a weather forecast around Málaga, Berlin, Marseille, Cairo or… Cairns.

        And even meteorologists wouldn’t accept this Cairns data: it is much too raw for their forecasting needs, its standard deviations are too high, and they will request for… homogenization, oh My!

        What I’m actually doing, motivated by a response from WUWT commenter Geoff Sherrington, is to split, for a given context (Australia, CONUS or whatever else), the station set into two subsets:
        – pure rural
        – rest
        and to compare the 60 month running means of the subsets. It’s simply amazing.

        So yes, Mr Gardener: speaking, as you do, about my faith in meaningless calculations is, I’m really sorry to repeat it, indeed pure arrogance. And telling us all the time that the data we use is dubious: that’s more than pretentious.

        Many thanks for your comprehension, Sir, and above all for this delicate hint on null hypothesis. I really appreciated :-)

      • B, here is what I asked. I asked what the climate in Cairns was 30 years ago and what it is now.
        Hint: start with actual recorded temperatures, cloud cover, wind, humidity and rainfall. Those are the elements of climate.

        Try again when you have something to say and don’t have to resort to name calling.

  10. On the graph, it can be seen that the mean 2016 values are pretty even with the 1998 values to this point.

    Well, by averaging the wrong data, you can claim anything and its opposite.

    Here I think it is better to stay on the data produced by Roy Spencer as it is, directly downloaded from UAH’s site:

    You see here UAH’s anomalies for 1997/98 and 2015/16 superposed, normalized to their corresponding January (in order to offset the anomaly level difference between the two periods).

    And you see that they were by far more powerful in 1997/98 than they are now.
    Nothing to add.

  11. What about satellite observations showing that the planet’s coastal land is actually increasing? This covers the last 30 years. Can there be any stronger case for the lack of any significant global warming over the last 30 years? Will we ever wake up?

    BBC link to this new study and the relevant quote.

    http://www.bbc.com/news/science-environment-37187100

    Coastal areas were also analysed and, to the scientists’ surprise, coastlines had gained more land – 33,700 sq km (13,000 sq miles) – than had been lost to water (20,100 sq km or 7,800 sq miles).

    “We expected that the coast would start to retreat due to sea level rise, but the most surprising thing is that the coasts are growing all over the world,” said Dr Baart.

    “We were able to create more land than sea level rise was taking.”

    • What about satellite observations showing that the planet’s coastal land is actually increasing?

      Was this intended for a different post? But to your point, coastal land increasing could be caused by huge amounts of extra snow falling in Antarctica and warming or cooling may have nothing to do with it.

      • Werner, the greatest CAGW scare of the past 30+ years is the rise in SLs. I understand your point, but you must have seen Gore’s SLR scare that helped him win the Academy Award and a Nobel prize? As for snowfall over the last period of cooling and warming since 1941: the US P-38 Lightning was found on Greenland in the 1990s, buried under 80+ metres of ice (268 feet), over 50 years after it was abandoned.

        Here’s a link. http://www.cbc.ca/news/canada/newfoundland-labrador/glacier-girl-buried-in-ice-for-decades-retries-transatlantic-flight-1.683436

        Since 1941 there was both cooling and warming or so we’re told.

        Also, the Royal Society has shown all the models for Antarctica and Greenland for the next 300 years. Greenland is positive but Antarctica is negative until 2300. So are we facing dangerous SLR because of our increases in CO2 or not?

  12. “Note that the satellite data sets have 1998 as the warmest year and the others have 2015 as the warmest year.”

    Please note that, Mr. Trump. We need an investigation into why there is this discrepancy in the official temperature records.

    • Good luck with that,

      Australian skeptics have been pushing for an audit of BoM but the politicians tell us to go away, even though we have clear proof of a warm bias in adjustments.

      • “Good luck with that,

        Australian skeptics have been pushing for an audit of BoM but the politicians tell us to go away, even though we have clear proof of a warm bias in adjustments.”

        If a U.S. president got involved, it would be a lot different. The president would not have to consult Congress to do an audit of NASA and NOAA because those agencies are under the Executive Branch and his jurisdiction. All Trump has to do is say the word.

      • Ironicman
        “Australian skeptics have been pushing for an audit of BoM but the politicians tell us to go away, even though we have clear proof of a warm bias in adjustments.”
        A bit like the New Zealand Skeptics (Climate Change Co…) when they took NIWA to court, lost, and then ran for the hills when they were ordered to pay court costs. As a kiwi taxpayer I’m still waiting for them to foot the bill for wasting my money.

    • “Australian skeptics have been pushing for an audit of BoM but the politicians tell us to go away, even though we have clear proof of a warm bias in adjustments.”
      And they don’t seem at all interested in the fact that a very eminent statistical panel was appointed and reported.

    • ““Note that the satellite data sets have 1998 as the warmest year and the others have 2015 as the warmest year.”
      Please note that, Mr. Trump. We need an investigation into why there is this discrepancy in the official temperature records.”

      Could that be because the peak in atmospheric temp is in the year the EN finishes? … and that will be 2016.
      So comparisons need to be made with 1997.

      Also: do you really think that the Earth’s surface atmosphere warms by ~0.7C during an EN?

      Of course not.
      The MSU sensors are oversensitive to atmospheric WV.
      They are not thermometers.

      Which is just one reason why satellite temp data are not the “Gold standard” (Curry).

      • Cock-up – trying again with text for clarity …

        ““Note that the satellite data sets have 1998 as the warmest year and the others have 2015 as the warmest year.”

        Please note that, Mr. Trump. We need an investigation into why there is this discrepancy in the official temperature records.”

        Could that be because the peak in atmospheric temp is in the year the EN finishes…. and that will be 2016.

        Also: Do you really think that the Earth’s surface atmosphere warms by ~0.7C during an EN?

        Of course not.
        The MSU sensors are over-sensitive to atmospheric WV.
        They are not thermometers.

        Which is just one reason why satellite temp data are not the “Gold standard” (Curry).

      • That is a very interesting graph! Despite a huge head start in 2016 as compared to 1998, it may not reach the 1998 value. On top of that, the El Nino was stronger in 2016 than 1998 as the table shows.

      • Werner Brozek on September 20, 2016 at 2:26 pm

        On top of that, the El Nino was stronger in 2016 than 1998 as the table shows.

        You always repeat the same mistake: you compare anomalies with respect to a global average like that for 1981-2010, instead of comparing them relative to their common starting point:

        And you can compare that with Wolter’s MEI values for the 3 biggest recent ENSO events (excluding here august 2016):

        You then see that 1997/98 clearly was the bigger event compared to 2015/16.

      • You then see that 1997/98 clearly was the bigger event compared to 2015/16.

        There are different ways of looking at things. I am not going to get into debating which way is best; that is beyond the scope of my expertise. Suffice it to say that many people in the past have endorsed using Nino 3.4.

      • Werner Brozek on September 20, 2016 at 5:09 pm

        Suffice to say that many people in the past have endorsed using Nino 3.4.

        As you can see, applying the same to another ENSO index gives the same result. And a chart plotting that for BOM’s SOI or NOAA’s ONI very probably will have nearly identical results.

        Please feel free to extract any data out of UAH to compare the stuff according to your needs!
        But I’m really sorry, Werner: this has nothing to do with ‘UAH and ENSO‘.

        If I ever did the same as you do here, everybody would suspect me of:
        – having an a priori idea about what I want a dataset to show;
        – extracting some part of it such that it a posteriori shows what I expected it to show.

      • But I’m really sorry, Werner: this has nothing to do with ‘UAH and ENSO‘.

        From:
        https://wattsupwiththat.com/2016/07/18/say-hello-to-la-nina-conditions/

        “Meteorological agencies like NOAA use the sea surface temperature anomalies of the NINO3.4 region (5S-5N, 170W-120W) of the equatorial Pacific to determine if the tropical Pacific is experiencing El Niño, La Niña or ENSO neutral (not El Niño, not La Niña) conditions.”

        So if you feel NOAA is wrong, take it up with them, not with me.

  13. Werner, it would be nice if ever just once you made an attempt to determine if your numbers were statistically significant, using standard statistical tests.

    You’ll find that many of them are not.
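As a sketch of the kind of standard test being asked for here — an ordinary least squares significance test on a trend — using synthetic monthly anomalies (not any of the real datasets; a proper treatment would also correct for autocorrelation, which widens the error bars further):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(0, 18, 1 / 12)                 # 18 years of monthly time steps
y = 0.002 * x + rng.normal(0, 0.15, x.size)  # tiny trend buried in noise (°C)

# Naive OLS slope and its standard error (ignores autocorrelation).
n = x.size
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s2 = resid @ resid / (n - 2)                        # residual variance
slope_se = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))
t_stat = slope / slope_se

# |t| below roughly 2 means the trend cannot be distinguished from zero
# at the usual ~95% level.
print(f"slope {slope:+.4f} °C/yr, t = {t_stat:+.2f}")
```

For a trend this small relative to the monthly noise, the t-statistic will often fall below 2, i.e. the slope is not statistically significant at the usual level — which is the point being made above.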

    • You’ll find that many of them are not.

      If you or Nick Stokes wish to make a post to go beyond what I have covered in Section 1 using Nick’s numbers, I will not stop you.
      Did I miss your posting before or were you in moderation for a long time?

  14. Werner, shouldn’t you be using (or at least including) the updated RSS 4.0 data rather than version 3.3 (which RSS themselves provide a warning about)? It strikes me that would be a fairer comparison if you are now using UAH6.0(beta). Although there is no TLT yet at V4.0, the updates to UAH6.0 (from 5.6) actually give it a very similar weighting profile to RSS TTT, so if you use one (UAH6.0), it seems only fair to use its closest counterpart from the ‘other’ satellite dataset, rather than an outdated version of RSS TLT….

    • Although there is no TLT yet at V4.0

      Over the last several years, Lord Monckton and I have never reported on the TTT, for either UAH or RSS. It was always TLT, which is much closer to where we live and therefore would be expected to relate better to the surface measures. As well, WFT does not and never did show TTT.
      However when the new RSS TLT comes out, I will use that.
      Comparing RSS TTT to UAH TLT is comparing apples and oranges.

    • Although there is no TLT yet at V4.0, the updates to UAH6.0 (from 5.6) actually give it a very similar weighting profile to RSS TTT

      Are you sure? Maybe you should first compare their OLS trends for 1979-2016:
      – UAH6.0beta5 TLT: 0.122 °C / decade
      – RSS4.0 TTT: 0.177 °C / decade
      – GISS land+ocean: 0.163 °C / decade.

      RSS is, as you can see, far above UAH. For explanations, see:
      http://journals.ametsoc.org/doi/full/10.1175/JCLI-D-15-0744.1
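For what it's worth, decadal trends like those above are just OLS slopes fitted to the monthly anomalies and scaled by 10. A minimal sketch with synthetic data (not the actual UAH/RSS/GISS series; the underlying slope here is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1979, 2016.75, 1 / 12)  # decimal years, Jan 1979 – Sep 2016
# Synthetic anomalies: 0.15 °C/decade underlying trend plus monthly noise.
anoms = 0.015 * (t - 1979) + rng.normal(0, 0.15, t.size)

# Ordinary least squares slope in °C/year, reported as °C/decade.
slope_per_year = np.polyfit(t, anoms, 1)[0]
print(f"trend: {slope_per_year * 10:.3f} °C / decade")
```

The same one-liner, fed the real monthly series, reproduces the kind of °C/decade figures quoted in the comment above.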

    • Werner, and yet you do report UAH 6.0! One of the major revisions to UAH from V5.6 to 6.0 was to shift the temperature weighting profile further up in the troposphere, deliberately, to reduce the influence of surface warming: Roy Spencer says exactly that in his description of it:

      “As seen in Fig. 7, the new multi-channel LT weighting function is located somewhat higher in altitude than the old LT weighting function.”

      He goes on to say:

      “The new LT weighting function is less sensitive to direct thermal emission by the land surface (17% for the new LT versus 27% for the old LT), and we calculate that a portion (0.01 C/decade) of the reduction in the global LT trend is due to less sensitivity to the enhanced warming of global average land areas.”

      In other words, the new UAH dataset deliberately reduces the influence of temperatures from the parts of the atmosphere which are, as you say, “much closer to where we live and therefore would be expected to relate to the surface measures better”.

      see http://www.drroyspencer.com/2015/04/version-6-0-of-the-uah-temperature-dataset-released-new-lt-trend-0-11-cdecade/

      As I said before, the new UAH weighting function is very close to that of RSS TTT and the latter would be a better direct comparison (still using a different method to correct diurnal drift). Check with Roy Spencer if you don’t believe me.

      • Check with Roy Spencer if you don’t believe me.

        I will assume Roy Spencer had excellent reasons to do what he did and that he has a reason why he still calls it the lower troposphere. Furthermore, the 6.0 version was much closer to the RSS lower troposphere version than 5.6 was.

      • Well, the reason is pretty obvious from the quote I provided above: he wanted to remove the influence of the thermal emission by the land surface (i.e. where we actually live). Isn’t this the whole point of having surface data sets versus tropospheric data? Climate models predict different effects for the surface, the troposphere and the stratosphere. Part of the reason for measuring them all is to test the validity of those models, so if one stratum gets ‘contaminated’ by another it makes that more difficult. That’s a scientifically valid reason for changing the UAH profile to be more like RSS TTT (which Mears et al. actually argue is a good measure for tropospheric temperatures).

        I think too many people have become too focused on the satellite TLT data sets because they distrust the empirical surface measurements. But as Roy Spencer and Carl Mears have both pointed out, measuring atmospheric temperature with microwave sensors on multiple sequential satellites with decaying orbits is far from trivial – there are a number of known problems and each revision of their respective datasets is an attempt to improve on their methods for accounting for these problems. But as Mears (at least) has also pointed out, the surface temperature datasets, while also imperfect in a number of well discussed ways, are still a more reliable indicator of what is happening “much closer to where we live”.

      • “Furthermore, the 6.0 version was much closer to the RSS lower troposphere version than 5.6 was.”

        As I pointed out in response to Bindidon (below) the UAH 6.0 version was, when first released nearly identical to RSS 3.3 TTT. Not surprisingly, perhaps, given that they both used the same elevation profile for weighting the different MSU/AMSU sensors, regardless of whether this means that UAH 6.0 TLT is no longer strictly “LT” (i.e. “Lower Troposphere”) in the sense that the UAH 5.6 TLT set is/was. But RSS have subsequently updated the TTT dataset to version 4.0 with a method that has actually been published in the peer-reviewed literature to better account for diurnal drift. The trend for that set is 0.177°C/decade.

      • Isn’t this the whole point of having surface data sets versus tropospheric data?

        My understanding was that satellites were superior because they covered many places on earth where we had no thermometers.

        As I pointed out in response to Bindidon (below) the UAH 6.0 version was, when first released nearly identical to RSS 3.3 TTT.

        In that case, I wonder how the new RSS TLT will compare to UAH 6.0 and the old RSS TLT. When it comes out, I will analyze the differences between them. However I am not in a position to say which version is best.

      • “My understanding was that satellites were superior because they covered many places on earth where we had no thermometers.”

        It’s also relatively easy to determine the surface temperature under clear-sky conditions. The sensors are narrow-band and tuned to transparent regions of the spectrum, so the relative color temperature (the relative peak in the Planck spectrum) can be readily determined. That corresponds to the emission temperature of the equivalent surface, which can be calibrated to the actual surface by a small number of surface measurements and will then consistently track temperatures across the globe. The problem is that you need a good predictor for when cloud cover is present; when data is missing or noisy, the color temperature cannot be determined.

    • @Bindidon: the two datasets use different methods to correct for diurnal drift, which is why it is useful to look at both rather than consider one as definitive. There is some useful discussion of the changes to UAH from v5.6 to 6.0 on Nick Stokes’ site, including the weighting profile.

      https://moyhu.blogspot.se/2015/12/big-uah-adjustments.html

      One thing that is interesting is that the above discussion took place *prior* to the RSS 4.0 update. At that time, apparently there was very close alignment between UAH6.0 TLT and RSS 3.3 TTT, with an identical trend. Hence the higher trend you now quote for RSS (0.177°C/decade) is a feature of the updated analysis. Perhaps you can fault the latter based on the methodology published by Mears et al?

      Incidentally, I calculate the GISS L/O trend over the same period as the satellite set (Jan 1979 to present) to be 0.170 so I’m not sure how you get your 0.163 number.

      • Thanks Dave
        – I know of Nick’s communication, I plotted the beta5 diffs between Globe and NoPol/SoPol, amazing.
        – You are right! I forgot to extend the range in my data from dec 2015 up to aug 2016 :-(

        I obtain for GISS L+O indeed 0.171 ± 0.006 °C/decade with LINEST.
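As a hedged aside, the ± figure LINEST reports is the standard error of the OLS slope; the same number can be sketched in Python, here with synthetic data standing in for the GISS series (the 0.017 °C/yr slope is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(1979, 2016.75, 1 / 12)                  # decimal years
y = 0.017 * (x - 1979) + rng.normal(0, 0.12, x.size)  # synthetic anomalies (°C)

# polyfit's covariance matrix holds the slope variance in its [0, 0] entry,
# which is what LINEST reports as the slope's standard error when squared.
(slope, _), cov = np.polyfit(x, y, 1, cov=True)
slope_se = np.sqrt(cov[0, 0])
print(f"{slope * 10:.3f} ± {slope_se * 10:.3f} °C / decade")
```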

  15. Werner Brozek on September 21, 2016 at 6:27 am

    So if you feel NOAA is wrong, take it up with them, not with me.

    As I repeatedly told you, Werner, the problem isn’t with ENSO; all is evidently fine there, whichever index you choose.

    The problem is that your conclusion in the head post:

    It would appear that when considering ENSO numbers and the length of time they were high, that 2016 would be significantly higher than 1998.

    is not at all correct, for many reasons described in the following.

    1. The primary incorrectness is to link ENSO to UAH: the tropospheric temperatures are not influenced by ENSO phenomena only; they are e.g. by volcanic aerosols as well.

    Suppose for one moment that El Niño 1997/98 had never existed: you would then have had to compare, still based on UAH, ENSO’s 2015/16 edition with that of e.g. 1982/83 instead.

    Look e.g. at a UAH/MEI comparison, for the UAH anomalies at that time:

    How could one here compare the two UAH anomalies (0.15 resp. 0.85 °C) and draw any conclusion about the power of the corresponding ENSO events? That would be inaccurate, as the ENSO signature was at that time masked by the consequences of the El Chichon eruption. The same comparison ten years later (due to Pinatubo) would have been pure nonsense.

    2. What is wrong anyway is your – excuse me: a little bit stubborn – attempt to use an arbitrary choice of UAH values to demonstrate the respective power of the 1997/98 and 2015/16 editions of El Niño.

    There are many clever cherry-picking methods! But yours unfortunately isn’t one of them. You know perfectly well that a clean, professional statistic would have averaged, for both ENSO periods, all 20 of their respective UAH anomalies, and not just the 10 highest! With clearly different results, certainly making your guest post obsolete before you started to write it.

    3. The next – maybe unintended – cherry pick is, as noted in earlier comments, to compare UAH values of 1997/98 and 2015/16 without taking into account that a gap (of 0.45 °C) exists between these two points in the time series. It is insignificant with respect to warming as such, but significant enough for your comparison.

    You compare this

    instead of this

    You see below that 1997/98 lies clearly above 2015/16.

    4. The next mistake you make here is to compare the geographically isolated ENSO phenomenon with UAH’s globe anomaly scheme, within which the ENSO signatures are inevitably diluted.

    If you want an appropriate comparison, you can’t simply rely on WFT or whichever global UAH data; you must perform a comparison using UAH’s zonal Tropics stripe, or even better: of its oceanic subset, where the signature is a tick higher:

    Here you can once more clearly see how wrong your UAH/ENSO comparison is: again, 1997/98’s Tropics anomalies were by far higher than those of 2015/16. In the Tropics, 1997/98 tops 2015/16 even without normalization!

    And your comparison would, last but not least, be most accurate if you could obtain from UAH their anomalies restricted to the Tropics’ Niño3.4 region (5° N-5° S, 120°-170° W). The dominance of 1997/98 over 2015/16 would probably be even a further tick higher.

    5. Finally, even if you were willing to stop persisting in this strange blind alley of measuring ENSO power on the basis of UAH anomalies, and relied solely on the ENSO indices themselves instead, you would still have to accept the reality depicted below:

    *
    So I am afraid that if the final goal of your post was to demonstrate that CO2 is not the main driver of the warming, as is visible here

    Thus if 2016 edges out 1998 on UAH6.0beta5, how much of the reason should be attributed to the length and strength of the El Niño and how much can be attributed to additional carbon dioxide in the atmosphere?

    you’ll owe all WUWT readers a far more accurate proof the next time.

    Roy Spencer entertained us two years ago with a list called „Top Ten Skeptical Arguments that Don’t Hold Water“. You are doing a pretty good job of adding to that list, Werner.

    • Bindidon, don’t forget that these ENSO values are all just in themselves anomalies (i.e. temporally local departures from a long term trend) measured at the surface in small parts of the ocean. OK, they are large bits, but on a global scale they are still a relatively small subset. They are in fact expressed relative to the appropriate preceding 30-year period, i.e. for the Nino 3.4 index to be 0, the current temperature in the Nino 3.4 region must equal the average for the current reference period (at present, the equivalent month averaged over the 30 years up to 2015). In other words, Nino 3.4 temperatures are current temperatures minus any long-term trend.
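The baseline arithmetic described above is simple; a toy sketch with hypothetical numbers (not real Nino 3.4 data — real ONI values also use 3-month running means):

```python
# Hypothetical September SSTs (°C) in the Nino 3.4 region for 30 baseline
# years plus the current year (entirely made-up values for illustration).
baseline_years = [26.8 + 0.1 * (i % 5) for i in range(30)]  # 1986–2015, say
current = 27.5

# The index is the current value minus the 30-year mean for the same month,
# so any long-term trend shared with the baseline is subtracted out.
climatology = sum(baseline_years) / len(baseline_years)    # = 27.0 here
nino34_anomaly = current - climatology
print(round(nino34_anomaly, 2))  # → 0.5
```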

      What I find most amazing in this whole argument is the willingness of folk on this site to blindly accept UAH tropospheric estimates as the gospel truth, yet look to a small subset of the historical surface temperature record (which is all the ENSO values are) for an explanation! Yet in any other context, these same surface temperature records (obtained and maintained by the same people at NOAA) are despised and accused of being doctored. Sounds like the tail wagging the dog to me.

      • Bindidon, don’t forget that these ENSO values are all just in themselves anomalies

        Dave, your comment may be valid for Nino3.4, but imho it is not for MEI:

        See also

        Please keep in mind that MEI’s baseline average for 1961-1990 is less than 0.1; compared with values near 3.0, that’s peanuts.

        The situation in temperature series (e.g. GISS with a 1981-2010 baseline higher than many anomalies wrt 1951-1980) is quite different.

      • Yet in any other context, these same surface temperature records (obtained and maintained by the same people at NOAA) are despised and accused of being doctored. Sounds like the tail wagging the dog to me.

        Could that be because no one cares what the ENSO numbers are, but there was a huge push to eliminate the pause?

    • 1. The primary incorrectness is to link ENSO to UAH: the tropospheric temperatures are not influenced by ENSO phenomena only; they are e.g. by volcanic aerosols as well.

      Of course they are. But neither 1998 nor 2016 had a huge volcanic eruption so a comparison with 1983 is not valid. And since both the 1998 and 2016 peaks are virtually identical in height and since both had very high ENSO numbers, the obvious conclusion to draw is that ENSO was a huge contributing factor in the anomaly spikes.

      attempt to use an arbitrary choice of UAH values

      RSS shows exactly the same thing so I could have used it as well.

      all 20 of their respective UAH anomalies

      The 1997/1998 El Nino period lasted 13 months. What benefit would there be in showing 20 anomalies? It would just dilute the effect of the El Nino.

      that a gap (of 0.45 °C) exists between these two points in the time series You see below that 1997/98 lies clearly above 2015/16.

      What are you talking about? February 2016 was 0.089 warmer than April 1998. The most recent 12 months were 0.012 higher than 1998.

      The next mistake you make here is to compare the geographically isolated ENSO phenomenon with UAH’s globe anomaly scheme

      My focus in this post was on global warming and not regional warming. And the isolated ENSO phenomenon did affect the globe in a huge way.

      If you want an appropriate comparison, you can’t simply rely on WFT or whichever global UAH data

      I could have shown the same thing using RSS and Nick Stokes’ site.

      So I am afraid that if the final goal of your post was to demonstrate that CO2 is not the main driver of the warming, as is visible here … you’ll owe all WUWT readers a far more accurate proof the next time.

      The highest 12 months went up by 0.012 over 18 years, although this may change slightly with September’s anomalies. Let us pretend it was all due to CO2. What conclusion would you draw?

      • “The highest 12 months went up by 0.012 over 18 years, although this may change slightly with September’s anomalies. Let us pretend it was all due to CO2. What conclusion would you draw?”

        That 0.012 number is a peak-peak comparison only, regardless of whether its averaged over 12 months or 1 month, that type of short time frame is not really useful. Even the UAH satellite data that you have so much faith in shows a long term warming trend. Current temperatures remain *above* that long term trend, even though we are now almost 7 months past the February peak recorded in all of the datasets, more than 10 months past the peak of the El Nino itself and (last time I looked) almost 0.4 degree below the peak value in February for the UAH data. All this just underscores how much the 1998 peak in UAH and some other data sets (although not the current RSS 4.0!) stands out as a short term anomaly. If you look at Nick Stokes’ site, where he has normalised all the temperature sets to a common baseline, its very clear that they are generally in good agreement in terms of these long term trends. The satellite data show that the troposphere tends to exaggerate both the ups and the downs picked up in the surface temperatures from the Nino 3.4 region (i.e. the El Nino related peaks are higher and the La Nina troughs lower) so are a particularly poor choice for comparing any two points 18 years apart. The fact remains that long term trends since the satellite record began have slopes between 0.12 (UAH 6) to just below 0.18 degrees per decade (e.g. BEST and RSS 4.0 TTT): thats at least 18 times what you are quoting for the peak to peak value. Are you seriously suggesting that this is a good way to estimate long term trends in global temperature?

      • Let us pretend it was all due to CO2. What conclusion would you draw?

        As opposed to many WUWT commenters and posters who seem to perfectly know that CO2 plays no role at all in the planet’s actual warming, I do not know whether or not it does.

        Thus: no conclusion! How could I conclude about what I do not know enough about?

      • The fact remains that long term trends since the satellite record began have slopes between 0.12 (UAH 6) to just below 0.18 degrees per decade (e.g. BEST and RSS 4.0 TTT)

        Did you think it would escape my notice that you compared UAH TLT to RSS TTT? The slope for UAH TLT since 1978 is 0.012 per year as you say, but for RSS TLT, it is 0.013 per year. See:

        http://www.woodfortrees.org/plot/rss/from:1978/plot/rss/from:1978/trend/plot/uah6/from:1998/plot/uah6/from:1998/trend

        Are you seriously suggesting that this is a good way to estimate long term trends in global temperature?

        No. I wanted an apples to apples comparison as to how more or less equally strong El Ninos affected the satellite records 18 years apart. Long term trends should be done with slopes and the slope for UAH since 1998 is 0.0038/year. See the above plot. Now you may think I am cherry picking by starting with 1998. But by going from the start of one El Nino to the end of another, I believe that is fair.

      • As opposed to many WUWT commenters and posters who seem to perfectly know that CO2 plays no role at all in the planet’s actual warming

        I believe most will say it plays some role. We just do not know how much nor do we know the magnitude of all feedbacks. However if this post does nothing else, I hope it has convinced you that increasing CO2 is not catastrophic.

      • “Now you may think I am cherry picking by starting with 1998. But by going from the start of one El Nino to the end of another, I believe that is fair.”

        I see that you believe that’s fair. If you chose a year earlier OR later to start your trend, it would actually have a higher slope – between 1.5 and 3 times higher than the 0.0038 that you quote. Not that that means much, given the value is so small, other than to underscore the volatility around an El Nino peak in a dataset that already has a large variance over time. The fact remains that by all measures (including the outdated RSS 3.3), current temperatures remain above the long-term trend plotted over the whole data set (which seems a better way if you want to avoid accusations of ‘cherry-picked’ start and end dates, given its still limited span). This is the case even though we are now 7 months past the peak of the atmospheric response to El Nino and 10 months past the peak of the event itself. Arguably we were already above that trend before the recent El Nino spike. See:

        http://images.remss.com/msu/msu_time_series.html
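The start-date sensitivity described above is easy to illustrate; a toy sketch with an El Nino-like spike at the front of a synthetic series (illustrative only — the 0.005 °C/yr trend and 0.4 °C spike are made-up numbers, not the real UAH data):

```python
import numpy as np

t = np.arange(0, 18, 1 / 12)  # years since a nominal Jan 1998
# Small underlying trend of 0.005 °C/yr plus a one-year 0.4 °C spike
# at the start, mimicking an El Nino peak at the beginning of the record.
y = 0.005 * t + np.where(t < 1.0, 0.4, 0.0)

slopes = {}
for start in (0.0, 1.0):
    m = t >= start
    slopes[start] = np.polyfit(t[m], y[m], 1)[0]
    print(f"start +{start:.0f} yr: {slopes[start]:+.4f} °C/yr")
```

Starting the fit at the spike drags the slope well below the underlying 0.005 °C/yr, while starting one year later (after the spike) recovers it — which is why the quoted trend moves so much with the chosen start year.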

      • “I believe most will say it plays some role. We just do not know how much nor do we know the magnitude of all feedbacks”

        I completely agree. And it may be decades yet before we can definitively say what the magnitude of those feedbacks is – the current models vary widely in their predictions. This is why, when looking at the empirical evidence for what the actual effect is, I still advocate looking at longer time scales if the data are reliable. If you don’t trust the surface measurements, then at least use the whole satellite set. I certainly agree that at this stage the trend looks more like the 1.5 degrees/century end of the range than the more alarmist 4.5 degrees. Perhaps we will look back 20 years from now and have a clearer idea and a better refined prediction. In the meantime, as intelligent people who concede that we don’t yet know all the answers, we all have a choice: we can ignore the possibility that the role CO2 plays may end up being large, assume that it will be small, and go on doing what we have been doing anyway. Or we can hedge our bets, try to cut back our consumption, encourage others to do the same, and maybe even save some gasoline for our descendants to burn in their great-great-granddad’s Shelby Cobra.

        If you chose a year earlier OR later to start your trend, it would actually have a higher slope – between 1.5 and 3 times higher than the 0.0038 that you quote

        But in those cases, you go from neutral or La Nina to El Nino. That would not be fair. December 1997 had an ENSO of 2.3. December of 1998 had an ENSO of -1.5.

        Or we can hedge our bets, try to cut back our consumption, encourage others to do the same, and maybe even save some gasoline for our descendants to burn in their great-great-granddad’s Shelby Cobra.

        I have no problem with that. Things like insulating houses are great. But spending billions on things like carbon capture is a total waste.

  16. You lost me from the top… what are the axes of your graphs?
    As my Finance Prof. used to say: “Always state the obvious!” It might be obvious to you, but it is totally worthless to me.
