January 2016 Global Surface (Land+Ocean) and Lower Troposphere Temperature Anomaly Update

Guest Post by Bob Tisdale

UPDATE: All graphs with UAH lower troposphere temperature data have been updated from beta version 6.4 to 6.5. Thanks, Nick Stokes. This means Figure 4 and Figures 6 through 9 have been revised.

# # #

The GISS Land-Ocean Temperature Index jumped upward by more than 0.2 deg C from September to October 2015. The NOAA/NCEI and UKMO HADCRUT4 data have now caught up.

It appears the El Niño-related upsurges in the global lower troposphere temperature data have started.

# # #

This post provides an update of the values for the three primary suppliers of global land+ocean surface temperature reconstructions—GISS through January 2016 and HADCRUT4 and NCEI (formerly NCDC) through December 2015—and of the two suppliers of satellite-based lower troposphere temperature composites (RSS and UAH) through January 2016. It also includes a model-data comparison.


The NOAA NCEI product is the new global land+ocean surface reconstruction with the manufactured warming presented in Karl et al. (2015). For summaries of the oddities found in the new NOAA ERSST.v4 “pause-buster” sea surface temperature data see the posts:

Even though the changes to the ERSST reconstruction since 1998 cannot be justified by the night marine air temperature product that was used as a reference for bias adjustments (See comparison graph here), and even though NOAA appears to have manipulated the parameters in their sea surface temperature model to produce high warming rates (See the post here), GISS also switched to the new “pause-buster” NCEI ERSST.v4 sea surface temperature reconstruction with their July 2015 update.

The UKMO also recently made adjustments to their HadCRUT4 product, but they are minor compared to the GISS and NCEI adjustments.

We’re using the UAH lower troposphere temperature anomalies Release 6.5 for this post even though it’s in beta form. And for those who wish to whine about my portrayals of the changes to the UAH and to the GISS and NCEI products, see the post here.

The GISS LOTI surface temperature reconstruction and the two lower troposphere temperature composites are for the most recent month. The HADCRUT4 and NCEI products lag one month.

Much of the following text is boilerplate…updated for all products. The boilerplate is intended for those new to the presentation of global surface temperature anomalies.

Most of the update graphs start in 1979. That’s a commonly used start year for global temperature products because many of the satellite-based temperature composites start then.

We discussed why the three suppliers of surface temperature products use different base years for anomalies in chapter 1.25 – Many, But Not All, Climate Metrics Are Presented in Anomaly and in Absolute Forms of my free ebook On Global Warming and the Illusion of Control – Part 1 (25MB).

Since the July 2015 update, we’re using the UKMO’s HadCRUT4 reconstruction for the model-data comparisons.


Introduction: The GISS Land Ocean Temperature Index (LOTI) reconstruction is a product of the Goddard Institute for Space Studies. Starting with the June 2015 update, GISS LOTI uses the new NOAA Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4), the pause-buster reconstruction, which also infills grids without temperature samples. For land surfaces, GISS adjusts GHCN and other land surface temperature products via a number of methods and infills areas without temperature samples using 1200km smoothing. Refer to the GISS description here. Unlike the UK Met Office and NCEI products, GISS masks sea surface temperature data at the poles, anywhere seasonal sea ice has existed, and extends land surface temperature data out over the oceans in those locations, regardless of whether sea surface temperature observations for the polar oceans are available that month. Refer to the discussions here and here. GISS uses the base years of 1951-1980 as the reference period for anomalies. The values for the GISS product are found here. (I archived the former version here at the WaybackMachine.)
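
For readers new to anomalies, the calculation itself is simple: subtract each calendar month’s 1951-1980 average from the corresponding monthly values. Here is a minimal sketch of that generic step, with synthetic data and illustrative variable names; it is not the GISS code.

```python
import numpy as np

# Minimal sketch: convert monthly absolute temperatures (deg C) into anomalies
# relative to a 1951-1980 base period. The data below are synthetic and the
# names are illustrative; this is NOT the GISS algorithm, only the generic step.

years = np.arange(1880, 2016)
rng = np.random.default_rng(0)
temps = (14.0
         + 4.0 * np.sin(2 * np.pi * np.arange(12) / 12)[None, :]   # seasonal cycle
         + 0.007 * (years - 1880)[:, None]                         # slow warming
         + 0.2 * rng.standard_normal((years.size, 12)))            # noise

base = (years >= 1951) & (years <= 1980)
climatology = temps[base].mean(axis=0)        # one mean per calendar month
anomalies = temps - climatology[None, :]      # departures from the base period

print("1951-1980 mean anomaly (~0 by construction):", round(float(anomalies[base].mean()), 3))
print("2015 annual anomaly:", round(float(anomalies[years == 2015].mean()), 3))
```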

Update: The January 2016 GISS global temperature anomaly is +1.13 deg C. It’s basically the same as it was in December 2015, with only a +0.02 deg C increase.


Figure 1 – GISS Land-Ocean Temperature Index


NOTE: The NCEI produces only the product with the manufactured-warming adjustments presented in the paper Karl et al. (2015). As far as I know, the former version of the reconstruction is no longer available online. For more information on those curious adjustments, see the posts:

And recently:

Introduction: The NOAA Global (Land and Ocean) Surface Temperature Anomaly reconstruction is the product of the National Centers for Environmental Information (NCEI), which was formerly known as the National Climatic Data Center (NCDC). NCEI merges their new Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4) with the new Global Historical Climatology Network-Monthly (GHCN-M) version 3.3.0 for land surface air temperatures. The ERSST.v4 sea surface temperature reconstruction infills grids without temperature samples in a given month. NCEI also infills land surface grids using statistical methods, but they do not infill over the polar oceans when sea ice exists; those polar ocean grids are left blank.

The source of the NCEI values is their Global Surface Temperature Anomalies webpage. (Click on the link to Anomalies and Index Data.)

Update (Lags One Month): The December 2015 NCEI global land plus sea surface temperature anomaly was +1.11 deg C. See Figure 2. It rose sharply (an increase of +0.15 deg C) since November 2015.


Figure 2 – NCEI Global (Land and Ocean) Surface Temperature Anomalies


Introduction: The UK Met Office HADCRUT4 reconstruction merges the CRUTEM4 land-surface air temperature product with the HadSST3 sea-surface temperature (SST) reconstruction. CRUTEM4 is the product of the combined efforts of the Met Office Hadley Centre and the Climatic Research Unit at the University of East Anglia. And HadSST3 is a product of the Hadley Centre. Unlike the GISS and NCEI reconstructions, grids without temperature samples for a given month are not infilled in the HADCRUT4 product. That is, if a 5-deg latitude by 5-deg longitude grid does not have a temperature anomaly value in a given month, it is left blank. Blank grids are indirectly assigned the average values for their respective hemispheres before the hemispheric values are merged. The HADCRUT4 reconstruction is described in the Morice et al (2012) paper here. The CRUTEM4 product is described in Jones et al (2012) here. And the HadSST3 reconstruction is presented in the 2-part Kennedy et al (2012) paper here and here. The UKMO uses the base years of 1961-1990 for anomalies. The monthly values of the HADCRUT4 product can be found here.
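
The consequence of leaving 5-deg grids blank and averaging the hemispheres separately can be illustrated with a short sketch; the cosine-latitude weighting and the synthetic grid below are generic, not the Met Office code.

```python
import numpy as np

# Sketch: area-weighted averaging of a 5x5-degree anomaly grid with missing
# cells, each hemisphere averaged separately and then combined. A generic
# illustration of the approach described above, not UKMO code.

lats = np.arange(-87.5, 90, 5.0)        # 36 latitude-band centres
lons = np.arange(-177.5, 180, 5.0)      # 72 longitude-band centres
rng = np.random.default_rng(1)
field = 0.5 + 0.3 * rng.standard_normal((lats.size, lons.size))   # fake anomalies
field[rng.random(field.shape) < 0.3] = np.nan                     # ~30% unsampled cells

weights = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(field)

def masked_mean(values, w):
    ok = ~np.isnan(values)
    return np.sum(values[ok] * w[ok]) / np.sum(w[ok])

nh_mean = masked_mean(field[lats > 0], weights[lats > 0])
sh_mean = masked_mean(field[lats < 0], weights[lats < 0])
global_mean = 0.5 * (nh_mean + sh_mean)   # blank cells implicitly carry the hemispheric mean

print(f"NH {nh_mean:+.3f}  SH {sh_mean:+.3f}  Global {global_mean:+.3f} deg C")
```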

Update (Lags One Month): The December 2015 HADCRUT4 global temperature anomaly is +1.01 deg C. See Figure 3. It increased sharply (about +0.20 deg C) since November 2015.


Figure 3 – HADCRUT4


Special sensors (microwave sounding units) aboard satellites have orbited the Earth since the late 1970s, allowing scientists to calculate the temperatures of the atmosphere at various heights above sea level (lower troposphere, mid troposphere, tropopause and lower stratosphere). The atmospheric temperature values are calculated from a series of satellites with overlapping operation periods, not from a single satellite. Because the atmospheric temperature products rely on numerous satellites, they are known as composites. The level nearest to the surface of the Earth is the lower troposphere. The lower troposphere temperature composites include altitudes from zero to about 12,500 meters, but are most heavily weighted toward altitudes below 3,000 meters. See the left-hand cell of the illustration here.

The monthly UAH lower troposphere temperature composite is the product of the Earth System Science Center of the University of Alabama in Huntsville (UAH). UAH provides the lower troposphere temperature anomalies broken down into numerous subsets. See the webpage here. The UAH lower troposphere temperature composite is supported by Christy et al. (2000) MSU Tropospheric Temperatures: Dataset Construction and Radiosonde Comparisons. Additionally, Dr. Roy Spencer of UAH presents the monthly UAH TLT anomaly updates at his blog a few days before they are released at the UAH website. Those posts are also regularly cross-posted at WattsUpWithThat. UAH uses the base years of 1981-2010 for anomalies. The UAH lower troposphere temperature product is for the latitudes of 85S to 85N, which represent more than 99% of the surface of the globe.

UAH recently released a beta version of Release 6.0 of their atmospheric temperature product. The enhancements in that release lowered the warming rates of their lower troposphere temperature anomalies. See Dr. Roy Spencer’s blog post Version 6.0 of the UAH Temperature Dataset Released: New LT Trend = +0.11 C/decade and my blog post New UAH Lower Troposphere Temperature Data Show No Global Warming for More Than 18 Years. The UAH lower troposphere anomalies Release 6.5 beta through January 2016 are here.

Update: The January 2016 UAH (Release 6.5 beta) lower troposphere temperature anomaly is +0.54 deg C. It rose (an increase of about +0.09 deg C) since December 2015.


Figure 4 – UAH Lower Troposphere Temperature (TLT) Anomaly Composite – Release 6.5 Beta


Like the UAH lower troposphere temperature product, Remote Sensing Systems (RSS) calculates lower troposphere temperature anomalies from microwave sounding units aboard a series of NOAA satellites. RSS describes their product at the Upper Air Temperature webpage. The RSS product is supported by Mears and Wentz (2009) Construction of the Remote Sensing Systems V3.2 Atmospheric Temperature Records from the MSU and AMSU Microwave Sounders. RSS also presents their lower troposphere temperature composite in various subsets. The land+ocean TLT values are here. Curiously, on that webpage, RSS lists the composite as extending from 82.5S to 82.5N, while on their Upper Air Temperature webpage linked above, they state:

We do not provide monthly means poleward of 82.5 degrees (or south of 70S for TLT) due to difficulties in merging measurements in these regions.

Also see the RSS MSU & AMSU Time Series Trend Browse Tool. RSS uses the base years of 1979 to 1998 for anomalies.

Update: The January 2016 RSS lower troposphere temperature anomaly is +0.66 deg C. It rose (an increase of about +0.12 deg C) since December 2015.


Figure 5 – RSS Lower Troposphere Temperature (TLT) Anomalies


The GISS, HADCRUT4 and NCEI global surface temperature anomalies and the RSS and UAH lower troposphere temperature anomalies are compared in the next three time-series graphs. Figure 6 compares the five global temperature anomaly products starting in 1979. Again, due to the timing of this post, the HADCRUT4 and NCEI updates lag the UAH, RSS and GISS products by a month. For those wanting a closer look at the more recent wiggles and trends, Figure 7 starts in 1998, which was the start year used by von Storch et al (2013) Can climate models explain the recent stagnation in global warming? They, of course, found that the CMIP3 (IPCC AR4) and CMIP5 (IPCC AR5) models could NOT explain the recent slowdown in warming, but that was before NOAA manufactured warming with their new ERSST.v4 reconstruction.

Figure 8 starts in 2001, which was the year Kevin Trenberth chose for the start of the warming slowdown in his RMS article Has Global Warming Stalled?

Because the suppliers all use different base years for calculating anomalies, I’ve referenced them to a common 30-year period: 1981 to 2010. Referring to the discussion under FAQ 9 here, NOAA explains:

This period is used in order to comply with a recommended World Meteorological Organization (WMO) Policy, which suggests using the latest decade for the 30-year average.
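
In practice the re-referencing is just a column-wise subtraction: remove each product’s own 1981-2010 mean from its series. A hedged sketch follows, with made-up values; only the product names come from this post.

```python
import numpy as np
import pandas as pd

# Sketch: re-reference anomaly series that use different native base periods
# to a common 1981-2010 base by subtracting each series' own 1981-2010 mean.
# The values are synthetic; only the product names come from the post.

idx = pd.date_range("1979-01", "2016-01", freq="MS")
rng = np.random.default_rng(2)
products = pd.DataFrame(
    {name: 0.015 * np.arange(idx.size) / 12 + 0.1 * rng.standard_normal(idx.size) + offset
     for name, offset in [("GISS", 0.3), ("HADCRUT4", 0.2), ("NCEI", 0.25),
                          ("RSS", 0.0), ("UAH", -0.05)]},
    index=idx)

rebased = products - products.loc["1981-01":"2010-12"].mean()   # one offset per column

print(rebased.loc["1981":"2010"].mean().round(3))   # ~0 for every product
```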

The impacts of the unjustifiable adjustments to the ERSST.v4 reconstruction are visible in the two shorter-term comparisons, Figures 7 and 8. That is, the short-term warming rates of the new NCEI and GISS reconstructions are noticeably higher during “the hiatus”, as are the trends of the newly revised HADCRUT product. See the June update for the trends before the adjustments. But the trends of the revised reconstructions still fall short of the modeled warming rates.


Figure 6 – Comparison Starting in 1979



Figure 7 – Comparison Starting in 1998



Figure 8 – Comparison Starting in 2001

Note also that the graphs list the trends of the CMIP5 multi-model mean (historic and RCP8.5 forcings), which are the climate models used by the IPCC for their 5th Assessment Report.
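
The trends listed on the graphs are ordinary least-squares slopes, usually quoted in deg C/decade. A minimal sketch of that calculation on a synthetic monthly series (not the actual data) might look like this:

```python
import numpy as np

# Sketch: least-squares linear trend of a monthly anomaly series, expressed
# in deg C per decade. The series below is synthetic, for illustration only.

rng = np.random.default_rng(3)
n_months = 37 * 12                                            # roughly 1979-2015
anoms = 0.012 * np.arange(n_months) / 12 + 0.12 * rng.standard_normal(n_months)

time_years = np.arange(n_months) / 12.0
slope_per_year, _ = np.polyfit(time_years, anoms, 1)
print(f"trend = {10 * slope_per_year:+.3f} deg C/decade")
```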


Figure 9 presents the average of the GISS, HADCRUT and NCEI land plus sea surface temperature anomaly reconstructions and the average of the RSS and UAH lower troposphere temperature composites. Again, because the HADCRUT4 and NCEI products lag one month in this update, the most current surface average includes only the GISS product.
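
The averaging itself is straightforward: a mean across whichever products have reported for each month, so the final month’s surface value falls back to GISS alone. A sketch with illustrative numbers loosely based on the values quoted above (in practice each series would first be re-referenced to the common 1981-2010 base):

```python
import numpy as np
import pandas as pd

# Sketch of the Figure 9 averaging: mean across the available surface products
# each month, ignoring products that have not yet reported (NaN). Values are
# illustrative, loosely based on the numbers quoted in this post.

idx = pd.date_range("2015-10", "2016-01", freq="MS")
surface = pd.DataFrame({
    "GISS":     [1.06, 1.05, 1.11, 1.13],
    "HADCRUT4": [0.82, 0.81, 1.01, np.nan],   # lags one month
    "NCEI":     [0.99, 0.96, 1.11, np.nan],   # lags one month
}, index=idx)

surface_avg = surface.mean(axis=1, skipna=True)   # Jan 2016 value = GISS alone
print(surface_avg.round(3))
```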


Figure 9 – Average of Global Land+Sea Surface Temperature Anomaly Products


Note: The HADCRUT4 reconstruction is now used in this section. [End note.]

Considering the uptick in surface temperatures in 2014 (see the posts here and here), government agencies that supply global surface temperature products have been touting record high combined global land and ocean surface temperatures. Alarmists happily ignore the fact that it is easy to have record high global temperatures in the midst of a hiatus or slowdown in global warming, and they have been using the recent record highs to draw attention away from the growing difference between observed global surface temperatures and the IPCC climate model-based projections of them.

There are a number of ways to present how poorly climate models simulate global surface temperatures. Normally they are compared in a time-series graph. See the example in Figure 10. In that example, the UKMO HadCRUT4 land+ocean surface temperature reconstruction is compared to the multi-model mean of the climate models stored in the CMIP5 archive, which was used by the IPCC for their 5th Assessment Report. The reconstruction and model outputs have been smoothed with 61-month filters to reduce the monthly variations. Also, the anomalies for the reconstruction and model outputs have been referenced to the period of 1880 to 2013 so as not to bias the results.
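
The two operations described above, referencing to the full 1880-2013 term and the 61-month smoothing, amount to a mean subtraction and a centred running mean. A minimal sketch on a synthetic series (hypothetical variable names; not the actual HadCRUT4 or CMIP5 data):

```python
import numpy as np
import pandas as pd

# Sketch: reference a monthly anomaly series to the full 1880-2013 term, then
# smooth with a centred 61-month running mean, as described for Figure 10.
# The series is synthetic; this is not the actual HadCRUT4 or model data.

idx = pd.date_range("1880-01", "2013-12", freq="MS")
rng = np.random.default_rng(4)
series = pd.Series(0.006 * np.arange(idx.size) / 12 + 0.15 * rng.standard_normal(idx.size),
                   index=idx)

rebased = series - series.mean()                           # full-term reference period
smoothed = rebased.rolling(window=61, center=True).mean()  # 61-month filter

print(smoothed.dropna().tail(3).round(3))
```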


Figure 10

It’s very hard to overlook the fact that, over the past decade, climate models are simulating way too much warming and are diverging rapidly from reality.

Another way to show how poorly climate models perform is to subtract the observations-based reconstruction from the average of the model outputs (model mean). We first presented and discussed this method using global surface temperatures in absolute form. (See the post On the Elusive Absolute Global Mean Surface Temperature – A Model-Data Comparison.) The graph below shows a model-data difference using anomalies, where the data are represented by the UKMO HadCRUT4 land+ocean surface temperature product and the model simulations of global surface temperature are represented by the multi-model mean of the models stored in the CMIP5 archive. As in Figure 10, to ensure that the base years used for anomalies did not bias the graph, the full term of the graph (1880 to 2013) was used as the reference period.

In this example, we’re illustrating the model-data differences in the monthly surface temperature anomalies. Also included in red is the difference smoothed with a 61-month running mean filter.
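
The construction of the difference series is equally simple: model mean minus observations, month by month, with the same 61-month filter applied to the difference. A hedged sketch with synthetic stand-ins for HadCRUT4 and the CMIP5 multi-model mean:

```python
import numpy as np
import pandas as pd

# Sketch of the Figure 11 construction: subtract the observations from the
# multi-model mean month by month, then smooth the difference with a 61-month
# running mean. Both series below are synthetic stand-ins, not the real data.

idx = pd.date_range("1880-01", "2013-12", freq="MS")
rng = np.random.default_rng(5)
obs   = pd.Series(0.006 * np.arange(idx.size) / 12 + 0.15 * rng.standard_normal(idx.size), index=idx)
model = pd.Series(0.008 * np.arange(idx.size) / 12 + 0.05 * rng.standard_normal(idx.size), index=idx)

# reference both to the full term so the base period does not bias the difference
obs, model = obs - obs.mean(), model - model.mean()

difference = model - obs
difference_smoothed = difference.rolling(window=61, center=True).mean()

print(difference_smoothed.dropna().tail(3).round(3))
```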


Figure 11

The greatest difference between models and reconstruction occurs now.

There was also a major difference, but of the opposite sign, in the late 1880s. That difference decreases drastically from the 1880s and switches signs by the 1910s. The reason: the models do not properly simulate the observed cooling that takes place at that time. Because the models failed to properly simulate the cooling from the 1880s to the 1910s, they also failed to properly simulate the warming that took place from the 1910s until 1940. That explains the long-term decrease in the difference during that period and the switching of signs in the difference once again. The difference cycles back and forth, nearing a zero difference in the 1980s and 90s, indicating the models are tracking observations better (relatively) during that period. And from the 1990s to the present, because of the slowdown in warming, the difference has increased to its greatest value ever…where the difference indicates the models are showing too much warming.

It’s very easy to see the recent record-high global surface temperatures have had a tiny impact on the difference between models and observations.

See the post On the Use of the Multi-Model Mean for a discussion of its use in model-data comparisons.


The most recent sea surface temperature update can be found here. The satellite-enhanced sea surface temperature composite (Reynolds OI.2) is presented on global, hemispheric and ocean-basin bases.


We discussed the recent record-high global sea surface temperatures for 2014 and 2015 and the reasons for them in General Discussions 2 and 3 of my recent free ebook On Global Warming and the Illusion of Control (25MB). The book was introduced in the post here (cross post at WattsUpWithThat is here).

47 thoughts on “January 2016 Global Surface (Land+Ocean) and Lower Troposphere Temperature Anomaly Update”

  1. I notice that the 1999 el Nino year is no longer super warm. Cool the past and heat up the present!

    By the way, unlike the 1999 el Nino year which I remember very well since I was building my house that year!…today it is MINUS 16 and the high will be…3 degrees F!

  2. ‘The UAH lower troposphere anomalies Release 6.4 beta through December 2015 are here. (Because the data have not been updated at that webpage for January 2016, I used the 0.54 deg C January 2016 value from the update at Dr. Roy Spencer’s blog here.)’
    Bob, you have to switch to V6.5 beta. That file has been posted. Roy’s post has details.

  3. The real physical quantity is the thermal energy of the atmosphere and oceans; the top couple of meters of the oceans are about equal to the total thermal energy of the Earth’s atmosphere. In that respect, averaging land and ocean temperatures to obtain the so-called global temperature gives a deceptive quantity, which I doubt represents anything.
    Temperatures across land and ocean cells should not be averaged into a single quantity. Regional temperatures make some sense as long as they are corrected for the UHI.

      • Contrary to ingrained ideas that we have, temperatures are not an additive quantity, and so it is incorrect to try to take the average.

        One could average several measurements of the temperature of a bucket of water to reduce experimental error but averaging the temperature of different objects ( eg land and sea ) has no meaning in physics.

        These combined datasets are unphysical bunk that will over-weight the changes on land.

        This concept is so ingrained in climatology that many have trouble accepting it.

    • Vuk’ “Regional temperatures make some sense as long as they are corrected for the UHI.”

      I find that statement rather odd. You agree that it’s fundamentally wrong …. but probably ok regionally ??!

      • Averaging temperature over a small region of land or over a small region of ocean, individually and not combined, ‘makes some sense’, but it is not the absolute ‘wisdom’.

      • Ah, thanks. I thought you were meaning land+sea was OK regionally.

        Yes, as explained in the article on Climate Etc., with some care it is reasonable to regard temp as an energy proxy and thus additive. Averaging rainforest and desert would be questionable, as would averaging high-latitude SST with tropical SST, since there are large differences in mixed layer depth. If that is being done, some discussion of how much it is bending the rules would be appropriate.

        Regional averages should be fairly justifiable as long as it is presented as an energy proxy and we are not trying to average temperatures.

      • The measurement of air temperature is not a good proxy for energy.

        The idea that it is a proxy for energy involves all sorts of assumptions, the validity of which have not been properly tested.

        At a minimum one would need hourly temperature and hourly humidity to create a reasonable energy profile for any given day. The average of min/max potentially is very misleading, and without humidity is meaningless.

        Measurements of ocean temperature, of course, provide a metric for energy. Furthermore, because of the ocean’s heat capacity, changes in ocean temperature over a 24-hour period are damped. It might be quite reasonable to take just a min/max temperature and average that to get an idea as to the energy contained within a given layer of the ocean.

        The same cannot be said for land based air temperature which are very much influenced by the presence of clouds. Whether a day is measured ‘hot’ will in most instances depend upon whether it is cloud free around midday.

        If a day is cloudy in the morning but the clouds dissipate for just a couple of hours around midday, and then cloud over again, the day may not actually be ‘hot’, but will show up as a ‘hot’ day. The converse is also true, the day can be sunny early to mid morning, and sunny mid to late afternoon, such that overall it was a ‘hot’ day, but if it is cloudy at midday then it will show up as a cool day.

        There is an assumption that cloudiness remains constant over time, but slight changes in cloudiness, or the patterns and timing of cloudiness could have a significant impact upon land based air temperature measurements.

        It is the oceans that govern the climate. There is no point in using land-based thermometers, which were never designed to be set up so as to produce a global temperature network; their siting is poor, as is their spatial coverage, and one cannot produce a time-series data set when different station data are used to assemble the data set in each year of the series. So, for example, we cannot compare today’s temperatures to those of the 1880s, since there were only a few hundred stations reporting temperatures in 1880 and we are not using in 1930, or 1950, or 1980, or today, the very same few hundred stations, so we are never making a like-for-like comparison. The inclusion and drop-out of stations over time renders the data set hopeless for comparison-over-time analysis.

      • Could just leave it as: an average of sea temperatures is suitable as qualitative evidence (not sure even of that, as trends in SST vary a lot around the globe), but definitely the two combined are not quantitative evidence to support an increase in forcing.

        There is too much affecting land temperatures and corrections to even use it in a hand waving exercise. How many readers are sure that the pause doesn’t really extend back to 1940?

        TMT is at least an increase in emissions from O2 in the atmosphere turned into an average temperature equivalent. I wonder why Mears doesn’t like it?

      • Richard Verney, 11.15 am., very good summary.

        Today and for the past few days in the west Pacific we have the remains of a large cyclone system that has sucked all the usual summer cloud cover and diurnal effect away from the land in my NOTW in the east coast hinterland of Aus and that has given us clear, aerosol-free skies. The absence of those clouds and aerosols on a beautiful summer’s day means a warming of 5c.

      • Even averages of sea-surface temperatures are misleading, as they assume the vertical mixing rate is constant. El Nino/La Nina show that it is not, that there are significant changes in ocean upwelling due to currents and winds.

        ARGO was supposed to resolve much of this controversy. Instead ARGO itself was adjusted because it was diverging from the combined land sea surface temperatures. Had ARGO shown warming, there would have been no adjustments to remove “cold” floats.

        Yet it is quite obvious that in any group of floats there will be some floats that read above the average and some that read below the average, even if all floats are performing 100% correctly.

        To remove the “cold” floats because they are reading below the average, under the assumption that they are faulty, ignores that measurement error is normally distributed. You expect to get high and low readings, even if the data is 100% correct.

      • Ferdberple,
        Has anyone ever tried to compute the energy content of the vertically mixed sea surface layer as shown by the (raw data of) Argo floats? The trend shown could be a perfect indication of warming/cooling trends of the Earth. Oceans account for 71% of Earth’s surface.

      • Thanks Vukcevic. About: “The real physical quantity is the thermal energy of the atmosphere and oceans; the top couple of meters of the oceans are about equal to the total thermal energy of the Earth’s atmosphere.” Do you know whether there are graphs of the (development of the) heat content of the same upper layers of the ocean as where the SST is measured? And/or whether there is a combination of that heat content with the heat content of the lower layers of the troposphere? Thanks in advance.

  4. Figure 11 is interesting. All the negative excursions in the difference coincide with major volcanoes. A clear indication that climate sensitivity to volcanic forcing is being over-estimated.

    The corollary is that GHG forcing (which opposed the volcanic forcing) is probably also being over-rated.

    • Climate models over-rate both volcanic and GHG forcing. This works roughly OK when you have both. When the cooling one is absent (post-2000 lack of volcanoes) you are left with exaggerated warming and need to start rigging the data (yes Karl, this means you) and desperately looking for “missing heat”.

  5. Some of those charts would look great in an abstract art gallery. In the real world, anything referencing Karl et al. is a bunch of meaningless mathturbated spaghetti mush.

  6. Bob,

    I love you. We love you. But can you include key takeaway summaries when you write? Your articles are most often painfully long and so detailed that when I finish I generally have to re-review to grasp your main points, and even then I’m often not sure I got your intended point.

  7. Bob, fig. 10, about the multi-model mean. What is the (mean?) ‘run date’ of the models? To what date do their predictions date back?

  8. In Figure 7, you state that the UAH slope is +0.002/decade from January 1998. Unfortunately, UAH gives the anomaly for January and days later gives the complete data set. Every one of at least the hottest 14 years has been changed. According to:

    The slope from January 1998 is now negative.
    Temperature Anomaly trend
    Jan 1998 to Jan 2016 
    Rate: -0.011°C/Century;
    CI from -1.225 to 1.204;
    t-statistic -0.017;
    Temp range 0.143°C to 0.141°C

    The pause for UAH6.0beta5 starts from October 1997, but it will end if February is above 0.315 which is extremely likely.

    • Really, will it?

      What is the measurement accuracy?

      Whether the least square regression line that is fitted is flat, or, within measurement errors, has a slight negative trend or a slight positive trend, all one can say is that within the limitations of the data and the measurement errors thereof, there is no statistical trend.

      One should not be drawing a thin line representing some claimed trend but rather a thick line which is bordered by the error bounds of the data set.

      • there is no statistical trend

        With respect to statistically significant warming on UAH6.0beta5, you need to go back 23 years:
        Temperature Anomaly trend
        Feb 1993 to Jan 2016 
        Rate: 0.827°C/Century;
        CI from -0.031 to 1.685;
        t-statistic 1.889;
        Temp range 0.009°C to 0.199°C

      • Thanks for that.

        It may well be the case that because of the present strong El Nino, over the course of the next few months, the ‘pause’ will disappear, and Lord Monckton will be unable to present his usual summary.

        But is that significant? For PR, YES, but scientifically, I would say NO.

        We know that we are dealing with a system that has much variation, from month to month, and indeed from year to year. It will almost certainly be the case that in late 2016 or sometime in 2017 there will be a La Nina. It is likely that this will be accompanied by cooler temperatures. In this scenario, it is likely that the ‘pause’ will reappear and perhaps will even lengthen beyond its 18 yr 8 month length.

        So what is to be made of data where one moment there is a ‘pause’ of about 18 yrs 8 months, then the ‘pause’ disappears, and then some months later the ‘pause’ reappears, possibly even lengthening?

        Given the variability in the data, I suggest that one should not get too worked up by the disappearance of the ‘pause’. Nothing of substance will have changed; there will simply have been a short-lived response to a natural event, i.e., El Nino warming, and order will be restored by a following La Nina.

        The current 2015/16 El Nino will only be of significance if, coincident with it, there is a long-lasting step change in temperatures, as there was with the 1997/98 Super El Nino. Coincident with the 1997/98 Super El Nino there was a step change in temperature of around 0.27degC. If that does not happen this time around, and if the strong 2015/16 El Nino is more like the 2010 strong El Nino, then there will be a short-lived spike/peak in the temperature anomaly, but that anomaly will be brought back down by a following La Nina and the temperature anomaly will restabilize around the 2001 to 2003 anomaly level.

        Of course, the future has yet to be written, so we do not yet know whether there will be a long-lasting step change in temperature or just a short-lived spike. We will know the position in 2019/20, since that will give time for the following La Nina to have worked its impact.

    • This is why we need to move to an ENSO corrected data set to determine the pause (or at least add that data). That would remove any objections based on starting dates as well as eliminate the problems we will see over the next few months until the La Nina takes hold.

      You could use the approach used in Santer et al 2014 thus eliminating any squawking about the algorithm used.

    • I wonder whether we should waste our time on this type of abnormal variation over a change of only a few years, like:

      rate of -0.011°C/century from Jan 1998 to Jan 2016
      rate of 0.827°C/century from February 1993 to January 2016

      The fundamental point is: we must look at the data pattern; without that, simply fitting a linear curve gives ghost results. This I am pointing out again and again.

      Dr. S. Jeevananda Reddy

      • Any trend that changes when you change the end points is not a trend. It is an artifact of the end-points.

  9. I criticized the offerings the last few times because these are taking on a lamb-being-led-to-the-slaughter quality. The progressive adjustments leave you less and less able to report other than that we are on our way to thermageddon. Your published work is gradually being “debunked by the observations” and in a year or two will be irrelevant, once every El Nino since 1998 is made larger than it through progressive shrinking of 1998 and inflating of its followers. All we will need to read in the future on this topic (if something isn’t done to stop the tampering) is Trenberth, Karl, Mann and Mears analyses.

  10. My general trade is in accounting, so running numbers is a natural extension of what I do for a living. Unfortunately, the environmental scientists conducting these studies are using a form of science that is as accurate as a crystal ball at determining outcomes, and they have forgotten simple scientific principles of empirical data collection untainted by variables such as the “heat island” effect of the asphalt or concrete of streets and runways in cities, and of “adjusting data” to fit the conclusion. I see no mention of solar output in the long-term predictions of the GISS or HADCRUT4 temperature anomalies, which speaks volumes about their ignorance of our relationship with the very thing that gives us life on this rock, the sun. The absence of the Maunder minimum is also suspect and indicates their unwillingness to contemplate that a similar event could happen in the future. As an accountant, I see the potential, as has already been demonstrated, of this whole issue being more about money, political power and control than about the wellbeing of the planet.

  11. Bob Tisdale or WUWT, almost all the temperature data is corrupted, so there isn’t a lot of confidence that can be placed in any conclusions reached; basically GIGO. The adjustments and constructions are made in secret, without public justification, and are highly subjective and apparently extremely biased. What is needed is an Open Source Temperature Data and Reconstruction Repository and/or Contest that will bring the needed sunlight to the shadowy world of Climate Change exposed in the Climategate emails. Way way way too much power has been concentrated in the hands of activists like Hansen, Jones, Mann and Thompson. We need a “Project Sunlight” to expose the fraud that is Climate Science, an Open Source Climate Data and Temperature Reconstruction effort that removes the bias of Government Paid Activists seeking to reach a preconceived solution rather than the truth.

  12. The Earth’s surface is in a warming trend, somewhat as predicted.
    The stratosphere is in a cooling trend, as predicted.
    So would not there be a “Goldilocks Layer” in between, with no trend at all?
    Could that Goldilocks Layer be the middle troposphere?

    So “no warming, forever.”

  13. Slightly OT about the recent “El Nino”;
    Classically, El Ninos are supposed to cause a damaging decline in Peruvian anchovy catches due to interruption of the Peruvian upwelling.

    So what kind of el Nino caused the 2015 anchovy landed tonnage to be 1 million tons larger than that of 2014?


  14. Bob, why do you like to pawn off official temperatures containing known and obvious computer-generated noise? I find this to be the case with your figures 1, 2, and 3 showing, respectively, data from GISS, NCEI (NCDC originally) and HadCRUT4. The noise manifests itself as sharp upward spikes, most easily visible at years 1980, 1983, 1987, 1990, 1995, 1998 (two attached to the super El Nino), 2002, and 2007. Their locations are exactly the same in all three data-sets, assuring us that they have a common origin when they were all processed identically by a computer that left these traces on their publicly available data. They are neither acknowledged nor explained by the owners of these data-sets. My observations show that they have existed since the nineties. One needs to know what happened in the nineties to cause this computer processing to be initiated. The one major thing in the nineties that I am aware of is the cover-up of a hiatus in the eighties and nineties with a fake warming they call “late twentieth century warming.” It is not unreasonable to connect these two events because all three data-sets prominently display this fake warming. Satellite temperature curves for that time period are free of this noise and clearly show the existence of the hiatus of the eighties and nineties. You, Bob, owe me an apology for calling attention to that on October 29th. You had the temerity to say that I was making it all up. I did not, I never have, and the data is open to observation by anybody who wants to do it.

  15. I’ve been reading this website for a long time. When in High School, I used to be convinced AGW was real and that we were in immediate danger. Then I became skeptical, and began to read this site.

    But I have to say that since 2014, and particularly during 2015 and the first month of 2016, temperatures have been rising non-stop. I am really concerned about this and I am also very worried about this never-ending El Niño (CFS showing El Niño conditions into October!!!!).

    I live in Argentina and we are really suffering its effects. Also, January and February have been very warm and humid months, pretty much like what you see in the tropics. It’s becoming unbearable, and we are getting round after round of severe t-storms.

    I really pray the Lord this El Niño goes away.


