Do the Adjustments to the Global Land+Ocean Surface Temperature Data Always Decrease the Reported Global Warming Rate?

Guest Post by Bob Tisdale

If you’ve read the first two posts in this series, you may already know the answer to the title question.  Those two posts were:

In this post, we’ll compare “raw” global land+ocean surface temperature data and the end products available from Berkeley Earth, Cowtan and Way, NASA GISS, NOAA NCEI and UK Met Office.

END PRODUCTS

Berkeley Earth – This land+ocean dataset is made up of the infilled land surface air temperature data created by the Berkeley Earth team and their infilled version of the HADSST3 sea surface temperature product from the UK Met Office (UKMO). For their merged land+ocean product, Berkeley Earth also infills data missing from the polar oceans, anywhere sea ice exists.  They accomplish this infilling in two ways, creating separate datasets:  First, using sea surface temperature data from adjacent ice-free oceans.  Second, using land surface air temperature data from adjacent high-latitude land masses.  For this post, we’re using the data with the land-based infilling of the polar oceans to agree with the Cowtan and Way and the GISS Land-Ocean Temperature Index, both of which rely on land-surface temperature data for infilling.  The annual Berkeley Earth Land+Ocean data can be found here.

Cowtan and Way – The land+ocean surface temperature data from Cowtan and Way is an infilled version of the UKMO HADCRUT4 data. (Infilled by kriging.) As noted above, Cowtan and Way also infill areas of the polar oceans containing sea ice using land-based surface air temperature data. The annual Cowtan and Way data are here.

NASA GISS – The Land-Ocean Temperature Index (LOTI) from the Goddard Institute for Space Studies (GISS) is made up of GISS-adjusted GHCN data from NOAA for land surfaces and NOAA’s ERSST.v4 “pause buster” sea surface temperature data for the oceans, the latter of which has already been infilled by NOAA.  GISS infills missing data for land surfaces by extending data up to 1200 km.   GISS also masks sea surface temperature data in the polar oceans (anywhere sea ice has existed) and extends land surface air temperature data out over the polar oceans.  The GISS LOTI data are here.

NOTES – For summaries of the oddities found in the new NOAA ERSST.v4 “pause-buster” sea surface temperature data see the posts:

Even though the changes to the ERSST reconstruction since 1998 cannot be justified by the night marine air temperature product that was used as a reference for bias adjustments (See comparison graph here), and even though NOAA appears to have manipulated the parameters (tuning knobs) in their sea surface temperature model to produce high warming rates (See the post here), GISS also switched to the new “pause-buster” NCEI ERSST.v4 sea surface temperature reconstruction with their July 2015 update. [End notes.]

NOAA NCEI – The NOAA Global (Land and Ocean) Surface Temperature Anomaly reconstruction is the product of the National Centers for Environmental Information (NCEI).  NCEI merges their new “pause buster” Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4) (see notes above) with the new Global Historical Climatology Network-Monthly (GHCN-M) version 3.3.0 for land surface air temperatures. The ERSST.v4 sea surface temperature reconstruction infills grids without temperature samples in a given month.  NCEI also infills land surface grids using statistical methods, but they do not infill over the polar oceans when sea ice exists.  When sea ice exists, NCEI leaves the polar ocean grids blank. The source of the NCEI values is here.  Click on the link to Anomalies and Index Data.

UK Met Office – The UK Met Office HADCRUT4 reconstruction merges the CRUTEM4 land-surface air temperature product and the HadSST3 sea-surface temperature (SST) reconstruction.  CRUTEM4 is the product of the combined efforts of the Met Office Hadley Centre and the Climatic Research Unit at the University of East Anglia. And HadSST3 is a product of the Hadley Centre.  Unlike the other reconstructions, grids without temperature samples for a given month are not infilled in the HADCRUT4 product.  That is, if a 5-deg latitude by 5-deg longitude grid does not have a temperature anomaly value in a given month, it is left blank. Blank grids are indirectly assigned the average values for their respective hemispheres before the hemispheric values are merged. The annual HADCRUT4 data are here, per the format here.

“RAW” DATA

For the “raw” global land+ocean surface temperature data, we’re using a weighted average of the global (90S-90N) ICOADS sea surface temperature data (71%) and the global “unadjusted” GHCN data from Zeke Hausfather (29%). (To confirm the percentage of Earth’s ocean area, see the NOAA webpage here.)
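As a rough illustration of how such an area-weighted blend is computed, here is a minimal sketch in Python. The function name and the one-year example values are made up for illustration; this is not the actual processing behind the figures below.

```python
# Minimal sketch: blend global SST and land surface air temperature
# anomalies into a land+ocean average, weighted by area fraction.
OCEAN_FRACTION = 0.71           # ~71% of Earth's surface is ocean
LAND_FRACTION = 1.0 - OCEAN_FRACTION

def land_ocean_average(sst, land):
    """Area-weighted blend of equal-length ocean and land anomaly series."""
    if len(sst) != len(land):
        raise ValueError("series must be the same length")
    return [OCEAN_FRACTION * s + LAND_FRACTION * l
            for s, l in zip(sst, land)]

# One year of hypothetical annual anomalies (deg C):
print(land_ocean_average([0.5], [0.25]))  # ≈ [0.4275] = 0.71*0.5 + 0.29*0.25
```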

ICOADS – This is the source sea surface temperature data used by NOAA and UKMO for their sea surface temperature reconstructions. The source of the ICOADS data is the KNMI Climate Explorer.  Also see the post Do the Adjustments to Sea Surface Temperature Data Lower the Global Warming Rate?

For the unadjusted land surface air temperature data, Zeke Hausfather (a member of the Berkeley Earth team) has graciously updated his monthly unadjusted global GHCN land surface temperature data through March 2016, using the current version of the GHCN data. (Thank you, Zeke.) See Zeke’s comment here on the cross post at WattsUpWithThat of the original land surface air temperature post. The link to that current version of the “raw” data is here.  Also see the post UPDATED: Do the Adjustments to Land Surface Temperature Data Increase the Reported Global Warming Rate?

GENERAL NOTES

The WMO-preferred base years of 1981-2010 are used for anomalies for the ten comparison graphs.
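Switching a series to the 1981-2010 base years simply means subtracting the series’ own mean over that period from every value. A minimal sketch (the helper and toy values are mine, for illustration only):

```python
# Minimal sketch: re-reference a temperature anomaly series to a
# chosen base period by subtracting the mean over that period.
def rebaseline(years, values, base_start=1981, base_end=2010):
    """Return anomalies relative to the mean over [base_start, base_end]."""
    base = [v for y, v in zip(years, values) if base_start <= y <= base_end]
    if not base:
        raise ValueError("no data within the base period")
    base_mean = sum(base) / len(base)
    return [v - base_mean for v in values]

# Toy series with a 1981-1983 base period (base mean = 0.5):
print(rebaseline([1980, 1981, 1982, 1983], [0.0, 0.5, 0.25, 0.75], 1981, 1983))
# [-0.5, 0.0, -0.25, 0.25]
```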

We excluded the polar oceans in the post Do the Adjustments to Sea Surface Temperature Data Lower the Global Warming Rate?  That is, we limited the latitudes to 60S-60N because the sea surface temperature reconstructions account for sea ice differently. We’re taking a different tack in this post. Because the suppliers of the end products handle sea ice differently (some infill data when and where sea ice exists, others don’t, and infilling is another form of adjustment), I’m including the polar oceans in the “raw” sea surface temperature product, using the data for the latitudes of 90S-90N, and comparing that to the global end products.

For the “raw” data from ICOADS, if a 2.5-deg latitude by 2.5-deg longitude grid does not contain observations-based data, it is left blank. This means there are no temperature measurements in the polar oceans when sea ice exists…like the UKMO HADCRUT4 data and the NOAA/NCEI data.

In past posts I have mentioned one of the problems with infilling the temperature data for the polar oceans by extending land surface air temperature data out over sea ice. That problem: the method fails to consider that polar sea ice during the summer likely has a different albedo than surface station locations where snow has melted and exposed underlying land surfaces. That is, sea ice will tend to reflect sunlight while exposed land surfaces would absorb it. That problem is compounded in the Arctic Ocean when ice-free ocean exists between land and sea ice. The ice-free ocean has yet another albedo, which is not the same as the ice surface or the snow-free land mass.  Those problems do not exist in winter when snow covers both sea ice and land surfaces and when the sea ice covers the Arctic Ocean to the shoreline, so the problem is seasonal.

Regardless of the season, any polar temperature data created by infilling is make-believe data.

Let’s start with the long-term data.

LONG-TERM TREND COMPARISON

This comparison starts in 1880 because the GISS and NOAA/NCEI data begin then. See Figure 1.  Because the adjustments to the sea surface temperature data reduce the amount of warming during the early 20th Century, the “raw” data have the highest long-term warming rate.  Or phrased differently, the adjustments have reduced the reported global warming since 1880.
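The warming rates quoted throughout this post are linear least-squares trends expressed in deg C/decade. A minimal sketch of that calculation (my own helper, using ordinary least squares on annual data):

```python
# Minimal sketch: ordinary least-squares slope of anomaly vs. year,
# scaled to deg C/decade.
def trend_per_decade(years, anomalies):
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
    den = sum((x - mean_x) ** 2 for x in years)
    return 10.0 * num / den  # per-year slope times 10 = per decade

# A perfectly linear series warming at 0.01 deg C/year:
years = list(range(1880, 2016))
anoms = [0.01 * (y - 1880) for y in years]
print(round(trend_per_decade(years, anoms), 3))  # 0.1
```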

Figure 1

Note:  The excessive warming of the “raw” surface temperature data from the early 1900s to the mid-1940s (due to the sea surface temperature component) presented a problem for climate models. Most of the warming then was not caused by man-made greenhouse gases (according to the models), but the warming trend of the “raw” data from the early 1900s to the mid-1940s was much higher than their recent warming rates.  For confirmation, see the graph of 30-year running trends here.  The bias corrections for the data prior to 1940 reduced those problems for the models, but did not eliminate them.  That is, the models still cannot explain the initial cooling of global sea surface temperatures from 1880 to about 1910, and, as a result, the models cannot explain the warming from about 1910 to the mid-1940s.   [End note.]

TREND COMPARISONS FOR 1950 TO 2015

1950 was one of the mid-20th Century start points used by NOAA in their study Karl et al. (2015) Possible artifacts of data biases in the recent global surface warming hiatus…the “pause buster” paper. As shown in Figure 2, for the period of 1950 to 2015, the GISS and NCEI data have noticeably higher warming rates than the other datasets.  As you’ll recall, both GISS and NCEI use NOAA’s ERSST.v4 “pause-buster” sea surface temperature data, which have not been corrected for the 1945 discontinuity and trailing biases presented in Thompson et al. (2008), A large discontinuity in the mid-twentieth century in observed global-mean surface temperature.  On the other hand, the other datasets (Berkeley Earth, Cowtan and Way, and HADCRUT4) use the UKMO HADSST3 data, which have been corrected for those mid-20th Century biases.

Figure 2

For a more-detailed discussion of NOAA’s failure to account for those biases with their ERSST.v4 “pause-buster” data, see the post Busting (or not) the mid-20th century global-warming hiatus, which was also cross posted at Judith Curry’s ClimateEtc here and at WattsUpWithThat here.

TREND COMPARISONS FOR 1975 TO 2015

1975 is a commonly used breakpoint for the transition from the mid-20th Century cooling or slowdown (depends on the dataset) and the recent warming period. Figure 3 compares the “raw” and “adjusted” global warming rates from 1975 to 2015.  There is a 0.019 deg C/decade spread in the trends, with the Cowtan and Way data having the highest warming rate and the NOAA/NCEI data having the lowest…even lower than the “raw” data.

Figure 3

TREND COMPARISONS FOR 1998 TO 2015

Figure 4 compares the “raw” and “adjusted” global surface temperature anomalies starting in 1998, which is often used as the start year of the slowdown in global warming.  The GISS LOTI and NOAA/NCEI data have the highest warming rates, a result of NOAA’s excessive tweaking of the parameters (tuning knobs) in the model that manufactures NOAA’s “pause buster” ERSST.v4 sea surface temperature data, creating a trend near the high end of the parametric uncertainty range.  At the other end of the spectrum, the UKMO HADCRUT4 data have a trend that’s very similar to the “raw” data.  Keep in mind that the HADSST3 sea surface temperature data (used in the HADCRUT4 combined land+ocean data) have been adjusted for ship-buoy biases.

Figure 4

Both the Berkeley Earth and the Cowtan and Way land+ocean data rely on and infill HADSST3 data for the ocean portion, yet the Cowtan and Way data have a noticeably higher warming rate than the Berkeley Earth data during this period. (Maybe Kevin Cowtan or Robert Way will stop by and explain that for us.)

As a reference, based on the model-mean of the climate models stored in the CMIP5 archive, which represents the consensus of the modeling groups, the expected warming rate during the period of 1998 to 2015 is 0.233 deg C/decade with the worst-case RCP8.5 scenario, and that’s about 0.1 deg C/decade to 0.13 deg C/decade higher than observed…thus my use of the term “slowdown”.

WAS THERE A “HIATUS” IN GLOBAL WARMING?  

For this heading, I’m going to borrow and update the text from the post Do the Adjustments to Sea Surface Temperature Data Lower the Global Warming Rate?

Of course there was a hiatus, but the extent of the slowdown depends on the global land+ocean temperature dataset and the period to which the slowdown is compared.  Figure 5 includes the “raw” and “adjusted” global land+ocean surface temperature anomalies for the period of 1998 to 2013.  We ended the data in 2013, because:

  • 2013 was an ENSO neutral year…that is, there was no El Niño or La Niña.  (See NOAA’s Oceanic NINO Index here.)
  • The Blob and the weak El Niño conditions were the primary causes of the naturally occurring uptick in global surface temperatures in 2014 and,
  • The continuation of The Blob and the strong El Niño conditions were the primary causes of the naturally occurring uptick in global surface temperatures in 2015.

Figure 5

Note 1: To confirm the second and third bullet points, we discussed and illustrated the natural causes of the 2014 “record high” surface temperatures in General Discussion 2 of my free ebook On Global Warming and the Illusion of Control (25 MB).  And we discussed the naturally caused reasons for the record highs in 2015 in General Discussion 3.

Note 2: Some may claim the start year of 1998 is cherry-picked because it’s an El Niño decay year. That’s easily countered by noting that the 1997/98 El Niño was followed by the 1998 to 2001 La Niña. (Once again, see NOAA’s Oceanic NINO Index here.) Also, 1998 was used as a start year by Karl et al. (2015) and the period of 1998 to 2013 is also one year longer than the period of 1998 to 2012 used by NOAA in that paper.

[End notes.]

Karl et al. (2015) also used a sleight of hand in their trend comparisons by using 1950 as the start year of the recent warming period. The IPCC did the same thing in their analysis of the hiatus in Chapter 9 of their Fifth Assessment Report (see their Box 9.2).  Both groups referenced the hiatus warming rates to periods starting in 1950 or 1951.  Why does that indicate they were using smoke and mirrors?  The trends from those 1950 or 1951 start dates include the slowdown or cooling of global surfaces that occurred from the mid-1940s to about 1975. So let’s present the trends from the start of the recent warming period (1975) to the end of the 20th Century (1999). See Figure 6.  As you’ll recall, the year 1999 was used by NOAA in Karl et al. (2015). (Refer to Figure 1 from Karl et al. (2015).)

Figure 6

Only the UKMO’s HADCRUT4 data have a higher warming rate than the “raw” data during this period. Some readers might believe the other data suppliers have reduced the reported global warming during the period of 1975 to 1999 to suppress the extent of the slowdown that followed.

Now let’s compare the trends for the periods of 1975 to 1999 (Figure 6) and 1998 to 2013 (Figure 5).  The slowdowns (1975-1999 trends minus 1998-2013 trends) are:

  • HADCRUT4 slowdown = 0.137 deg C/decade (compared to +0.190 deg C/decade for 1975-1999)
  • NOAA/NCEI slowdown = 0.087 deg C/decade (compared to +0.173 deg C/decade for 1975-1999)
  • Berkeley Earth slowdown = 0.086 deg C/decade (compared to +0.178 deg C/decade for 1975-1999)
  • GISS LOTI slowdown = 0.080 deg C/decade (compared to +0.178 deg C/decade for 1975-1999)
  • Cowtan and Way slowdown = 0.079 deg C/decade (compared to +0.182 deg C/decade for 1975-1999)

Not too surprisingly, the “adjusted” dataset with no infilling (HADCRUT4) shows the greatest slowdown.

The “raw” data show a slowdown of about 0.148 deg C/decade (compared to +0.183 deg C/decade for 1975-1999), a slightly greater slowdown than the HADCRUT4 data.
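The slowdown values above are simply the difference of two linear trends. A minimal, self-contained sketch of that arithmetic (the helper names and the toy series are mine, for illustration):

```python
# Minimal sketch: slowdown = trend(1975-1999) minus trend(1998-2013),
# both as least-squares slopes in deg C/decade.
def ols_trend_per_decade(years, anoms):
    n = len(years)
    mx, my = sum(years) / n, sum(anoms) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, anoms))
    den = sum((x - mx) ** 2 for x in years)
    return 10.0 * num / den

def slowdown(series, warm=(1975, 1999), hiatus=(1998, 2013)):
    """series: dict of year -> anomaly. Positive result = deceleration."""
    def window(lo, hi):
        ys = [y for y in sorted(series) if lo <= y <= hi]
        return ys, [series[y] for y in ys]
    return (ols_trend_per_decade(*window(*warm))
            - ols_trend_per_decade(*window(*hiatus)))

# Toy series: warms 0.02 deg C/yr through 1999, then flat
toy = {y: 0.02 * (min(y, 1999) - 1975) for y in range(1975, 2014)}
print(round(slowdown(toy), 3))  # ≈ 0.196 deg C/decade
```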

And for those of you wondering about climate models, the model mean (the model consensus) of the CMIP5-archived models (with historic and RCP8.5 forcings) shows a higher warming rate (+0.223 deg C/decade) during 1998 to 2013 than during 1975 to 1999 (+0.154 deg C/decade). That is, according to the models, global warming should have accelerated in 1998 to 2013 compared to the period of 1975 to 1999.  Instead, the data show a deceleration of global warming.

But other start years have been used for the recent “hiatus” in global warming.  NCAR’s Kevin Trenberth used 2001 in his 2013 article Has Global Warming Stalled? for the Royal Meteorological Society.  (My comments on Trenberth’s article are here.)  Figure 7 compares the trends for the “raw” global land+ocean surface temperature data and the end products for the period of 2001 to 2013. The “raw” data show a slight cooling over this short time period.  The trend of the UKMO HADCRUT4 data is basically flat at 0.001 deg C/decade. At the high end are the GISS LOTI and NOAA/NCEI data, likely a result of NOAA’s excessive parameter tweaking.

Figure 7

The slowdowns (1975-1999 trends minus 2001-2013 trends) are:

  • HADCRUT4 slowdown = 0.189 deg C/decade (compared to +0.190 deg C/decade for 1975-1999)
  • Berkeley Earth slowdown = 0.146 deg C/decade (compared to +0.178 deg C/decade for 1975-1999)
  • Cowtan and Way slowdown = 0.138 deg C/decade (compared to +0.182 deg C/decade for 1975-1999)
  • GISS LOTI slowdown = 0.125 deg C/decade (compared to +0.178 deg C/decade for 1975-1999)
  • NOAA/NCEI slowdown = 0.121 deg C/decade (compared to +0.173 deg C/decade for 1975-1999)

Because the “raw” data trend is negative over this timeframe, the slowdown is greater than the trend for 1975 to 1999.

And once again, the climate models show a higher warming rate (+0.184 deg C/decade) from 2001 to 2013 than for the period of 1975 to 1999 (+0.154 deg C/decade).

LET’S LOOK AT THE TRENDS FOR THE EARLY COOLING PERIOD, THE EARLY 20TH-CENTURY WARMING PERIOD AND THE MID 20TH-CENTURY SLOWDOWN/COOLING PERIOD   

In past posts and in my book Climate Models Fail, I used the breakpoints of 1914, 1945 and 1975 when dividing the data prior to the recent warming period.  The years 1914 and 1945 were determined through Dr. Leif Svalgaard’s breakpoint analysis of a former GISS LOTI dataset (the version that used ERSST.v3b data).  See his April 20, 2013 at 2:20 pm and April 20, 2013 at 4:21 pm comments on a WattsUpWithThat post here.  And for 1975, I referred to the breakpoint analysis performed by statistician Tamino (a.k.a. Grant Foster).  With the inclusion of the NOAA ERSST.v4 “pause-buster” sea surface temperature data in the GISS LOTI data, I suspect there may be new breakpoints for that dataset (and that the breakpoints may be slightly different for the other datasets), but I’ll continue to use 1914, 1945 and 1975 for consistency with past posts and that book.

Specifically, 1880 to 1914 is used for the early cooling period (Figure 8), 1914 to 1945 is used for the early 20th-Century warming period (Figure 9), and the mid-20th Century slowdown/cooling period is captured in the years of 1945 to 1975 (Figure 10).

During the early cooling period of 1880 to 1914, Figure 8, most of the end products have cooling trends that are less negative than the cooling trend of the “raw” data. Curiously, the trend of the NOAA/NCEI data is the same as the “raw” data. The spread in the cooling rates of the end products is about 0.04 deg C/decade.  As a reference, the model mean of the CMIP5-archived models (historic/RCP8.5) shows a slight warming trend (+0.032 deg C/decade) for this cooling period.  Models wrong again.

Figure 8

In Figure 9, we can see that the “raw” data have the highest warming rate for the early 20th Century warming period of 1914 to 1945. The adjustments during this time period are primarily to the sea surface temperature data…an effort to account for biases resulting from the transition in temperature-sampling methods, from buckets to ship inlets.  Regardless, the spread in the warming rates is about 0.02 deg C/decade for the end products.

Figure 9

Of course, since the models do not simulate the cooling from 1880 to 1914, they fail to properly simulate the warming from 1914 to 1945.  The model consensus only shows a simulated warming rate of +0.057 deg C/decade during this period.  Because the models can’t explain the extent of the warming that took place in the early part of the 20th Century, apparently natural variability is capable of warming Earth’s surfaces at a rate of 0.07 to 0.09 deg C/decade above that hindcast by the models. That of course makes one wonder how much of the recent warming was caused naturally.

The mid-20th-Century slowdown/cooling period of 1945 to 1975 is last, Figure 10.  The “raw” data and the datasets that are based on the HADSST3 sea surface temperature data (Berkeley Earth, Cowtan and Way, UKMO HADCRUT4) show slight cooling trends during this period. Once again, the HADSST3-based datasets have been adjusted for the 1945 discontinuity and trailing biases that were determined in Thompson et al. (2008).  On the other hand, the two datasets that rely on NOAA’s very odd ERSST.v4 “pause buster” sea surface temperature data (GISS LOTI and NOAA/NCEI) show a slight warming trend for 1945 to 1975.  And once again, the reason those two differ from the others is that the ERSST.v4 “pause-buster” data were not corrected for the 1945 discontinuity and trailing biases.

Figure 10

How awkward is NOAA’s failure to correct for the 1945 discontinuity and trailing biases? Even the consensus of the climate models (CMIP5 model mean with historic and RCP8.5 forcings) shows a cooling trend (-0.014 deg C/decade, slightly more than observed) during the mid-20th-Century period of 1945 to 1975.

THE SPREADS BETWEEN ANNUAL AND MONTHLY GLOBAL LAND+OCEAN SURFACE TEMPERATURE END PRODUCTS      

Since we’re discussing global temperature products, I thought I’d illustrate something else…the extent of the disagreements among the end products in annual and monthly global surface temperature anomalies.

Figure 11 (annual) and Figure 12 (monthly) show the spreads between the 5 global land+ocean surface temperature end products. (Please note the differences in the y-axis scales.) The anomalies are all referenced to the full term of the data (1880 to 2015) so as not to bias the results. The minimum and maximum values for the 5 datasets were first determined.  Then the spread was calculated by subtracting the minimums from the maximums.
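That spread calculation is straightforward to sketch: for each year, subtract the minimum product value from the maximum. The toy numbers below are mine, for illustration only.

```python
# Minimal sketch: per-year spread (max minus min) across several
# anomaly series that share the same years.
def spread(datasets):
    """datasets: list of equal-length anomaly series, one per product."""
    return [max(vals) - min(vals) for vals in zip(*datasets)]

# Toy example: three products over four years
a = [0.10, 0.20, 0.30, 0.40]
b = [0.12, 0.18, 0.35, 0.38]
c = [0.08, 0.22, 0.28, 0.45]
print([round(s, 3) for s in spread([a, b, c])])  # [0.04, 0.04, 0.07, 0.07]
```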

Figure 11

# # #

Figure 12

Curiously, referring to the annual data because it’s easier to see, the spread in the early 1900s is less than the spread for much of the mid-20th Century. Again, the anomalies are all referenced to the full term of the data (1880 to 2015) so as not to bias the results.

CLOSING

We often hear people state that the adjustments to global land+ocean surface temperature data have decreased the global warming rate.  That’s very true for the long-term data (1880 to 2015) but not necessarily true for the periods after the mid-1940s.

For the post-1998 or post-2001 slowdown in global warming, the adjustments have increased the global warming rates in all datasets, with the UKMO HADCRUT4 adjustments having the least impact.

NOAA’s failure to correct for the 1945 discontinuity and trailing biases causes the GISS LOTI and NOAA/NCEI data to have relatively high warming rates for the period of 1950 to 2015.  That failure on NOAA’s part also shows up during the mid-20th-Century period of 1945 to 1975…the GISS LOTI and NOAA/NCEI data show slight warming during this period, while the datasets that have been corrected for the 1945 discontinuity and trailing biases show slight cooling.

Some persons believe the adjustments to the global temperature record are unjustified, while others believe the seemingly continuous changes are not only justified, they’re signs of advances in our understanding.  What are your thoughts?

196 Comments
Walt D.
May 7, 2016 7:30 am

Surely the key point is that changing the data does not change anything in the real world.
You can arbitrarily change the ARGO data, but unless you actually believe that there is something wrong with all the thermometers, and that they are systematically under-recording actual ocean temperatures, the idea that the actual ocean temperatures adjacent to each ARGO buoy suddenly jumped is fatuous.

Pamela Gray
May 7, 2016 7:34 am

Solar energy, once it enters Earth’s systems, isn’t easy to measure. It gets stored up where we can’t easily measure it here and there, and also directly heats things where we can easily measure it here and there. Just because we have a bunch of sensors all over the place, does not mean we have accurately measured all the incoming energy that Earth receives.
Energy gets stored, and moved around. A lot. The thing with moving heat from one part of Earth (say, the oceans) to another part of Earth (say, the atmosphere), is that we don’t know how long it takes for that heat to move to another part of Earth, or to leave Earth altogether. Those who advocate for removing ENSO or atmospheric temperature spikes from the record are not taking into account this lack of knowledge about where energy goes or how long transferred heat stays around in the atmosphere. Furthermore, AGW proponents may be attributing a source of additional atmospheric heat to a human source when in reality this heat is coming from a natural source that stored it up earlier.

Reply to  Pamela Gray
May 7, 2016 8:17 am

Pamela, I am not saying remove the ENSO events from the data. The 1998 and 2010 El Niños are obvious in the RSS data above. You simply don’t use the middle of one as an end point. You wouldn’t start a trend from the 1998 peak, for example. As you can see, 4 or 5 years past the peak it doesn’t affect the trend much.

May 7, 2016 7:37 am

None of these two- and three-decimal-point data points, graphs, and trends are real measured data; it’s all statistical manipulations and hallucinations.

Walt D.
Reply to  Nicholas Schroeder
May 7, 2016 8:53 am

+10

May 7, 2016 7:46 am

1) Per IPCC AR5 Figure 6.1, prior to the year 1750 CO2 represented about 1.26% of the total biosphere carbon balance (589/46,713). After mankind’s contributions (67% fossil fuel and cement, 33% land use changes), atmospheric CO2 increased to about 1.77% of the total biosphere carbon balance (829/46,713). This represents a shift of 0.51% from all the collected stores (ocean outgassing, carbonates, carbohydrates, etc., not just mankind) to the atmosphere. A 0.51% rearrangement of 46,713 Gt of stores and 100s of Gt of annual fluxes doesn’t impress me as measurable, let alone actionable, attributable, or significant.
2) Figure 10 in the Trenberth paper (Atmospheric Moisture Transports…), in addition to substantial differences of opinion, i.e. uncertainties, 7 of the 8 balances showed more energy leaving ToA than entering, i.e. cooling.
3) Even IPCC AR5 expresses serious doubts about the value of their AOGCMs.
Three simple points seek three simple rebuttals. All the rest, sea levels, ice caps, polar bears, SST/LTT/ST trends, etc. don’t matter, nothing but sound and fury.

Reply to  Nicholas Schroeder
May 7, 2016 9:52 am

1. This is very easy: look into how those numbers are produced and you have your rebuttal. Clue: the total biosphere carbon balance is a fictional number; they have no idea, this is a guesstimate.
2. Water vapor appears to be on a slight downward trend, very slight, not fitting for Trenberth’s warmer, wetter world. More evaporation will cool the surface more, but it will also increase water vapor. Where is all Trenberth’s water vapor? Also hiding in the oceans?
http://clivebest.com/blog/wp-content/uploads/2013/01/fig4c_tpw.jpg
3. The models are bunk; even fudged, they cannot mirror the observed climate. End of story.

Reply to  Mark
May 7, 2016 9:54 am

Research what the models cannot do, what they are bad at, and which complicated natural processes are fudged with simplistic equations rather than modeling the actual physics. They have poor resolution, and if you run them long enough they will turn the Earth into Venus.

Reply to  Mark
May 7, 2016 10:20 am

Well, that’s not a rebuttal!

Reply to  Mark
May 8, 2016 2:59 am

“Mark May 7, 2016 at 9:52 am
1 Is very easy, look into how those numbers are produced and you have your rebuttal to that. clue: the total biosphere carbon balance is a fictional number, they have no idea, this is a guesstimate…”

Not disagreeing Mark, just restating.
The total biosphere carbon balance is worse than ‘a guesstimate’. It is a sum of multiple simple guesstimates! All of which are untested and unverified for global use.
The Earth is a globe with an average diameter of 12,742 km (7,926 miles).
Carbon sourced from the biosphere is known to be found at a depth of 650 km (403.9 miles); subduction zones of the upper mantle, to the limits of atmosphere on our planet.
There may be Earth bio-sourced carbon beyond these rough limits.
Anyone claiming to rationally know biosphere carbon storage amounts is practicing self stimulation instead of science.
Especially when they barely understand the carbon biosphere basics.

Reply to  Nicholas Schroeder
May 7, 2016 10:40 am

Nicholas Schroeder says:
All the rest, sea levels, ice caps, polar bears, SST/LTT/ST trends, etc. don’t matter, nothing but sound and fury
signifying nothing, as the Bard would say.

Reply to  Nicholas Schroeder
May 7, 2016 11:33 am

It’s not a rebuttal, it’s pointing out the obvious. The point being that the IPCC are not dealing in anything other than guesses when talking carbon balance. Trenberth’s science is based on assumptions, as are all if-then studies, and the models are not validated, not representative of actual climate, and have proven to be totally unreliable even with fudge factors; they run in a linear fashion whereas the system they are trying to replicate is non-linear, and they are all based on more than one assumption, or more accurately, mathematics in lieu of scientific results.
If you want to argue guesses and science based on assumptions with no solid scientific basis to support them, go ahead; you can’t fix a broken game by playing it.

Reply to  Mark
May 7, 2016 11:46 am

Bickering over sea levels, ice sheets/caps, 0.xx C trends over decades/centuries, ToBs, UHI, etc. is the very epitome of the broken game. The point is to hoist the alarmists by their own fundamental petards.

Reply to  Nicholas Schroeder
May 7, 2016 11:37 am

Bottom line, Nicholas: no one can actually prove anything. So everyone can argue until the cows come home while little real progress is made. That’s where we’re at. After 30 years there is still nothing solid for AGW or the cause of rising temperatures or CO2 growth.

Reply to  Mark
May 7, 2016 11:47 am

“…no one can actually prove anything..”
Yes, they can and have – except in the Matrix.

Reply to  Mark
May 8, 2016 3:19 am

Nicholas:
When the NULL hypothesis remains and the many proposed CO2 hypotheses are either still unproved or have been multiply falsified, what remains?
Use of confirmation bias science;
— guesstimates,
— tailored algorithms,
— incomplete experiment implementations,
— preferred selected data,
— preferred adjusted data,
— terrible data collection processes and procedures,
— Horrendous data manipulation processes and procedures,
— ignored errors,
— expunged contrary data and information,
— and so on, and so on…
are all the mainstays of climate science.
Rebuttal of the NULL hypothesis still remains to be proven by climate science.
Shrill claims and protestations, coupled with manifold wrong predictions, are today’s climate science methods.

benben
May 7, 2016 10:07 am

Bob, interesting graphs.
I would be really interested to see one of these reasonable and hyperbole-free posts on what adjustments are, in your view, valid. It’s clear that some adjustments for changes in measurement technologies etc. are necessary. It’s also clear that you don’t trust the adjustments as made by the scientists compiling the data. So how would you adjust the data, and how would that compare to the data you present here?
Cheers,
Ben

Reply to  benben
May 7, 2016 11:59 am

benben, I see you are again sniping your betters for the sake of sniping.
Bob was (and has been in the past) clear in his statements about certain adjustments: NOAA adjustments based on invalid use of the extreme end of the statistical range of adjustment factors, inappropriate use of night marine air temperatures, and failure to adjust for known data problems centered on the 1940s. He has never, to my knowledge, stated that he did not "…trust the adjustments as made by the scientists compiling the data." Crawl back in your hole.

Reply to  benben
May 8, 2016 3:27 am

Here’s adolescent benjibenji again harping on straw men and demanding that others work harder so benji can blindly ignore proofs and nitpick findings.
benjibenji; if you trust the adjustments, then prove them to us. Generic mass adjustments serve no purpose other than to obfuscate or obliterate.
Legitimate scientists would prove their claims with actual unadjusted data first, then indicate what increased error ranges exist because of possible systemic data collection, transmission, storage and manipulation procedures or circumstances.
Without proof from actual original data, climate scientists are just engaging in a self-satisfaction exercise in lieu of science or the scientific method.

benben
Reply to  benben
May 8, 2016 6:33 pm

I did not write that I trust the adjustments, and I'm certainly not going to say anything about them myself, since it's not my field of expertise by a long shot. But Mr. Tisdale seems quite good at it, so how is it offensive that I ask for his opinion? Oh, I know: because I'm different from you, and therefore everything I say can be derided with nasty comments ('crawl back into your hole'? Really?).
Well, if ever you wonder why most people don’t give you guys a second thought, your evidence is above. Beyond a very narrow demographic (hello Trump supporters!), people tend to ignore those who can’t deal with others without getting unpleasant.
Cheers,
Ben

TLMango
May 7, 2016 11:17 am

It’s not just past scientific data that’s being rolled.
The Weather Channel sends out local 7-day forecasts to every city in the country. Where I live, the forecast is consistently 3 degrees higher 3 to 7 days out, 2 degrees higher for tomorrow's forecast, and too many times 1 degree higher than the actual high for the day (indicating a bias in their rounding method).
There’s a widespread belief that contaminating all future data is
a good thing as long as they are saving the world. I think they are criminals.

May 7, 2016 11:21 am

What was the reason for wiping out the 1940’s warming from data?

TA
Reply to  Mark
May 7, 2016 12:39 pm

Is that a trick question, Mark?

May 7, 2016 11:45 am

Bob, thanks again for another very useful post. I have benefited greatly from your various posts and books.
I learned a long time ago that if one is to move forward and accomplish things the way you do, one must ignore the nitpickers and naysayers. Alternatively, one can gain useful insights from constructive critics, as you have acknowledged in the past. It is an art form to differentiate the two. It also tries one’s patience and temper and it is a credit to you that you handle it with such grace.
In order to make any sense of the world around us we have to take information as is available; denying that it is valid is a dead-end. All of your work has served to inform reasoned conclusions as to the evolution over time of land and, more importantly, ocean-basin temperature TRENDS. You have shown that models do not reconstruct the past and cannot predict the future. Based on the available data, you show that CO2 is not the primary driver of climate metrics. The puppies nipping at your heels haven’t a clue.
Steven Mosher actually got close to making a good point. Arguing about the details of past warming will get you nowhere in policy discussions. Proving the drivers of such warming is fair game. He is wrong, however, in stating that the world is warming. Speculating about future warming or cooling is also fair game. PROVING causation is essential in forecasting.
Dave Fair

TLMango
May 7, 2016 12:07 pm

Wiping out the 40’s data was an attempt to undermine
the argument that climate is cyclical.
Cooling the past and warming the present rolls the data
to give us an endless stream of hottest moments ever.

May 7, 2016 12:29 pm

Back in the real world, cool waters signaling La Niña conditions are having an impact on the Peruvian anchovy fishery.

May 7, 2016 1:02 pm

Bob, you write:

2013 was an ENSO-neutral year… that is, there was no El Niño or La Niña.

in your explanation of how you chose the endpoints for Figure 5, then reference a NOAA chart showing 2013 was, according to their measures, indeed a "neutral" ENSO year, but the same chart confirms the other endpoint (1998) was not.
Why have you chosen 1998 to begin this series? My suspicion is the "hiatus" looks much less pronounced when it's begun on the other side of the step rise in temperature observed following the 1998 ENSO. It seems less theoretical and more to the empirical point to begin and end the interval as Monckton has proposed: you start from now and go back until a trend is observed. There's no hypothetical in this method. It brooks no argument concerning the global effects of ENSO; it simply observes there has been no significant warming over some period of time.
I think complicating the analysis by using “partial” ENSO criteria doesn’t improve the argument or the presentation, it just opens the door to discussing the relevance of the ENSO and your criteria will be criticized for not being uniform.
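The Monckton-style method described above ("start from now and go back until a trend is observed") can be sketched as a simple search. This is a minimal illustration on an invented series, not the actual RSS or HADCRUT data:

```python
import numpy as np

# Sketch of "start from now and go back until a trend is observed":
# find the longest period ending at the present whose least-squares
# slope is not positive. The series is invented: a rise followed by
# a gently declining "pause".
anoms = np.concatenate([np.linspace(0.00, 0.40, 120),   # 10 years of warming
                        np.linspace(0.40, 0.37, 96)])   # 8 years, slight decline

longest = 0
for start in range(len(anoms) - 24):          # require at least 24 months
    seg = anoms[start:]
    slope = np.polyfit(np.arange(len(seg)), seg, 1)[0]
    if slope <= 0:                            # earliest qualifying start
        longest = len(seg)                    # gives the longest such period
        break
print(longest, "months ending now with no positive trend")
```

The real calculation also attaches significance bounds to the slope; this sketch only shows the backwards search itself.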

Reply to  Bartleby
May 7, 2016 1:25 pm
Reply to  Dr Norman Page
May 8, 2016 3:37 am

Dr. Norman Page:
“Monckton’s approach is not particularly meaningful”
Well, that is clear as mud. Perhaps you mean to indicate that Monckton's approach for calculating the pause is not particularly applicable to calculating periods of warming or cooling?
Lord Monckton's pause-calculation method is applicable to calculating statistically significant periods of null data slope, which is not what Bob Tisdale accomplishes above.

Reply to  Dr Norman Page
May 8, 2016 6:18 am

If the period includes an inflection point, "statistical significance" has no meaningful physical correlate in the real world.

Reply to  Dr Norman Page
May 8, 2016 12:52 pm

Dr. Page:
That is an arbitrary claim.
The obverse is: 'inflection points, as portions of cycles, make determining rising or falling slopes an unnecessary exercise'.

1sky1
May 7, 2016 2:12 pm

Questions of the effect of various data adjustments should not obscure much more fundamental concerns about the geographic coverage and integrity of the extant data bases.
With historical SST observations from ships of opportunity we not only have vast empty spaces between well-traveled sea-lanes, but a total lack of control of measurement location and method (buckets, engine intakes) in monitoring perpetually moving water masses. With station records, we have only partial control of location, which is overwhelmingly urban in the global aggregate, and only rarely extends over the entire 20th century, let alone over the intervals shown from stitched-together shorter records. The unconscionable elision of long trendless records, mentioned earlier by Bill Illis, only exacerbates the lack of representativeness in the GHCN database.
Those are the real elephants in the room, whose shadow obscures the fact that the 20th century global average air temperature – as best as can be estimated from long, vetted, UHI-uncorrupted station records and high-quality marine observations – experienced its minimum in 1976, as did the SST.

TA
May 7, 2016 2:59 pm

Alarmist Questioner: You claim the 1930's was hotter than subsequent years (see chart below), but you seem to be misguided, because the 1930's temperature data you are pointing to in the chart is U.S. temperature data, not a "global" temperature.
http://www.sealevel.info/fig1x_1999_highres_fig6_from_paper4_27pct_1979circled.png
My reply: Yes, I see that argument all the time when this subject is brought up. The question implies that what I am doing is comparing apples to oranges when comparing the 1930's U.S. temperature to the "global" 1998 temperature; thus, the comparison is invalid, so the argument goes.
This is the standard rebuttal to the claim the 1930’s was hotter than subsequent years.
But I don’t think we are comparing apples to oranges.
The Climate Change Charlatans conspired together to change the historic surface temperature data because they said among themselves that the 1930's was hotter than 1998, and they felt the need to modify this temperature profile so that instead of showing a cooling trend, it would show a warming trend, to fit in with their CAGW theory.
Now, if the Climate Change Charlatans were only comparing apples to oranges, then they would have had no need to modify the temperature record, because they could just make the same argument and say the data was irrelevant because one was a local temperature and one was a "global" temperature. But they didn't make that argument.
What really happened is the Climate Change Charlatans looked at all the historic temperature data available to them, such as the UK, Iceland, etc., and saw that in all cases, the 1930’s showed as hotter than subsequent years (with minor differences), so the Climate Change Charlatans were basing their claim about the 1930’s on all the available global data they had, not just on the U.S. temperature data. That’s why they were panicked and conspired to change the data.
They were comparing apples to apples, and if they left the apples in their historic form, then the 1930’s would show as hotter than subsequent years. They didn’t want that.
There are many historic, unmodified charts from around the world that show a similar temperature profile to the chart above. Even Cape Town South Africa has a similar chart.
Any chart you look at in the future should look like the one above, with the years after 2000 added on to the end. That is the REAL temperature profile we have been living under all these years.
Any chart that does not have the 1930’s hotter than subsequent years is false and is bad data meant to fool people into thinking we are in an unrelenting warming trend, when the exact opposite is the case. This is a costly, stressful false reality created by conniving Climate Change Charlatans.
I think Trump’s Attorney General (Christy?) should prosecute these Climate Change Charlatans for defrauding the taxpayers of the United States.
They wouldn’t want me on their jury. 🙂

Bindidon
May 7, 2016 3:07 pm

Nicholas Schroeder wrote on May 7, 2016 at 7:37: None of this two and three decimal point data, graphs, and trends are real measured data, it’s all statistical manipulations and hallucinations.
I love such comments!
Here you see a collective result of lots of wonderful ‘statistical manipulations and hallucinations’:
http://fs5.directupload.net/images/160507/hspmb7lq.jpg
This is a plot showing, for the time interval between 1979 and 2015, the work of several teams responsible for 9 datasets: 1 radiosonde measurement set for the middle troposphere, 3 satellite measurement sets for the lower troposphere, and 5 surface measurement sets.
They partly share raw data, as we all know, but they all have very different evaluation schemes.
Over a span of 37 years, the trends vary, in °C per decade with 2σ CI, from 0.114 ± 0.08 (UAH6.0 beta5) up to 0.172 ± 0.06 (Berkeley Earth).
The difference between the two extremes above should not surprise anybody: the troposphere and the surface are different places, with different reactions to what drives temperature changes.
What imho is much more interesting is the convergence among all these datasets.
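As a rough sketch of how a trend with a 2σ interval of the kind quoted above can be computed, here is a minimal example using synthetic monthly anomalies rather than any of the nine datasets, and ignoring the autocorrelation that real temperature-trend CIs must account for:

```python
import numpy as np

# Synthetic monthly anomalies, 1979-2015 (444 months), standing in for
# one of the datasets discussed above: ~0.15 C/decade plus noise.
rng = np.random.default_rng(0)
t = np.arange(444) / 120.0                   # time in decades
y = 0.15 * t + rng.normal(0.0, 0.1, 444)

# Ordinary least-squares slope, its standard error, and the 2-sigma band.
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)
se = np.sqrt(resid @ resid / (len(t) - 2) / np.sum((t - t.mean()) ** 2))
print(f"{slope:.3f} ± {2 * se:.3f} °C/decade")
```

Because monthly anomalies are serially correlated, published CIs inflate this naive standard error; the sketch only shows the basic mechanics.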

Reply to  Bindidon
May 7, 2016 3:29 pm

You plot repeats the gross schoolboy error of the climate establishment, the IPCC, the whole UNFCCC circus, Monckton and most commentators here by projecting the trend in a straight line when there is clearly an inflection point in the millennial and 60-year trends at about 2003. See above:
https://wattsupwiththat.com/2016/05/07/do-the-adjustments-to-the-global-landocean-surface-temperature-data-always-decrease-the-reported-global-warming-rate/comment-page-1/#comment-2209238
Any discussion which does not address, accept or refute this in some way is a waste of time.

Reply to  Dr Norman Page
May 7, 2016 3:31 pm

Add an r to the first word.

Bindidon
Reply to  Dr Norman Page
May 7, 2016 3:41 pm

Ooops?! One more Owner Of The Very Truth, I guess? Thanks anyway for your fruitful hint, Dr…

Reply to  Dr Norman Page
May 7, 2016 5:37 pm

Whether it is the truth or not, only time will tell. I do think it is the simplest and most obvious working hypothesis. It can be understood by any high school graduate or average non-scientist who may have heard of the Roman, Medieval and Current warming periods. It wouldn't result in many publications, jobs, grants, honors, etc. for the eco-left academic establishment and NGOs in the US and UK, nor opportunity for profit by the corporate feeders at the public trough and the politicians who aid and abet them.

1sky1
Reply to  Bindidon
May 7, 2016 4:25 pm

The "convergence" of datasets over the recent decades is hardly unanticipated, inasmuch as there has been a sharp upward swing in temperatures since 1976 and global satellite results are made publicly available early each month. Prior to the advent of the satellite era, however, there is great divergence between the various versions of the oeuvre of cherry-picking archivists and tendentious index-makers, and even greater divergence relative to long, thoroughly vetted station records. The most egregious divergence is the creation of bogus long-term historical trends.

Bindidon
Reply to  1sky1
May 7, 2016 5:12 pm

Thanks for this very intelligent explanation… but I could well live without it.
Maybe you should explain to the whole world what it is doing wrong, e.g. by publishing your thoughts in a scientific journal?

1sky1
Reply to  1sky1
May 7, 2016 5:33 pm

My “thoughts” about the development during the last few decades of stunning inconsistencies in the archived data and attendant indices have been noted by numerous others who have delved deeply into the data bases. Some have even displayed their findings as flash images here at WUWT. That common knowledge scarcely provides fodder for publishing a research paper, especially in the era of academic pal review. Glad to hear that you can well live without it.

Reply to  1sky1
May 7, 2016 5:42 pm

Thanks for this very intelligent explanation… but I could well live without it.
Because it doesn’t fit the narrative…

Reply to  Bindidon
May 7, 2016 5:25 pm

Bindidon, what IM(not so)HO is vastly more interesting is the divergence of the post-1998 radiosonde and satellite trends from surface measurements.

1sky1
Reply to  dogdaddyblog
May 7, 2016 5:37 pm

Another “innocent” discrepancy!

Bindidon
Reply to  dogdaddyblog
May 7, 2016 5:52 pm

It's about 2:50 am here, so I'll keep this short.
It's a bit hard to see, but in the plot above with all 9 records between 1979 and now, there is this RATPAC B in the version "monthly combined", for the pressure level "300 hPa".
According to one of Roy Spencer's posts, the average absolute temperature in the LT, where UAH measurements take place, was in 2015 about 264 K, i.e. -9 °C, and thus 24 °C less than at Earth's surface.
If we assume a loss of about 6 °C per km upwards in the atmosphere, that would mean the LT level where these -9 °C were measured should be at 4 km above the surface.
That in turn means an atmospheric pressure somewhere between 700 and 500 hPa.
But the UAH6.0beta5 record is even below the temperatures measured by RATPAC A and B radiosondes at 300 hPa.
The only satellite record with a good fit to the RATPACs is… RSS4.0 TTT!
I have all that data on disk, and can plot all you want about that stuff.
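The altitude arithmetic above can be sketched as follows; every number here is the comment's assumption, not a measured value:

```python
# Assumptions taken from the comment above, not measured data.
surface_temp_c = 15.0        # implied surface temperature (264 K quoted, plus 24 C)
lt_temp_c = -9.0             # ~264 K quoted for the UAH lower-troposphere layer
lapse_rate_c_per_km = 6.0    # assumed cooling per km of altitude

# Height at which a -9 C layer would sit under the assumed lapse rate.
altitude_km = (surface_temp_c - lt_temp_c) / lapse_rate_c_per_km
print(altitude_km)  # -> 4.0
```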

Reply to  Bindidon
May 7, 2016 6:34 pm

trends, trends, trends

Bindidon
Reply to  dogdaddyblog
May 8, 2016 9:30 am

dogdaddyblog, on May 7, 2016 at 6:34 pm : trends, trends, trends
Wow! Somebody disturbed by trends at WUWT! That’s new indeed.
Until now I solely had to deal with these specialists of
http://www.woodfortrees.org/plot/rss/from:1997/to:2012/trend/plot/rss/from:1979/mean:12
Glad to read another view.
However, I don't know how to show "the divergence of the post-1998 radiosonde and satellite trends from surface measurements" other than by comparing estimates computed using some least-squares method 🙁
Hope I understood you well as you wrote:
Bindidon, what IM(not so)HO is vastly more interesting is the divergence of the post-1998 radiosonde and satellite trends from surface measurements.
I plotted, from 1999 till today, some of the RATPAC B pressure records together with UAH6.0beta5 TLT, RSS4.0 TTT, and GISS:
http://fs5.directupload.net/images/160508/r6f4eto7.jpg
You see that:
– GISS is far below the radiosondes' surface measurements;
– UAH6.0 is even below 250 hPa;
– the new RSS for the entire troposphere is near GISS…
Feel free to explain what you mean by these divergences. I have no idea…

Reply to  Bindidon
May 8, 2016 7:43 pm

Excellent graph, Bindidon! Thank you very much. I would have had to learn some new programs.
Looking at the data, it is not apparent to me that they start at 1998. From a visual standpoint, it would help if each of the trend lines began in 1998, as a common reference date (zeroed). Additionally, the slopes of the trends in degrees C per decade would be very meaningful to me as an old engineer. I do not, however, know if your program(s) have such capabilities.
Dave Fair
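The two requests above (a common 1998 zero point and slopes in °C per decade) can be sketched like this, with invented annual anomalies standing in for the plotted series:

```python
import numpy as np

# Zero each series at 1998 and report its slope in deg C per decade.
# The annual values are invented placeholders, not GISS or UAH data.
years = np.arange(1998, 2016)
series = {
    "GISS-like": 0.63 + 0.012 * (years - 1998),
    "UAH-like": 0.48 + 0.006 * (years - 1998),
}
for name, anoms in series.items():
    zeroed = anoms - anoms[0]                 # common 1998 reference date
    slope = np.polyfit(years, zeroed, 1)[0]   # deg C per year
    print(name, round(slope * 10, 3), "C/decade")
```

Subtracting each series' 1998 value only shifts it vertically, so the slope is unchanged; the rebaselining is purely for visual comparison.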

Reply to  Bindidon
May 7, 2016 7:00 pm

“…from 0.114 ± 0.08…”
What type of instrument measures to that resolution, and from space, yet?

Bindidon
Reply to  Nicholas Schroeder
May 8, 2016 3:28 am

The best thing for you would be to read
http://www.remss.com/measurements/upper-air-temperature
from top to bottom; you will then understand what happens if you try to do the job using one fractional digit.

Reply to  Nicholas Schroeder
May 8, 2016 7:34 am

Bindidon
Ok. Now what.
Skimmed it; saw nothing about primary instrumentation, precision, accuracy, resolution, or uncertainties. Tons of statistical manipulations. The pretty colored globe has a scale of 230 to 290 K w/ 1 K tick marks. How does one get 0.xxx C / decade out of xx1. C measured inputs?

Bindidon
Reply to  Nicholas Schroeder
May 8, 2016 8:22 am

http://images.remss.com/papers/rsspubs/Mears_JTECH_2009_MSU_AMSU_construction.pdf
http://images.remss.com/papers/rsspubs/Mears_JTECH_2009_TLT_construction.pdf
etc etc etc etc
But I think you aren't that much interested in discovering what these people really do… because if you were, I wouldn't have to write such comments.

Reply to  Bindidon
May 8, 2016 3:53 am

“…This is a plot showing, for the time interval between 1979 and 2015, the work of several teams responsible for 9 datasets: 1 radiosonde measurement set for the middle troposphere, 3 satellite measurement sets for the lower troposphere, and 5 surface measurement sets.
They partly share raw data, that all we know, but they all have very different evaluation schemes…”

A perfect example of how the climateers believe that adding in more data, without clarifying metadata or error ranges, presents an image of precision without any honest accuracy.
If data "evaluation schemes" result in different data ranges, those are additional errors injected by the evaluation scheme that should be explicitly explained in detail.
Simply naming the source evaluation scheme is not an explanation!

“…Over a span of 37 years, the trends vary, in °C par decade with 2σ CI, from 0.114 ± 0.08 (UAH6.0 beta5) up to 0.172 ± 0.06 (Berkeley Earth)…”

Wow! Great precision, terrible-to-zero accuracy.
Multiple disparate temperature stations in varying degrees of disrepair, and often infested with living organisms, have huge error ranges, greater than tenths of a degree.
Accuracy claims for data error ranges in the hundredths of a degree, using data that originates with tenths-of-a-degree error, are impossible, but not uncommon for climatism science.

Bindidon
Reply to  ATheoK
May 8, 2016 8:09 am

One more of these perfectly redundant and thus superfluous comments.
Firstly, ATheoK: do you really think that I didn't see that detail? Do you really think I am wasting my time adding, for my hundreds of Excel LINEST trend blocks, a computation that automatically checks whether or not the 2 sigma has less precision than its trend, so that all these ATheoKs don't need to add useless comments?
Secondly: there is NOTHING wrong in a confidence interval being even greater than the trend. I'm afraid you don't understand what this CI is intended for.
Thirdly: a comment such as yours I have NEVER seen in the context of spurious trends like e.g. those of RSS3.3 for a ridiculous time interval of say 17 years, sometimes giving CIs ten times bigger than their trend.
On the contrary: I was then told only the trend would be relevant.
'Standard errors for flat trends are meaningless', you suddenly get to read…

Reply to  ATheoK
May 8, 2016 1:13 pm

Bindidon:
Excuse me, your omnipotence. Thank you for confirming your ignorance of handling errors.
Your immediate reliance on ad hominem insults, deprecations and condescension is the giveaway that you cannot speak to the science.
You sure don't mind wasting our time with your claims of reaching 2-sigma precision.
Confidence interval? A straw man argument along with childish ad hominems?
A confidence interval is pure smoke if one is unable to state accurately all error ranges contained within that CI. An error range presented in a final graph should accurately represent the total of all error ranges from data origination through presentation.
Is that an attempt to insult the RSS data? A small admission that there is a 17+ year pause in rising temperatures?
What is ridiculous is that the warmists and revisionists are unable to explain the pause.
Given that La Niña is coming quickly and looking to be strong, it will only take a year or so before the decline in temperatures resuscitates the pause right back to the beginning.
Again, confidence intervals are bogus unless all factors, i.e. errors, contributing to the trend are included.
Alleged increasing temperature trends are useless if the actual range of error is well above/below the alleged increases/decreases.

Bindidon
May 7, 2016 5:04 pm

I see in a comment written by PA (May 7, 2016 at 10:44 am) a plot available at climate4you, describing well-known changes of the GISS temperature record.
Maybe PA should have a look at this:
http://www.moyhu.org.s3.amazonaws.com/2015/12/uahadj1.png
That's a plot made by Nick Stokes to compare the two main GISS adjustments with the difference between the revisions UAH5.6 and UAH6.0beta (dated April 2015).
I never searched for older GISS datasets (what would that be good for?) and thus can’t verify Nick‘s comparison as such. I don’t have any reason not to trust him, however!
But these two UAH revisions I have downloaded, and it was easy to let good ol’ Excel compute the diffs between the two and plot them:
http://fs5.directupload.net/images/160508/wxdjwt5b.jpg
Looks nice, huh? But what about this?
http://fs5.directupload.net/images/160508/eth49vwp.jpg
Here we suddenly see that the UAH diffs in the first diff plot (concerning the globe) vanish to nearly zero in comparison with the diffs at the northern and southern poles in the second diff plot!
You are welcome to verify by doing the same job: many of the differences between anomalies are bigger than the anomalies themselves.
I trust Roy Spencer's work: his integrity is for me beyond any suspicion. If some people don't trust the work of Gavin Schmidt, Carl Mears or anybody else, that's their problem. Basta!
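The Excel exercise described above (differencing two revisions of the same series) can be sketched in a few lines; the values here are invented placeholders, not the actual UAH5.6 / UAH6.0beta data:

```python
import numpy as np

# Subtract one revision of an anomaly series from the other, month by
# month, to see the revision-to-revision adjustment.
rev_a = np.array([0.21, 0.18, 0.30, 0.25])   # stand-in for UAH5.6
rev_b = np.array([0.15, 0.14, 0.27, 0.20])   # stand-in for UAH6.0beta

revision_diff = rev_b - rev_a                # the adjustment, month by month
print(revision_diff)
```

Applied separately to the global series and to the polar subsets, the same subtraction reproduces the two diff plots linked above, including the point that for the poles the diffs can exceed the anomalies themselves.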

Bindidon
Reply to  Bindidon
May 7, 2016 5:07 pm

Apologies: I’m too lazy to use Gnuplot 🙁

Kurt
May 7, 2016 6:10 pm

To answer the question at the end of the post, I think that adjusting the raw data is pointless, because there is no way of objectively evaluating whether the adjusted data is more accurate than the raw data. Say I design a filter for a video signal that I think is going to eliminate noise (which is essentially an adjustment process on data). I can test the filter by using it and visually verifying whether picture quality improves, under what circumstances, etc. But we have no comparable practical test for the temperature data by which to determine whether the new data sets are, in fact, quantitatively more accurate than the old data sets. This is not to say that there aren't biases in the data. But just identifying sources of potential bias doesn't mean that you should try to correct for them unless you have some way of testing whether the corrections improve the data. Absent that ability, the "corrections" are just an above-board way of fabricating data.

Dr. S. Jeevananda Reddy
May 7, 2016 7:04 pm

Bob Tisdale – You rarely like to respond positively to comments. When we raise cyclic variation, you tell us to write a separate article, but this is not the correct approach to comments. Let me present two practical examples. On the global temperature anomaly, in 2015 the US Academy of Sciences and the British Royal Society jointly brought out a report wherein they presented 10-, 30- and 60-year moving averages of this data. The 60-year moving-average pattern showed the trend. That means that if we use a truncated series from this data, the trend shows a different pattern depending on whether the series covers the decreasing arm or the increasing arm of the sine curve [through these columns sometime back somebody presented a figure to explain this]. This is obvious.
[1] In the case of the Indian Southwest Monsoon data series from 1871, IITM/Pune scientists [they are part of IPCC reports] briefed the Central minister [and the members of parliament were later briefed in a parliament session in 2013/2014] that the precipitation is decreasing. They chose the data of the descending arm of the sine curve. This has dangerous consequences for water resources and agriculture. This I brought to the notice of the Ministry of Environment & Forestry.
[2] In the case of Krishna River water sharing among the riparian states, the central government appointed a tribunal [retired judges] to decide on this in the 1970s. The tribunal used the data available to them at that time on year-wise water availability [1894 to 1971], 78 years, and then computed the 75% probability value for distribution among the riparian states. Probability values were derived from graphical estimates [ranked lowest to highest] using the incomplete gamma model, and the series was not tested for normality. Now the central government has appointed a new tribunal [three retired judges] to look into the past award and give their award on this issue. Though this tribunal has the data for 114 years, it chose a period of 47 years [1961 to 2007] and decided the distribution.
The mean of the 47-year series differs from that of the full 114-year series by 185 TMC. The 47-year series is positively skewed and far from normality. The 114-year data series showed normality [the mean is at the 48% probability level, which is very close to 50%], and the precipitation data series also showed a 132-year cycle. Prior to 1935 the series presented 24 years of drought conditions and 12 years of flood conditions; from 1935 to 2000 the data series presented 12 years of drought conditions and 24 years of flood conditions; since 2001, drought conditions similar to those prior to 1935 were seen in the majority of years.
With the new tribunal award, the downstream riparian state is the major casualty. This I called a "technical fraud" to favour the upstream states. This I brought to the notice of the Chief Justice of the Supreme Court, the Respected President of India and the Respected Prime Minister of India, but they did little; as per the constitution, the powers of the tribunal are beyond question even if they commit fraud. Now the downstream state has approached the Supreme Court. Here the discussion goes on legality but not on technicality. We should not make the same mistake, as scientists have no unfettered powers.
Dr. S. Jeevananda Reddy
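For context, the "75% probability value" (the 75% dependable flow) is conventionally the annual flow equaled or exceeded in 75% of years, i.e. the 25th percentile of the ranked series. A minimal empirical sketch with invented flows; the tribunal's own method fitted an incomplete gamma model instead:

```python
import numpy as np

# Invented annual water-availability values in TMC; the tribunal worked
# from 78 real years (1894-1971).
flows = np.array([1800, 2100, 1500, 2400, 1950, 1700, 2250, 1600,
                  2050, 1400, 2300, 1850, 1750, 2000, 1900, 2150])

# Flow equaled or exceeded in 75% of years = 25th percentile of the series.
dependable_75 = np.percentile(flows, 25)
print(dependable_75)  # -> 1737.5
```

Because such a percentile depends strongly on the period chosen, a skewed 47-year subset can yield a very different dependable flow than the full record, which is the point of the comment above.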

Geistmaus
May 7, 2016 10:29 pm

“Some persons believe the adjustments to the global temperature record are unjustified, while others believe the seemingly continuous changes are not only justified, they’re signs of advances in our understanding. What are your thoughts?”
They need no justification at all. Science is two parts: Imagination and Observation. Theory, models, adjustments, and all that rot are purely Imagination. And the Imagination requires nothing in the way of justification beyond that it was imagined.
Observation simply is. It is what was recorded from reality, no more nor less. You can take reality, put it through a sausage grinder with Imagination, and get Imagination out the other end. But you cannot get Observation back out of such an exercise.
However, misrepresenting Imagination as Observation is the domain of the ignorant and of hucksters. And what is sorely missing in all of this is the scientific method: proclaiming your imagination loudly and proudly, then waiting for the Observations to roll in as refutation.

DWR54
May 8, 2016 12:40 pm

It’s pretty clear even from Figure 1 that there is no significant difference between the raw and adjusted land and sea surface data over the long term. With a 5-year smooth they’d be practically identical.
The raw data tell us just as clearly as the adjusted data that there has been considerable global warming since 1880, and that the rate of warming has increased considerably since the mid-1970s.
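The effect of a 5-year smooth on two nearly identical series can be illustrated with synthetic data; the numbers below are invented stand-ins, not the actual raw and adjusted records:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2016)
# Synthetic stand-in for global anomalies: a warming trend plus noise.
raw = 0.008 * (years - 1880) + rng.normal(0.0, 0.12, years.size)
# "Adjusted" series: the raw series plus small year-to-year tweaks.
adjusted = raw + rng.normal(0.0, 0.03, years.size)

# Centered 5-year running mean (valid region only).
kernel = np.ones(5) / 5.0
raw_smooth = np.convolve(raw, kernel, mode="valid")
adj_smooth = np.convolve(adjusted, kernel, mode="valid")

# After smoothing, the two curves differ far less than the annual values do.
print(np.max(np.abs(raw_smooth - adj_smooth)), np.max(np.abs(raw - adjusted)))
```

The point of the sketch is only that smoothing averages down small uncorrelated year-to-year differences, which is why two series that differ by a few hundredths of a degree look practically identical after a 5-year smooth.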

May 9, 2016 10:49 am

Adjustments are proof the original raw data were no good.
Adjusting the adjustments means the first adjustments were no good.
When the adjustments pause, only then will the data be perfect.
Of course the “adjustment pause” must extend for at least 15 years before one can declare adjustments have ended.
Will someone please revive this thread after 15 years with no adjustments?
Only then will the data be perfect, and worthy of our skilled debate.
Perhaps after all the adjustments, we will be back to the original raw data again?
PS:
No adult beverages were consumed during the development of this logical post.

Lars P.
May 11, 2016 3:09 am

Sorry, but I personally see the work with this data as more or less futile.
The quotation marks around "raw" say it all: it is not raw data.
Having spent some time reading and learning about the climate debate, I have come to the conclusion that the "raw" data are not raw at all, but already heavily contaminated and selected. In my view there is unfortunately no usable science to be done with such contaminated data.
In addition, the whole UHI issue is not properly handled: the population has more than doubled since the 1950s and is seven times larger than at the start of the climate charts.
The cities grew, which increased each city's UHI value. A city of 5,000 inhabitants has one value, a city of 50,000 another, and a city of 500,000 yet another, and a great many measurements are made in and around cities. Yet the climate charts treat UHI as a constant.
All the measured warming, including the "plateau", could be explained simply by the fact that the overall population increased, an increase that has slowed significantly in recent decades, especially in the Northern Hemisphere, where cities sit in a colder climate and UHI is more significant.
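The scaling the comment describes, UHI intensity growing with city size, is usually modelled as roughly logarithmic in population, following Oke's classic empirical fits. The coefficients below are round illustrative numbers chosen for the sketch, not published values for any particular region:

```python
import math

def uhi_intensity(population, a=2.0, b=-4.0):
    """Illustrative UHI scaling: dT ~ a*log10(P) + b, in degrees C.

    The log-population form follows Oke's empirical studies; the
    coefficients a and b here are illustrative placeholders only.
    """
    return a * math.log10(population) + b

# Each tenfold growth in population adds the same UHI increment (a degrees).
for pop in (5_000, 50_000, 500_000):
    print(pop, round(uhi_intensity(pop), 2))
```

Under this toy scaling, a city growing from 5,000 to 500,000 inhabitants picks up a fixed extra increment (here 2 × log10(100) = 4 degrees), which is the commenter's point: a station's UHI bias is not a constant while its city grows.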
All this over a cyclical variation in the background.
There are also other influences on the measured temperature, and all these adjustments are more or less guesstimates that try to correct for them.
What becomes more and more clear is that the CO2 influence on the climate is benign; in reality we cannot yet measure it.
The greening of the planet through the CO2 effect is real and can be measured, but that is anathema. This point alone makes clear that climate is treated not as science but as a new religion. Meanwhile we can enjoy the greener woods where they are allowed to grow.
http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate3004.html
http://www.co2science.org/data/plant_growth/plantgrowth.php
The models are a catastrophic failure. Using weather models to try to estimate climate is wrong from the start: climate models need a proper energy balance, which is not included because it would be too complex to build in. Not even the atmospheric lapse rate is properly calculated by such models, and they are supposed to measure long-term trends?
The only logical step is to start new measurements with a clearly defined methodology and tools, define a 30-year observation period, and postpone any conclusions until after that period has been analysed. There is no urgency, as the influence has been shown to be benign on temperature and good for plant growth.

barry
May 16, 2016 4:43 pm

My thoughts: impugning the work of any of the global data compilers is speculative, ideologically based and ignorant. That goes for impugning Spencer and Christy as well as the other compilers. It’s lazy bile.
All are the best estimates available at the time, with data and methods available to the public (we are still waiting on S&C to publish their latest revision methodology): fairly transparent compared to other sciences.
The trend differences between the surface data sets are minimal – a few hundredths of a degree per decade over climatic periods and longer. Uncertainty is larger nearer the beginning of the record, of course, due to sparser coverage. Very short time periods, of course, can have greater discrepancies – but then you’re not estimating climate.
The differences between the satellite and surface records are also fairly minimal over the full satellite period, despite different properties being measured at different altitudes. Between the highest-trend dataset (HadCRU krig v2) and RSS 3.3, the difference is six hundredths of a degree per decade, or 0.6C/century.
Of course, the shorter the time period, the greater the potential for discrepancy, and this is where critics love to dwell. But if the uncertainty (which is larger for shorter time periods) is taken into account, the differences between them all are vanishingly small. Indeed, the uncertainty ranges of all the data sets overlap for pretty much all time periods. All the estimates are ‘wrong,’ but they agree within a common uncertainty range, even satellite versus surface. They’re not wildly wrong.
One more thought. Bob Tisdale should include uncertainty estimates for the trends, i.e. X (+/- Y). With a brief explanation of what this means, the uncertainty intervals would provide some clarity. Presenting mean trend estimates alone suggests that they are the absolute result for the trends, when they are not. Readers of posts like this should be educated on that.
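The X (+/- Y) format described above falls straight out of ordinary least squares. A minimal sketch on a synthetic anomaly series (a real series would also need an autocorrelation correction, which widens the interval; the naive OLS error below is too narrow for autocorrelated data):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2016)
# Synthetic anomaly series: a 0.15 C/decade trend plus noise.
temps = 0.015 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Ordinary least-squares slope and its standard error.
x = years - years.mean()
slope = np.sum(x * temps) / np.sum(x * x)
resid = temps - temps.mean() - slope * x
se = np.sqrt(np.sum(resid**2) / (len(x) - 2) / np.sum(x * x))

# Report as X (+/- Y) per decade, using roughly 2 standard errors.
print(f"trend: {slope * 10:.3f} (+/- {2 * se * 10:.3f}) C/decade")
```

Two trend estimates whose intervals overlap are statistically indistinguishable, which is exactly the point being made about the different compilations.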