Is WTI Dead? And HadCRUT Adjusts Up, Again… (Now Includes August Data except for HadCRUT4.2 and HadSST3)

Guest Post by Werner Brozek, Commentary By Walter Dnes, Edited by Just The Facts:

WoodForTrees.org – Paul Clark – Click the pic to view at source

Update: Comment from Tim Osborn of the University of East Anglia (UEA), Climatic Research Unit (CRU), appended to the end of this article as it offers additional background on the recent changes to HadCRUT and HadSST.

The WoodForTrees Index (WTI) is a combination of surface data sets HadCRUT3 and GISS, as well as the satellite data sets RSS and UAH (version 5.5). All four must be present to produce the WTI. As can be seen in the graph above, three of the data sets go to August. However HadCRUT3 has not been updated since May, 2014. As a result, WTI has not been updated since then.

There are some who claim cherry picking when a particular data set is chosen, e.g. RSS to demonstrate the length of the Pause. The WTI, as an average of several data sets, addresses this criticism. Up to May 2014, the WTI showed a very slight negative slope from January 2001. And since the anomalies have declined since then, it is very possible that the period without warming would be 13 years and 8 months at the end of August if WTI had data from all of its data sets.

But it doesn’t, so what’s up with Hadcrut3? Here are a few quotes from years ago that may shed some light:

“The Met Office Hadley Center has pioneered a new system to predict the climate a decade ahead. The system simulates both the human driven climate change and the evolution of slow natural variation already locked into the system.”

“We are now using the system to predict changes out to 2014. By the end of this period, the global average temperature is expected to have risen by around 0.3 °C compared to 2004, and half of the years after 2009 are predicted to be hotter than the current record hot year, 1998.” Met Office Hadley Centre 2007

Met Hadley Prediction

“The Met Office Hadley Centre has the highest concentration of absolutely outstanding people who do absolutely outstanding work, spanning the breadth of modelling, attribution, and data analysis, of anywhere in the world.” Dr Susan Solomon, Co Chair IPCC AR4 WGI

So let us see how “absolutely outstanding” the Met Office Hadley Centre’s 2007 prediction is turning out. The 2004 anomaly was 0.447. After 5 months in 2014, the average anomaly was 0.472, which is nowhere near an “around 0.3 °C” rise from 0.447. What about “half of the years after 2009 are predicted to be hotter than the current record hot year, 1998”? As of 2013, the 1998 record had not been broken on Hadcrut3. We do not know what might have happened in the remainder of 2014, however Hadcrut3 had been tracking Hadcrut4 very closely. So if we assume the same change in Hadcrut3 as occurred in Hadcrut4 for 2014 compared to 2013, we get the following results. Hadcrut4 had an anomaly of 0.492 in 2013. The average to the end of August 2014 is 0.555. The difference is 0.063. So if we add 0.063 to the Hadcrut3 anomaly of 0.457 in 2013, we get 0.52. With this anomaly after 8 months, an average anomaly of 0.604 would have been required over the remaining 4 months of 2014 to set a record. That number was beaten 5 times in 1998 and three other times since. So it is safe to say Hadcrut3 had no chance of beating 1998 through 2014.
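The arithmetic in that paragraph can be checked in a few lines of Python. All values are taken from the text; the 1998 Hadcrut3 record of 0.548 is implied by the stated 0.604 requirement rather than quoted directly, so treat it as a back-calculated figure:

```python
# Estimate Hadcrut3's 2014 Jan-Aug anomaly from Hadcrut4's change,
# then the Sep-Dec average needed to beat 1998 (values from the text;
# the 1998 record of 0.548 is implied by the stated 0.604 requirement).
delta = 0.555 - 0.492              # Hadcrut4: 2014-to-date minus 2013
est_jan_aug = 0.457 + delta        # estimated Hadcrut3 Jan-Aug 2014
record_1998 = 0.548
needed = (12 * record_1998 - 8 * est_jan_aug) / 4

print(round(delta, 3))        # 0.063
print(round(est_jan_aug, 2))  # 0.52
print(round(needed, 3))       # 0.604
```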

Which leads us to HadCRUT4 and some new adjustments. The HadCRUT4 data for this update couldn’t be drawn from HadCRUT4.2 since, as of July, that no longer seems to exist. Instead, we have HadCRUT4.3. Neither HadCRUT4.2 nor HadSST3 had been updated for August as of October 3, and since the new HadCRUT4.3 numbers are up, I will assume that we will no longer see HadCRUT4.2. I do not know about HadSST3.

Unsurprisingly, HadCRUT4 was adjusted up, again. Walter Dnes offers the following insight with respect to the new HadCRUT4.3: The anomaly increased to +0.669. The longest negative string is now November 2001 through August 2014. November 2001 (and January 2002) are just barely negative for the slope. If September’s value is +0.513 or higher, the next negative numbers are in late 2004.

Here is a graph of HadCRUT4 slopes for all months for the period from that month to the latest available data:

Walter Dnes – Click the pic to view at source

Note that, between 2007 and late 2008, and before late 2000, the slope() was greater than 0.004 °C per year, and thus literally “off the graph”. The graph above, along with those for GISS, UAH5.6, RSS, and NCDC/NOAA, as well as the associated data, is available from this Google spreadsheet.

The following “diagnostic plots show comparisons of global and hemispheric time series for HadCRUT.4.3.0.0 (this version) and HadCRUT.4.2.0.0 (the previous version of HadCRUT4).” Unsurprisingly, the Met Office Hadley Center found more warming:

Met Office Hadley Center – Click the pic to view at source

It appears they found more warming during the last 18 years in the Northern Hemisphere:

Met Office Hadley Center – Click the pic to view at source

Whereas in the Southern Hemisphere, they apparently only found some warming that had been hiding out since World War I:

Met Office Hadley Center – Click the pic to view at source

The monthly values for the new HadCRUT4.3 are available here, and the new yearly averages are here.

The above raises the question as to why these adjustments were made. It would have been nice to compare apples to apples to see if Hadcrut3 would have finally broken the 1998 record. But then the apple became a banana when the new Hadcrut4 came out. Then the banana became a red pepper when Hadcrut4.2 came out, as can be seen here. Now, the red pepper has become a jalapeno pepper with Hadcrut4.3. The anomaly for the first 7 months on Hadcrut4.2 averaged 0.535, ranking in third place at that time. The first 7 months on Hadcrut4.3 averaged 0.539, and the average for 8 months on the new Hadcrut4.3 is now 0.555, which would tie it for first place with 2010, also at 0.555.
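As a quick cross-check, the Hadcrut4.3 averages quoted above can be reproduced from the monthly values listed in the table further down:

```python
# Hadcrut4.3 monthly anomalies, Jan-Aug 2014, from the table below.
h43 = [0.508, 0.305, 0.548, 0.658, 0.596, 0.619, 0.541, 0.669]

print(f"{sum(h43[:7]) / 7:.3f}")  # 0.539 -- first 7 months
print(f"{sum(h43) / 8:.4f}")      # 0.5555 -- i.e. 0.555, tying 2010
```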

Why are they changing things so quickly? Do they want to take some of the heat off GISS? Are they embarrassed that Dr. McKitrick has found no statistically significant warming for 19 years and, before the ink is barely dry on his report, want to prove him wrong? Are they determined, by hook or by crook, that 2014 will set a new record?

Last year, I wrote the following in comments:

“From 1997 to 2012 is 16 years. Here are the changes in thousandths of a degree with the new version of HadCRUT4 being higher than the old version in all cases. So starting with 1997, the numbers are 2, 8, 3, 3, 4, 7, 7, 7, 5, 4, 5, 5, 5, 7, 8, and 15. The 0.015 was for 2012. What are the chances that the average anomaly goes up for 16 straight years by pure chance alone if a number of new sites are discovered? Assuming a 50% chance that the anomaly could go either way, the chances of 16 straight years of rises is 1 in 2^16 or 1 in 65,536. Of course this does not prove fraud, but considering that “HadCRUT4 was introduced in March 2012”, it just begs the question why it needed a major overhaul only a year later.”

And how do you suppose the last 16 years went prior to this latest revision? Here are the last 16 years counting back from 2013. The first number is the anomaly in Hadcrut4.2 and the second number is the anomaly in Hadcrut4.3: 2013 (0.487, 0.492), 2012 (0.448, 0.467), 2011 (0.406, 0.421), 2010 (0.547, 0.555), 2009 (0.494, 0.504), 2008 (0.388, 0.394), 2007 (0.483, 0.493), 2006 (0.495, 0.505), 2005 (0.539, 0.543), 2004 (0.445, 0.448), 2003 (0.503, 0.507), 2002 (0.492, 0.495), 2001 (0.437, 0.439), 2000 (0.294, 0.294), 1999 (0.301, 0.307), and 1998 (0.531, 0.535). Do you notice something odd? There is one tie, in 2000. All the other 15 are larger. So in 32 different comparisons, there is not a single cooling. Unless I am mistaken, the odds against not a single cooling in 32 tries are 2^32 to 1, or about 4 x 10^9 to 1. I am not sure how the tie gets factored in, but however you look at it, incredible odds are beaten with each revision. What did they learn in 2014 about the last 16 years that they did not know in 2013?
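The odds quoted above follow directly from the assumption, stated in the quoted comment itself, that each revision is an independent 50/50 coin flip:

```python
# Odds of 16 (or 32) same-direction outcomes in a row if each
# revision were an independent 50/50 coin flip -- the text's assumption.
odds_16 = 2 ** 16
odds_32 = 2 ** 32
print(f"1 in {odds_16:,}")  # 1 in 65,536
print(f"1 in {odds_32:,}")  # 1 in 4,294,967,296 -- about 4 x 10^9
```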

In the table, you can see the statistics for Hadcrut4.2 beside those of Hadcrut4.3. Chances are that if August were available for Hadcrut4.2, it would also rank #1.

P.S. RSS for September came in at 0.206. This lowers the average to 0.252 so 2014 would rank as 7th warmest if it stayed this way. The length of no warming increases to 18 years and 1 month.

In the sections below, as in previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on several data sets. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2014 to date compares with 2013 and the warmest years and months on record so far. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

This analysis uses the latest month for which data is available on WoodForTrees.com (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is 4 x 10^-4 but is -4 x 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest if we say the slope is flat from a certain month.
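That search can be sketched in a few lines of Python. This is illustrative only, not the code actually used; the series here is toy data, and a real run would use the monthly anomalies downloaded from WFT:

```python
# Find the furthest-back start month from which the least-squares trend
# to the present is zero or negative ("flat since").
def ols_slope(ys):
    n = len(ys)
    mx = (n - 1) / 2.0
    my = sum(ys) / n
    num = sum((i - mx) * (y - my) for i, y in enumerate(ys))
    den = sum((i - mx) ** 2 for i in range(n))
    return num / den

def flat_since(anoms, min_len=12):
    # Scan from the oldest month forward; the first start index whose
    # trend to the present is <= 0 is the furthest-back "flat" month.
    for start in range(len(anoms) - min_len):
        if ols_slope(anoms[start:]) <= 0:
            return start
    return None

# Toy data: early warming, then a plateau starting at index 3.
series = [0.10, 0.15, 0.20, 0.30, 0.31, 0.29, 0.30, 0.31, 0.30, 0.29,
          0.31, 0.30, 0.29, 0.31, 0.30, 0.30]
print(flat_since(series, min_len=10))  # 3
```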

1. For GISS, the slope is flat since October 2004 or 9 years, 11 months. (goes to August)

2. For Hadcrut4, the slope is flat since February 2001 or 13 years, 6 months. (goes to July and may be discontinued)

3. For Hadsst3, the slope is flat since March 2009 or 5 years, 5 months. (goes to July and may be discontinued)

4. For UAH, the slope is flat since January 2005 or 9 years, 8 months. (goes to August using version 5.5)

5. For RSS, the slope is flat since October 1996 or 17 years, 11 months (goes to August).

The next graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping brown line at the top indicates that CO2 has steadily increased over this period.

WoodForTrees.org – Paul Clark – Click the pic to view at source

When two things are plotted as I have done, the left scale only shows the temperature anomaly.

The actual numbers are meaningless since all slopes are essentially zero. As well, I have offset them so they are evenly spaced. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted. However WFT does not give this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 18 years, the temperatures have been flat for varying periods on various data sets.

The next graph shows the above, but this time, the actual plotted points are shown along with the slope lines and the CO2 is omitted.

WoodForTrees.org – Paul Clark – Click the pic to view at source

Section 2

For this analysis, data was retrieved from Nick Stokes’ Trendviewer, available on his website at http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative, so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 16 and 21 years according to Nick’s criteria.

Dr. Ross McKitrick has also commented on these matters and has slightly different numbers for the three data sets that he analyzed. I will also give his times.

The details for several sets are below.

For UAH: Since April 1996: CI from -0.015 to 2.311

(Dr. McKitrick says the warming is not significant for 16 years on UAH.)

For RSS: Since December 1992: CI from -0.018 to 1.802

(Dr. McKitrick says the warming is not significant for 26 years on RSS.)

For Hadcrut4: Since December 1996: CI from -0.026 to 1.139

(Dr. McKitrick says the warming is not significant for 19 years on Hadcrut4.)

For Hadsst3: Since August 1994: CI from -0.014 to 1.665

For GISS: Since October 1997: CI from -0.002 to 1.249

Note that all of the above times, regardless of the source, are longer than the 15 years which NOAA deemed necessary to “create a discrepancy with the expected present-day warming rate”.
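The significance criterion used above can be illustrated with a plain ordinary-least-squares confidence interval. Note this is only a sketch: Nick’s and Dr. McKitrick’s methods correct for autocorrelation in the monthly data, which widens the interval considerably, and the series below is synthetic:

```python
import math

def trend_ci(ys, z=1.96):
    """OLS slope per time step and an approximate 95% CI
    (no autocorrelation correction -- a real analysis needs one)."""
    n = len(ys)
    mx = (n - 1) / 2.0
    my = sum(ys) / n
    sxx = sum((i - mx) ** 2 for i in range(n))
    b = sum((i - mx) * (y - my) for i, y in enumerate(ys)) / sxx
    a = my - b * mx
    s2 = sum((y - (a + b * i)) ** 2 for i, y in enumerate(ys)) / (n - 2)
    se = math.sqrt(s2 / sxx)
    return b - z * se, b, b + z * se

# Synthetic wiggly series with no real trend: the CI straddles zero,
# so a slope of 0 "cannot be ruled out".
ys = [0.1 * math.sin(i / 3.0) for i in range(120)]
lo, b, hi = trend_ci(ys)
print(lo < 0 < hi)  # True
```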

Section 3

This section shows data about 2014 and other information in the form of a table. The table shows the five data sources along the top, with that header row repeated partway down so the sources remain visible at all times. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.

Down the column are the following:

1. 13ra: This is the final ranking for 2013 on each data set.

2. 13a: Here I give the average anomaly for 2013.

3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and three have 1998 as the warmest year.

4. ano: This is the average of the monthly anomalies of the warmest year just above.

5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year. Note that this does not yet include records set so far in 2014 such as Hadsst3 in June.

6. ano: This is the anomaly of the month just above.

7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0.

8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.

9. sy/m: NEW: This is the years and months for row 8. Depending on when the update was last done, the months may be off by one month.

10. McK: NEW: These are Dr. Ross McKitrick’s number of years for three of the data sets.

11. Jan: This is the January 2014 anomaly for that particular data set.

12. Feb: This is the February 2014 anomaly for that particular data set, etc.

19. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months. However if the data set itself gives that average, I may use their number. Sometimes the number in the third decimal place differs slightly, presumably due to all months not having the same number of days.

20. rnk: This is the rank that each particular data set would have if the anomaly above were to remain that way for the rest of the year. It will not, but think of it as an update 40 minutes into a game. Due to different base periods, the rank is more meaningful than the average anomaly.
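Rows 19 and 20 are purely mechanical; here is a sketch using the RSS monthly numbers from the table. The dictionary of past annual averages is illustrative only (1998 and 2013 come from the text, every other year is omitted), so the rank printed here is not the real sixth place:

```python
# RSS monthly anomalies Jan-Aug 2014, from the table below.
months_2014 = [0.261, 0.162, 0.214, 0.251, 0.286, 0.345, 0.351, 0.193]
ave = sum(months_2014) / len(months_2014)

# Annual averages for past years -- only two shown, the rest omitted.
past_years = {1998: 0.550, 2013: 0.218}
rnk = 1 + sum(1 for v in past_years.values() if v > ave)

print(f"{ave:.3f}")  # 0.258, matching row 19 for RSS
print(rnk)           # 2 with this partial record; 6 per row 20
```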

I do not know the future of Hadcrut4.2 and Hadsst3. They did not come in by October 3 and I think they may be discontinued since a new Hadcrut4.3 is now out which includes August. Unfortunately, Hadcrut4.3 is not on WFT. As a result, there are several gaps for Hadcrut4.2 and Hadsst3.

Source UAH RSS Hd4.2 Hd4.3 Sst3 GISS
1.13ra 7th 10th 8th 9th 6th 6th
2.13a 0.197 0.218 0.487 0.492 0.376 0.61
3.year 1998 1998 2010 2010 1998 2010
4.ano 0.419 0.55 0.547 0.555 0.416 0.67
5.mon Apr98 Apr98 Jan07 Jan07 Jul98 Jan07
6.ano 0.662 0.857 0.829 0.835 0.526 0.93
7.y/m 9/8 17/11 13/6 12/10 5/5 9/11
8.sig Apr96 Dec92 Dec96 Aug94 Oct97
9.sy/m 18/6 21/9 17/9 20/1 16/11
10.McK 16 26 19
Source UAH RSS Hd4.2 Hd4.3 Sst3 GISS
11.Jan 0.236 0.261 0.509 0.508 0.342 0.70
12.Feb 0.127 0.162 0.304 0.305 0.314 0.45
13.Mar 0.137 0.214 0.540 0.548 0.347 0.70
14.Apr 0.184 0.251 0.643 0.658 0.478 0.73
15.May 0.275 0.286 0.584 0.596 0.477 0.79
16.Jun 0.279 0.345 0.620 0.619 0.563 0.62
17.Jul 0.221 0.351 0.549 0.541 0.552 0.53
18.Aug 0.118 0.193 0.669 0.70
Source UAH RSS Hd4.2 Hd4.3 Sst3 GISS
19.ave 0.197 0.258 0.535 0.555 0.439 0.65
20.rnk 7th 6th 3rd 1st 1st 3rd

If you wish to verify all of the latest anomalies, go to the following:

For UAH, version 5.5 was used since that is what WFT used.

http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.5.txt

For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt

For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.3.0.0.monthly_ns_avg.txt

For Hadsst3, see: http://www.cru.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat

For GISS, see:

http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2014 in the form of a graph, see the WFT graph below.

WoodForTrees.org – Paul Clark – Click the pic to view at source

As you can see, all lines have been offset so they all start at the same place in January 2014. This makes it easy to compare January 2014 with the latest anomaly.

Appendix

In this section, we summarize the data for each data set separately.

RSS

The slope is flat since October 1996 or 17 years, 11 months. (goes to August)

For RSS: There is no statistically significant warming since December 1992: CI from -0.018 to 1.802.

The RSS average anomaly so far for 2014 is 0.258. This would rank it as 6th place if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2013 was 0.218 and it is ranked 10th.

UAH

The slope is flat since January 2005 or 9 years, 8 months. (goes to August using version 5.5 according to WFT)

For UAH: There is no statistically significant warming since April 1996: CI from -0.015 to 2.311. (This is using version 5.6 according to Nick’s program.)

The UAH average anomaly so far for 2014 is 0.197. This would rank it as 7th place if it stayed this way. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.662. The anomaly in 2013 was 0.197 and it is ranked 7th.

HadCRUT4.2

The slope is flat since February 2001 or 13 years, 6 months. (goes to July and may be discontinued)

For HadCRUT4: There is no statistically significant warming since December 1996: CI from -0.026 to 1.139.

The HadCRUT4 average anomaly so far for 2014 is 0.535. This would rank it as 3rd place if it stayed this way. 2010 was the warmest at 0.547. The highest ever monthly anomaly was in January of 2007 when it reached 0.829. The anomaly in 2013 was 0.487 and it is ranked 8th.

HadSST3

For HadSST3, the slope is flat since March 2009 or 5 years and 5 months. (goes to July and may be discontinued)

For Hadsst3: There is no statistically significant warming since August 1994: CI from -0.014 to 1.665.

The HadSST3 average anomaly so far for 2014 is 0.439. This would rank it as 1st place if it stayed this way. 1998 was the warmest at 0.416 prior to 2014. The highest ever monthly anomaly was in July of 1998 when it reached 0.526. This is also prior to 2014. The anomaly in 2013 was 0.376 and it is ranked 6th.

GISS

The slope is flat since October 2004 or 9 years, 11 months. (goes to August)

For GISS: There is no statistically significant warming since October 1997: CI from -0.002 to 1.249.

The GISS average anomaly so far for 2014 is 0.65. This would rank it as third place if it stayed this way. 2010 was the warmest at 0.67. The highest ever monthly anomaly was in January of 2007 when it reached 0.93. The anomaly in 2013 was 0.61 and it is ranked 6th.

Conclusion

For the moment, WTI is dead. However it can be revived in one of two ways. Either HadCRUT3 needs to be updated or WFT needs to switch WTI from HadCRUT3 to HadCRUT4.3. And if this is done, then UAH should also be updated from version 5.5 to 5.6. But hopefully version 6 for UAH will be out soon. We also await HadCRUT5 with bated breath…

Update:

Tim Osborn (@TimOsbornClim) October 6, 2014 at 2:21 am

The changes from HadCRUT4.2 to HadCRUT4.3 arise from changing the marine component (the sea surface temperature anomalies) from HadSST.3.1.0.0 to HadSST.3.1.1.0 and the land component (the near-surface air temperature anomalies over land) from CRUTEM.4.2.0.0 to CRUTEM.4.3.0.0.

The HadSST changes do not alter the temperature anomalies, just the estimate of the uncertainties:

http://www.metoffice.gov.uk/hadobs/hadsst3/data/HadSST.3.1.1.0_release_notes.html

The changes in the HadCRUT4 temperatures therefore arise because of changes in the land component (CRUTEM4) which are described here:

http://www.metoffice.gov.uk/hadobs/crutem4/data/CRUTEM.4.3.0.0_release_notes.html

Principal changes are the addition of new station data (e.g. 10 new series from Spain, 250 from China, etc.), extension of existing station data to include more recent values (e.g. 518 series from Russia were updated, etc.), or the replacement of station data with series that have been the subject of homogeneity analysis by other initiatives.

The effect of these various changes is shown graphically on that page, for global, hemispheric and various continental-scale averages.

The effect on the overall trends of the new version looks like it would slightly weaken the warming trend in N. America, little overall effect in S. America, and strengthen the warming trend in Europe, Asia and Australasia. The Africa series also appears to warm, but actually this is because the geographical domain for that graph includes a little of southern Europe and it is the changes there that influence this series rather than changes to the African database.

Werner Brozek
October 5, 2014 1:11 pm

This is for dbstealey in case you are reading this. If I am not mistaken, you proposed a bet with someone regarding Hadcrut a few years into the future. Just some friendly advice: Be sure you specify whether it is Hadcrut17 or Hadcrut16 that will be the final judge. And should you lose the bet, be sure to stipulate that if Hadcrut17 is to be the judge, and if at least 3 of the last 10 years did not show cooling as compared with Hadcrut16, then all bets are null and void.

Greg
Reply to  Werner Brozek
October 5, 2014 11:40 pm

” However WFT does not give this option.”
Yes, WTF is rather limited in what it can do.
I would suggest that you and JTFacts download gnuplot http://gnuplot.info (the software used by WTF) and get the ability to do your own plots. It would then be trivial to do log(CO2) or whatever you need, as well a subtract different data sets.
WTF provides the data in text form as output by gnuplot in a format intended for replotting so you could just start from what you already use from their site.
PS, please try to be more brief. The point about HadCrut continually rigging the data is important but I got bored and cut out about half way through this long rambling post.

Werner Brozek
Reply to  Greg
October 6, 2014 8:16 am

Thank you! However I would think that over the 18 year period that I am focusing on, the difference between a straight line and a curve would be extremely small.

Reply to  Werner Brozek
October 6, 2014 4:47 pm

Werner,
I’ve proposed and/or tried to accept a wager or two over the years. But no one ever takes me up on them. It seems that some folks are always willing to make questionable statements. But when it’s time to put their money where their mouth is, they always back off.
That way, their money is safe.

Werner Brozek
Reply to  dbstealey
October 6, 2014 7:02 pm

Perhaps they know which way the wind is really blowing.
(P.S. Did you see the update? We are being watched! I wonder how many sites can say that.)

MikeUK
October 5, 2014 1:14 pm

The steepness of the NH+SH graph from 1900 to 1940 looks very suspect to me, as Australian data used to show COOLING over that period. Maybe it’s gotten corrupted by the fairy tale that is ACORN-SAT from the Australian Bureau of Meteorology.

Werner Brozek
Reply to  MikeUK
October 5, 2014 1:24 pm

It is possible that Australia went opposite to the global trend at that time. However the spike prior to 1945 is a huge problem for people insisting that anthropogenic CO2 is the main driver of temperatures.

yam
Reply to  Werner Brozek
October 5, 2014 3:25 pm

Just blame that spike on the U.S. Navy.

mobihci
Reply to  MikeUK
October 6, 2014 9:44 pm

yes, no doubt acorn-sat is causing warming in crutem4.3 because they claim on their update page that there were no changes to the bulk of acornsat, just some stations that dont matter. what a mess.
the latest on acornsat-
http://joannenova.com.au/2014/10/australian-summer-maximums-warmed-by-200/

October 5, 2014 1:36 pm

The above raises the question as to why these adjustments were made.

That is the key point.
If the adjustments are justified then we can assess the justifications and see if we are getting closer to truth.
But with no justification… how could the adjustments be distinguished from fabrication?

Reply to  M Courtney
October 5, 2014 1:50 pm

A very excellent point Mr. Courtney.

Ed Barbar
Reply to  M Courtney
October 5, 2014 1:50 pm

if you are looking for errors that keep things cooler, you will find them. If you are looking for errors that make things warmer, you will find them.
In other words, the adjustments could be completely justified; but if no one is looking for the adjustments that keep things cooler, only for the adjustments that show more warming, you will find warming. And be completely justified.

Shoshin
Reply to  Ed Barbar
October 5, 2014 1:59 pm

A good friend of mine once remarked about the earth sciences “I would’na seen it if I had’na believed it”. Not sure if he was the first one to say this but it reverberates in me every time I’m asked to review data. The easiest person to fool is oneself.

Werner Brozek
Reply to  Ed Barbar
October 5, 2014 2:03 pm

If you are looking for errors that make things warmer, you will find them.
So two years ago, they presumably found errors to make 1998 warmer by 0.008 C. Exactly what did they find over the last 16 months to make 1998 another 0.004 C warmer that they missed two years ago?

Reply to  Ed Barbar
October 5, 2014 2:22 pm

if you are looking for errors that keep things cooler, you will find them. If you are looking for errors that make things warmer, you will find them.

And both are fine if they are justified.
But both are far from fine if we have to take them on faith.
I have faith but that is not my religion.
Surely, this should be a science question.

mpainter
Reply to  Ed Barbar
October 5, 2014 2:46 pm

You have put your finger on the very nub of the problem. These adjustments are why I no longer believe the thermometer record. And then there is the UHE- unadjusted. And thus the thermometer record becomes grist for the propaganda mill.

Ed Barbar
Reply to  Ed Barbar
October 5, 2014 4:18 pm

M Courtney,
What I’m suggesting is even though the adjustments are justified, they could still take you further from the truth.
Imagine you are wondering “Where the heck did the heat go?” and you find something that helps explain it. You are going to connect to that immediately. On the other hand, if a thought is cognitively dissonant to you, because it looks like it might increase cooling, you might ignore it as being implausible and not worth investigating.
In this case, both methods could be justified, both are part of the actual truth, but one is filtered out (presumably on account of subconscious bias). This could explain why new methods appear to warm in the right places only, and cool in the right places only.
In any event, I’m not too worried about it. The elephant in the room isn’t the “pause,” it’s temperatures vs. predictions, and they are off by a lot. Showing the models have very limited predictive power is a big part of giving some time (and breathing room) to reflect, understand, and develop appropriate solutions to our energy problem (not necessarily to our Global Warming problem). The satellites data (Lower Troposphere), BEST, and skeptic advocates will keep the surface temperatures close enough to honest.
I’ve been preaching a bit myself lately, and it’s the observation that my electric bill has been high a lot longer than global warming has been around. I live in CA, and I learned that 15% of our energy is imported. I also note that my electric rates are up to 39c/KwH, but my electric provider claims wholesale energy is 4c/KwH, the poles that deliver it to me could carry a lot more electricity (though not universally so). Anthony once said his energy costs could be upwards of $1.00/kwh, if memory serves, exactly when you need it the most (hot summer days in Chico).
I say, skeptics ought to use the opportunity and the uncertainty of climate change to push a number of initiatives.
First, we need to get more warmists like Hansen and the ex-Greenpeace guy to go on the record we need nuclear. That way, we can kill the Jane Fonda hysteria over nuclear brought about by the China Syndrome. Use the warmists own hype to kill their pet phantom. Let’s build new nuclear now based on current technology. Next, let’s put the $ into research for 2nd generation nuclear, pebble reactors, thorium cycle, whatever, that can be used anywhere in the world without putting dangerous materials in the hands of unscrupulous people.
In my view, this is an opportunity. With Germany and its 7 million in energy poverty, it’s time to strike against Solar and Wind (even though I understand 40% of german renewable energy is coming from wood burning!, some from US forests). And frankly, I do think Coal has issues as a generation method. It’s hard to trust the EPA on anything, but having recently been to China, I don’t think it can be healthy. So that’s another impetus to get to a clean energy source.

RoHa
Reply to  M Courtney
October 5, 2014 5:03 pm

Exactly. This is a point that should be repeated often.
I would also add that each adjustment means they got it wrong last time. The more adjustments, the greater the track record of being wrong, and there is no reason to believe they are getting closer to being right. So why believe them at all?

trafamadore
Reply to  RoHa
October 5, 2014 9:10 pm

I measured the chair I sat on, it is 32.4 cm high. I measured it again, it is 32.9 cm high. So why believe them at all? I must be sitting on the floor!!!

RoHa
Reply to  RoHa
October 5, 2014 11:26 pm

Doesn’t follow that you are sitting on the floor. All that follows is that you do not know how high your chair is, and if you tell me that it is 33 cm I have good reason to doubt it.

Reply to  RoHa
October 6, 2014 6:29 am

RoHa:
Is someone sitting on the chair when it is measured? Are the legs in compression from that mass and gravity thereby making them shorter? If so, was someone else sitting on the seat when the 2nd measurement was made? Is the surface of the chair sloped? Is he measuring from the highest point, lowest point, mean or median height of the slope of the chair? Is the temperature the same between readings or is there thermal expansion/contraction to take into account? … continue to ridiculous extreme.
This is what happens when you don’t specifically define what the parameters of the system being measured are when the measurements are taken. That’s our problem with the Climateers. They are still trying to use weather data that has a huge range of differences to predict the function of an extremely complex chaotic system. As long as our taxes keep them employed they’ll keep playing the game and you will never know how high that chair is… only that it will kill you if you fall off it so give us more money and go live in a cold, dark cave to save the planet.

RoHa
Reply to  RoHa
October 6, 2014 9:20 pm

It isn’t my chair.

Editor
October 5, 2014 1:39 pm
Werner Brozek
Reply to  Walter Dnes
October 5, 2014 2:19 pm

Thank you for that Walter. With the huge jump in August to 0.649, the 2014 average jumps to 0.465 from 0.439. This is WAY above the 1998 record of 0.416. It is possible that the 5 year and 5 month flat period from last month has totally disappeared with the August anomaly. Can this be verified? Thanks!
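Werner's arithmetic checks out; a quick sketch using his figures (the seven-month count is inferred, since the previous average covered data through July):

```python
# Werner's figures: a 2014 average of 0.439 through July (7 months),
# then an August anomaly of 0.649 (the month count is inferred, not stated).
ytd_jul = 0.439
august = 0.649
months = 7

ytd_aug = (ytd_jul * months + august) / (months + 1)
print(ytd_aug)  # approximately 0.465, matching the updated average he quotes
```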

Nick Stokes
Reply to  Werner Brozek
October 5, 2014 6:20 pm

” It is possible that the 5 year and 5 month flat period from last month has totally disappeared with the August anomaly. Can this be verified? Thanks!”
Yes, I think so. I’ve updated the trendviewer (a glitch had stopped updates for three weeks) and the minimum month is now May 2009, at 0.159 °C/century.
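The "pause" statistic behind Nick's trendviewer numbers is the earliest start month from which the least-squares trend to the present is non-positive. A minimal sketch of that idea in Python (toy numbers, not the actual HadCRUT series; `pause_start` and `ols_slope` are hypothetical helpers, not the trendviewer's code):

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def pause_start(anomalies, per_year=12):
    """Earliest index from which the trend to the end of the series
    is non-positive, or None if every start gives a warming trend."""
    t = [i / per_year for i in range(len(anomalies))]
    for start in range(len(anomalies) - 2):  # need at least 3 points
        if ols_slope(t[start:], anomalies[start:]) <= 0:
            return start
    return None

# Toy series: one warming step, then decline from the second point on.
print(pause_start([0.1, 0.3, 0.5, 0.4, 0.3, 0.2], per_year=1))  # -> 1
```

With real monthly anomalies, the returned index dates the start of the "pause" that the thread keeps referring to.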

Werner Brozek
Reply to  Werner Brozek
October 5, 2014 6:54 pm

Thank you very much for that Nick! Does it take a while for me to see your updates? I just checked GISS and Hadcrut4.3 and Hadsst3 and saw that the update in August did not show up yet. Of course, the new URLs for Hadcrut4.3 and Hadsst3 may not have helped either.

Nick Stokes
Reply to  Werner Brozek
October 5, 2014 7:43 pm

Werner,
I updated only just now, and for some reason it didn’t complete uploading the files. But the data file went up, and a lot of the pics. I’m working on it.
I get the HAD stuff from CRU, where they don’t change the URLs. But they lag a little. I had to manually add the August SST3.

Werner Brozek
Reply to  Walter Dnes
October 5, 2014 2:35 pm

I believe that HadSST has been updated to version 3.1.1, which in turn forced the update to HadCRUT.
I did find the August value in your list. However when I went over the 16 hottest years against my list since January, I found no difference at all. So that cannot have been the reason for the new Hadcrut4.3 unless I missed something. Here are my Hadsst3 numbers from January:
1 {1998, 0.416},
2 {2010, 0.406},
3 {2009, 0.395},
4 {2003, 0.393},
5 {2005, 0.389},
6 {2013, 0.376},
7 {2002, 0.368},
8 {2006, 0.365},
9 {2004, 0.354},
10 {2012, 0.346},
11 {2001, 0.329},
12 {1997, 0.318},
13 {2007, 0.296},
14 {2011, 0.290},
15 {2008, 0.259},
16 {2000, 0.216}

Stephen Richards
October 5, 2014 1:42 pm

and these people were at “the dinner” pretending to be genuinely pleasant people!!

Dodgy Geezer
October 5, 2014 1:48 pm

I note that air temperatures are starting to be ignored (unsurprisingly!) by supporters of Alarmism.
Even the ‘scientists’ are now using the line: “It’s not just the air – it’s the wide range of data from multiple sources which convinces us…”
Now, I know how science works. A scientist defines a very specific phenomenon, then produces a hypothesis to explain it and starts to gather evidence. That evidence may contain contradictory data – it may be reasonable to consider the ‘wide range’ of available data -but the scientist will be considering ALL he can get his hands on.
I also know how a politician works, or a car salesman. They also mug up on reams of data. The difference is that when they talk to the punters, they select from that data the bits which show their product in a good light. They spread their net wide, so it’s impossible to consider ALL of their data. They may say “Look at the European Alps where glaciers are shrinking!” while ignoring all the other glaciers in the world. Or: “This car has incredible performance!”, while keeping quiet about its reliability.
The point that I’m trying to make is that, once a person says: “It’s the wide range of data from multiple sources which convinces us…” you KNOW he’s not working like a scientist. That single step should be enough to stop the debate right there…

Ed Barbar
Reply to  Dodgy Geezer
October 5, 2014 7:37 pm

I’ve noticed the same thing. It didn’t happen this year, or the last. It started when it became “Climate Change,” or “Climate Disruption,” in which every weather event had the potential to be blamed on CO2: not because the evidence showed a causal link, but despite the absence of one.
It will make headlines, and in the US, which hasn’t become totally socialist unlike the rest of the world, it will become an issue of partisan divide.

masInt branch 4 C3I in is
October 5, 2014 1:49 pm

I would observe that the combined Susan Solomon and UK Met Office are absolutely outstandingly wrong with the absolutely outstandingly highest of confidence in the known universe.
Ha ha

KNR
October 5, 2014 2:33 pm

Remind me again why the climate ‘settled science’ needs so many ‘adjustments’ when, oddly, real settled science like the speed of light needs very few or none?
Orwell: “He who controls the past controls the future. He who controls the present controls the past.”
Life is so much easier when you can prove you’re right by making ‘adjustments’ to the historic record so the data does what you need it to do.

thinair
October 5, 2014 2:52 pm

There must be emails, somewhere, that will give the world good documentation on their motivations for upward revisions at this time.

rgbatduke
October 5, 2014 2:54 pm

I’ve made the observation that if one tests the p-value of the null hypothesis “the major temperature anomaly adjustments have been unbiased” one essentially rejects the null hypothesis with extreme prejudice. And not just HADCRUT — if anything, GISS has been worse. HADCRUT also fails to even try to correct for the UHI associated with their data sources, and GISS’s correction — you might have guessed it — produces more warming by the time they are done with it. Amazingly, they found a way to make UHI into UCI to the point where it actually net warms instead of knocking off the 0.1 to 0.2 C that is the most plausible outcome of correcting for it.
However, they are strongly constrained by UAH and RSS at this point. They are already diverging from them by an embarrassing, obviously nonphysical amount, and it is very difficult to argue that the satellites (that are the closest thing we have to an unbiased, truly global observation of temperature) are all systematically wrong while the incredibly complex process that transforms an inhomogenous, often corrupted set of land surface data plus inspired guesses of global SSTs into a surface temperature anomaly are all right. Indeed, GISS is diverging from HADCRUT, and you are quite right, HADCRUT might be feeling the “heat” and attempting to reduce the discrepancy lest they be accused of breaking ranks.
I think it isn’t implausible to explain a lot of this via sheer personalities. HADCRUT is at least partly the child of Jones, and we know that Jones, in the Climategate letters, saw the writing on the wall many years ago. I think that Jones, Briffa, and that whole crowd now deeply regret that they let themselves be stampeded into the Bad Science zone by Mann when Mann was lionized by the IPCC and they were ignored. I also think that Jones is quite possibly a bit tormented by the ongoing failure of the climate models, the pause/hiatus. HADCRUT4 warmed things a bit relative to 3, but since then adjustments have been tiny and even with bias they haven’t cumulated to much — certainly not enough to particularly matter. GISS is first Hansen, then as I understand it now Schmidt. They’ve been far more open (or perhaps brazen) in their pumping of GISS warming. In GISS there is hardly any pause at all — they are working hard on “erasing” it. The divergence is quite striking:
http://www.woodfortrees.org/plot/gistemp/from:1983/to:2015/plot/hadcrut4gl/from:1983/to:2015
They often differ systematically and extensively by 0.1 to 0.2 C, and GISSTEMP is basically always higher than HADCRUT4.
The divergence is even more glaring with HADCRUT3:
http://www.woodfortrees.org/plot/gistemp/from:1983/to:2015/plot/hadcrut3vgl/from:1983/to:2015
while HADCRUT4 hardly changes relative to HADCRUT3:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1983/to:2015/plot/hadcrut3vgl/from:1983/to:2015
You can clearly see the “warming” — the two are almost precisely the same through the 80s, 4 gets a bit on top in the 90’s, and the 2000’s are as much as 0.01 to 0.02 warmer in 4 than in 3. But that’s a tiny discrepancy compared to the GISS to HADCRUT discrepancy.
One cannot help but wonder what an honest UHI correction would do to this. At a guess, drop HADCRUT4 by a more or less linear trend of perhaps 0.1 to 0.2 over the years plotted, erasing about half of the warming, but the really hard part would be adjusting the entire record for it. We might find that the Earth has not been warming since the beginning of the 20th century, and that the warming trend in the thermometric record in the 1920 to 1945 era really was a pretty good match for the late 20th century warming with cooling in between once UHI was removed. After all, the population of the world has gone up by roughly 6 billion people out of 7 across that time. That is a sevenfold increase in urbanization, worldwide. Existing cities are much, much larger, and there are many more cities, and many things that barely qualified as hamlets or whistle-stops in 1900 are now significant urban sprawl. There is some reason to think that the 1930’s were, in fact, the peak of the modern warm period, the 1980s and 1990s were the second peak of roughly equal amplitude, and that we are now slowly slipping out of the peak.
Into decline? Possibly. Maybe not. But given that the null hypothesis of unbiased global surface temperature estimation is definitely untrue, it is hard to know what is really happening to global temperatures. UAH/RSS are likely our best guide.
rgb
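rgb's p-value argument can be sketched as a simple sign test: under the null hypothesis of unbiased adjustments, each adjustment is equally likely to warm or cool, so the warming count is Binomial(n, 0.5). A hedged sketch (`sign_test_p` is illustrative, not anyone's published code; the 31-of-31 split uses Werner's count of 31 raises mentioned elsewhere in the thread):

```python
from math import comb

def sign_test_p(n_up, n_total):
    """Two-sided sign-test p-value: probability of a split at least this
    lopsided if each adjustment warms or cools with equal chance (p = 0.5)."""
    k = max(n_up, n_total - n_up)
    one_tail = sum(comb(n_total, i) for i in range(k, n_total + 1)) / 2 ** n_total
    return min(1.0, 2 * one_tail)

# All 31 adjustments in the warming direction:
print(sign_test_p(31, 31))  # roughly 9.3e-10: the "unbiased" null is untenable
# A 16/32 split, by contrast, is perfectly consistent with no bias:
print(sign_test_p(16, 32))  # -> 1.0
```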

Nick Stokes
Reply to  rgbatduke
October 5, 2014 3:12 pm

“However, they are strongly constrained by UAH and RSS at this point. They are already diverging from them by an embarrassing, obviously nonphysical amount, and it is very difficult to argue that the satellites (that are the closest thing we have to an unbiased, truly global observation of temperature) are all systematically wrong while the incredibly complex process that transforms an inhomogenous, often corrupted set of land surface data plus inspired guesses of global SSTs into a surface temperature anomaly are all right.”
Actually the big divergence is between UAH and RSS. In terms of trends in the “pause” years, MSU is by far the lowest (hence Lord M’s articles), and UAH usually the highest. Last month, Werner had the UAH “pause” only back to June 2008, less pause than any of the surface indices. I’m not sure what has happened this month – with v5.6 I get a positive trend from any month in 2005. Even so, in Werner’s Section 1 RSS has by far the longest pause, UAH the shortest.

Werner Brozek
Reply to  Nick Stokes
October 5, 2014 3:38 pm

Actually the big divergence is between UAH and RSS. In terms of trends in the “pause” years, MSU is by far the lowest (hence Lord M’s articles), and UAH usually the highest.
This is very true. So it is very puzzling why both satellite data sets are at 7th warmest for 2014 so far and Hadcrut4 and GISS are either tied for first or very close to it.

Nick Stokes
Reply to  Nick Stokes
October 5, 2014 3:45 pm

“What divergence between UAH and RSS?”
I said during the trend years; you’ve plotted about five years. Try going back to say 2002, or 1999.

Nick Stokes
Reply to  Nick Stokes
October 5, 2014 3:46 pm

“during the trend years” -> during the pause years

Pamela Gray
Reply to  Nick Stokes
October 5, 2014 3:53 pm

Nick, you are arguing about wriggles that have no meaningful correlation to global climate warming. The conversation that is important is whether or not the addition in ppm molecules of CO2 produced the powerful 3X amplification in water vapor such that this human-driven portion can be made solely responsible for this silly trend or lack thereof in 5, 10, 15, etc years.
Mechanism, mechanism, mechanism. Energy needed must equal energy made available by anthropogenic CO2 (or lack thereof) to continue (or stall) a rise in global temperatures in the air and in the oceans. Without a better, mechanism-based calculation, we are doing little else than correlating my aging with global warming. Hathaway made this mistake with his early attempts at predicting solar activity based on the previous (and no debunked) trend in activity. We all know how that turned out.

Pamela Gray
Reply to  Nick Stokes
October 5, 2014 3:55 pm

oops. “…now debunked…”

Richard M
Reply to  Nick Stokes
October 5, 2014 4:47 pm

Nick Stokes October 5, 2014 at 3:45 pm
“What divergence between UAH and RSS?”
I said during the trend years; you’ve plotted about five years.

I did that for a reason. They are no longer diverging, so talking about something that occurred over a decade ago is not meaningful to what is happening today. The only “divergence” is between adjusted surface data and BOTH satellite data sets. You have to be pretty deluded to not see what is happening.

Richard Barraclough
Reply to  Nick Stokes
October 5, 2014 5:41 pm

The negative trend on UAH v.6 goes back as far as September 2008. The main dataset hasn’t yet been updated for Sept 2014, but the headline figure is available on Dr.Spencer’s website
http://www.drroyspencer.com/category/blogarticle/
One slight niggle I have with the accuracy of these figures is that the summary on his own website differs slightly from the raw data presented at
http://www.nsstc.uah.edu/data/msu/t2lt/uahncdc_lt_5.6.txt
which seems a little shoddy, to say the least.

Reply to  rgbatduke
October 5, 2014 3:13 pm

Different regions are responding in different ways, so any notion of ‘global’ warming is not helpful. It is more regional warming with some regional cooling and some regions in stasis.
Taken together there has probably been a general rise in temperatures since 1700, so GISS etc. can be seen as a staging post for warming and not the starting post.
We should hardly be surprised that we are a little warmer than during the LIA. Perhaps we should be querying why we aren’t even warmer, and also what caused the LIA and the subsequent partial recovery. Indeed we really ought to try and find out what has caused the periodic warming and cooling interludes we can observe all through the Holocene.
Tonyb

richard verney
Reply to  Tonyb
October 6, 2014 2:39 am

Different regions are responding in different ways, so any notion of ‘global’ warming is not helpful. It is more regional warming with some regional cooling and some regions in stasis.
/////////////
Precisely.
I have been saying this for years, there is no such thing as global warming.
The global part of this is a construct, for political purposes so that politicians can say we are all in it together, we need a world (global) solution.
It is deliberately designed to downplay and distance self-interest, i.e., to prevent individual countries adopting policies that suit themselves, since for many countries warming will be a godsend, for some rising sea level will have no impact, etc.
It is completely unscientific to talk about global warming, or global climate. The only global climate is whether the planet is in an ice age or in an interglacial period.
Personally, I would completely ditch the land based thermometer record (with the exception of CET which due to its length can be useful as a reference point), and I would go only with the satellite data sets and ARGO (the latter being way too short and lacking coverage to tell us much, but climate is driven by the oceans and the only really useful metric is ocean temperature).
But if one is to go with the land based data sets they need re-evaluating from scratch. First the raw data should be presented with accurate error bars. There should be no adjustment for TOB, instrument changes etc, just an alteration in the error band width as appropriate.
The data record should be split regionally so that we can see what parts of the world are warming (and the rate), what parts are staying broadly neutral, and what parts are cooling (and their rates).
It should also be split to night and day, min and max etc and in that way we would have a far better grasp on what is happening and why that might be the case.
Finally, a proper re-evaluation needs to be done for UHI. We know that UHI is real and substantial, and we need to get a proper handle on how this has impacted the land based thermometer record (especially given station drop-out and the growing proportion of urban stations over rural stations).

Werner Brozek
Reply to  rgbatduke
October 5, 2014 3:22 pm

HADCRUT4 warmed things a bit relative to 3, but since then adjustments have been tiny and even with bias they haven’t cumulated to much — certainly not enough to particularly matter.
I agree. I looked at the 31 raises over the last two adjustments and found that 17 out of 31 were for 0.005 C or less. Why do they even bother risking their reputation for so little?

Reply to  Werner Brozek
October 5, 2014 8:14 pm

if the changes went the other way you would agree with them

Werner Brozek
Reply to  Werner Brozek
October 5, 2014 9:15 pm

If a third went one way and 2/3 the other, it would be plausible, but if 0 out of 32 show cooling, I do not accept it.
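Werner's intuition is easy to quantify: assuming the adjustments are independent and, under the null hypothesis, direction-neutral, the chance of 32 straight warming adjustments is vanishingly small.

```python
# If each of 32 adjustments were independently as likely to cool as to
# warm, the chance that every single one warms is 0.5 to the 32nd power.
p_all_warm = 0.5 ** 32
print(p_all_warm)  # -> 2.3283064365386963e-10
```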

Boulder Skeptic
Reply to  Werner Brozek
October 6, 2014 12:05 am

Mosher the mind-reader…
Steven, your pithy responses seem to me to often ascribe thoughts and presumed actions to others. Are you really so sure that you can read minds and predict actions?
Bruce

David A
Reply to  Werner Brozek
October 6, 2014 4:01 am

Mosher is not likely to engage in a real conversation. He is above that.

Ian W
Reply to  Werner Brozek
October 6, 2014 7:11 am

Steven,
Everyone might agree with them if the changes, and the reasons for carefully made observations being adjusted, were seen to be documented and subject to a solid Quality Management System: each reporting station having its raw data stored together with the various adjustments, and the reasons and justification for each change to that station’s reports recorded and signed off by the station managers and the group doing the adjusting, whether Hadley Centre or NASA managers. There are not that many reporting stations, and I am sure the observers at them would like to be told that someone in East Anglia or New York has decided they cannot observe correctly. As it is, the documentation standard is what you would expect from an undergraduate project with ‘Harry Readme’ additions. This is frankly amateur work, and that is being unfair to many amateur software engineers.
The world industry and taxation system is being driven by this crowd of amateurs; seven million German homes are in energy poverty, thousands of old people die a month in UK of cold in energy poverty; entire African nations are being prevented from obtaining electricity. Is it really too much to ask these PhDs to join the real world and see if they can get ISO-9000 series quality management systems in place and become accredited and openly document what they are doing?

DirkH
Reply to  Werner Brozek
October 6, 2014 7:39 am

Steven Mosher
October 5, 2014 at 8:14 pm
“if the changes went the other way you would agree with them”
If the changes were distributed in both directions I would seriously consider the possibility that the people in the climate science institutes could actually be scientists.

Reply to  Werner Brozek
October 6, 2014 9:59 am

Because a little here and a little there all add up.

Auto
Reply to  Werner Brozek
October 6, 2014 2:14 pm

ISO 9001 –
Say what you do.
Do what you say.
And Prove it.
Do, please, check the ‘Scope’ of any ISO 9001 [or later standards: 14000, 22000, 25000, etc.] certification.
The Scope is what is actually assessed, by trained external – and generally independent – assessors.
I have been a Quality Assessor for an organisation that has, still, the 0001 number in the UK, from UKAS.
Auto

MikeB
Reply to  rgbatduke
October 5, 2014 3:31 pm

You must be careful when comparing GISS and HADCRUT temperature data. Remember, these are ‘anomalies’, not absolute temperatures. GISS and Hadley CRU use different baselines. GISS shows anomalies from a 1951 to 1980 average whilst Hadley uses a baseline of 1961 to 1990. Since the 1951-1980 period was cooler than 1961-1990 (by 0.236 degrees?), GISS should always be higher.
This doesn’t excuse their intentional fiddling of the data of course. I try to avoid using GISS but I notice that their dataset is the favourite for Wikipedia, The Guardian, the BBC etc. and all the headline claims of ‘warmest year ever’ seem attributable to GISS ‘adjustments’.
The satellite measurements use a different baseline again, I think RSS is (1979- 1999) and Roy Spencer presents his graphs with respect to a 1981-2010 baseline.
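MikeB's baseline caveat can be made concrete: to put two anomaly series on a common footing, you subtract from one series its own mean over the other's base period. A minimal sketch with made-up numbers (`rebaseline` is a hypothetical helper; the real-world analogue of the computed offset would be the ~0.236 degrees MikeB mentions):

```python
def rebaseline(years, anoms, base_start, base_end):
    """Re-express an anomaly series relative to its mean over
    base_start..base_end (inclusive). years and anoms are parallel lists."""
    base = [a for y, a in zip(years, anoms) if base_start <= y <= base_end]
    offset = sum(base) / len(base)
    return [a - offset for a in anoms]

# Toy GISS-style series, nominally on a 1951-1980 base:
years = list(range(1951, 2001))
anoms = [round(-0.1 + 0.012 * i, 3) for i in range(50)]
# Shift onto a HadCRUT-style 1961-1990 base:
shifted = rebaseline(years, anoms, 1961, 1990)
new_base = [a for y, a in zip(years, shifted) if 1961 <= y <= 1990]
print(abs(sum(new_base) / len(new_base)) < 1e-9)  # -> True: new base mean is zero
```

Only after this kind of re-referencing does a remaining gap between the two series indicate a genuine difference rather than a bookkeeping artifact.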

Bart
Reply to  MikeB
October 5, 2014 6:07 pm

Yes. With a slight change in baseline, GISS and HADCRUT4 do not look so very different.
Over a longer timeline, HADCRUT4 appears to have turned a corner, and may be heading downward. GISS, on the other hand, does not appear to indicate an incipient downturn. From the very latest HADCRUT4, it does appear they may be attempting to boost things back up.
I think this bears careful scrutiny in the years ahead. HADCRUT may well be trying to match GISS with these adjustments.

Reply to  rgbatduke
October 5, 2014 6:05 pm

All of the published surface temperature series are models of the surface temperature. They don’t even really measure average daily temp, just minimum and maximum, and the only one doing much of anything is minimum, which is changing regionally. But you don’t really see that when you look at an average.

Reply to  rgbatduke
October 5, 2014 8:16 pm

Huh.
The satellite data is less reliable. Ask the RSS guys.
Read the calibration documents. I posted them for Monckton.

climatereason
Editor
Reply to  Steven Mosher
October 6, 2014 12:44 am

Mosh
That also relates to the satellite sea ice and sea level data I presume?
tonyb

michael hart
Reply to  Steven Mosher
October 6, 2014 5:49 am

And the first document link you had posted at that time didn’t contain the word “reliable”. I asked you what you meant by reliable, and you told me to “comment less”. Do you expect people to go searching for which of your linked documents might (might) contain an appropriate explanation? A little less arrogance would be more helpful.

Reply to  Steven Mosher
October 6, 2014 6:27 am

Who said anything about satellite temps? Although they do seem to do a better job of following measurements.
BEST, like all the other surface averages, is a propaganda tool. If you were doing science you’d look at the transient response at the surface, you could point out that temps are not evolving the same globally, you could point out how big the holes really are in the data. There’s lots of good science to do; instead we talk about annual/decadal changes of tenths of a degree, when that tells us nothing. You guys can’t even highlight how different the sampling is for the past when comparing it to the present, which has 100s to 1,000s of times as many samples.
When you look at yesterday’s rising temp, and compare that to last night’s falling temp, there is no effect due to an increase in GHGs. None.

beng
Reply to  Steven Mosher
October 6, 2014 8:40 am

Mosh, the satellite records are less reliable than the surface records? Sir, you must be joking.

Reply to  Steven Mosher
October 6, 2014 5:06 pm

beng,
Correctomundo.
Satellites measure the globe. Most others measure the land, and if they measure the ocean it is one measurement within a huge rectangle.
But the important metric is the trend. The trend since at least 2005 [and if Ross McKitrick is right, for the past 20+ years] has been: no global warming. At all. None. A permanent pause. A humungous hiatus.
The trend has been flat for many years now. That deconstructs lots of predictions. That deconstructs all of the runaway global warming predictions, and all of the accelerating global warming predictions, and all of the continuing global warming predictions. They were all wrong.
Why should anyone listen to a group whose predictions have all turned out to be flat wrong?
Good question, no?

Shona
Reply to  Steven Mosher
October 8, 2014 2:53 pm

You mean less reliable than guys throwing canvas buckets over the gunwales?

Reply to  rgbatduke
October 6, 2014 6:42 am

“Into decline? Possibly. Maybe not. But given that the null hypothesis of unbiased global surface temperature estimation is definitely untrue, it is hard to know what is really happening to global temperatures. UAH/RSS are likely our best guide. rgb”
Good post – I agree. Use satellite data over poor quality surface data.
I do expect decline – possibly serious enough cooling to affect grain crops. Hope not.
Regards to all, Allan

Truthseeker
October 5, 2014 3:15 pm

If you want to see what is being done to the land based temperatures, just have a browse through Steve Goddard’s “Real Science” site. A recent posting shows that only the “adjustments” correlate to CO2 trends …
http://stevengoddard.wordpress.com/2014/10/05/the-definitive-data-on-the-global-warmingclimate-change-scam/
For Australian data and the “adjustments” being made, go to Ken’s Kingdom, an excellent site with clear data and analysis without any of the politics …
http://kenskingdom.wordpress.com

nankerphelge
Reply to  Truthseeker
October 6, 2014 1:11 pm

Thank you Truthseeker! The comment is made on this site (my paraphrase) that the alterations to data are “startling”, and the examples given are at least that. My fear is that the Australian results are a slavish response to the real figure-jiggers, i.e. HadCRUT, GISS, NASA and NOAA. When will this madness get exposed???

Reply to  Richard M
October 5, 2014 8:13 pm

the satellite data is continuously adjusted as well

Werner Brozek
Reply to  Steven Mosher
October 5, 2014 9:22 pm

Adjustments are sometimes necessary, but not at the dizzying pace that GISS (every month) or Hadcrut4 (every year) adjusts things.

October 5, 2014 3:39 pm

“The above raises the question as to why these adjustments were made. It would have been nice to compare apples to apples to see if Hadcrut3 would have finally broken the 1998 record.”
HADCRUT makes no adjustments.
Long ago they did; that’s why Steve Mc, Willis and I FOIA’d them.
Please see Climate Audit for the history.
It was the “value add” that CRU claimed that we wanted to investigate.
HADCRUT uses data that is now adjusted by the countries supplying the data.
For example, take Canada. GHCN has many stations for Canada; HADCRUT doesn’t use them all. Instead HADCRUT uses the ADJUSTED Canadian series. This adjustment is done by Canadian officials.
So changes in HADCRUT come from two sources
1. adding stations
2. the data OWNER changes the data.
Note, there is a lot of data recovery projects going on. Something we asked for after climategate.
In my experience, as we add new data we generally see this:
MORE warming in the modern period ( since 1950)
Cooling in the past.
This is just what I see as new series come in ( 2000 new series have just been recovered)
The southern hemisphere recovery efforts are very important and records are being read off original paper forms.
But here is the basic overview. As we improve methods (CRU – GISS – BEST – C&W),
as we ADD data, as we digitize more old data…
the past is cooler and the present is warmer.
Remember: the versions we have of the past are estimates, based on data and methods. Every improvement to data and methods shows the same thing:
cooling past and warming present

bones
Reply to  Steven Mosher
October 5, 2014 5:28 pm

We who lived through the 1930s fail to believe that it was cooler then. When all of your data corrections go one way you ought to be asking why. It is hard work to eliminate confirmation bias. That seems to have escaped climate science entirely.

Reply to  bones
October 5, 2014 8:04 pm

Doesn’t matter what you believe.
Next, we don’t do corrections.
Third, folks who do corrections have corrections in both directions.

Gary Pearse
Reply to  Steven Mosher
October 5, 2014 5:47 pm

By present, do you mean last few years? last decade? last two decades?

Reply to  Gary Pearse
October 5, 2014 8:07 pm

In general, post 1950. Most of the new data series would be high latitude, previously undersampled. Now that boxes of old data are being digitized (many skeptics begged for this to happen), the post-1950 period gets warmer. Not by much, tiny bits,
just enough to set off conspiratorial thinking.

Werner Brozek
Reply to  Gary Pearse
October 5, 2014 9:33 pm

just enough to set off conspiratorial thinking
With error bars of the order of 0.1 C, is it worth it to add 0.005 C to a certain year? Climategate sharpened our thinking.

scf
Reply to  Steven Mosher
October 5, 2014 5:54 pm

You have failed to explain why the new data you find is always different from the data that previously existed, and it always differs in the same directions.
You just say that it happens. We know that for this to happen by chance is nearly impossible, as shown by some basic probability calculations in the article.
You just state that the data we know about looks one way, and the data we don’t know about looks another way, and in fact as more data enters the first category, the data in the second category differs even more. And that’s just the way it is… because.

Reply to  scf
October 5, 2014 6:13 pm

We need to determine whether this is “manufactured” data from infilling (Australian BoM is the ‘world leader’ with this form of the fraud) or is this genuine original data … given the reduction in the weather stations it would be difficult to believe that it is original data.

Reply to  scf
October 5, 2014 8:12 pm

never said
1. That I found the data
2. That it ALWAYS is different
It’s not impossible, because it happens.
Now the amounts are small, but GENERALLY speaking if you find 2000 new records the overall NET effect, if you see one, will be to nudge the current up a bit and nudge the past down (by less).
Now of course the next 2000 may go the other way.
Given that the northern latitudes are undersampled, however, we can predict that as more northern records come in or get discovered, the northern records will warm. Just a bit. Nothing science-changing. But it does drive skeptics nuts.

Reply to  scf
October 5, 2014 8:13 pm

Now of course the next 2000 may go the other way.
But that almost never happens.

tty
Reply to  scf
October 6, 2014 3:21 am

“Given that the Northern latitude are under sampled.. ”
Actually it is the southern hemisphere that is vastly undersampled, not the northern.

David A
Reply to  Steven Mosher
October 6, 2014 4:08 am

Steve, let us not overcomplicate things. Please explain just this one station, and the adjustments made. If you can explain one station’s adjustments, then you may be able to begin to explain more. Let us start with this station…. http://stevengoddard.wordpress.com/2014/03/01/spectacular-data-tampering-from-giss-in-iceland/

Ian W
Reply to  David A
October 6, 2014 8:05 am

Steve could then go on to explain Darwin, and the other examples from BOM in Australia where cooling trends at a station become warming trends.
All this suspicion is due to the lack of documentation that a QMS would require.

October 5, 2014 3:42 pm

Thanks, Werner Brozek, Walter Dnes, Just The Facts. Very good article.

Gary Pearse
October 5, 2014 3:48 pm

This is an inbred bunch we are dealing with – joined at the hip for all I know. It was just the other day that the keeper of the RSS data was pooh-poohing the significance of the pause (Dr. Mears?), and I remarked that this is a technique for preparing us for some changes to this tormenting ‘pause’ to undo it. Lo and behold, HadCRUT is now jacking the latest readings upward. We are going to get some unprecedented ‘highs’ to be sure. I think it a good strategy for someone to alert the Daily Mail in the UK (which identified the pause as an unspeakable thing among the faithful a couple of years ago) to preempt Hadley CRU with this suspicion; also the WSJ, etc.

October 5, 2014 4:00 pm

HADCRUT was just driving the car, Steven?
Good one.
Not responsible for using adjusted data then. What a compelling point.
Not up to your usual stated ethics, is it?
Join Nick in the line of apologists; except he does not apologise for adjusting comments, so you may be first in line instead.

Brent Hargreaves
Reply to  Angech
October 5, 2014 5:50 pm

Steven writes “the past is cooler and the present warmer”. I would add to that “and the past is getting cooler”. GISS is adjusting downward the historical record. Paul Homewood and I have catalogued this brazen fiddling in the Australian and Arctic records. Iceland a century ago is now 1.4C colder than the original pen-and-ink records. Any more of this and the fishing boats at the turn of the 19th century will be frozen in port. /sarc

Reply to  Brent Hargreaves
October 5, 2014 8:03 pm

GISS doesn’t adjust the past down.
Look at their code. Look at the data they bring in.
The ANSWER given by the new data fools you into thinking they are adjusting. But they only do one adjustment, and it has very little effect.

Reply to  Brent Hargreaves
October 5, 2014 8:09 pm

GISS doesn’t adjust the record??
http://oi54.tinypic.com/fylq2w.jpg

Nick Stokes
Reply to  Brent Hargreaves
October 5, 2014 8:39 pm

This is not adjustment. They are different things. The last is a land/ocean index, dominated by SST. The other two are land data only. The first is not Gistemp, but from a paper by Hansen et al, derived, as they say, from a few hundred stations, mainly NH. Stuff improves.

David A
Reply to  Brent Hargreaves
October 6, 2014 10:58 am

Nick, neither you nor Steve can explain one adjustment. Try that before you bite off the entire convoluted data set. http://stevengoddard.wordpress.com/2014/03/01/spectacular-data-tampering-from-giss-in-iceland/

ossqss
October 5, 2014 5:04 pm

The pattern of assimilation is evident.

thingadonta
October 5, 2014 5:35 pm

“The Met Office Hadley Centre has the highest concentration of absolutely outstanding people who do absolutely outstanding work, spanning the breadth of modelling, attribution, and data analysis, of anywhere in the world.” Dr Susan Solomon, Co Chair IPCC AR4 WGI
The Met Office Hadley Centre receives the most absolutely overblown hype of anywhere in the world.
One thing never changes with the public service: they never fail to say how fantastic they all are, and they always do this to cover up the fact that they are not.

larrygeary
October 5, 2014 5:44 pm

“What did they learn in 2014 about the last 16 years that they did not know in 2013?”
That the public is no longer buying their story. So they fudge the data to raise the alarm even higher.

TRM
October 5, 2014 6:02 pm

So what happens to the adjusted land datasets for the USA when they are compared to the US CRN for the last 10 years? That is the amount of time that the CRN has been active. I would like to see a comparison.

October 5, 2014 6:35 pm

Steven Mosher;
In my experience as we add New data we generally see this.
MORE warming in the modern period ( since 1950)
Cooling in the past.

>>>>>>>>>>>>>>>>>
Mosher, can you provide a logical, plausible explanation as to why this should be? Why should “new” data from the past be on average cooler than the old data from the past? Why should “new” data from the modern period be on average warmer than old data from the same period?

Reply to  davidmhoffer
October 5, 2014 8:01 pm

Yes. the new data comes from areas that were undersampled.
We always worried that undersampling was biasing the record warm. remember?
Nobody had an issue when I argued that missing data could bias the record warm.
Now the data comes in.. Oops, the old record was biased cold.
So people wanted more data now they got it. read it and weep.
In particular the warming in the present comes from more samples at higher latitudes.
The cooler past I haven’t studied as much.. there is tons more data coming.. maybe the next batch will swing .05 the other way.

u.k.(us)
Reply to  Steven Mosher
October 5, 2014 8:12 pm

“So people wanted more data now they got it. read it and weep.”
====
Really ?

Werner Brozek
Reply to  Steven Mosher
October 5, 2014 10:18 pm

In particular the warming in the present comes from more samples at higher latitudes.
Due to the different ways that Hadcrut and GISS treat areas with few thermometers, I will concede that this could apply to Hadcrut, but not to GISS.
However, if data comes in “from areas that were undersampled”, it seems odd that new data from 1998 was still being found in 2014 that had not yet been found in 2013.

Hawkward
Reply to  Steven Mosher
October 6, 2014 7:16 am

I’m not familiar with this process. Where did new data on past temperatures come from, and how was it determined that it was biased cold?

DirkH
Reply to  Steven Mosher
October 6, 2014 7:43 am

Steven Mosher
October 5, 2014 at 8:01 pm
“Yes. the new data comes from areas that were undersampled.
We always worried that undersampling was biasing the record warm. remember?
Nobody had an issue when I argued that missing data could bias the record warm.
Now the data comes in.. Oops, the old record was biased cold.”
That means that you and the other jokers have an even bigger problem explaining away the record sea ice and the total failure of the climate models that predicted that Global Warming must be strongest at the poles.
You have set your trap, now wiggle out of it alone.

mpainter
Reply to  Steven Mosher
October 6, 2014 8:00 am

Mosher,
You, with your BEST project, have now become one of the data keepers. Your project has been shown to use data uncritically and erroneously. You are in no position to belittle those who question the “Pinocchio’s Nose” effect of Hadcrut or any other data set. In this climate game, you find what you look for, and scientists at the government level are busy looking.

David A
Reply to  Steven Mosher
October 6, 2014 11:00 am

Other than the debatable TOB adjustment, it is much easier to get false warm readings. Still waiting for you to explain one station before you correct the entire data set.
http://stevengoddard.wordpress.com/2014/03/01/spectacular-data-tampering-from-giss-in-iceland/

Reply to  Steven Mosher
October 7, 2014 4:31 pm

“Yes. the new data comes from areas that were undersampled……”
I thought you guys did all of that normalization, homogenization and field generation so you could calculate a correct value for locations that weren’t actually measured!

Editor
October 5, 2014 6:36 pm

Meanwhile in the real world Japan had record snow last winter, Brazil had an extreme hail event, Australia had a load of snow, Eau Claire just had record early snow, the Pacific Northwest to west had early snow, Florida fell off a temperature cliff today, and ski resorts are open in Europe AND Anzus at the same time… Normal for a cool cycle, not for record heat “ever”.

Martin
Reply to  E.M.Smith
October 5, 2014 9:23 pm

Australia had an ok 2014 ski season, nothing special. The snow depth was 29.3 cm below the 61 year average.
Check out the ski resort snow cams and you can see it’s mostly all melted now.

nankerphelge
Reply to  Martin
October 5, 2014 10:15 pm

Martin I am a skier, a Falls Creek (Australia) veteran of over 60 years and I can tell you that this was the best season and snow for at least 20 years!!!!

Mick
Reply to  E.M.Smith
October 6, 2014 9:22 am

Good thing. I almost forgot what snow looked like.
/sarc

nankerphelge
October 5, 2014 7:40 pm

I still find it amazing that these “scientists” still feel free to make adjustments at will, supposedly without scrutiny. I have been writing to Greg Hunt (Australian Minister of the Environment) regularly trying to expose this (well, I can only call it a fraud), but can’t even get a reply. Surely the jungle drums are being heard by now. This is not an easy fight, that is for sure!

Martin
Reply to  nankerphelge
October 6, 2014 12:28 am

Nankerphelge – Falls had some good dumps of powder. The season started late though and looking at the snow cams it’s all melted away now so the depth wasn’t anything special.
http://ski.com.au/snowcams/australia/vic/fallscreek/index.html

nankerphelge
Reply to  Martin
October 6, 2014 1:32 am

I am not arguing that, Martin, but what I should have said was that this fantastic snow was not meant to happen again, and yet it did, some 20 years after we were warned that it would not! A personal opinion, sure, but opinions are allowed from minions, and any scientist who allows their opinion to alter data is not a scientist.

george e. smith
October 5, 2014 7:42 pm

My conclusion from this exposé is that hadcru needs to buy a new set of darts, with decent quality flukes.
Anyone who needs 16 or 17 guesses as to what the climate is or is not should take up lawn bowls or croquet.
Two sets of data is infinitely worse than having one set of data.
Each set is observational proof that the other set is total garbage.
Taking differences is tantamount to differentiation, and it is well known that differentiation ALWAYS increases the noise.
If hadcru were a medical clinic, I would just take two aspirin and forget about going in for a checkup.
Also, in my book, altering already published data in any way is tantamount to professional fraud.
If you have discovered a whizzbang new method of reading a thermometer, simply record that fact, AND START A TOTALLY NEW DATA SET.
We are told that the hadcru climate data set is the longest continuously recorded set of temperatures on this planet, going back to 1852 or whatever.
Balderdash! Or BS. The longevity of their continuous data set only goes back to the date that they started using a new calculation method.
They clearly have no continuity of record at all.
I have in the past seen various graphs of TSI taken from this or that satellite, which collectively span something like three sunspot cycles, or about 30-odd years. But none of them covers that whole 30 years, and the three show clear offsets between them. That relates (apparently) to the simple fact that satellites have finite lifetimes, and better measuring equipment comes along to improve the data.
But you don’t see any idiots going back and trying to “amend” those earlier satellites’ data to “extend” their continuous unbroken record of data.
Each can be seen on its own as a valid record of what was measurable in those days, with that technology. There’s no need to try and “correct” their relative offsets from each other, because we have no credible basis for believing any one of them over the other. The newer machinery MIGHT have produced different numbers back then, if it had been available to use. But it wasn’t, and it might not have given different numbers, because that was a different time.
So for me, hadcru has zero credibility. I’m not going to choose sides on a hexadecagon of differences.
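The quantitative claim in the comment above (that differencing amplifies noise) can be checked directly: for independent errors, var(a − b) = var(a) + var(b), so subtracting one noisy series from another roughly doubles the noise variance. A minimal sketch in Python, using synthetic unit-variance noise rather than any actual temperature data:

```python
import random
import statistics

# Two independent "measurement" series sharing no signal, each with
# unit-variance Gaussian noise.
random.seed(1)
a = [random.gauss(0, 1) for _ in range(100_000)]
b = [random.gauss(0, 1) for _ in range(100_000)]

# Their difference: independent variances add, so the noise variance of
# the differenced series is close to 2.0, double either input's.
diff = [x - y for x, y in zip(a, b)]
print(round(statistics.variance(diff), 2))  # close to 2.0
```

The same logic is why comparing two uncertain datasets against each other yields a series noisier than either one alone.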

Reply to  george e. smith
October 5, 2014 7:55 pm

“We are told that the hadcru climate data set is the longest continuously recorded set of temperatures on this planet, going back to 1852 or whatever.”
Huh? who told you that?

October 5, 2014 10:02 pm

Steven Mosher;
Now the data comes in.. Oops, the old record was biased cold.
So people wanted more data now they got it. read it and weep.
In particular the warming in the present comes from more samples at higher latitudes.

My recollection Mr. Mosher, is that during the debate about station drop out, you insisted that it had no effect. Now we have the opposite of station drop out, and you are insisting that the effect is legit. I just don’t see how you can have it both ways.
But it doesn’t much matter. The fact that we are arguing over minuscule trends so small that we can’t even be sure they aren’t an artifact of sampling method or instrument error demonstrates that sensitivity is so small we can’t find it even when we are looking for it exclusively. Since pre-industrial, we’ve added 40% of one doubling of CO2 to the atmosphere, which, given the logarithmic nature of CO2, amounts to about 50% of one doubling in terms of forcing, and we still can’t say for certain that we’ve measured a warming signal or not.
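The arithmetic in that last paragraph follows from the logarithmic forcing approximation ΔF ≈ 5.35 · ln(C/C₀) W/m²; the 5.35 coefficient is the commonly cited value, assumed here only for illustration:

```python
import math

# Logarithmic CO2 forcing approximation: dF = 5.35 * ln(C/C0) W/m^2.
ratio = 1.40                      # a 40% rise in concentration
forcing = 5.35 * math.log(ratio)  # forcing from that 40% rise
doubling = 5.35 * math.log(2.0)   # forcing from a full doubling
print(round(forcing / doubling, 2))  # ~0.49: roughly half a doubling
```

Note the coefficient cancels in the ratio, so the "about 50% of a doubling" figure depends only on the logarithmic form, not on the 5.35 value.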

mebbe
Reply to  davidmhoffer
October 5, 2014 10:09 pm

Their website???

mebbe
Reply to  mebbe
October 5, 2014 10:11 pm

Sorry, that was for mosher

October 5, 2014 11:12 pm

Why is there so much adjustment of measurements in temperature?

Geckko
October 6, 2014 1:18 am

A neutral prior would be that as we collect new data, or find previously unknown errors in the data, the effect on the measured long-term trend should be zero, i.e. the expected value of the mean of the changes should be zero. It strikes me as a Bernoulli distribution with mean 0.5; as I said, a reasonable prior. If so, such an extreme number of changes that increase the trend, rather than reduce it, would be highly, highly unlikely.
Mosher seems to be very knowledgeable about all the changes; he seems adamant that there are no biases involved. Presumably he has access to a full list of changes, their source and their effect.
Accompanying that, maybe he can present a strong case for an alternative prior, one that suggests that there is naturally a much higher probability that, as we learn more, we will have to revise the data in such a way that the trend increases.
It seems so far his argument is: “The overwhelming majority of new discoveries makes the trend greater. This is expected because I said so.”
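Geckko’s neutral prior amounts to a sign test: if each revision were equally likely to raise or lower the trend, the number of upward revisions out of n would follow a Binomial(n, 0.5) distribution. A sketch with purely illustrative counts (15 of 16 is a made-up example, not a tally from any actual dataset):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """One-sided probability of k or more 'upward' revisions out of n,
    when each revision is independently up or down with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Under the neutral prior, 15 or more of 16 revisions going the same way
# has probability (16 + 1) / 2**16, i.e. about 0.026%.
print(p_at_least(15, 16))
```

Running an actual version of this test would of course require a documented list of revisions and their signs, which is exactly what the comment asks for.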

October 6, 2014 2:21 am

The changes from HadCRUT4.2 to HadCRUT4.3 arise from changing the marine component (the sea surface temperature anomalies) from HadSST.3.1.0.0 to HadSST.3.1.1.0 and the land component (the near-surface air temperature anomalies over land) from CRUTEM.4.2.0.0 to CRUTEM.4.3.0.0.
The HadSST changes do not alter the temperature anomalies, just the estimate of the uncertainties:
http://www.metoffice.gov.uk/hadobs/hadsst3/data/HadSST.3.1.1.0_release_notes.html
The changes in the HadCRUT4 temperatures therefore arise because of changes in the land component (CRUTEM4) which are described here:
http://www.metoffice.gov.uk/hadobs/crutem4/data/CRUTEM.4.3.0.0_release_notes.html
Principal changes are the addition of new station data (e.g. 10 new series from Spain, 250 from China, etc.), extension of existing station data to include more recent values (e.g. 518 series from Russia were updated, etc.), or the replacement of station data with series that have been the subject of homogeneity analysis by other initiatives.
The effect of these various changes is shown graphically on that page, for global, hemispheric and various continental-scale averages.
The effect on the overall trends of the new version looks like it would slightly weaken the warming trend in N. America, little overall effect in S. America, and strengthen the warming trend in Europe, Asia and Australasia. The Africa series also appears to warm, but actually this is because the geographical domain for that graph includes a little of southern Europe and it is the changes there that influence this series rather than changes to the African database.

Werner Brozek
Reply to  Tim Osborn (@TimOsbornClim)
October 6, 2014 7:59 am

The changes in the HadCRUT4 temperatures therefore arise because of changes in the land component (CRUTEM4) which are described here:
http://www.metoffice.gov.uk/hadobs/crutem4/data/CRUTEM.4.3.0.0_release_notes.html
Principal changes are the addition of new station data (e.g. 10 new series from Spain, 250 from China, etc.), extension of existing station data to include more recent values (e.g. 518 series from Russia were updated, etc.), or the replacement of station data with series that have been the subject of homogeneity analysis by other initiatives.

Thank you very much for that! What caught my eye was this list of countries that had changes:
“Spain – homogenized long climate series
China – two subsets of homogenized climate series, 18 long series and 380 series beginning after 1950
USA – USHCNv2.5 updates
Russia – Russian Federation updates
Australia – updates to the ‘ACORN’ climate series and corrections to remote island series
Norway – additions to the homogenized climate series
Sweden – single series addition for an area not currently represented
Falkland Islands – the addition of a long climate series
India – the addition of some long series to enhance station spatial/temporal-density
Chile – series to enhance station temporal and spatial density”
I was given the impression above that it was the (warming) high latitudes where new readings were found that caused the increase. But if the above countries are mainly responsible, then it seems more odd that we just get increases over the last 16 years.

mpainter
Reply to  Werner Brozek
October 6, 2014 10:56 am

Yes, this gives the lie to the claim that the adjustments are from the higher latitudes (said to be underrepresented).
So, more b—- from Mr. Mosher.

Solomon Green
Reply to  justthefactswuwt
October 7, 2014 10:03 am

Tim Osborn must be thanked for venturing onto this site to provide more information, but justthefactswuwt has asked an interesting question to which, as yet, Mr. Osborn has not replied. When I toss a coin time after time and it invariably comes up heads, I draw the conclusion that the coin is biased. If the Met Office housed real scientists, they would have already investigated the answer to justthefactswuwt’s question before it was even posed, and be ready with an answer.

Reply to  justthefactswuwt
October 7, 2014 2:30 pm

Werner, justthefactswuwt,
For the current update, the graphs at the link I gave (release notes for CRUTEM.4.3.0.0) show that the region with the biggest increase in warming trend due to this update is the “Asian” region. Direct link to the graphic here: http://www.metoffice.gov.uk/hadobs/crutem4/data/update_diagnostics/asian.gif.
This could be partly a high latitude effect (perhaps particularly the increase in recent years) due to the updated Russian station series. But also the big changes in the China station database that were obtained from the (Cao et al., 2013, DOI: 10.1002/jgrd.50615) and Xu et al. (2013, doi:10.1002/jgrd.50791) studies listed in the release notes, which is clearly not a high latitude effect.
As to whether, or indeed why, our updates always strengthen the warming… overall they often have this effect and the reason is probably that the regions with most missing data have warmed more than the global average warming. This includes the Arctic region, but also many land regions. This is because, on average, land regions have warmed more than the ocean regions and so filling in gaps over the land with additional data tends to raise the global-mean warming.
Other effects arise too (e.g. including series where warm readings during 19th century summers have been adjusted to remove the bias experienced when thermometers were exposed on north facing walls instead of in screens).
However I also note that not all updates produce enhanced warming. The previous one from CRUTEM.4.1.1.0 to CRUTEM.4.2.0.0 resulted in slight net warming of both ends of the series, with slightly more warming at the beginning. Release notes and comparison graphs for that update were given at the time here:
http://www.metoffice.gov.uk/hadobs/crutem4/data/previous_versions/4.2.0.0/CRUTEM.4.2.0.0_release_notes.html

Werner Brozek
Reply to  justthefactswuwt
October 7, 2014 3:58 pm

Thank you very much for your response Mr. Osborn.
If my understanding of the differences between how Hadcrut and GISS treat areas with no thermometers is correct, would it be fair to say that GISS would show no change on average with the same additional information?
In your first response, it says in the link: “Australia – updates to the ‘ACORN’ climate series and corrections to remote island series”. However Jo Nova writes here: http://joannenova.com.au/2014/10/australian-summer-maximums-warmed-by-200/
“Ken Stewart points out that adjustments grossly exaggerate monthly and seasonal warming, and that anyone analyzing national data trends quickly gets into 2 degrees of quicksand. He asks: What was the national summer maximum in 1926?  AWAP says 35.9C.  Acorn says 33.5C.  Which data set is to be believed?”
May we have a response to this? Thank you!

Stephen Richards
October 6, 2014 2:38 am

Perhaps the most telling exposé of this data tampering is that anecdotal eyewitness evidence says the glaciers were retreating during a period when the adjusted temps show the coldest anomalies of the 20th century, while during the warmest anomalies of the 21st century the Antarctic and Arctic ice is increasing.
h/t steven Goddard.
Mosher, care to explain ?

Stephen Richards
October 6, 2014 2:39 am

Steven Mosher
October 5, 2014 at 8:01 pm
Yes. the new data comes from areas that were undersampled.
They still are undersampled, except by satellite measures, and even there RSS does not sample above 82.5° N and S.

Werner Brozek
Reply to  Stephen Richards
October 6, 2014 8:02 am

RSS goes to 82.5 degrees north
 
With the circumference of Earth being about 40,000 km, the distance from 82.5° to 90° would be 7.5/90 x 10,000 = 830 km. So the area in the north NOT covered is πr² = 2.16 x 10^6 km². Dividing this by the area of the Earth, 5.1 x 10^8 km², we get about 0.42% NOT covered by RSS in the Arctic.
And since it is mostly the north polar area that seems to be mentioned, that is only 0.42%. It seems as if the Antarctic has gotten colder. Has that been accounted for in any way?
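Werner’s flat-disc estimate can be cross-checked against the exact spherical-cap formula: the cap poleward of latitude φ has area 2πR²(1 − sin φ), so its share of the sphere’s 4πR² surface is (1 − sin φ)/2. A quick check for 82.5°N:

```python
import math

# Fraction of Earth's surface poleward of 82.5 degrees N:
# cap area 2*pi*R^2*(1 - sin(lat)) over sphere area 4*pi*R^2.
lat = math.radians(82.5)
fraction = (1.0 - math.sin(lat)) / 2.0
print(f"{fraction:.4f}")  # ~0.0043, i.e. about 0.43% of the globe
```

The exact figure of about 0.43% agrees closely with the 0.42% flat-disc approximation above, so the conclusion that RSS misses under half a percent of the globe at each pole stands either way.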

A C Osborn
October 6, 2014 3:57 am

Let me re-adjust Mr Mosher’s quote about BEST to include all the non-satellite data sets:
“If you want to know what the actual temperature was, look at the raw data. If you want to know what we think it should have been (want it to have been), look at the final products.”
i.e. it doesn’t fit our “world view” of climate, so we have found lots of excuses to adjust it.
What they fail to realise is that the actual real world still has those results, and plenty of backup anecdotal evidence to show just how wrong they are. That also includes current temperatures; anyone over 50 knows it is nowhere near as hot now as it has been in the past.
Nature is and will “out” the lot of them.
Bring it on, along with the trials.

Reply to  A C Osborn
October 6, 2014 4:04 am

Being wrong is not criminal.
Lysenko thought he was right.

DirkH
Reply to  M Courtney
October 6, 2014 7:47 am

Killing someone mistakenly is not much of an excuse.
“From 1934 to 1940, under Lysenko’s admonitions and with Stalin’s approval, many geneticists were executed (including Isaak Agol, Solomon Levit, Grigorii …”
http://en.wikipedia.org/wiki/Lysenkoism

Reply to  M Courtney
October 6, 2014 7:53 am

I was trying to avoid being Lysenko rather than avoiding being the victim.
Let’s not use the law to prosecute scientific errors. Deceptions? Yes, of course.
But not errors.

Mick
Reply to  A C Osborn
October 6, 2014 10:17 am

My 90-year-old grandmother says that the 40s were unbearably hot. They used to have to sleep outdoors to get relief at night. She grew up at English Bay, at the beach.
Personally, I have not had to use air conditioning for the last 4 summers.
I live in the valley near Vancouver BC, where it can get 3-5 degrees C warmer than the coastal cities.
When these old timers are gone, their memories of extreme weather, droughts and record heat waves will also be gone. This is our empirical data: first-hand accounts of what it was like back in the 30s and 40s.

David A
October 6, 2014 5:08 am

Steven Mosher cannot even explain the adjustments to ONE station here:
http://stevengoddard.wordpress.com/2014/03/01/spectacular-data-tampering-from-giss-in-iceland/
Please look at August 98 vs. August 2014. Clearly 98 was far warmer: http://stevengoddard.wordpress.com/2014/09/18/us-government-agencies-just-cant-stop-lying/
For a good correlation with Northern Hemisphere T, compare it to the AMO.
Peterson 2003 claimed there was no difference between rural and city stations. There was: http://climateaudit.org/2007/08/04/1859/
I am SURPRISED WUWT has not examined a paper which tremendously expands the error bars of the surface record, especially pre-satellite: http://www.eike-klima-energie.eu/uploads/media/E___E_algorithm_error_07-Limburg.pdf
UHI doubled in this study: http://onlinelibrary.wiley.com/doi/10.1002/joc.4087/abstract

Alx
October 6, 2014 5:30 am

“The system simulates both the human driven climate change and the evolution of slow natural variation already locked into the system.”

How do you “lock” natural variation? I guess you can extrapolate out from some past trends and pretend you have mastered how the entire global ecosystem works. But underlying this is a pathological need to treat all of nature as a photo, locked in time for perpetuity, with humanity as something that exists separately.
To add to the amount of delusion that can possibly fit into one sentence, there is the pathological indifference to the difference between simulated human-driven climate change and proven human-driven climate change.
To sum up climate science to date: simulations supersede reality, and the entire global ecosystem is 100% static and predictable.

Ima
October 6, 2014 5:44 am

Steven Mosher – Do you not understand that generally the only people who will accept the current historical temperature adjustment processes are those who want to believe in climate change for social reasons other than climate change itself?
This process, accurate or not (and until the laws of human nature are repealed, I would venture not), is destroying the credibility of climate science. You are never going to be able to convince the general public that historical temperature readings were “false” and therefore had to be adjusted downward.
When scientific jargon conflicts with common sense, common sense prevails.

DavidinNYC
October 6, 2014 6:44 am

Question for Mr Mosher
My understanding from your previous posts about the methods of at least BEST is that you construct a temperature field that has three spatial dimensions and a time dimension. In order to generate infilled data, some interpolation method is applied along all dimensions. As a test of the validity of this method, random real data points are withheld to determine whether or not infilling results in the proper prediction of those withheld points.
And supposedly the interpolation algorithm is such that the holdouts are correctly predicted. So in some sense your initial data space is oversampled, in that you can accurately predict the entirety of the temperature field with fewer than the actual number of real data points recorded.
It appears now that ‘new data’ from the past is arriving that for some reason appears to contradict those results.
So two questions:
1. Do you use the holdout technique against new data as it arrives from the future?
2. Does that data conform to your modeling such that holdouts are properly predicted?
Assuming the answer to 2 is yes, that would seem to imply that the new data arriving from the past (which your model is not properly predicting) is in some way systematically incorrect.
If the answer to 2 is no, it would imply that historical infilling is a post hoc curve fit and that in fact the historical data space is under- rather than oversampled.
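The holdout test described above can be sketched in miniature. This toy uses synthetic “stations” and simple inverse-distance weighting rather than BEST’s actual kriging, so it illustrates only the procedure: withhold some stations, infill them from the rest, and score the prediction error.

```python
import random

def idw(x, y, known, power=2):
    """Inverse-distance-weighted estimate at (x, y) from known points."""
    num = den = 0.0
    for (kx, ky, v) in known:
        d2 = (x - kx) ** 2 + (y - ky) ** 2
        if d2 == 0:
            return v  # exact hit: return the observed value
        w = d2 ** (-power / 2)
        num += w * v
        den += w
    return num / den

random.seed(0)
# Synthetic "stations": temperature is a smooth function of position
# plus small observational noise.
stations = [(random.random(), random.random()) for _ in range(200)]
data = [(x, y, 10 + 5 * x - 3 * y + random.gauss(0, 0.2)) for x, y in stations]

# Withhold 40 stations, infill them from the remaining 160, and score.
holdout, training = data[:40], data[40:]
errors = [abs(idw(x, y, training) - v) for x, y, v in holdout]
print(f"mean holdout error: {sum(errors) / len(errors):.3f}")
```

If the interpolator is adequate, the holdout error stays on the order of the field’s noise; a large error would signal exactly the post hoc curve-fit scenario raised in question 2.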

John Mann
October 6, 2014 7:18 am

“The Met Office Hadley Center has pioneered a new system to predict the climate a decade ahead.”
Really?
I always thought there were few professional climatologists in private practice because there would be nothing to say day after day, year after year, decade after decade. How would you make a living outside government or academia? Your whole life would be spent looking backward for evidence of change in the fossil record. A changed climate among the many around the earth manifests itself in a new and stable fauna regime. Climate is defined by vegetation. And the changes from one climate to that of a neighboring climate might be only very subtle. The drive from Baltimore to Harrisburg is a good example of driving from one climate to another, and I wonder how many notice the difference.

October 6, 2014 9:02 am

I’m sorry if this offends anyone, but since I began reading about the subject of climate change in the late 1990s, I have always wondered why anyone involved takes surface measurements seriously.

The starting point in the late 1800s is most likely very few thermometers, far from “global”, and thermometers of that era (that have survived) consistently tend to read low.

That means (to me) the starting point for measurements is most likely inaccurate (too low), and the claimed warming before the era of weather satellites may be nothing more than measurement error.

Given the effects of economic growth (UHI) on weather station readings, and their typically poor siting even in the US (brilliantly uncovered in the original 2009 white paper), why does anyone here care about the surface data?

I can understand studying anecdotal evidence of the unusual heat of the 1920-1940 era, but can anyone take surface measurements of the oceans made by throwing a wood bucket over the rail of a ship seriously? And later throwing a canvas bucket over the rail of a ship? And then incoming engine cooling water? Ships mainly in Northern Hemisphere shipping lanes, not global.

And that surface data is supposed to be reliable enough to calculate the average temperature of about 70% of Earth’s surface, and is taken seriously by warmists (simply because they like what they see; they don’t care about accuracy), but I don’t understand why sensible people here would care about such inaccurate data.

Can someone explain why those surface data are taken seriously and discussed here too? I don’t understand how unreliable surface data, with unknown errors, can ever be adjusted to be accurate, useful data. So why don’t scientists simply say: “We don’t know the average temperature before weather satellites in 1979; the data are not accurate enough to be useful”?

From the accurate data I have studied, I think the ONLY conclusions I can come to about the climate are:
(1) The average temperature on Earth is always changing,
(2) The 1930s (1920 to 1940) were unusually warm, explanation unknown,
(3) There was some warming for about 22 years from 1976 to 1998, which was a short-term trend (cause uncertain, except for 1998) that has ended, and
(4) The future average temperature of Earth is completely unpredictable, meaning that people who use climate models are nothing more than “climate astrologers.”
(5) The water meter in my Michigan garage froze in February 2014 and cracked for the first time since I moved in in 1987 (I had to pay $300 for a new one); as far as I’m concerned, that anecdote is just as useful as surface measurements from the late 1800s to 1979!

David A
Reply to  Richard Greene
October 6, 2014 11:18 am

Richard, you are correct about the LARGE error bars for the past: http://www.eike-klima-energie.eu/uploads/media/E___E_algorithm_error_07-Limburg.pdf
The AMO is a good barometer of past NH T.
This is well reflected in all continuously active USHCN stations for the past almost 90 years.
If the heavily adjusted record were correct, then you would not see most of the record highs from the late 1930s and early-to-mid 1940s. Record highs cannot be adjusted, so they cannot hide this in the raw data. TOB does not apply to record highs. http://stevengoddard.wordpress.com/2014/04/13/us-summer-afternoon-temperatures-declining-over-the-past-85-years/

Werner Brozek
October 6, 2014 9:02 am

UAH Update
For version 5.5, the September anomaly was 0.185. This drops the average to 0.196 and into 8th place as a ranking for 2014 so far. The flat line still starts from January 2005 making it 9 years and 9 months long.

Bart
October 6, 2014 11:59 am

In the end, it doesn’t really matter. The temperature series are still essentially an upward linear trend superimposed with a ~60 year cycle, both of which have been in evidence since the earliest time for which we have at least semi-reliable data, i.e., before CO2 could have been forcing it. Subtract out those long term, natural patterns, and you are left with very little that could even possibly be ascribed to anthropogenic forcing.

tadchem
October 6, 2014 12:19 pm

News Flash: Dateline 1 April 2114 – “Climate Scientists Report Ice Now Melts At 5° C – Models show adjustment accounts for ice pack blocking the Bering Sea.”

bit chilly
October 6, 2014 4:30 pm

the sooner people stop giving credence to these “products” as if they were observed measurements, the better. including the governments funding them.

don penman
October 7, 2014 10:33 am

It is a strange kind of warming where the data we have been measuring remains the same but the data we had missed drags the global temperature higher.