Guest Post by Werner Brozek, Commentary By Walter Dnes, Edited by Just The Facts:

Update: Comment from Tim Osborn of the University of East Anglia (UEA), Climatic Research Unit (CRU), appended to the end of this article as it offers additional background on the recent changes to HadCRUT and HadSST.
The WoodForTrees Index (WTI) is a combination of the surface data sets HadCRUT3 and GISS and the satellite data sets RSS and UAH (version 5.5). All four must be present to produce the WTI. As can be seen in the graph above, three of the data sets go to August. However, HadCRUT3 has not been updated since May 2014; as a result, WTI has not been updated since then.
Some claim cherry picking when a particular data set, e.g. RSS, is chosen to demonstrate the length of the Pause; the WTI, being an average of four data sets, addresses this criticism. Up to May 2014, the WTI showed a very slight negative slope from January 2001. And since the anomalies have declined since then, it is very possible that the period without warming would have been 13 years and 8 months at the end of August if WTI had data from all of the data sets.
But it doesn’t, so what’s up with Hadcrut3? Here are a few quotes from years ago that may shed some light:
“The Met Office Hadley Center has pioneered a new system to predict the climate a decade ahead. The system simulates both the human driven climate change and the evolution of slow natural variation already locked into the system.”
“We are now using the system to predict changes out to 2014. By the end of this period, the global average temperature is expected to have risen by around 0.3 °C compared to 2004, and half of the years after 2009 are predicted to be hotter than the current record hot year, 1998.” Met Office Hadley Centre 2007
“The Met Office Hadley Centre has the highest concentration of absolutely outstanding people who do absolutely outstanding work, spanning the breadth of modelling, attribution, and data analysis, of anywhere in the world.” Dr Susan Solomon, Co Chair IPCC AR4 WGI
So let us see how “absolutely outstanding” the Met Office Hadley Centre’s 2007 prediction is turning out. The 2004 anomaly was 0.447. After 5 months in 2014, the average anomaly was 0.472, which is nowhere near an “around 0.3 °C” rise from 0.447. What about “half of the years after 2009 are predicted to be hotter than the current record hot year, 1998”? As of 2013, the 1998 record had not been broken on Hadcrut3. We do not know what might have happened in the remainder of 2014, however Hadcrut3 had been tracking Hadcrut4 very closely. So if we assume the same change in Hadcrut3 as occurred in Hadcrut4 for 2014 compared to 2013, we get the following results. Hadcrut4 had an anomaly of 0.492 in 2013. The average to the end of August 2014 is 0.555. The difference is 0.063. So if we add 0.063 to the Hadcrut3 anomaly of 0.457 in 2013, we get 0.52. With this anomaly after 8 months, an average anomaly of 0.604 would have been required for the remaining 4 months of 2014 to set a record. This number was beaten five times in 1998 and three other times after that. So it is safe to say Hadcrut3 had no chance of beating 1998 in 2014.
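The arithmetic above is easy to check; here is a minimal sketch. The anomaly values are those quoted in the text, and the 1998 HadCRUT3 annual record of 0.548 is the value implied by the 0.604 figure.

```python
# Sketch of the record-chase arithmetic above. All anomalies in deg C;
# values taken from (or implied by) the text.
hadcrut4_2013 = 0.492      # HadCRUT4 annual anomaly, 2013
hadcrut4_2014_8mo = 0.555  # HadCRUT4 average, Jan-Aug 2014
hadcrut3_2013 = 0.457      # HadCRUT3 annual anomaly, 2013
hadcrut3_record = 0.548    # HadCRUT3 annual record (1998), implied above

# Assume HadCRUT3 tracked HadCRUT4's change from 2013 into 2014
hadcrut3_2014_8mo = hadcrut3_2013 + (hadcrut4_2014_8mo - hadcrut4_2013)

# Average needed over Sep-Dec for the 2014 annual mean to tie 1998
needed = (12 * hadcrut3_record - 8 * hadcrut3_2014_8mo) / 4
print(f"estimated Jan-Aug 2014 HadCRUT3 average: {hadcrut3_2014_8mo:.3f}")  # 0.520
print(f"Sep-Dec average needed to tie 1998: {needed:.3f}")                  # 0.604
```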
Which leads us to HadCRUT4 and some new adjustments. The HadCRUT4 data for this update could not be drawn from HadCRUT4.2 since, as of July, that version no longer seems to exist. Instead, we have HadCRUT4.3. Neither HadCRUT4.2 nor HadSST3 had been updated for August as of October 3, and since the new HadCRUT4.3 numbers are up, I will assume that we will no longer see HadCRUT4.2. I do not know about HadSST3.
Unsurprisingly, HadCRUT4 was adjusted up, again. Walter Dnes offers the following insight with respect to the new HadCRUT4.3: the August anomaly came in at +0.669. The longest string with a negative slope now runs from November 2001 through August 2014; November 2001 (and January 2002) are just barely negative for the slope. If September’s value is +0.513 or higher, the next start months giving negative slopes are in late 2004.
Here is a graph of HadCRUT4 slopes, for all start months, over the period from that month to the latest available data:

Note that between 2007 and late 2008, and before late 2000, the slope was greater than 0.004 °C per year and thus was literally “off the graph”. The graph above, along with those for GISS, UAH5.6, RSS, and NCDC/NOAA, as well as the associated data, is available from this Google spreadsheet.
The following diagnostic plots “show comparisons of global and hemispheric time series for HadCRUT.4.3.0.0 (this version) and HadCRUT.4.2.0.0 (the previous version of HadCRUT4).” Unsurprisingly, the Met Office Hadley Centre found more warming:

It appears they found more warming during the last 18 years in the Northern Hemisphere:

Whereas in the Southern Hemisphere they apparently only found some warming that had been hiding out since World War I:

The monthly values for the new HadCRUT4.3 are available here and the new yearly averages are here.
The above raises the question as to why these adjustments were made. It would have been nice to compare apples to apples to see if Hadcrut3 would have finally broken the 1998 record. But then the apple became a banana when the new Hadcrut4 came out. Then the banana became a red pepper when Hadcrut4.2 came out, as can be seen here. Now the red pepper has become a jalapeno pepper with Hadcrut4.3. The anomaly for the first 7 months on Hadcrut4.2 averaged 0.535, which ranked in third place at the time. The first 7 months on Hadcrut4.3 averaged 0.539, and the 8-month average on the new Hadcrut4.3 is now 0.555, which would tie 2014 for first place with 2010, also at 0.555.
Why are they changing things so quickly? Do they want to take some of the heat off GISS? Are they embarrassed that Dr. McKitrick has found no statistically significant warming for 19 years and, before the ink on his report is even dry, want to prove him wrong? Are they determined, by hook or by crook, that 2014 will set a new record?
Last year, I wrote the following in comments:
“From 1997 to 2012 is 16 years. Here are the changes in thousandths of a degree with the new version of HadCRUT4 being higher than the old version in all cases. So starting with 1997, the numbers are 2, 8, 3, 3, 4, 7, 7, 7, 5, 4, 5, 5, 5, 7, 8, and 15. The 0.015 was for 2012. What are the chances that the average anomaly goes up for 16 straight years by pure chance alone if a number of new sites are discovered? Assuming a 50% chance that the anomaly could go either way, the chances of 16 straight years of rises is 1 in 2^16 or 1 in 65,536. Of course this does not prove fraud, but considering that “HadCRUT4 was introduced in March 2012”, it just begs the question why it needed a major overhaul only a year later.”
And how do you suppose the last 16 years went prior to this latest revision? Here are the last 16 years counting back from 2013. The first number is the anomaly in Hadcrut4.2 and the second number is the anomaly in Hadcrut4.3: 2013 (0.487, 0.492), 2012 (0.448, 0.467), 2011 (0.406, 0.421), 2010 (0.547, 0.555), 2009 (0.494, 0.504), 2008 (0.388, 0.394), 2007 (0.483, 0.493), 2006 (0.495, 0.505), 2005 (0.539, 0.543), 2004 (0.445, 0.448), 2003 (0.503, 0.507), 2002 (0.492, 0.495), 2001 (0.437, 0.439), 2000 (0.294, 0.294), 1999 (0.301, 0.307), and 1998 (0.531, 0.535). Do you notice something odd? There is one tie, in 2000. All the other 15 are larger. So in 32 different comparisons across the two revisions, there is not a single cooling. Unless I am mistaken, the odds of not a single cooling in 32 tries are 1 in 2^32, or about 1 in 4 x 10^9. I am not sure how the tie gets factored in, but however you look at it, incredible odds are beaten with each revision. What did they learn in 2014 about the last 16 years that they did not know in 2013?
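For anyone who wants to check these odds, here is a minimal sketch under the same fair-coin assumption used above. One simple way to handle the 2000 tie is to drop it, leaving 31 decisive comparisons.

```python
# Odds of all-upward revisions under a fair-coin (50/50) assumption.
p16 = 0.5 ** 16  # 16 straight rises (the earlier revision)
p31 = 0.5 ** 31  # 31 rises in 31 decisive comparisons (2000 tie dropped)

print(f"16 straight rises: 1 in {1 / p16:,.0f}")  # 1 in 65,536
print(f"31 straight rises: 1 in {1 / p31:,.0f}")  # 1 in 2,147,483,648
```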
In the table, you can see the statistics for Hadcrut4.2 beside those of Hadcrut4.3. Chances are that if August were available for Hadcrut4.2, it would also rank #1.
P.S. RSS for September came in at 0.206. This lowers the average to 0.252 so 2014 would rank as 7th warmest if it stayed this way. The length of no warming increases to 18 years and 1 month.
In the sections below, as in previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on several data sets. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2014 to date compares with 2013 and the warmest years and months on record so far. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.
Section 1
This analysis uses the latest month for which data is available on WoodForTrees.com (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is 4 x 10^-4 but is -4 x 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest when we say the slope is flat from a certain month. A minimal code sketch of this search follows the list below.
1. For GISS, the slope is flat since October 2004 or 9 years, 11 months. (goes to August)
2. For Hadcrut4, the slope is flat since February 2001 or 13 years, 6 months. (goes to July and may be discontinued)
3. For Hadsst3, the slope is flat since March 2009 or 5 years, 5 months. (goes to July and may be discontinued)
4. For UAH, the slope is flat since January 2005 or 9 years, 8 months. (goes to August using version 5.5)
5. For RSS, the slope is flat since October 1996 or 17 years, 11 months (goes to August).
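Here is a minimal Python sketch of that search (not WFT’s actual code): scan start months from the far past toward the present and report the earliest one from which the least-squares slope to the latest month is zero or negative.

```python
# Minimal sketch of the "flat since" search described above (not WFT's code).
import numpy as np

def flat_since(anomalies):
    """anomalies: monthly values, oldest first, ending at the latest month.
    Returns the index of the earliest start month whose least-squares
    slope to the present is zero or negative, else None."""
    for start in range(len(anomalies) - 2):
        y = np.asarray(anomalies[start:], dtype=float)
        slope = np.polyfit(np.arange(y.size), y, 1)[0]  # degrees per month
        if slope <= 0:
            return start  # the furthest month back with a flat slope
    return None
```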
The next graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping brown line at the top indicates that CO2 has steadily increased over this period.

When two things are plotted as I have done, the left axis only shows a temperature anomaly.
The actual numbers are meaningless since all slopes are essentially zero. As well, I have offset them so they are evenly spaced. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted. However WFT does not give this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 18 years, the temperatures have been flat for varying periods on various data sets.
The next graph shows the above, but this time, the actual plotted points are shown along with the slope lines and the CO2 is omitted.

Section 2
For this analysis, data was retrieved from Nick Stokes’ Trendviewer, available on his website at http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative, so a slope of 0 cannot be ruled out from the month indicated.
On several different data sets, there has been no statistically significant warming for between 16 and 21 years according to Nick’s criteria.
Dr. Ross McKitrick has also commented on these matters and has slightly different numbers for the three data sets that he analyzed. I will also give his times.
The details for several sets are below.
For UAH: Since April 1996: CI from -0.015 to 2.311
(Dr. McKitrick says the warming is not significant for 16 years on UAH.)
For RSS: Since December 1992: CI from -0.018 to 1.802
(Dr. McKitrick says the warming is not significant for 26 years on RSS.)
For Hadcrut4: Since December 1996: CI from -0.026 to 1.139
(Dr. McKitrick says the warming is not significant for 19 years on Hadcrut4.)
For Hadsst3: Since August 1994: CI from -0.014 to 1.665
For GISS: Since October 1997: CI from -0.002 to 1.249
Note that all of the above times, regardless of the source, are longer than the 15 years which NOAA deemed necessary to “create a discrepancy with the expected present-day warming rate”.
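As a rough illustration of what such a test involves, here is a naive ordinary-least-squares sketch. It is not Nick’s method: his Trendviewer adjusts the confidence interval for autocorrelation in the monthly data, which widens it considerably, and the conversion below assumes the CI values above are trends in °C per century.

```python
# Naive OLS trend with a 95% confidence interval. No autocorrelation
# correction is applied, unlike Nick Stokes' Trendviewer, so this interval
# is narrower than the ones quoted above would be.
import numpy as np

def trend_ci(anomalies):
    """anomalies: monthly values. Returns (lo, hi) of the 95% CI for the
    trend, converted to degrees per century."""
    y = np.asarray(anomalies, dtype=float)
    x = np.arange(y.size)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    sxx = ((x - x.mean()) ** 2).sum()
    se = np.sqrt((resid ** 2).sum() / (y.size - 2) / sxx)  # SE of the slope
    per_century = 1200.0  # months per century
    return (slope - 1.96 * se) * per_century, (slope + 1.96 * se) * per_century

# Warming is "not statistically significant" if the lower bound is <= 0.
```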
Section 3
This section shows data about 2014 and other information in the form of a table. The table shows the five data sources along the top, and the source row is repeated partway down so the sources remain visible at all times. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.
Down the columns are the following rows:
1. 13ra: This is the final ranking for 2013 on each data set.
2. 13a: Here I give the average anomaly for 2013.
3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and three have 1998 as the warmest year.
4. ano: This is the average of the monthly anomalies of the warmest year just above.
5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year. Note that this does not yet include records set so far in 2014 such as Hadsst3 in June.
6. ano: This is the anomaly of the month just above.
7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0.
8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.
9. sy/m: NEW: This is the years and months for row 8. Depending on when the update was last done, the months may be off by one month.
10. McK: NEW: These are Dr. Ross McKitrick’s number of years for three of the data sets.
11. Jan: This is the January 2014 anomaly for that particular data set.
12. Feb: This is the February 2014 anomaly for that particular data set, etc.
19. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months. However if the data set itself gives that average, I may use their number. Sometimes the number in the third decimal place differs slightly, presumably due to all months not having the same number of days.
20. rnk: This is the rank that each particular data set would have if the anomaly above were to remain that way for the rest of the year. It will not, but think of it as an update 40 minutes into a game. Due to different base periods, the rank is more meaningful than the average anomaly.
I do not know the future of Hadcrut4.2 and Hadsst3. They did not come in by October 3 and I think they may be discontinued since a new Hadcrut4.3 is now out which includes August. Unfortunately, Hadcrut4.3 is not on WFT. As a result, there are several gaps for Hadcrut4.2 and Hadsst3.
| Source | UAH | RSS | Hd4.2 | Hd4.3 | Sst3 | GISS |
|---|---|---|---|---|---|---|
| 1.13ra | 7th | 10th | 8th | 9th | 6th | 6th |
| 2.13a | 0.197 | 0.218 | 0.487 | 0.492 | 0.376 | 0.61 |
| 3.year | 1998 | 1998 | 2010 | 2010 | 1998 | 2010 |
| 4.ano | 0.419 | 0.55 | 0.547 | 0.555 | 0.416 | 0.67 |
| 5.mon | Apr98 | Apr98 | Jan07 | Jan07 | Jul98 | Jan07 |
| 6.ano | 0.662 | 0.857 | 0.829 | 0.835 | 0.526 | 0.93 |
| 7.y/m | 9/8 | 17/11 | 13/6 | 12/10 | 5/5 | 9/11 |
| 8.sig | Apr96 | Dec92 | Dec96 | | Aug94 | Oct97 |
| 9.sy/m | 18/6 | 21/9 | 17/9 | | 20/1 | 16/11 |
| 10.McK | 16 | 26 | 19 | |||
| Source | UAH | RSS | Hd4.2 | Hd4.3 | Sst3 | GISS |
| 11.Jan | 0.236 | 0.261 | 0.509 | 0.508 | 0.342 | 0.70 |
| 12.Feb | 0.127 | 0.162 | 0.304 | 0.305 | 0.314 | 0.45 |
| 13.Mar | 0.137 | 0.214 | 0.540 | 0.548 | 0.347 | 0.70 |
| 14.Apr | 0.184 | 0.251 | 0.643 | 0.658 | 0.478 | 0.73 |
| 15.May | 0.275 | 0.286 | 0.584 | 0.596 | 0.477 | 0.79 |
| 16.Jun | 0.279 | 0.345 | 0.620 | 0.619 | 0.563 | 0.62 |
| 17.Jul | 0.221 | 0.351 | 0.549 | 0.541 | 0.552 | 0.53 |
| 18.Aug | 0.118 | 0.193 | | 0.669 | | 0.70 |
| Source | UAH | RSS | Hd4.2 | Hd4.3 | Sst3 | GISS |
| 19.ave | 0.197 | 0.258 | 0.535 | 0.555 | 0.439 | 0.65 |
| 20.rnk | 7th | 6th | 3rd | 1st | 1st | 3rd |
If you wish to verify all of the latest anomalies, go to the following:
For UAH, version 5.5 was used since that is what WFT used.
http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.5.txt
For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt
For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.3.0.0.monthly_ns_avg.txt
For Hadsst3, see: http://www.cru.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat
For GISS, see:
http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
To see all points since January 2014 in the form of a graph, see the WFT graph below.

As you can see, all lines have been offset so they all start at the same place in January 2014. This makes it easy to compare January 2014 with the latest anomaly.
Appendix
In this section, we summarize the data for each data set separately.
RSS
The slope is flat since October 1996 or 17 years, 11 months. (goes to August)
For RSS: There is no statistically significant warming since December 1992: CI from -0.018 to 1.802.
The RSS average anomaly so far for 2014 is 0.258. This would rank it as 6th place if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2013 was 0.218 and it is ranked 10th.
UAH
The slope is flat since January 2005 or 9 years, 8 months. (goes to August using version 5.5 according to WFT)
For UAH: There is no statistically significant warming since April 1996: CI from -0.015 to 2.311. (This is using version 5.6 according to Nick’s program.)
The UAH average anomaly so far for 2014 is 0.197. This would rank it as 7th place if it stayed this way. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.662. The anomaly in 2013 was 0.197 and it is ranked 7th.
HadCRUT4.2
The slope is flat since February 2001 or 13 years, 6 months. (goes to July and may be discontinued)
For HadCRUT4: There is no statistically significant warming since December 1996: CI from -0.026 to 1.139.
The HadCRUT4 average anomaly so far for 2014 is 0.535. This would rank it as 3rd place if it stayed this way. 2010 was the warmest at 0.547. The highest ever monthly anomaly was in January of 2007 when it reached 0.829. The anomaly in 2013 was 0.487 and it is ranked 8th.
HadSST3
For HadSST3, the slope is flat since March 2009 or 5 years and 5 months. (goes to July and may be discontinued)
For Hadsst3: There is no statistically significant warming since August 1994: CI from -0.014 to 1.665.
The HadSST3 average anomaly so far for 2014 is 0.439. This would rank it as 1st place if it stayed this way. 1998 was the warmest at 0.416 prior to 2014. The highest ever monthly anomaly was in July of 1998 when it reached 0.526. This is also prior to 2014. The anomaly in 2013 was 0.376 and it is ranked 6th.
GISS
The slope is flat since October 2004 or 9 years, 11 months. (goes to August)
For GISS: There is no statistically significant warming since October 1997: CI from -0.002 to 1.249.
The GISS average anomaly so far for 2014 is 0.65. This would rank it as third place if it stayed this way. 2010 was the warmest at 0.67. The highest ever monthly anomaly was in January of 2007 when it reached 0.93. The anomaly in 2013 was 0.61 and it is ranked 6th.
Conclusion
For the moment, WTI is dead. However it can be revived in one of two ways. Either HadCRUT3 needs to be updated or WFT needs to switch WTI from HadCRUT3 to HadCRUT4.3. And if this is done, then UAH should also be updated from version 5.5 to 5.6. But hopefully version 6 for UAH will be out soon. We also await HadCRUT5 with bated breath…
—
Update:
Tim Osborn (@TimOsbornClim) October 6, 2014 at 2:21 am
The changes from HadCRUT4.2 to HadCRUT4.3 arise from changing the marine component (the sea surface temperature anomalies) from HadSST.3.1.0.0 to HadSST.3.1.1.0 and the land component (the near-surface air temperature anomalies over land) from CRUTEM.4.2.0.0 to CRUTEM.4.3.0.0.
The HadSST changes do not alter the temperature anomalies, just the estimate of the uncertainties:
http://www.metoffice.gov.uk/hadobs/hadsst3/data/HadSST.3.1.1.0_release_notes.html
The changes in the HadCRUT4 temperatures therefore arise because of changes in the land component (CRUTEM4) which are described here:
http://www.metoffice.gov.uk/hadobs/crutem4/data/CRUTEM.4.3.0.0_release_notes.html
Principal changes are the addition of new station data (e.g. 10 new series from Spain, 250 from China, etc.), extension of existing station data to include more recent values (e.g. 518 series from Russia were updated, etc.), or the replacement of station data with series that have been the subject of homogeneity analysis by other initiatives.
The effect of these various changes is shown graphically on that page, for global, hemispheric and various continental-scale averages.
The effect on the overall trends of the new version looks like it would slightly weaken the warming trend in N. America, little overall effect in S. America, and strengthen the warming trend in Europe, Asia and Australasia. The Africa series also appears to warm, but actually this is because the geographical domain for that graph includes a little of southern Europe and it is the changes there that influence this series rather than changes to the African database.
Why is there so much adjustment of temperature measurements?
A neutral prior would be that as we collect new data, or find previously unknown errors in the data, the effect on the measured long-term trend should be zero, i.e. the expected value of the mean of the changes should be zero. It strikes me as a Bernoulli distribution with mean 0.5; as I said, a reasonable prior. If so, the odds of such an extreme number of changes that increase the trend, rather than reduce it, would be highly, highly unlikely.
Mosher seems to be very knowledgeable on all the changes; he seems adamant that there are no biases involved. Presumably he has access to a full list of changes, their source and their effect.
Accompanying that, maybe he can present a strong case for an alternative prior, one that suggests that there is naturally a much higher probability that as we learn more, we will have to revise the data in such a way that the trend increases.
It seems so far his argument is – “the overwhelming majority of new discoveries makes the trend greater. This is expected because I said so.”
The changes in the HadCRUT4 temperatures therefore arise because of changes in the land component (CRUTEM4) which are described here:
http://www.metoffice.gov.uk/hadobs/crutem4/data/CRUTEM.4.3.0.0_release_notes.html
Principal changes are the addition of new station data (e.g. 10 new series from Spain, 250 from China, etc.), extension of existing station data to include more recent values (e.g. 518 series from Russia were updated, etc.), or the replacement of station data with series that have been the subject of homogeneity analysis by other initiatives.
Thank you very much for that! What caught my eye was this list of countries that had changes:
“Spain – homogenized long climate series
China – two subsets of homogenized climate series, 18 long series and 380 series beginning after 1950
USA – USHCNv2.5 updates
Russia – Russian Federation updates
Australia – updates to the ‘ACORN’ climate series and corrections to remote island series
Norway – additions to the homogenized climate series
Sweden – single series addition for an area not currently represented
Falkland Islands – the addition of a long climate series
India – the addition of some long series to enhance station spatial/temporal-density
Chile – series to enhance station temporal and spatial density”
I was given the impression above that it was the (warming) high latitudes where new readings were found that caused the increase. But if the above countries are mainly responsible, then it seems more odd that we just get increases over the last 16 years.
Yes, this gives the lie to the claim that the adjustments are from the higher latitudes (said to be underrepresented).
So, more b—- from Mr. Mosher.
Hello Tim, thank you for your informative and factual comment. I’ve appended it to the end of this article in order to ensure that it is readily accessible to all readers.
One question, if you have an opportunity. In reference to Werner’s and other readers’ observations that every recent adjustment to HadCRUT appears to have resulted in additional warming being found, can you offer any background as to why this might be?
Tim Osborn must be thanked for venturing onto this site to provide more information, but justthefactswuwt has asked an interesting question to which, as yet, Mr. Osborn has not replied. When I toss a coin time after time and it invariably comes up heads, I draw the conclusion that the coin is biased. If the Met Office housed real scientists, they would have investigated the answer to justthefactswuwt’s question before it was even posed and be ready with an answer.
Werner, justthefactswuwt,
For the current update, the graphs at the link I gave (release notes for CRUTEM.4.3.0.0) show that the region with the biggest increase in warming trend due to this update is the “Asian” region. Direct link to the graphic here: http://www.metoffice.gov.uk/hadobs/crutem4/data/update_diagnostics/asian.gif.
This could be partly a high latitude effect (perhaps particularly the increase in recent years) due to the updated Russian station series. But also the big changes in the China station database that were obtained from the (Cao et al., 2013, DOI: 10.1002/jgrd.50615) and Xu et al. (2013, doi:10.1002/jgrd.50791) studies listed in the release notes, which is clearly not a high latitude effect.
As to whether, or indeed why, our updates always strengthen the warming… overall they often have this effect and the reason is probably that the regions with most missing data have warmed more than the global average warming. This includes the Arctic region, but also many land regions. This is because, on average, land regions have warmed more than the ocean regions and so filling in gaps over the land with additional data tends to raise the global-mean warming.
Other effects arise too (e.g. including series where warm readings during 19th century summers have been adjusted to remove the bias experienced when thermometers were exposed on north facing walls instead of in screens).
However I also note that not all updates produce enhanced warming. The previous one from CRUTEM.4.1.1.0 to CRUTEM.4.2.0.0 resulted in slight net warming of both ends of the series, with slightly more warming at the beginning. Release notes and comparison graphs for that update were given at the time here:
http://www.metoffice.gov.uk/hadobs/crutem4/data/previous_versions/4.2.0.0/CRUTEM.4.2.0.0_release_notes.html
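As an editorial aside, a toy calculation can illustrate the mechanism Tim Osborn describes: if the regions with missing data have warmed faster than the covered average, filling the gaps necessarily raises the global-mean trend. All numbers below are invented purely for illustration.

```python
# Toy illustration: filling data gaps in faster-warming regions raises the
# global-mean trend. All numbers are invented for illustration only.
land_trend, ocean_trend = 0.25, 0.10  # hypothetical trends, deg C per decade
land_frac = 0.29                       # approximate land fraction of Earth

covered_land = land_frac * 0.5         # suppose half the land has no data
with_gaps = (covered_land * land_trend + (1 - land_frac) * ocean_trend) \
            / (covered_land + (1 - land_frac))
gaps_filled = land_frac * land_trend + (1 - land_frac) * ocean_trend

print(f"trend with land gaps: {with_gaps:.3f}")    # ~0.125
print(f"trend, gaps filled:   {gaps_filled:.3f}")  # ~0.144
```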
Thank you very much for your response Mr. Osborn.
If my understanding of the differences between how Hadcrut treats areas where there are no thermometers and how GISS treats such areas is correct, would it be fair to say that GISS would show no change on average with the same additional information?
In your first response, it says in the link: “Australia – updates to the ‘ACORN’ climate series and corrections to remote island series”. However Jo Nova writes here: http://joannenova.com.au/2014/10/australian-summer-maximums-warmed-by-200/
“Ken Stewart points out that adjustments grossly exaggerate monthly and seasonal warming, and that anyone analyzing national data trends quickly gets into 2 degrees of quicksand. He asks: What was the national summer maximum in 1926? AWAP says 35.9C. Acorn says 33.5C. Which data set is to be believed?”
May we have a response to this? Thank you!
Hello Tim,
In your response you state that:
[Image: Met Office Hadley Centre – click the pic to view at source]
From your statement, one could infer that the recent warming obtained from Cao et al., 2013 and Xu et al. was the result of “filling in gaps” where there was “missing data”. However, looking first at the research of Lijuan Cao, Ping Zhao, Zhongwei Yan, Phil Jones et al., “Instrumental temperature series in eastern and central China back to the 19th century” (Cao et al., 2013, DOI: 10.1002/jgrd.50615), it appears that the recent warming found was the result of a reconstruction:
[Image: Lijuan Cao, Ping Zhao, Zhongwei Yan, Phil Jones et al. – click the pic to view at source]
whereby existing station data:
[Image: Lijuan Cao, Ping Zhao, Zhongwei Yan, Phil Jones et al. – click the pic to view at source]
was adjusted to account for “relocation of meteorological station”, “instrument change” and “change points without clear reason”:
[Image: Lijuan Cao, Ping Zhao, Zhongwei Yan, Phil Jones et al. – click the pic to view at source]
Furthermore, this recent paper by co-author YAN ZhongWei:
http://earth.scichina.com:8080/sciDe/EN/abstract/abstract515494.shtml
states that:
[Image: quoted excerpt from the paper]
In terms of Xu et al. 2013, this recent article in the Journal of Geophysical Research:
http://www.researchgate.net/publication/256980555_Homogenization_of_Chinese_daily_surface_air_temperatures_and_analysis_of_trends_in_the_extreme_temperature_indices
by Wenhui Xu et al. notes that:
[Image: quoted excerpt from the paper]
As such, the recent warming in the Asia region appears to be the result of adjusting existing station data to account for “relocation of meteorological station”, “instrument change”, “station automation” and “change points without clear reason”, versus “filling in gaps” for “missing data”. Is this your understanding as well?
Perhaps the most telling exposé of this data tampering is when anecdotal eyewitness evidence says that the glaciers were retreating during a period when the adjusted temps show the coldest anomalies of the 20th century, while the Antarctic and Arctic ice is increasing when the temps show the warmest anomalies of the 21st century.
h/t Steven Goddard.
Mosher, care to explain ?
Steven Mosher
October 5, 2014 at 8:01 pm
Yes. the new data comes from areas that were undersampled.
They still are undersampled except by satellite measures, and even there RSS does not sample above 82.5° N and S.
RSS goes to 82.5 degrees north
With the circumference of Earth being about 40000 km, the pole-to-equator distance is 10000 km, so the distance from 82.5° to 90° is 7.5/90 x 10000 ≈ 830 km. The area in the north NOT covered is therefore about πr² = 2.16 x 10^6 km². Dividing this by the area of the earth, 5.1 x 10^8 km², we get about 0.42% NOT covered by RSS in the Arctic.
And since it is mostly the north polar area that seems to be mentioned, that is only 0.42%. It seems as if the Antarctic has gotten colder. Has that been accounted for in any way?
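For what it is worth, that back-of-envelope figure holds up against the exact spherical-cap formula; a quick check (assuming a mean Earth radius of 6371 km):

```python
# Exact spherical-cap check of the ~0.42% coverage gap computed above.
import math

R = 6371.0  # mean Earth radius, km
cap = 2 * math.pi * R ** 2 * (1 - math.sin(math.radians(82.5)))
earth = 4 * math.pi * R ** 2
print(f"area poleward of 82.5 deg: {cap:.2e} km^2")  # ~2.18e6 km^2
print(f"fraction of Earth: {cap / earth:.2%}")       # ~0.43%
```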
Let me re-adjust Mr Mosher’s quote about BEST to include all the non-satellite data sets.
“If you want to know what the actual Temperature was look at the Raw data, If you want to know what we think it should have been (want it to have been) look at the final products”
i.e. it doesn’t fit our “world view” of Climate so we have found lots of excuses to adjust it.
What they fail to realise is that the actual real world still has those results and plenty of backup anecdotal evidence to show just how wrong they are. That also includes current temperatures; anyone over 50 knows it is nowhere near as hot now as it has been in the past.
Nature is and will “out” the lot of them.
Bring it on, along with the trials
Being wrong is not criminal.
Lysenko thought he was right.
Killing someone mistakenly is not much of an excuse.
“From 1934 to 1940, under Lysenko’s admonitions and with Stalin’s approval, many geneticists were executed (including Isaak Agol, Solomon Levit, Grigorii …”
http://en.wikipedia.org/wiki/Lysenkoism
I was trying to avoid being Lysenko rather than avoiding being the victim.
Let’s not use the law to prosecute scientific errors. Deceptions? Yes, of course.
But not errors.
My 90 year old grandmother says that the 30s and 40s were unbearably hot. They used to have to sleep outdoors to get relief at night. She grew up at English Bay, at the beach.
Personally, I have not had to use air conditioning for the last 4 summers.
I live in the valley near Vancouver BC, where it can get 3-5 degrees C warmer than the coastal cities.
When these old timers are gone, their memories of extreme weather, droughts and record heat waves will also be gone. This is our Empirical data, first hand accounts of what it was like back in the 30s and 40s.
Steven Mosher cannot even explain the adjustments to ONE station here…
http://stevengoddard.wordpress.com/2014/03/01/spectacular-data-tampering-from-giss-in-iceland/
Please look at August 98, vs August 2014. Clearly 98 was far warmer… http://stevengoddard.wordpress.com/2014/09/18/us-government-agencies-just-cant-stop-lying/
For a good correlation with Northern Hemisphere T, compare it to the AMO…
Peterson 2003 claimed there was no difference between rural and city stations… There was… http://climateaudit.org/2007/08/04/1859/
I am SURPRISED WUWT has not examined a paper which tremendously expands the error bars of the surface record, especially pre-satellite… http://www.eike-klima-energie.eu/uploads/media/E___E_algorithm_error_07-Limburg.pdf
UHI doubled in this study…http://onlinelibrary.wiley.com/doi/10.1002/joc.4087/abstract
How do you “lock” natural variation? I guess you can extrapolate out from some past trends and pretend you have mastered how the entire global ecosystem works. But underlying this is a pathological need to treat all of nature as a photo, locked in time for perpetuity, and humanity as something that exists separate.
To add to the amount of delusion that can possibly fit into one sentence, there is the pathological indifference to the difference between simulated human-driven climate change and proven human-driven climate change.
To sum up climate science to date: simulations supersede reality and the entire global ecosystem is 100% static and predictable.
Steven Mosher – Do you not understand that generally the only people who will accept the current historical temperature adjustment processes will be those who want to believe in climate change for social reasons other than climate change itself?
This process, accurate or not – and until the laws of human nature are repealed, I would venture not – is destroying the credibility of climate science. You are never going to be able to convince the general public that historical temperature readings were “false” and therefore had to be adjusted downward.
When scientific jargon conflicts with common sense, common sense prevails.
Question for Mr Mosher
My understanding from your previous posts about the methods of at least BEST is that you construct a temperature field that has three spatial dimensions and a time dimension. In order to generate in-filled data, some interpolation method is applied along all dimensions. As a test of the validity of this method, random real data points are withheld to determine whether or not in-filling results in the proper prediction of those withheld points.
And supposedly the interpolation algorithm is such that the holdouts are correctly predicted, so that in some sense your initial data space is oversampled, in that you can accurately predict the entirety of the temperature field with fewer than the actual number of real data points recorded.
It appears now that ‘new data’ from the past is arriving that for some reason appears to contradict those results.
So two questions:
1. Do you use the holdout technique against new data as it arrives from the future?
2. Does that data conform to your modeling such that holdouts are properly predicted?
Assuming the answer to 2 is yes, that would seem to imply that the new data arriving from the past (which your model is not properly predicting) is in some way systematically incorrect.
If the answer to 2 is no, it would imply that historical in-filling is a post hoc curve fit and that in fact the historical data space is under- rather than oversampled.
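For readers unfamiliar with the holdout technique being asked about, here is a minimal synthetic sketch. It is not BEST’s method or data: the station locations and values are invented, and scipy’s griddata stands in for whatever interpolation a real analysis (e.g. kriging) would use. Withhold a fraction of stations, interpolate from the rest, and score the predictions at the withheld points.

```python
# Minimal synthetic sketch of holdout validation for spatial in-filling.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
pts = rng.uniform([-180.0, -60.0], [180.0, 75.0], size=(500, 2))  # lon, lat
vals = np.sin(np.radians(pts[:, 1])) + 0.1 * rng.standard_normal(500)

held = rng.random(500) < 0.1                # withhold ~10% of "stations"
pred = griddata(pts[~held], vals[~held], pts[held], method="linear")

ok = ~np.isnan(pred)                        # points outside the hull fail
rmse = np.sqrt(np.mean((pred[ok] - vals[held][ok]) ** 2))
print(f"holdout RMSE: {rmse:.3f}")          # small if in-filling predicts well
```

Run against newly arriving data, a rising holdout error would point toward the second of the two possibilities raised above.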
“The Met Office Hadley Center has pioneered a new system to predict the climate a decade ahead.”
Really?
I always thought there were few professional climatologists in private practice because there would be nothing to say day after day, year after year, decade after decade. How would you make a living outside government or academia? Your whole life would be spent looking backward for evidence of change in the fossil record. A changed climate, among the many around the earth, manifests itself in a new and stable flora regime. Climate is defined by vegetation. And the changes from one climate to that of a neighboring climate might only be very subtle. The drive from Baltimore to Harrisburg is a good example of driving from one climate to another, and I wonder how many notice the difference.
I’m sorry if this offends anyone … but … since I began reading about the subject of climate change in the late 1990’s, I have always wondered why anyone involved takes surface measurements seriously.
The starting point in the late 1800’s is most likely very few thermometers, far from “global”, and thermometers of that era (that have survived) consistently tend to read low.
That means (to me) the starting point for measurements is most likely inaccurate (too low) and the claimed warming before the era of weather satellites may be nothing more than measurement error.
Given the effects of economic growth (UHI) on weather station readings, and their typical poor siting even in the US (brilliantly uncovered in the original 2009 white paper), why does anyone here care about the surface data?
I can understand studying anecdotal evidence of the unusual heat in the 1920-1940 era, but can anyone take seriously surface measurements of the oceans made by throwing a wooden bucket over the rail of a ship?
And later throwing a canvas bucket over the rail of a ship?
And then incoming engine cooling water?
Ships mainly in Northern hemisphere shipping lanes — not global.
And that surface data is supposed to be reliable enough to calculate the average temperature of about 70% of earth’s surface, and taken seriously by warmists (simply because they like what they see — they don’t care about accuracy) … but I don’t understand why sensible people here would care about such inaccurate data.
Can someone explain why those surface data are taken seriously and discussed here too?
I don’t understand how unreliable surface data, with unknown errors,
can ever be adjusted to be accurate, useful data.
So why don’t scientists simply say:
“We don’t know the average temperature before weather satellites in 1979
— the data are not accurate enough to be useful”?
From the accurate data I have studied,
I think the ONLY conclusions I can come to about the climate are:
(1) The average temperature on Earth is always changing,
(2) The 1930’s (1920 to 1940) were unusually warm, explanation unknown,
(3) There was some warming for about 22 years from 1976 to 1998,
which was a short-term trend (cause uncertain, except for 1998) that has ended, and
(4) The future average temperature of Earth is completely unpredictable,
meaning that people who use climate models are nothing more than “climate astrologers.”
(5) The water meter in my Michigan garage froze in February 2014 and cracked for the
first time since I moved in in 1987 (I had to pay $300 for a new one); as far as I’m concerned,
that anecdote is just as useful as surface measurements from the late 1800’s to 1979!
Richard, you are correct about the LARGE error bars for the past. http://www.eike-klima-energie.eu/uploads/media/E___E_algorithm_error_07-Limburg.pdf
The AMO is a good barometer of past NH T.
This is well reflected in all continuously active USHCN stations for the past almost 90 years.
If the heavily adjusted record were correct, then you would not see most of the record highs from the late 1930s and early to mid 1940s. Record highs cannot be adjusted, so they cannot hide this in the raw data. TOB does not apply to record highs. http://stevengoddard.wordpress.com/2014/04/13/us-summer-afternoon-temperatures-declining-over-the-past-85-years/
UAH Update
For version 5.5, the September anomaly was 0.185. This drops the average to 0.196 and into 8th place as a ranking for 2014 so far. The flat line still starts from January 2005 making it 9 years and 9 months long.
In the end, it doesn’t really matter. The temperature series are still essentially an upward linear trend superimposed with a ~60 year cycle, both of which have been in evidence since the earliest time for which we have at least semi-reliable data, i.e., before CO2 could have been forcing it. Subtract out those long term, natural patterns, and you are left with very little that could even possibly be ascribed to anthropogenic forcing.
News Flash: Dateline 1 April 2114 – “Climate Scientists Report Ice Now Melts At 5° C – Models show adjustment accounts for ice pack blocking the Bering Sea.”
The sooner people stop giving credence to these “products” as if they were observed measurements, the better; and that includes the governments funding them.
It is a strange kind of warming where the data we have been measuring remains the same but the data we have missed drags the global temperature higher.