HADCRUT4 Adjustments – Discovering Missing Data or Reinterpreting Existing Data? (Now Includes September Data)

Guest Post by Werner Brozek and Just The Facts:

WoodForTrees.org – Paul Clark – Click the pic to view at source

To begin, we would like to sincerely thank Tim Osborn of the University of East Anglia (UEA), Climatic Research Unit (CRU) for his responses (1, 2) to this recent WUWT article. Unfortunately, by the time the dialogue got going, many readers had already moved on.

Tim’s responses did leave a few loose ends, so we would like to carry on from where we left off. Hopefully Tim Osborn will respond again so we can continue the dialogue. In comments on the prior article, both Tim Osborn and Steven Mosher argued that the recent adjustments to HadCRUT4 were associated with missing data, i.e.:

Steven Mosher October 5, 2014 at 8:01 pm

Yes. the new data comes from areas that were undersampled.

We always worried that undersampling was biasing the record warm. remember?

Nobody had an issue when I argued that missing data could bias the record warm.

Now the data comes in.. Opps, the old record was biased cold.

So people wanted more data now they got it. read it and weep.

In particular the warming in the present comes from more samples at higher latitudes.

Tim Osborn (@TimOsbornClim) October 7, 2014 at 2:30 pm

“As to whether, or indeed why, our updates always strengthen the warming… overall they often have this effect and the reason is probably that the regions with most missing data have warmed more than the global average warming. This includes the Arctic region, but also many land regions. This is because, on average, land regions have warmed more than the ocean regions and so filling in gaps over the land with additional data tends to raise the global-mean warming.”

Looking at the most recent adjustments to HadCRUT4:

Met Office Hadley Center – Click the pic to view at source

it appears they found more warming during the last 18 years in the Northern Hemisphere;

Met Office Hadley Center – Click the pic to view at source

whereas in the Southern Hemisphere they found warming that had been missing since World War I:

Met Office Hadley Center – Click the pic to view at source

When looking at the warming over the last 18 years found in the Northern Hemisphere, it appears that a significant portion of it was found in Asia;

Met Office Hadley Center – Click the pic to view at source

and Tim noted in his comment that;

“the big changes in the China station database that were obtained from the (Cao et al., 2013, DOI: 10.1002/jgrd.50615) and Xu et al. (2013, doi:10.1002/jgrd.50791) studies listed in the release notes, which is clearly not a high latitude effect.”

Looking at the recent warming obtained from Cao et al., it appears that Lijuan Cao, Ping Zhao, Zhongwei Yan, Phil Jones et al. found recent warming as a result of a reconstruction;

Lijuan Cao, Ping Zhao, Zhongwei Yan, Phil Jones et al. – Click the pic to view at source

whereby existing station data;

Lijuan Cao, Ping Zhao, Zhongwei Yan, Phil Jones et al. – Click the pic to view at source

was adjusted to account for “relocation of meteorological station”, “instrument change” and “change points without clear reason”:

Lijuan Cao, Ping Zhao, Zhongwei Yan, Phil Jones et al. – Click the pic to view at source

Furthermore, this recent paper by co-author Zhongwei Yan states that:

Long-term meteorological observation series are fundamental for reflecting climate changes. However, almost all meteorological stations inevitably undergo relocation or changes in observation instruments, rules, and methods, which can result in systematic biases in the observation series for corresponding periods. Homogenization is a technique for adjusting these biases in order to assess the true trends in the time series. In recent years, homogenization has shifted its focus from the adjustments to climate mean status to the adjustments to information about climate extremes or extreme weather. Using case analyses of ideal and actual climate series, here we demonstrate the basic idea of homogenization, introduce new understanding obtained from recent studies of homogenization of climate series in China, and raise issues for further studies in this field, especially with regards to climate extremes, uncertainty of the statistical adjustments, and biased physical relationships among different climate variables due to adjustments in single variable series.

In terms of Xu et al. 2013, in this recent article in the Journal of Geophysical Research Xu et al. state that:

This study first homogenizes time series of daily maximum and minimum temperatures recorded at 825 stations in China over the period from 1951 to 2010, using both metadata and the penalized maximum t test with the first-order autocorrelation being accounted for to detect change points and using the quantile-matching algorithm to adjust the data time series to diminish discontinuities. Station relocation was found to be the main cause for discontinuities, followed by station automation.

As such the recent warming in the Asia region appears to be the result of adjusting existing station data to account for “relocation of meteorological station”, “instrument change”, “station automation” and “change points without clear reason” versus “filling in gaps” for “missing data”.

A second aspect of the recent HadCRUT4 adjustments that raises questions, is related to Tim Osborn’s explanation for why “our updates always strengthen the warming”, i.e.;

“the reason is probably that the regions with most missing data have warmed more than the global average warming. This includes the Arctic region, but also many land regions. This is because, on average, land regions have warmed more than the ocean regions and so filling in gaps over the land with additional data tends to raise the global-mean warming.”

Per Morice et al., 2012;

“HadCRUT4 remains the only one of the four prominent combined land and SST data sets that does not employ any form of spatial infilling and, as a result, grid-box anomalies can readily be traced back to observational records. The global near-surface temperature anomaly data set of the Goddard Institute for Space Studies (GISS) [Hansen et al., 2010], is again a blend of land and SST data sets. The land component is presented as a gridded data set in which grid-box values are a weighted average of temperature anomalies for stations lying within 1200 km of grid-box centres. The sea component is formed from a combination of the HadISST1 data set [Rayner et al., 2003] with the combined in situ and satellite SST data set of Reynolds et al. [2002].”

And per the CRU’s Frequently Asked Questions:

How are the land and marine data combined?

Both the component parts (land and marine) are separately averaged into the same 5° x 5° latitude/longitude grid boxes. The combined version (HadCRUT4) takes values from each component and weights the grid boxes according to the area, ensuring that the land component has a weight of at least 25% for any grid box containing some land data. The weighting method is described in detail in Morice et al. (2012). The previous combined versions (HadCRUT3 and HadCRUT3v) take values from each component and weight the grid boxes where both occur (coastlines and islands) according to their errors of estimate (see Brohan et al., 2006 for details).

UEA CRU and the Met Office Hadley Center moved to this methodology with the introduction of HADCRUT4 in 2012, i.e.:

“The blending approach adopted here differs from that used in the Brohan et al. [2006] data set. Here, land and sea components are combined at a grid-box level by weighting each component by fractional areas of land and sea within each grid-box, rather than weighting in inverse proportion to error variance. This approach has been adopted to avoid over representation of sea temperatures in regions where SST measurements dominate the total number of measurements in a grid-box. The grid-box average temperature for grid-box i is formed from the grid-box average temperature anomalies for the land component, for the SST component, and the fractional area of land in the grid-box”

“Coastal grid-boxes for which the land fraction is less than 25% of the total grid-box area are assigned a land fraction weighting of 25%. Here, we are making the assumption that land near-surface air temperature anomalies measured in grid-boxes that are predominantly sea covered are more representative of near-surface air temperature anomalies over the surrounding sea than sea-surface temperature anomalies. These fractions ensure that islands with long land records are not swamped by possibly sparse SST data in open ocean areas (where the island is only a small fraction of the total grid-box area).”
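The blending rule quoted above can be summarized in a few lines of Python. This is our own minimal sketch of the described weighting (the function name and structure are ours, not Met Office code):

```python
def blend_gridbox(land_anom, sst_anom, land_fraction):
    """Blend land and SST anomalies for one 5x5 grid box (sketch)."""
    if land_anom is None:              # sea-only box
        return sst_anom
    if sst_anom is None:               # land-only box
        return land_anom
    w_land = max(land_fraction, 0.25)  # land weight floored at 25%
    return w_land * land_anom + (1.0 - w_land) * sst_anom

# An island box that is only 2% land still gets a 25% land weight:
print(blend_gridbox(0.8, 0.4, 0.02))  # 0.25*0.8 + 0.75*0.4, i.e. about 0.5
```

Note how the 25% floor means a warm island record contributes far more to the box than its 2% area share would suggest.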

The more cynical might wonder if this change in methodology had anything to do with the fact that;

“Land-based temperatures averaged 0.04°C below sea temperatures for the period 1950 to 1987 but after 1997 averaged 0.41°C above sea temperatures.”

With oceans covering about 70% and land covering about 30% of the total area, it seems that there would be more missing data for oceans than for land, especially since few people live in the middle of the Southern Ocean. However, any new data found after 1998 would likely come from land areas and not remote ocean areas, so these land areas could have an inordinate influence on global temperatures.

Suppose for example that a landlocked 5 X 5 grid had thermometer readings for 25% of its land area in 1998 and that the anomaly had increased by 0.8 C from a historical baseline. Assume that we knew nothing about the other 75% of the land area in 1998. Also, assume that the ocean average temperature increased by 0.4 C during the same period.

Suppose that in 2014 readings were found for the other 75% of this same landlocked 5 X 5 grid and it was found that this remaining area increased by only 0.7 C. Then even though the 1998 anomaly for this particular landlocked 5 X 5 grid appeared cooler in 2014 than was thought to be the case in 1998, the global anomaly would still go up since 0.7 C is greater than the 0.4 C that the oceans went up by. Is this correct based upon the HadCRUT4 gridding methodology?
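The thought experiment above can be checked numerically. This toy Python sketch mimics the convention of averaging only the grid boxes that contain data, using the same hypothetical numbers (it is an illustration of the question, not actual HadCRUT4 code):

```python
def area_mean(cells):
    """Area-weighted mean over (area, anomaly) cells, skipping blanks."""
    known = [(a, t) for a, t in cells if t is not None]
    total = sum(a for a, _ in known)
    return sum(a * t for a, t in known) / total

ocean = (0.70, 0.4)                      # ocean warmed 0.4 C
land_observed = (0.30 * 0.25, 0.8)       # 25% of the land, +0.8 C
land_missing_1998 = (0.30 * 0.75, None)  # blank in 1998
land_found_2014 = (0.30 * 0.75, 0.7)     # filled in by 2014, +0.7 C

before = area_mean([ocean, land_observed, land_missing_1998])
after = area_mean([ocean, land_observed, land_found_2014])
print(before, after)  # roughly 0.439 vs 0.498: the global anomaly rises
```

Even though the grid box itself cools (from 0.8 to a blended 0.725), the 1998 global anomaly goes up, because the newly filled land cells replace what was effectively the lower known-cell average.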

A third aspect of the HadCRUT4.3 record and adjustments that raises questions can be found in the Release Notes, which list “Australia – updates to the ‘ACORN’ climate series and corrections to remote island series”. However, as Jo Nova recently wrote;

“Ken Stewart points out that adjustments grossly exaggerate monthly and seasonal warming, and that anyone analyzing national data trends quickly gets into 2 degrees of quicksand. He asks: What was the national summer maximum in 1926? AWAP says 35.9C. Acorn says 33.5C. Which dataset is to be believed?”

As such, if there is a problem with source data such as ACORN in Australia, then this problem is reflected in the HadCRUT4.3 record. Furthermore, the problem would be compounded by the HadCRUT4 methodology of ensuring that the “land component has a weight of at least 25% for any grid box containing some land data”, as the ACORN network is heavily weighted towards coastal coverage, i.e.:

National Climate Centre, Australian Bureau of Meteorology – Click the pic to view at source

In the sections below, as in previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on several data sets. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2014 to date compares with 2013 and the warmest years and months on record so far. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

This analysis uses the latest month for which data is available on WoodForTrees.org (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is 4 x 10^-4 but is -4 x 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest if we say the slope is flat from a certain month.

1. For GISS, the slope is flat since October 2004 or an even 10 years. (goes to September)

2. For Hadcrut4, the slope is flat since January 2005 or 9 years, 9 months. (goes to September). Note that WFT has not updated Hadcrut4 since July.

3. For Hadsst3, the slope is not flat for any period.

4. For UAH, the slope is flat since January 2005 or 9 years, 9 months. (goes to September using version 5.5)

5. For RSS, the slope is flat since September 1, 1996 or 18 years, 1 month (goes to September 30).
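The search described above can be sketched in a few lines (a simplified illustration with made-up data; the function names are ours, and WFT’s actual calculation may differ in detail):

```python
def ols_slope(y):
    """Least-squares slope of y against x = 0, 1, ..., n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def months_flat(series, min_len=12):
    """Length of the longest tail of `series` whose trend is <= 0."""
    best = 0
    for start in range(len(series) - min_len, -1, -1):
        if ols_slope(series[start:]) <= 0:
            best = len(series) - start
    return best

# Toy monthly anomalies: a rise followed by an exact plateau.
data = [0.0, 0.1, 0.2, 0.3] + [0.3] * 20
print(months_flat(data))  # 21: the plateau plus its first month
```

Stepping the start month back one at a time, as here, is why the reported periods move by a month or so with each new data point.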

The following graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping brown line at the top indicates that CO2 has steadily increased over this period.

WoodForTrees.org – Paul Clark – Click the pic to view at source

When two things are plotted as I have done, the left axis shows only the temperature anomaly.

The actual numbers are meaningless since all slopes are essentially zero. As well, I have offset them so they are evenly spaced. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted. However WFT does not give this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 18 years, the temperatures have been flat for varying periods on various data sets.

Section 2

For this analysis, data was retrieved from Nick Stokes’ Trendviewer, available on his website at http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative, so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 14 and almost 22 years according to Nick’s criteria. CI stands for the confidence limits at the 95% level.
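For readers who want to see what such a test looks like, here is a bare-bones sketch: fit an ordinary least-squares trend and check whether its 95% confidence interval includes zero. Note that Nick Stokes’ calculator also corrects for autocorrelation in the residuals, which widens the interval; this plain-OLS version with a normal approximation is only illustrative:

```python
import math

def trend_ci(y, z=1.96):
    """OLS slope and 95% CI half-width for y against 0..n-1 (normal approx.)."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y)) / sxx
    resid = [yi - ybar - slope * (i - xbar) for i, yi in enumerate(y)]
    se = math.sqrt(sum(r * r for r in resid) / ((n - 2) * sxx))
    return slope, z * se

# Made-up anomalies with an upward drift:
slope, half = trend_ci([0.1, 0.3, 0.2, 0.4, 0.3, 0.5, 0.4, 0.6])
print(slope - half > 0)  # True here: the lower bound is above zero
```

When the lower bound dips below zero, as in every CI quoted below, a zero trend cannot be ruled out at the 95% level.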

Dr. Ross McKitrick has also commented on these periods and has slightly different numbers for the three data sets that he analyzed. I will also give his times.

The details for several sets are below.

For UAH: Since May 1996: CI from -0.024 to 2.272

(Dr. McKitrick says the warming is not significant for 16 years on UAH.)

For RSS: Since November 1992: CI from -0.001 to 1.816

(Dr. McKitrick says the warming is not significant for 26 years on RSS.)

For Hadcrut4.3: Since March 1997: CI from -0.016 to 1.157

(Dr. McKitrick says the warming is not significant for 19 years on Hadcrut4.2. Hadcrut4.3 could be very slightly different.)

For Hadsst3: Since October 1994: CI from -0.015 to 1.722

For GISS: Since February 2000: CI from -0.060 to 1.326

Note that all of the above times, regardless of the source, with the exception of GISS are larger than 15 years which NOAA deemed necessary to “create a discrepancy with the expected present-day warming rate”.

Section 3

This section shows data about 2014 and other information in the form of a table. The five data sources appear along the top, with the header row repeated partway down so it remains visible throughout. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.

Down the column, are the following:

1. 13ra: This is the final ranking for 2013 on each data set.

2. 13a: Here I give the average anomaly for 2013.

3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and three have 1998 as the warmest year.

4. ano: This is the average of the monthly anomalies of the warmest year just above.

5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year. Note that this does not yet include records set so far in 2014 such as Hadsst3 in June, etc.

6. ano: This is the anomaly of the month just above.

7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0.

8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.

9. sy/m: This is the years and months for row 8. Depending on when the update was last done, the months may be off by one month.

10. McK: These are Dr. Ross McKitrick’s number of years for three of the data sets.

11. Jan: This is the January 2014 anomaly for that particular data set.

12. Feb: This is the February 2014 anomaly for that particular data set, etc.

20. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months. However if the data set itself gives that average, I may use their number. Sometimes the number in the third decimal place differs slightly, presumably due to all months not having the same number of days.

21. rnk: This is the rank that each particular data set would have if the anomaly above were to remain that way for the rest of the year. It may not, but think of it as an update 45 minutes into a game. Due to different base periods, the rank is more meaningful than the average anomaly.

Source UAH RSS Had4 Sst3 GISS
1.13ra 7th 10th 9th 6th 6th
2.13a 0.197 0.218 0.492 0.376 0.60
3.year 1998 1998 2010 1998 2010
4.ano 0.419 0.55 0.555 0.416 0.66
5.mon Apr98 Apr98 Jan07 Jul98 Jan07
6.ano 0.662 0.857 0.835 0.526 0.92
7.y/m 9/9 18/1 9/9 0 10/0
8.sig May96 Nov92 Mar97 Oct94 Feb00
9.sy/m 18/5 21/11 17/7 20/0 14/8
10.McK 16 26 19
Source UAH RSS Had4 Sst3 GISS
11.Jan 0.236 0.261 0.508 0.342 0.68
12.Feb 0.127 0.161 0.305 0.314 0.42
13.Mar 0.137 0.214 0.548 0.347 0.68
14.Apr 0.184 0.251 0.658 0.478 0.71
15.May 0.275 0.286 0.596 0.477 0.78
16.Jun 0.279 0.345 0.619 0.563 0.61
17.Jul 0.221 0.350 0.542 0.551 0.52
18.Aug 0.117 0.193 0.667 0.644 0.69
19.Sep 0.185 0.206 0.595 0.578 0.77
Source UAH RSS Had4 Sst3 GISS
20.ave 0.196 0.252 0.560 0.477 0.65
21.rnk 8th 7th 1st 1st 3rd

If you wish to verify all of the latest anomalies, go to the following:

For UAH, version 5.5 was used since that is what WFT used, see: http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.5.txt

For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt

For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.

For Hadsst3, see: http://www.cru.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat

For GISS, see: http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2014 in the form of a graph, see the WFT graph below. Note that HadCRUT4 is the old version that has been discontinued. WFT does not show HadCRUT4.3 yet.

WoodForTrees.org – Paul Clark – Click the pic to view at source

As you can see, all lines have been offset so they all start at the same place in January 2014. This makes it easy to compare January 2014 with the latest anomaly.


In this part, we are summarizing data for each set separately.


For RSS, the slope is flat since September 1, 1996 or 18 years, 1 month. (goes to September 30)

For RSS: There is no statistically significant warming since November 1992: CI from -0.001 to 1.816.

The RSS average anomaly so far for 2014 is 0.252. This would rank it as 7th place if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2013 was 0.218 and it is ranked 10th.


For UAH, the slope is flat since January 2005 or 9 years, 9 months. (goes to September using version 5.5 according to WFT)

For UAH: There is no statistically significant warming since May 1996: CI from -0.024 to 2.272. (This is using version 5.6 according to Nick’s program.)

The UAH average anomaly so far for 2014 is 0.196. This would rank it as 8th place if it stayed this way. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.662. The anomaly in 2013 was 0.197 and it is ranked 7th.


For Hadcrut4, the slope is flat since January 2005 or 9 years, 9 months. (goes to September)

For Hadcrut4: There is no statistically significant warming since March 1997: CI from -0.016 to 1.157.

The Hadcrut4 average anomaly so far for 2014 is 0.560. This would rank it as 1st place if it stayed this way. 2010 was the warmest at 0.555. The highest ever monthly anomaly was in January of 2007 when it reached 0.835. The anomaly in 2013 was 0.492 and it is ranked 9th.


For HadSST3, the slope is not flat for any period. For HadSST3: There is no statistically significant warming since October 1994: CI from -0.015 to 1.722.

The HadSST3 average anomaly so far for 2014 is 0.477. This would rank it as 1st place if it stayed this way. 1998 was the warmest at 0.416 prior to 2014. The highest ever monthly anomaly was in July of 1998 when it reached 0.526. This is also prior to 2014. The anomaly in 2013 was 0.376 and it is ranked 6th.


For GISS, the slope is flat since October 2004 or an even 10 years. (goes to September)

For GISS: There is no statistically significant warming since February 2000: CI from -0.060 to 1.326.

The GISS average anomaly so far for 2014 is 0.65. This would rank it as third place if it stayed this way. 2010 was the warmest at 0.66, so the 2014 anomaly is only 0.01 colder at this point. The highest ever monthly anomaly was in January of 2007 when it reached 0.92. The anomaly in 2013 was 0.60 and it is ranked 6th.


At present, the average HadCRUT4.3 anomaly for the first 9 months of 2014 is 0.560, which is above the previous record of 0.555 set in 2010. With a September anomaly of 0.595, a new record for 2014 is a distinct possibility. However, can we trust any HadCRUT4 record, or might it simply be an artifact of questionable adjustments to the temperature record?

122 thoughts on “HADCRUT4 Adjustments – Discovering Missing Data or Reinterpreting Existing Data? (Now Includes September Data)”

  1. Let’s abandon science and go back to superstition. Could it really give us worse results? Or coin tossing?

    • Don’t ever forget that figures lie and liars figure.
      I also have a real problem with the anomaly base period being 1961-1990. I know some will say it doesn’t matter but should we really be using such a short term for the base period especially one that ended 24 years ago? What is significant about this base period? I guess if you want to make your graph look like it does, then by all means choose the base period that will do the job.

      • That is why I give the ranking in line 21 of the table. Hadcrut4.3 is in first place right now, regardless of the base period. My question is whether 1998 would still be in first place if we had as many thermometers in 1998 as we do in 2014. This problem does not come up with the satellite data, and they will still have 1998 as the record year at the end of this year.

    • ‘Finding’ lost climate data is the same as ‘finding’ lost ballots.
      I’m extremely disappointed Moshpit is buying into this fraud.
      I seldom agree with him but I thought he had some integrity.

      • It would be all right if they found as many cooler readings as warmer readings. But they just find warmer readings. Of course this is understandable since who has unknown readings from the middle of the ocean?

      • ‘Running with the foxes and hunting with the hounds’ is what comes to mind when Mosher’s name surfaces in any discussion.

  2. I’m trying to get my head round the fact there are only 18 stations in the whole of China. (Or was it just 18 stations – out of how many – that were adjusted?)

  3. “However, can we trust any HADCRUT4 record or might it simply be an artifact of questionable adjustments to the temperature record?”
    The same question could be asked of all the data sets. You admirably strike the default “sceptic” pose adopted when confronted with inconvenient results: question the data.

    • No, the skeptic “pose” is to question the adjustments to the data, and that is what has happened here. That you cannot see that speaks volumes about your acumen.

    • No, as a sceptic I question all the data and all the adjustments, in either direction. I think the number and size of the continual adjustments points to the fact climate science is claiming much smaller error margins than should apply. We simply are not capable of collecting data with the accuracy required to make the claims that have been made. Seriously, can anyone with scientific training believe we know the worldwide temperature before 1940 with any accuracy more than +/- 2 degrees? How about the accuracy and coverage of measurements out of Antarctica and Africa in the 1950’s?

    • You are somewhat of an idiot if you do not question how the data is collected and its associated margin of error.
      You are a complete idiot if you do not question derived data and its associated margins of error.
      The nobility of not questioning data or methodology in religion is fine; that’s the way religion works. That is not the way science works.

      • It would be nice if the data or facts were not in dispute and the only thing that was required was the proper interpretation of the data. But in climate “science” we have to question the data.

    • It would certainly appear that the high end of the climate sensitivities are out to lunch. And if that is the case, there is nothing catastrophic to worry about.

  4. Have any of these organizations put in place a complete Quality Management System for their processing and maintenance of weather data that has achieved certification to the ISO 9000 series?

    • What difference would that make? Those ISO standards (particularly 9000) are largely a joke as they are just box ticking exercises for the auditors.

      • They can be if incorrectly assessed. But it would require the meteorological organizations to have a set of procedures that people were trained in, understood, and applied. At the moment there is a ‘fait accompli’: here is the new version, and we don’t know if any formal procedure was carried out in development and V&V testing, or whether it was the result of a masters research project that was accepted due to confirmation bias. At least it provides a place to start.

  5. 0.005 or slightly more for a new “record” – (what was the HADCRUT for the Cretaceous)? Oh yes, adjustments play a big part in that scale.

  6. Tim Osborn’s comment looks odd to me:
    “As to whether, or indeed why, our updates always strengthen the warming… overall they often have this effect and the reason is probably that the regions with most missing data have warmed more than the global average warming. This includes the Arctic region, but also many land regions. This is because, on average, land regions have warmed more than the ocean regions and so filling in gaps over the land with additional data tends to raise the global-mean warming.”
    The existing land data should have been weighted by the area of land that each station covers, so adding more land stations would only increase the warming trend if the new stations had more warming than the existing ones. If not then something may be wrong with how the weighting is done.

    • Let us suppose the oceans went up by 0.4 C and land went up by 0.8 C on the average. Since oceans cover about 2/3 of the total area and land about 1/3, then the average increase is about 0.53 C. Unlike GISS, if some area is missing, Hadcrut4.3 just leaves it blank, which for all intents and purposes “assigns” that certain land area an anomaly of 0.53 by default. Now if a value of 0.7 for a previously blank area in 1998 is found in 2014, it raises the 1998 anomaly.

      • So Hadcrut is violating a fundamental requirement for the weighting scheme, that if (in a test scenario) a new station is added that has identical data to all its closest neighbours, then the output should be unchanged.

      • I agree. However GISS has been criticized for doing exactly that if I am correct. Each method has its advantages. However it does look suspicious if adjustments always cause the most recent numbers to rise. Perhaps we should just use satellite data where you will not find 16 year old data to average in.

      • “So Hadcrut is violating a fundamental requirement for the weighting scheme”
        That is a result of the treatment of missing data (omitting empty cells). As Werner says, they are effectively assigned the hemisphere average value. If they were assigned a local average value, then they would behave as you say they should when replaced by actual data. In effect, that local treatment is the Cowtan and Way kriging correction to HADCRUT.
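Werner’s and Nick’s point here can be verified numerically: averaging only the boxes that report data is mathematically identical to filling every blank box with that same average. A toy Python check with made-up anomalies (not actual HadCRUT code; the values are chosen to be exact in binary):

```python
cells = [0.25, 0.25, 0.25, 0.75, None, None]    # two blank boxes

known = [c for c in cells if c is not None]
mean_known = sum(known) / len(known)            # omit blank boxes

infilled = [mean_known if c is None else c for c in cells]
mean_infilled = sum(infilled) / len(infilled)   # infill with the mean

print(mean_known, mean_infilled)  # 0.375 0.375 - identical by construction
```

So replacing a blank box with real local data moves the average exactly as if the box had previously held the overall mean.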

  7. “it appears they found more warming during the last 18 years in the Northern Hemisphere; … whereas in the Southern Hemisphere they found warming that had been missing since World War I …”
    What a restrained way to point out that they are a pack of liars. Congratulations.

    • But is it even true? The perturbation is tiny, but is a brief up and down. It’s not clear to me that it is warming.
      And I thought the usual accusation was of “cooling the past”.

      • “But is it even true?”
        Yes, they are a pack of liars. Just like the packs of liars that “adjust” and “homogenize” the records over on this side of the pond. Some of the data set liars even show up here to defend their lies.

      • Specifically, Cooling the early 1900’s. And by little steps do we begin the journey to massive cheating and lying to a warmer world.

  8. I have a pet suggestion for our current sensor readings. Regular calibration, i.e. random on-the-spot temperature readings away from civilization but within a grid that contains a station, as well as in grids that contain no station, would inform measurement reliability by providing more accurate error bars, as well as providing on-the-spot sensor-less grid data. Going forward, this may be the only way to deal with UHI intrusions (which have nothing to do with climate or weather).

    • Pamela, I had suggested that they should do a homogenization run for every station. That is, remove a station from the database, then do the homogenization to generate what they think that station should read, and compare the actual readings against the homogenized estimate. That will give them an idea of how large their error bars should be. If the difference is more than the reported precision in the reports, either fail the homogenization process or reduce the precision of the reports. That may mean a reported precision of +/- a degree C, which would nullify the reason for panic.
      Overall climate science and weather reporting suffer from a failure of quality management and a lack of validation testing.
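The leave-one-out check proposed here is easy to sketch. In this toy Python illustration, a simple neighbour mean stands in for the homogenization estimate, and the station names and values are invented:

```python
stations = {"A": 10.1, "B": 10.4, "C": 9.8, "D": 10.2}  # invented readings

errors = []
for name, actual in stations.items():
    neighbours = [v for k, v in stations.items() if k != name]
    predicted = sum(neighbours) / len(neighbours)  # stand-in for homogenization
    errors.append(abs(actual - predicted))

# The worst leave-one-out error is a floor for honest error bars:
print(max(errors))  # about 0.43 C for these numbers
```

If the worst prediction error dwarfs the precision a data set reports, the reported precision is not credible — which is exactly the point of the proposal.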

      • This would be the correct way to do things to determine error when there is doubt as to ‘calculated’ data points. However as they report in 0.001 of a degree in order to get their ‘record’ temperatures…

    • When I was in the service as a weather observer some 30 years ago, the TMQ-11 was given a full PM every quarter per FAA regulations. We also would do sanity checks with the old trusty sling psychrometer (i.e. thermometer) if there was some crazy reading.

  9. Makes no sense. If missing and now added data is making the world warmer, then how did that affect the previous data? The obvious answer is that they don’t know. So what we are supposed to accept is their ‘guess’ at what the temperatures were.
    This is about as accurate as the temperature records of a kitchen.

    • “If missing and now added data is making the world warmer, then how did that affect the previous data ?”
      Good question.
      Do they add the same amount of warming to the old data, or do they add a modeled amount…and claim that the higher latitude relative warming only occurred most recently…or do they ignore the whole thing and just spike it up and go “There! Unprecedented!”

  10. Here’s a couple-o-three questions that have been a-bothering me (well, actually, there are many, but I’ll stick to the subject in hand):
    1) Since these adjustments also ‘back-adjust’ past records, and the trend of the adjustments always seems to be (strangely) to ‘cool the past and warm the present’, does that mean that the reference period for the ‘anomaly’ is also adjusted (downwards), thus making the present anomaly appear higher?
    2) Has anyone plotted the ‘anomaly’ if the reference period was 60, rather than 30 years (ie a full PDO cycle)?
    3) Why on earth (and sea) do they all use different reference periods (especially considering the PDO as mentioned above, and other known cycles)? Since details matter (rather than merely straight-line trends), and all the datasets show differences in those details, how can anybody really rely on them at all, particularly down to hundredths of a degree?
    It seems to me that the data was not properly validated from the get-go. IMO they should have done a study such as Anthony’s Surface Station project FIRST. Maybe then they could validate their ‘station move/instrumentation change’ algorithms (or not). Otherwise it just looks like more ‘virtual world’ simulation to me.
    I always hated stats, now I know why!
    ChrisD in Bristol

    • I cannot do this for Hadcrut4.3 since WFT does not have it up yet. However I tried it for GISS using their period of 1951 to 1980. The average over these 30 years was -0.000222. Unless I am overlooking something, I would have thought the reference would be exactly 0.0000000. This brings up the question as to whether the reference average was exactly 0 some time in the past and has now cooled.
      (The earlier reference period of course makes the present GISS anomalies higher than the Hadcrut4 anomalies.)
      As for a 60 year period, it would depend on which end the extra 30 years is attached to.
      As for the different reference periods, the standard would be the latest 30-year period ending in a year divisible by 10, but anomalies would then look lower.

      • Many thanks . . . A little odd, but not toooo incriminating on the face of it. However, I can’t help harbouring the suspicion that (thinking as an ex-programmer/analyst) the anomaly calculation is the final step in a series of modules, and uses the output of the earlier (adjustment) algorithms as its input. The question is, is the reference value a ‘constant’ that has not changed since it was first defined, or a ‘variable’, generated by the earlier modules on each run. A small change, maybe, but rinse and repeat on a monthly basis, and presto – everybody’s gonna die.
        Yup, I ought to investigate this mesel’, but as per usual it’s a matter of time & energy, so I wondered if anyone had investigated this before. It’s all part of this ‘warmest xxx ever’ meme. I know it’s a con, but it’s hard to pin it down without resorting to “they’re cheating”, which isn’t terribly persuasive (even if true).
        Is it also possible that the more ‘rural’ stations, or those in far-distant areas report late, are ‘homogenised’ upwards using the (UHI-tainted) urban/airport sites, and then later re-adjusted back when the actual report comes in (ie after announcements are made)? This would also make it constantly warmer come announcement time . . .
        Re: reference periods, hmmm . . . still a bit confused – how do they define ‘0’ for each dataset (do they use 20th Century average, or is it somewhat arbitrary)?
        To think I used to believe (back in the years when I ‘believed’) that the temperatures that they announced were, like, you know, thermometer readings or something. The more I learn about the data we have (and don’t have) the more shonky the whole thing looks to me (it really is worse than I thought!)
        Shonky (UK slang) : Ramshackle. Held together with gaffa tape and glue, looking like it might fall to pieces at any moment. Made by Heath-Robinson with bad hangovers . . .
        (OK, I made that up)

      • I just checked RSS and for its base period of 1979 to 1998, the average is -4.16667e-06, which is about as close to 0 as you can get. I understand that in 2014 the base period should be 1981 to 2010 for all data sets, but few seem to pay attention to this. However, GISS changes the past constantly. As for the “warmest ever”, I found GISS extremely odd for the month of September. Compare the January-to-August values from last month with this month’s values. For last month’s, see the table in http://wattsupwiththat.com/2014/10/05/is-wti-dead-and-hadcrut-adjusts-up-again-now-includes-august-data-except-for-hadcrut4-2-and-hadsst3/
        Every single value was lower this month than last month!

      • “Unless I am overlooking something, I would have thought the reference would be exactly 0.0000000.”
        The anomaly calculation is made at the stage a grid-cell average is performed. There is some tolerance for missing years. So when it is all put together, it won’t be exactly zero.

      • “But did you mean:”
        Yes, that’s better. Months or years for a given station. It is (IIRC) actually years, because each month is averaged individually over the years.

  11. The worry is if the temperature has been adjusted to show a warming trend then if there is a cold period about to happen it has a degree or 2 head start. I suppose the giveaway will be when we’re wearing parkas in the middle of the warmest summer on record

  12. I read Steven Mosher’s comment on the original thread and re-reading it here does not make me feel any more comfortable with his underlying triumphalism. Put it this way. This is what he actually said:

    Nobody had an issue when I argued that missing data could bias the record warm.
    Now the data comes in.. Opps, the old record was biased cold.
    So people wanted more data now they got it. read it and weep.

    [My bold]
    This is what it comes across to me as: Assume the world is in line for an asteroid strike, but many scientists and mathematicians have gone over the numbers and declared that the numbers are wrong and the asteroid will miss us. However, one group of mathematicians led by someone called Mosher has found some new numbers and he says: “You sceptics are so wrong. I told you so! Our new data means there’s no escape, the asteroid is gonna fry you! Hooray! It’s so good to be right!!”
    Such a lovely man. So magnanimous. Not.

    • Harry Passfield, exactly so.
      Read it and weep
      This phrase does not inspire confidence in the results. Still no adjustment for the UHI effect. Too much of a bother?
      I will put my faith in the satellite data, thank you. As far as the pre-satellite era is concerned, the records have been massaged beyond credibility.
      Triumphalism is a good word

  13. I thought that the whole point of using anomalies instead of actual temperature readings was that it kept station moves, equipment updates, etc., from influencing long-term readings. If this is the case, how do they justify these additional adjustments?
    Also, have adjustments to the data always been made, or was this something that started only with the recent generation of activist climate scientists? In other words, were the gatekeepers of the country’s temperature records making adjustments back in the ’20s, ’30s, ’40s?

  14. We have over 40 “explanations” for the “missing heat”, i.e. “the pause”. My memory may fail me, but most of these adjustments to surface temperature records have occurred since “the pause” became a nuisance to the Alarmists. I see it as a disguised attempt to create real “man-made” global warming where little or none exists in nature.

  15. My caps:
    “Here, we are making the ASSUMPTION that land near-surface air temperature anomalies measured in grid-boxes that are predominantly sea covered are more representative of near-surface air temperature anomalies over the surrounding sea than sea-surface temperature anomalies. ”
    Another “assumption”, much like their computer programs. Pretty much says it all.

  16. My Prediction: 2014 will be the warmest year on record for HADCRUT4.
    Because there is a climate conference in Paris in 2015.
    My Hypothesis: The magnitude of adjustments to HADCRUT4 is inversely proportional to the length of time until the next climate conference (with an amplifying factor related to the possibility of binding legislation being passed).
    Just a hypothesis – I haven’t checked the adjustments prior to Copenhagen – I can’t even remember the last peak year.

    • Hadcrut3 still has 1998 as the hottest year. But that was discontinued in May. Hadcrut4 solved that embarrassment. Now, 2010, 2005, and possibly 2014 will be warmer than 1998.

    • Don’t forget intensity of meeting schedule as a function of how many days until the projected NH sea ice minimum date.
      It would be interesting to see the new Senate hold a hearing as soon as they were sworn in and call Hansen, and open the windows and turn off the heater in a reversal of what was done earlier.

  17. I have a question. It seems to me, based on my observations, that average temperatures and anomalies from land, especially arid areas, are going to be different than temperatures from adjacent Ocean areas for a couple of reasons.
    The land based temperature will get much hotter during the day and colder during the night than the adjacent Ocean surface next to it. The average temperature is going to be different for the two areas and the anomaly is going to be different for the two areas. The physics (heat flux) of warming and cooling for the two areas is very dissimilar. I can make a strong case that increased downwelling IR radiation over the ocean cools the ocean and increased downwelling IR radiation over land warms the land.
    Wouldn’t it make more sense to analyze the differences between the two data sets than to homogenize them together and erase the signal?
    I think the question is, an assumption has been made that the temperature anomaly will be universal because the CO2 coverage is universal. Shouldn’t that be tested before weighting, homogenizing, infilling, adjusting and averaging all the data together? It seems backwards to me.

  18. I really hate to totally diss something that folks put a lot of effort into, but the fact remains, There Is No Global Temperature.
    I know I keep harping on and on about this, but it really is an important concept. The graphs with the single lines representing the entire globe are simply fantasy. Not representative of anything meaningful.
    From the paper I just linked to:

    Physical, mathematical and observational grounds are employed to show that there
    is no physically meaningful global temperature for the Earth in the context of the issue
    of global warming. While it is always possible to construct statistics for any given set of
    local temperature data, an infinite range of such statistics is mathematically permissible
    if physical principles provide no explicit basis for choosing among them. Distinct and
    equally valid statistical rules can and do show opposite trends when applied to the
    results of computations from physical models and real data in the atmosphere. A given
    temperature field can be interpreted as both “warming” and “cooling” simultaneously,
    making the concept of warming in the context of the issue of global warming physically ill-posed.

    And from the conclusion (my emphasis):

    There is no global temperature. The reasons lie in the properties of the equation of state
    governing local thermodynamic equilibrium, and the implications cannot be avoided by substituting statistics for physics. Since temperature is an intensive variable, the total temperature is meaningless in terms of the system being measured, and hence any one simple average has no necessary meaning. Neither does temperature have a constant proportional relationship with energy or other extensive thermodynamic properties.
    Averages of the Earth’s temperature field are thus devoid of a physical context which
    would indicate how they are to be interpreted, or what meaning can be attached to changes
    in their levels, up or down.
    Statistics cannot stand in as a replacement for the missing physics because data alone are context-free. Assuming a context only leads to paradoxes such as
    simultaneous warming and cooling in the same system based on arbitrary choice in some
    free parameter. Considering even a restrictive class of admissible coordinate transformations yields families of averaging rules that likewise generate opposite trends in the same data,
    and by implication indicating contradictory rankings of years in terms of warmth.
    The physics provides no guidance as to which interpretation of the data is warranted.
    Since arbitrary indexes are being used to measure a physically non-existent quantity, it is
    not surprising that different formulae yield different results with no apparent way to select
    among them.
    The purpose of this paper was to explain the fundamental meaninglessness of so-called
    global temperature data. The problem can be (and has been) happily ignored in the name of
    the empirical study of climate
    . But nature is not obliged to respect our statistical conventions and conceptual shortcuts. Debates over the levels and trends in so-called global temperatures will continue interminably, as will disputes over the significance of these things for the human experience of climate, until some physical basis is established for the meaningful measurement of climate variables, if indeed that is even possible.
    It may happen that one particular average will one day prove to stand out with some
    special physical significance. However, that is not so today. The burden rests with those
    who calculate these statistics to prove their logic and value in terms of the governing dynamical equations, let alone the wider, less technical, contexts in which they are commonly used.

    Essentially all posts like the OP are moot.

    • Jeff Alberts
      November 5, 2014 at 7:29 am
      “the fact remains, There Is No Global Temperature.”
      You could say the same thing about a garden, etc. There will be different temperatures at different precise locations even at the same time of day. You can still obtain an estimated ‘average’ temperature by taking the mean of readings from the same locations and at the same times daily.
      This won’t be the *true* average temperature of the garden; but as long as you maintain the location of the thermometers, read them at the same time daily, adjust for any obvious influences that arise or move them to nearby locations if necessary, you should still be able to detect changes in trends over time.

      • The question really is: is this average of T readings any indication of climatic evolution, and what is its exact relation to it?

      • TomRude
        November 5, 2014 at 8:46 am
        “Is this reading T average any indication of climatic evolution and what is the exact relation with it?”
        If you run the record for long enough you should be able to determine whether there has been change over time or not using the normal statistical methods that are applied to any time series.
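At their simplest, the “normal statistical methods” mentioned here amount to fitting an ordinary least-squares trend to the series; a minimal sketch with made-up annual garden means:

```python
def ols_slope(y):
    """Ordinary least-squares slope of y against 0..n-1 (units per step)."""
    n = len(y)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(y) / n
    num = sum((x - mx) * (yi - my) for x, yi in zip(xs, y))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical annual garden means (deg C): a flat series plus a small drift
readings = [15.2, 15.0, 15.3, 15.1, 15.4, 15.3, 15.5, 15.4]
print(f"trend = {ols_slope(readings):.3f} C/yr")
```

Whether that fitted slope means anything physically is, of course, the point being argued in this sub-thread.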

      • David, “you should still be able to detect changes in trends over time.”
        Of course, but you are missing the point. The changes in trends over time are meaningless. Modern physics is teaching us that history is the sum of all paths, not necessarily the one we choose.

      • Genghis
        November 5, 2014 at 9:05 am
        “The changes in trends over time are meaningless.”
        Makes you wonder why so much effort is put into identifying them in that case.

    • This is perfectly correct.
      However, the climate ‘scientists’ are convinced that CO2 traps temperature and so that is what they are going to measure. The complete and continual imprecision of their language (hot, hottest year, etc.) should show that these people are either ignorant or playing to an ignorant audience such as politicians.

      Werner Brozek
      November 5, 2014 at 9:00 am
      You (Jeff) are correct. However until something better comes along, we have to work with what we have.

      Something better exists already. We have the recordings of humidity from all these stations so the calculation of atmospheric enthalpy would allow the generation of atmospheric heat content in units of kilojoules per kilogram. The correct metric.

      • Werner Brozek
        November 5, 2014 at 9:54 am
        For how long have we had this and is there a data set that shows this?

        For as long as there have been Stevenson screens there have been wet- and dry-bulb thermometers that provide humidity. From that the enthalpy can be calculated.
        Ideally, with an hourly automated report from a modern station, it would be possible to see the actual heat in the atmosphere with its diurnal variation, rather than an intensive variable that tells you nothing about heat. I have a little project, when I expect to have a little time shortly, to generate such information as a strawman for the great and the good to throw stones at.
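As a sketch of the calculation being proposed (textbook psychrometric constants; station pressure assumed to be 1013.25 hPa; Bolton's approximation for saturation vapour pressure), moist-air enthalpy can be computed from dry-bulb temperature and relative humidity:

```python
import math

def moist_air_enthalpy(t_c, rh, p_hpa=1013.25):
    """Approximate specific enthalpy of moist air, kJ per kg of dry air.
    t_c: dry-bulb temperature (C); rh: relative humidity (0-1)."""
    # Saturation vapour pressure (hPa), Bolton's approximation
    e_s = 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))
    e = rh * e_s                         # actual vapour pressure (hPa)
    w = 0.622 * e / (p_hpa - e)          # mixing ratio (kg water / kg dry air)
    # h = cp_dry*T + w*(latent heat of vaporisation + cp_vapour*T)
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# The same 25 C thermometer reading carries very different heat content:
print(f"dry (20% RH):   {moist_air_enthalpy(25, 0.20):.1f} kJ/kg")
print(f"humid (90% RH): {moist_air_enthalpy(25, 0.90):.1f} kJ/kg")
```

The humid reading carries roughly twice the heat of the dry one at the same temperature, which is the commenter's point about intensive versus extensive quantities.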

    • The change in global average temperature is supposed to correlate closely with a change in the average heat content (an extensive property, unlike specific heat capacity) of the atmosphere near the surface. Did you know that there can be 4°C difference between the ground and the thermometer 2m up on a frosty morning?
      We should be looking at the rate of change of T over a year at each site and find some sort of meaningful average of those, gridded and weighted but not corrected, for indication of what the climate is doing regionally and globally.
      I looked at the raw data of a station in rural Australia using daily minimum temperatures. I calculated an anomaly from the long term average by fitting a sine function to the long term monthly means and subtracted this from the daily measurements. A moving mean over 365 days still produces variations of ±2°C. The SD for those 365 days is ±4°C. Using an algorithm to adjust data by fractions of a degree without historical context according to neighbouring sites hundreds of kilometers away is just too stupid to quantify.
      And then there is my problem with error estimates. Apparently the precision will still be brilliant when averaged over thousands of measurements, even though we are talking about uncertainty due to adjustments to individual stations.
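The sine-fit anomaly procedure described above can be reproduced on synthetic data (made-up daily minima: a 10 C mean, 8 C seasonal swing, and 4 C of daily noise):

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(5 * 365)                        # five synthetic years
seasonal = 10 + 8 * np.sin(2 * np.pi * days / 365)
t_min = seasonal + rng.normal(0, 4, days.size)   # daily minima with noise

# Fit a + b*sin + c*cos with an annual period (linear least squares)
X = np.column_stack([np.ones(days.size),
                     np.sin(2 * np.pi * days / 365),
                     np.cos(2 * np.pi * days / 365)])
coef, *_ = np.linalg.lstsq(X, t_min, rcond=None)
anomaly = t_min - X @ coef

# Even a 365-day moving mean of the anomaly still wanders noticeably
moving = np.convolve(anomaly, np.ones(365) / 365, mode="valid")
spread = moving.max() - moving.min()
print(f"anomaly SD: {anomaly.std():.1f} C, "
      f"365-day moving-mean spread: {spread:.2f} C")
```

The point survives the toy setup: with ±4 C of daily scatter, year-long averages still drift by fractions of a degree, which is the scale of the adjustments under discussion.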

  19. Has anyone used some complete data set for some region to do an analysis of the homogenization process? Remove some of the original data, then homogenize the rest to see if it gets the removed one correct or is this process as poorly checked as the climate models that use this for their initial “data”?

    • Yes, that is the statistical correlation they are looking for. And to answer your real question the statistical correlation is falling as their cherry picked data sets become obsolete.
      This was the whole point of “Hide the Decline” with Mann and Briffa’s proxy reconstructions: current tree rings indicate continued falling temperatures, so they just truncated the proxy series and spliced on thermometer records.

  20. …So people wanted more data now they got it. read it and weep.

    This is funny. People wanted more data, so they just made some up for them. It is beyond reasonable belief that the errors are 100% consistently biased toward a colder past.
    Derived data, in the case of temperature stations, is meaningless; it literally has no value. It is fiction, no matter how much math is applied to it.
    Formulas, algorithms, and various methodologies are used for deriving data. You input A and B and you get C. In this case the derived data C is output from the data inputs A and B. The process does not attempt to derive a new A and B from A and B; it can only derive C.
    Knowing the debits and credits in Sammy Smith’s checking account for June, we can derive the account total at the end of June. Climate science attempts to derive the debits and credits for September from the debits and credits in June, and then claims to know the ending September account total. Filling in data in this way is just not acceptable; it’s like the snake eating its tail, and it leads nowhere except oblivion.
    Just admit to using convoluted algorithms to, in the end, guess the data, and stop the pretense of it being “station data”. Missing data is missing data; it cannot be derived or “filled in”. Read that and weep.

  21. Even if correct this shows that the response to the warming is even less than the climate consensus claims. It is one thing to “find” some extra thermometers. It is another to “find” extra storms, droughts, floods, slr. Not that our climate obsessed friends don’t try hard.

  22. Not sure what base period is used to make the final chart, but when all sets are based on the UAH 1981-2010 period, the average temperatures for 2014 are almost identical for HadCRUT4, GISS, NCDC and UAH: 0.27, 0.26, 0.26 and 0.26C respectively (UAH includes the October update; 0.25 up to September).
    The outlier in 2014 is RSS, which is 0.15 on the same base period to Sep.

    • The satellite data are not even close. UAH, version 5.5, averages 0.196 to the end of September. The record is 0.419. But Hadcrut4.3 averages 0.560 which is above the 2010 record of 0.555.

      • Werner Brozek
        November 5, 2014 at 9:16 am
        “The satellite data are not even close.”
        The latest UAH version is v5.6, which has a 1981-2010 anomaly base, averages 0.26C from Jan-Oct.
        Putting HadCRUT4.3 to the same base period: the HadCRUT4.3 1981-2010 average anomaly was 0.29C. Deducting this from 0.560 (HadCRUT4 2014 to date) gives 0.27C.
        I would call 0.26 pretty close to 0.27.
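The base-period conversion used in this comparison is just a shift: subtract each series' own mean over the shared reference window. A sketch with hypothetical numbers:

```python
def rebaseline(anoms, base_start, base_len):
    """Shift anomalies so their mean over the chosen base window
    (indices base_start .. base_start+base_len-1) becomes zero."""
    window = anoms[base_start:base_start + base_len]
    base_mean = sum(window) / len(window)
    return [a - base_mean for a in anoms]

# Hypothetical annual anomalies on two different native baselines
set_a = [0.10, 0.20, 0.30, 0.40, 0.56]    # e.g. a 1961-1990 style base
set_b = [-0.15, -0.05, 0.05, 0.15, 0.27]  # e.g. a 1981-2010 style base

# Re-express both relative to the mean of their first four values
print([round(x, 2) for x in rebaseline(set_a, 0, 4)])
print([round(x, 2) for x in rebaseline(set_b, 0, 4)])
```

On a common base the final values (0.31 vs 0.27 in this toy example) become directly comparable, even though the raw numbers (0.56 vs 0.27) looked far apart.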

      • I apologize. You are correct. The average anomaly is oddly virtually the same but the rankings are totally different with a huge difference between the record and the present average. The following sort of illustrates why. Unfortunately WFT only shows UAH version 5.5 and Hadcrut4.2. The latter was discontinued in July so the latest spikes are not shown.

      • Hello Werner,
        Thanks for digging out the explanations for the adjustments to Hadcrut 4. I wondered if your correspondence with them included any indication of whether there will be more adjustments to come for the historical data, or have they finished that exercise. I can understand that the most recent few months might need the odd correction.
        As others have pointed out UAH is now using version 5.6, which has a completely different set of anomalies from the V 5.5 you have used here. The negative slope on version 5.6 goes back only as far as October 2008, now that this October’s anomaly has been included, and 2014 is heading for 3rd warmest behind 1998 and 2010.

      • Hello Richard, It was Tim Osborn who gave us the information in the last post, which was referenced here. And Just The Facts was responsible for most of the introduction today. As for new updates, they did an update last year, and another one now, so I almost expect another one next year, with higher anomalies over the last 16 years.
        As for the two versions of UAH, they could come in third, but there is no way either one will come first or second.

  23. Anybody got a comment on the use of temperature, etc. records from the world’s Merchant Marine, Naval, Civil and Military Aviation logs?

  24. Here’s hoping that the Senate now joins the House in voting to defund the IPCC, & that both houses add GISS & NCAR to the hit list.

  25. It looks like Tim’s experience matches mine.
    As a matter of fact, I’ve found that when you add more data, the warming goes up.
    Note, there are large numbers of stations in places that haven’t been added.
    These could cool the record.
    However, the changes will be slight and 95% of the time within the error bands.

    • I predict that if “you all” found record sets that significantly cool the trend (except in the past) then those will be conveniently ignored or statistically manipulated to “your all’s” benefit. The reasons are psychological and grant/paycheck protection.

    • “…there are large numbers of stations in places that haven’t been added.”
      Steve, no one believes this station shell game.
      We really just found data from previously unknown stations? That’s unbelievable! (literally)
      Where have these stations been hiding?
      If we’ve had it all along, why wasn’t this data added before?
      Why not add back the data from the huge number of cooler stations that have been deleted?
      I repeat my earlier statement: ‘Finding’ lost climate data is the same as ‘finding’ lost ballots.
      I’m extremely disappointed in you for buying into this fraud.
      If you can truly defend this, you need to write a guest post here at WUWT explaining how this works.

  26. This may have something to do with it. From the article:
    “Land-based temperatures averaged 0.04°C below sea temperatures for the period 1950 to 1987 but after 1997 averaged 0.41°C above sea temperatures.”
    And usually, it is missing land data that is found.

    • Werner,
      Thanks for your work in this area and for this post. I have to believe that these data sets are polluted with a sort of confirmation bias, in the sense that finding what you look for is the rule; no one looks for cooler data.

  27. Missing data found at the bottom of the climate models’ bucket: classic GIGO.
    Meanwhile, let us ask a simple question: would Osborn’s career be worse, better, or unchanged if AGW is BS? That goes toward the motivation to find the ‘right sort’ of missing data, at a time when temperatures have simply failed to match the claims of these same models.
    As for the Chinese station data, I thought CRU’s Phil told us ‘the dog had eaten it’. Has that dog passed it back out now?

  28. RSS for October is 0.272. As a result, the 10-month average is 0.254. This would keep it in 7th place after 10 months. As for the period of no slope, I am not 100% sure, but I believe it will increase by one month to 18 years and two months. If someone can confirm that, please let us know.

    • I just found out that the time stays at 18 years and one month by the smallest of margins. From September 1996, the slope is an extremely small 1.72E-08 Celsius degrees per year. So it is negative from October 1996.

      • Yes Werner,
        I agree that it now goes back to October 1996, but our calculations disagree after the 4th decimal! Probably not a noticeable difference to all but the most sensitive life forms.

      • I got the above number from a private source. However I see that WFT gives a slope = 1.66795e-07 per year from September. Either way, it is positive but totally insignificant. RSS went up from 0.206 in September to 0.272 in October. If we assume that the rise was uniform over 31 days, then it would be 0.270 after October 30. Are you able to figure out if an anomaly of 0.270 would have made the slope negative? Since if that is the case, then the time for no positive slope could be 18 years and 2 months if October had 30 days instead of 31 days.
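The "no positive slope since month X" figure discussed in this sub-thread can be found by computing the least-squares slope from each candidate start month to the present and taking the earliest start for which it is not positive; a sketch on a synthetic series (steady warming, then a slow decline):

```python
def slope(y):
    """Ordinary least-squares slope of y against 0..n-1."""
    n = len(y)
    mx = (n - 1) / 2
    my = sum(y) / n
    num = sum((i - mx) * (v - my) for i, v in enumerate(y))
    den = sum((i - mx) ** 2 for i in range(n))
    return num / den

def pause_start(anoms, min_len=24):
    """Earliest index from which the trend to the end is <= 0."""
    for start in range(len(anoms) - min_len):
        if slope(anoms[start:]) <= 0:
            return start
    return None

# Synthetic monthly anomalies: 120 months of warming, then a slow decline
series = [0.010 * m for m in range(120)] + \
         [1.19 - 0.001 * m for m in range(120)]
print("trend <= 0 from month index:", pause_start(series))
```

As the thread shows, a single new month can flip the sign of a slope this small, which is why the quoted pause length moves around by a month at a time.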

  29. Recently I got severely rebuked over a paper in which I discussed some aspects of solar activity, the rate of the Earth’s rotation (LOD) and temperatures.
    I normally only look at the Northern Hemisphere, since the South is mostly ocean and there is high uncertainty even for the most recent decades.
    Temperature spectral components determine composition of the natural variability.
    In this illustration,
    the top graph shows the spectra of both variables (LOD & HADCRUT4) since 1850, and
    the lower graph shows what happens when the HADCRUT4 frequency is increased by 17.5% (denoted HADCRUT4’’).
    What does this tell us?
    The largest contributor to the LOD’s decadal variability is considered to be change in the angular momentum of the rotation of the Earth’s liquid core, while for decadal temperature variability the oceans play a critical role.
    Here we could speculate that the LOD and HADCRUT4 spectral variability is one and the same, but due to different rates of propagation through the liquid core (faster) and the oceans (slower), the two spectra are different but directly related.
    Does this matter? At first glance probably not, but researchers have to be willing to probe into data beyond the ‘first glance’.
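For readers wanting to reproduce this kind of spectral comparison, a minimal sketch (synthetic series only; a single ~21-year component, with the second series' frequency raised by 17.5% as in the comment):

```python
import numpy as np

def amplitude_spectrum(x, dt=1.0):
    """One-sided amplitude spectrum of a demeaned series."""
    x = np.asarray(x, float)
    x = x - x.mean()
    amp = np.abs(np.fft.rfft(x)) / x.size
    freqs = np.fft.rfftfreq(x.size, dt)
    return freqs, amp

# Synthetic annual series covering 1850-2014 (165 years)
t = np.arange(165.0)
f0 = 1 / 21.0                                  # a ~21-year oscillation
a = np.sin(2 * np.pi * f0 * t)                 # stands in for LOD
b = np.sin(2 * np.pi * f0 * 1.175 * t)         # frequency raised 17.5%

fa, sa = amplitude_spectrum(a)
fb, sb = amplitude_spectrum(b)
print(f"peak period a: {1 / fa[sa.argmax()]:.1f} yr, "
      f"peak period b: {1 / fb[sb.argmax()]:.1f} yr")
```

With only 165 annual samples the spectral peaks land on coarse frequency bins, which is one reason such comparisons need caution.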

  30. For what it’s worth, the public could give a RA about Global Warming and Climate Change in relation to other priorities like the economy, especially when things like very cold snaps happen in mid-November….
    View it and weep.
    Ouch, that looks like a Polar Vortex excursion deep into the lower 48…..

  31. So, have I got this right (keeping it simple for my simple mind) . . . .
    Most of the land area of the planet is very sparsely measured, even now, and most of the measurements we do have are/have been subject to siting problems/reporting problems/station moves/TOBS etc, and have an accuracy of +/- 0.5C if we’re lucky . . .
    Most land-based stations are in areas that have been significantly urbanised over the past 60 years (towns/cities/airports etc). No review of station quality was made (until WUWT SSP), and no adjustments made for UHI.
    Until the 1950s, sea surface temps were taken by hanging buckets over the sides of ships and sticking thermometers in them. After the ’50s we had a (very sparse) network of buoys, and since 2003 we’ve had ARGO, measuring down to 2000m (3,000 buoys to cover 70% of the earth’s surface area for 11 years is still pretty sparse).
    Most pre-2003 measurements are, therefore, on shipping lanes or the fixed buoy network (ie near the coast?), leaving most of the (really quite big) oceans unmeasured.
    So they split the earth into 1200 km ‘squares’, and average temps over each square that actually has temp. records, the majority (UHI-affected) readings causing (upward) adjustment of minority (rural) sites, even if they are nearly 1200km away.
    1200km squares that have no record are either assigned a value based on readings from adjacent squares (GISS), or the global mean (HADCRU). These ‘infilled’ (made up) values may, in turn, be used to ‘infill’ the next 1200km square if it, too, has no records available, and so on.
    They then ‘dropped’ 5/6 of the reporting stations (were the criteria for this published/checked?), just when you would have thought that we needed the data most, meaning that substantially more area is now covered by computer-generated data.
    Some discontinued stations (‘zombies’) have somehow kept reporting (using ‘infilled’ data?)
    All data is ‘reprocessed’ each month (daily for NCDC?) using a suite of computer algorithms to make adjustments automatically. The cumulative effect of this has been to cause past temperatures to drop, and more recent ones to rise.
    Additionally, successive versions of each land dataset have progressively caused past temperatures to drop, and more recent ones rise still further.
    Now, after 25 years and billions-o-bucks, they suddenly ‘find’ new (old) data, and use another (the same?) set of computer algorithms to create a dataset for China based on 18 stations, and using data from ‘adjacent countries’.
    There are 4 main land/sea datasets created using these methods, using different reference periods, giving different (though similar results).
    There are 2 satellite datasets, that don’t measure ground temps (computing them from TOA readings), and which don’t agree with each other (although they are, mercifully, free of the data mangling algorithms used for the land/sea sets).
    Then they tell us that they know the global mean temperatures going back to the 19th century with no mention of any margin of error, and gleefully announce the one closest to their expectation (usually NCDC/GISS).
    But they don’t tell you that ‘temperature’ isn’t the same as ‘heat’, so what they’re measuring isn’t necessarily meaningful anyhow (unless humidity is taken into account, which it isn’t).
    I know I’m a layman, so correct me where I’m wrong, but isn’t this a bit, you know, nuts?
    Like I said earlier – the whole thing seems completely shonky to me.
    Shonky (UK slang) : Ramshackle. Held together with gaffa tape and glue, looking like it might fall to pieces at any moment. Made by Heath-Robinson with bad hangovers . . .
    (OK, I made that up – the definition, not the word)

    • You’ve described the sorry industry of index manufacture pretty accurately. All that needs to be added is a dash of academic hubris in selling it to the politicians and the public.

    • The cumulative effect of this has been to cause past temperatures to drop, and more recent ones to rise.

      This isn’t correct. The largest adjustment to the temperature record actually caused past temperatures to increase, i.e. the Bucket Model
      was used to increase the pre-1945 temperature record drastically:
      [caption id="" align="alignnone" width="850"] Bob Tisdale – bobtisdale.wordpress.com – Click the pic to view at source[/caption]
      This adjustment was necessary because the rate of warming prior to 1945, i.e. before the influence of anthropogenic CO2 was potentially consequential, exceeded the warming after 1945, as was demonstrated here:
      Even with the highly arbitrary Bucket Model adjustments applied, Phil Jones noted in a 2010 interview that:

      “the two periods 1910-40 and 1975-1998 the warming rates are not statistically significantly different”

      Point being, if you eliminate the upward adjustments of the past, the natural warming that occurred during the first half of the 20th century exceeds the supposedly anthropogenically influenced warming that occurred during the second half of the 20th century.

  32. In Animal Farm, after endless adjustments to figures, they ‘just wanted more food and less statistics’.
    Since we have more food with more CO2 and more warmth, we got the first and more important part, which is a damn sight better than the animals of Animal Farm got.

  33. To compute the “global mean temperature” (which, of course, it isn’t) we take thousands of stations with millions of data points, many of them missing, and torture them statistically for years and years, and every year the early data gets cooler and the more recent data warmer.
    To compute CO2 we take one direct measurement from one location and follow it.
    Go figure.

  34. Mr. Layman here.
    I don’t mean this to sound as cynical as it may come off, but isn’t Man’s ability to take the Globe’s temperature not just in its infancy but still in the womb?
    To me it seems like trying to get a regional value from an old or even a new reading is like putting a drop of water on a period printed with an old inkjet printer. You get a big blob but lose the period…and maybe blur the sentence. Sprinkle enough drops and the page can be claimed to say anything you want.
    The efforts to read it may be sincere but how certain can anyone be?

  35. HadSST3 update:
    HadSST3 came in at 0.529 in October. This is a drop from 0.574 in September. However, it is the fifth consecutive month that it was above the previous pre-2014 high of 0.526 in July 1998. The average rose to 0.482, which remains way above the previous record of 0.416 in 1998, so HadSST3 is guaranteed to set a new record in 2014, and there is no flat period at all.

    • CRU and GISS can wait until mid-November to release those HadSST3 results in a press release. By mid-November the lower 48 US states will be in an unusual polar-vortex early deep freeze, so the announcement will get laughed off by the public.

  36. More on HADCRUT4 and LOD (see also my post above )
    Here HADCRUT4 Northern hemisphere is linearly and polynomially (square law) de-trended:
    From the 1890s to the mid-1980s the two de-trended results are virtually identical and concurrent with the negative LOD (Earth’s rotation rate of change) variability, but a divergence appears in the late 1980s and steadily increases.
    Summer CET relationship to the sunspot number shows similar divergence appearing around 1990
    (the LOD shows that the temperature fall in 1960s/70s was caused by natural variability and not by aerosols, as it was implied previously)
    Since the LOD leads the temperature by about 3 years, it can be concluded that the decadal natural variability of temperature is strongly associated, directly or indirectly, with whatever forces the LOD.
    Here I have demonstrated that the changes in the LOD are concurrent with the changes in the solar magnetic polarity.
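    The linear and square-law de-trending step referred to above is a standard least-squares residual calculation. A minimal sketch follows; the series here is synthetic, not actual HADCRUT4 NH data, so only the method is being illustrated.

    ```python
    import numpy as np

    # Synthetic stand-in for a hemispheric anomaly series:
    # a slow trend plus a ~60-year oscillation.
    years = np.arange(1890, 2015)
    temps = 0.005 * (years - 1890) + 0.1 * np.sin(2 * np.pi * (years - 1890) / 60)

    def detrend(y, x, degree):
        """Remove a least-squares polynomial fit of the given degree,
        returning the residuals."""
        coeffs = np.polyfit(x, y, degree)
        return y - np.polyval(coeffs, x)

    linear_resid = detrend(temps, years, 1)     # linear de-trend
    quadratic_resid = detrend(temps, years, 2)  # square-law de-trend
    ```

    Comparing the two residual series year by year is what reveals whether the linear and polynomial de-trendings are "virtually identical" over a period or diverge from each other.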

  37. FMI added a new station in Helsinki; it shows 2 C higher temps than the older station in Kaisaniemi. They moved the Kittilä Pokka station to a warmer position. If this kind of manoeuvre happens all over the world, it is very far from science; it’s a way to do propaganda. If it won’t warm up, they’ll make warming.

    Osborn’s explanation as to why more data, particularly recent data, warms the profile is a HUGE comment. His point is that the historical record was deficient in Arctic coverage, while the newest data has more of it, i.e. a higher proportion. Since the Arctic is “warming faster”, the temperature anomaly goes up.
    What he is saying is that the Arctic “missing” data would have “warmed” the record. As time goes by more warm Arctic temp data comes in, but that does not mean that the Arctic is necessarily a lot warmer now. If UEA were to drop the proportion of Arctic data back to that of the earlier days, his observation suggests that the temperatures of the past would be warmer, not cooler, than thought.
    If infilling of “missing” data warms the record today, the suggestion is that infilling of missing OLD data could very well warm the OLD data.

    • There are several things to consider here. RSS shows that the region from 60 to 82.5 N is warming at the rate of 0.323 K/decade according to the sea ice page. If we assume a constant lapse rate, then the polar surface would warm just as fast.
      Now let us suppose that 40% of the Arctic was covered in 1998 and the same 40% was covered in 2014. If an additional 20% of area was found for 1998, and an additional 20% was used for 2014, then the added area would presumably warm at the same 0.323 K/decade, which would increase the slope for the whole earth.
      But suppose that an additional 20% was found for 1998 while an additional 50% was found for 2014; then the apparent warming for the Arctic region would be much larger than 0.323 K/decade, so the global slope would also be larger. If that is the case, and if 2014 beats 1998, I would consider that to be like comparing apples and oranges.
      Note that there is no way that RSS will beat 1998 in 2014, and that is comparing apples to apples.
      That is how I see it anyway.
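      The area-weighting effect in the scenario above can be made concrete with a toy calculation. The 0.323 K/decade Arctic trend is the figure from the comment; the rest-of-globe trend and the Arctic's share of global area are assumed round numbers, for illustration only.

      ```python
      # Toy area-weighted global trend under partial Arctic sampling.
      ARCTIC_TREND = 0.323    # K/decade for 60-82.5N (RSS figure, per the comment)
      REST_TREND = 0.10       # K/decade for the rest of the globe (assumed)
      ARCTIC_FRACTION = 0.05  # Arctic share of global area (assumed, rough)

      def global_trend(arctic_coverage):
          """Area-weighted global trend when only a fraction of the Arctic is
          sampled; unsampled Arctic area implicitly behaves like the rest of
          the globe."""
          sampled_arctic = ARCTIC_FRACTION * arctic_coverage
          return sampled_arctic * ARCTIC_TREND + (1 - sampled_arctic) * REST_TREND

      # Raising Arctic coverage from 40% to 90% raises the computed global
      # trend, even though no individual region warmed any faster.
      low = global_trend(0.40)
      high = global_trend(0.90)
      ```

      This is exactly the apples-and-oranges concern: the global slope rises purely because a fast-warming region gets a bigger weight in the later period.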
      If we could go back to the 1920s and 1930s and find lots of thermometer readings, who knows what we may find? Especially if the Arctic was much warmer then. See the 2 postings at:

  39. I guess I would ask the obvious question: If this were a new drug and we were “adjusting” the results, would the public buy it or run us out of town on a rail? I am a scientist, biochemistry and nuclear power. I read these papers and the “adjustments” and I am just stunned. What schools did they go to that ever implied that you could “adjust” data? I know you can postulate that the data was off, describe why and generate a data set and give your reasoning. That would be science. But the adjustment of data, and raw data at that, is fraud. Nothing else.

    • If adjustments are made for good reasons, that is one thing. However if people making the adjustments are known to have certain biases and if the adjustments always support those biases, that is troublesome.

Comments are closed.