Guest Post by Werner Brozek and Just The Facts:

To begin, we would like to sincerely thank Tim Osborn of the University of East Anglia (UEA), Climatic Research Unit (CRU) for his responses (1, 2) to this recent WUWT article. Unfortunately, by the time the dialogue got going, many readers had already moved on.
Tim’s responses did leave a few loose ends, so we would like to carry on from where we left off. Hopefully Tim Osborn will respond again so we can continue the dialogue. In the comments on the prior article, both Tim Osborn and Steven Mosher argued that the recent adjustments to HadCRUT4 were associated with missing data, i.e.:
Steven Mosher October 5, 2014 at 8:01 pm
Yes. the new data comes from areas that were undersampled.
We always worried that undersampling was biasing the record warm. remember?
Nobody had an issue when I argued that missing data could bias the record warm.
Now the data comes in.. Opps, the old record was biased cold.
So people wanted more data now they got it. read it and weep.
In particular the warming in the present comes from more samples at higher latitudes.
Tim Osborn (@TimOsbornClim) October 7, 2014 at 2:30 pm
“As to whether, or indeed why, our updates always strengthen the warming… overall they often have this effect and the reason is probably that the regions with most missing data have warmed more than the global average warming. This includes the Arctic region, but also many land regions. This is because, on average, land regions have warmed more than the ocean regions and so filling in gaps over the land with additional data tends to raise the global-mean warming.”
Looking at the most recent adjustments to HadCRUT4:

it appears they found more warming during the last 18 years in the Northern Hemisphere;

whereas in the Southern Hemisphere they found warming that had been missing since World War I:

Looking at the warming found over the last 18 years in the Northern Hemisphere, it appears that a significant portion of it was found in Asia;

and Tim noted in his comment that;
“the big changes in the China station database that were obtained from the (Cao et al., 2013, DOI: 10.1002/jgrd.50615) and Xu et al. (2013, doi:10.1002/jgrd.50791) studies listed in the release notes, which is clearly not a high latitude effect.”
Looking at Cao et al., it appears that Lijuan Cao, Ping Zhao, Zhongwei Yan, Phil Jones et al. found the recent warming as a result of a reconstruction;

whereby existing station data;

was adjusted to account for “relocation of meteorological station”, “instrument change” and “change points without clear reason”:

Furthermore, this recent paper by co-author Yan ZhongWei states that:
Long-term meteorological observation series are fundamental for reflecting climate changes. However, almost all meteorological stations inevitably undergo relocation or changes in observation instruments, rules, and methods, which can result in systematic biases in the observation series for corresponding periods. Homogenization is a technique for adjusting these biases in order to assess the true trends in the time series. In recent years, homogenization has shifted its focus from the adjustments to climate mean status to the adjustments to information about climate extremes or extreme weather. Using case analyses of ideal and actual climate series, here we demonstrate the basic idea of homogenization, introduce new understanding obtained from recent studies of homogenization of climate series in China, and raise issues for further studies in this field, especially with regards to climate extremes, uncertainty of the statistical adjustments, and biased physical relationships among different climate variables due to adjustments in single variable series.
In terms of Xu et al. 2013, in this recent article in the Journal of Geophysical Research Xu et al. state that:
This study first homogenizes time series of daily maximum and minimum temperatures recorded at 825 stations in China over the period from 1951 to 2010, using both metadata and the penalized maximum t test with the first-order autocorrelation being accounted for to detect change points and using the quantile-matching algorithm to adjust the data time series to diminish discontinuities. Station relocation was found to be the main cause for discontinuities, followed by station automation.
As such, the recent warming in the Asia region appears to be the result of adjusting existing station data to account for “relocation of meteorological station”, “instrument change”, “station automation” and “change points without clear reason”, rather than of “filling in gaps” for “missing data”.
A second aspect of the recent HadCRUT4 adjustments that raises questions is related to Tim Osborn’s explanation of why “our updates always strengthen the warming”, i.e.;
“the reason is probably that the regions with most missing data have warmed more than the global average warming. This includes the Arctic region, but also many land regions. This is because, on average, land regions have warmed more than the ocean regions and so filling in gaps over the land with additional data tends to raise the global-mean warming.”
Per Morice et al. (2012):
“HadCRUT4 remains the only one of the four prominent combined land and SST data sets that does not employ any form of spatial infilling and, as a result, grid-box anomalies can readily be traced back to observational records. The global near-surface temperature anomaly data set of the Goddard Institute for Space Studies (GISS) [Hansen et al., 2010], is again a blend of land and SST data sets. The land component is presented as a gridded data set in which grid-box values are a weighted average of temperature anomalies for stations lying within 1200 km of grid-box centres. The sea component is formed from a combination of the HadISST1 data set [Rayner et al., 2003] with the combined in situ and satellite SST data set of Reynolds et al. [2002].”
And per the CRU’s Frequently Asked Questions:
How are the land and marine data combined?
Both the component parts (land and marine) are separately averaged into the same 5° x 5° latitude/longitude grid boxes. The combined version (HadCRUT4) takes values from each component and weights the grid boxes according to the area, ensuring that the land component has a weight of at least 25% for any grid box containing some land data. The weighting method is described in detail in Morice et al. (2012). The previous combined versions (HadCRUT3 and HadCRUT3v) take values from each component and weight the grid boxes where both occur (coastlines and islands) according to their errors of estimate (see Brohan et al., 2006 for details).
UEA CRU and the Met Office Hadley Centre moved to this methodology with the introduction of HadCRUT4 in 2012, i.e.:
“The blending approach adopted here differs from that used in the Brohan et al. [2006] data set. Here, land and sea components are combined at a grid-box level by weighting each component by fractional areas of land and sea within each grid-box, rather than weighting in inverse proportion to error variance. This approach has been adopted to avoid over representation of sea temperatures in regions where SST measurements dominate the total number of measurements in a grid-box. The grid-box average temperature for grid-box i is formed from the grid-box average temperature anomalies for the land component, for the SST component, and the fractional area of land in the grid-box”
“Coastal grid-boxes for which the land fraction is less than 25% of the total grid-box area are assigned a land fraction weighting of 25%. Here, we are making the assumption that land near-surface air temperature anomalies measured in grid-boxes that are predominantly sea covered are more representative of near-surface air temperature anomalies over the surrounding sea than sea-surface temperature anomalies. These fractions ensure that islands with long land records are not swamped by possibly sparse SST data in open ocean areas (where the island is only a small fraction of the total grid-box area).”
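To make the weighting concrete, here is a minimal sketch in Python of the grid-box blend described above. It is our illustration, not CRU’s actual code; the function name and example numbers are ours, and the 25% floor on the land weight follows the description in Morice et al. (2012) and the CRU FAQ quoted above.

```python
def blend_gridbox(land_anom, sst_anom, land_fraction):
    """Blend land and SST anomalies for one grid box.

    Sketch of the HadCRUT4-style area weighting described above: the land
    component gets a weight equal to its fractional area of the grid box,
    but never less than 25% when any land data are present.
    """
    if land_anom is None:              # no land data: SST only
        return sst_anom
    if sst_anom is None:               # no SST data: land only
        return land_anom
    w_land = max(land_fraction, 0.25)  # 25% floor for coastal/island boxes
    return w_land * land_anom + (1.0 - w_land) * sst_anom

# Example: an island covering 5% of a mostly-ocean grid box
print(blend_gridbox(land_anom=0.8, sst_anom=0.4, land_fraction=0.05))
# 0.25 * 0.8 + 0.75 * 0.4 = 0.50
```

The practical effect is that a small island’s land record can carry a quarter of the weight of a grid box that is 95% ocean.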
The more cynical might wonder if this change in methodology had anything to do with the fact that;
“Land-based temperatures averaged 0.04°C below sea temperatures for the period 1950 to 1987 but after 1997 averaged 0.41°C above sea temperatures.”
With oceans covering about 70% and land covering about 30% of the total area, it seems that there would be more missing data from the oceans than from the land, especially since few people live in the middle of the southern ocean. However, any new data found after 1998 would likely come from land areas and not remote ocean areas, so these land areas could have an inordinate influence on global temperatures.
Suppose for example that a landlocked 5 X 5 grid had thermometer readings for 25% of its land area in 1998 and that the anomaly had increased by 0.8 C from a historical baseline. Assume that we knew nothing about the other 75% of the land area in 1998. Also, assume that the ocean average temperature increased by 0.4 C during the same period.
Suppose that in 2014 readings were found for the other 75% of this same landlocked 5 X 5 grid and it was found that this remaining area increased by only 0.7 C. Then even though the 1998 anomaly for this particular landlocked 5 X 5 grid appeared cooler in 2014 than was thought to be the case in 1998, the global anomaly would still go up since 0.7 C is greater than the 0.4 C that the oceans went up by. Is this correct based upon the HadCRUT4 gridding methodology?
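Whether that is correct seems to depend on how the unsampled 75% was treated before the update. The toy calculation below is our sketch, not HadCRUT4 code; the 30/70 land/ocean split and all anomalies are simply the assumed figures from the hypothetical above, worked both ways.

```python
# Interpretation (a): the unsampled 75% of the land was previously treated as
# missing coverage (excluded from the average), so adding it in 2014 adds area
# that warmed by 0.7 C to a mean otherwise dominated by ocean at 0.4 C.
# Interpretation (b): the grid box was already counted at full weight using its
# sampled 25%, so its recomputed anomaly simply drops from 0.8 C to 0.725 C.

LAND, OCEAN = 0.3, 0.7            # rough global area proportions
ocean = 0.4                       # assumed ocean anomaly (C)
sampled, unsampled = 0.8, 0.7     # land anomalies: sampled 25%, newly found 75%

# (a) mean over covered area only: before the update, only 25% of the land counts
covered_before = 0.25 * LAND + OCEAN
before_a = (0.25 * LAND * sampled + OCEAN * ocean) / covered_before
after_a = (LAND * (0.25 * sampled + 0.75 * unsampled) + OCEAN * ocean) / (LAND + OCEAN)

# (b) grid box already at full weight, its value simply revised downwards
before_b = LAND * sampled + OCEAN * ocean
after_b = LAND * (0.25 * sampled + 0.75 * unsampled) + OCEAN * ocean

print(f"(a) coverage-limited mean: {before_a:.3f} -> {after_a:.3f}")  # rises
print(f"(b) full-weight grid box:  {before_b:.3f} -> {after_b:.3f}")  # falls slightly
```

Under interpretation (a) the global anomaly rises, as suggested above; under (b) it falls slightly. Which one matches the HadCRUT4 gridding methodology is exactly the question being put to Tim Osborn.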
A third aspect of the HadCRUT4.3 record and adjustments that raises questions can be found in the Release Notes, which cite “Australia – updates to the ‘ACORN’ climate series and corrections to remote island series”. However, as Jo Nova recently wrote;
“Ken Stewart points out that adjustments grossly exaggerate monthly and seasonal warming, and that anyone analyzing national data trends quickly gets into 2 degrees of quicksand. He asks: What was the national summer maximum in 1926? AWAP says 35.9C. Acorn says 33.5C. Which dataset is to be believed?”
As such, if there is a problem with source data such as ACORN in Australia, then this problem is reflected in the HadCRUT4.3 record. Furthermore, the problem would be compounded by the HadCRUT4 methodology of ensuring that the “land component has a weight of at least 25% for any grid box containing some land data”, as the ACORN network is heavily weighted towards coastal coverage, i.e.:

—
In the sections below, as in previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on several data sets. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2014 to date compares with 2013 and the warmest years and months on record so far. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.
Section 1
This analysis uses the latest month for which data is available on WoodForTrees.com (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go back to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is 4 x 10^-4 but it is -4 x 10^-4 from October, we give the time from October, so that no one can accuse us of being less than honest if we say the slope is flat from a certain month.
1. For GISS, the slope is flat since October 2004 or an even 10 years. (goes to September)
2. For Hadcrut4, the slope is flat since January 2005 or 9 years, 9 months. (goes to September). Note that WFT has not updated Hadcrut4 since July.
3. For Hadsst3, the slope is not flat for any period.
4. For UAH, the slope is flat since January 2005 or 9 years, 9 months. (goes to September using version 5.5)
5. For RSS, the slope is flat since September 1, 1996 or 18 years, 1 month (goes to September 30).
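For readers who want to check these “flat since” dates themselves, the following is a minimal sketch of the search described at the start of this section: fit an ordinary least-squares slope from each candidate start month through the latest month and take the earliest start from which the slope is non-positive. The function and the example series are ours; WFT’s own implementation may differ in detail.

```python
import numpy as np

def earliest_flat_start(anomalies):
    """Index of the earliest start month from which the least-squares
    slope through the end of the series is non-positive (or None).

    `anomalies`: 1-D sequence of monthly anomalies, oldest first.
    """
    y = np.asarray(anomalies, dtype=float)
    x = np.arange(len(y))
    for start in range(len(y) - 1):          # leave at least two points
        slope = np.polyfit(x[start:], y[start:], 1)[0]
        if slope <= 0:
            return start
    return None

# Example with made-up numbers (not real data): rising early values, flat later
series = [0.20, 0.30, 0.45, 0.42, 0.44, 0.41, 0.43]
print(earliest_flat_start(series))           # -> 2 for this toy series
```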
The following graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping brown line at the top indicates that CO2 has steadily increased over this period.

When two series are plotted as I have done, the left axis shows only a temperature anomaly.
The actual numbers are meaningless since all slopes are essentially zero. As well, I have offset them so they are evenly spaced. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted; however, WFT does not give this option. The upward-sloping CO2 line only shows that while CO2 has been going up over the last 18 years, the temperatures have been flat for varying periods on the various data sets.
Section 2
For this analysis, data was retrieved from Nick Stokes’ Trendviewer, available on his website at http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative, so a slope of 0 cannot be ruled out from the month indicated.
On several different data sets, there has been no statistically significant warming for between 14 and almost 22 years according to Nick’s criteria. CI stands for the confidence interval at the 95% level.
Dr. Ross McKitrick has also commented on these periods and has slightly different numbers for the three data sets that he analyzed. I will also give his times.
The details for several sets are below.
For UAH: Since May 1996: CI from -0.024 to 2.272
(Dr. McKitrick says the warming is not significant for 16 years on UAH.)
For RSS: Since November 1992: CI from -0.001 to 1.816
(Dr. McKitrick says the warming is not significant for 26 years on RSS.)
For Hadcrut4.3: Since March 1997: CI from -0.016 to 1.157
(Dr. McKitrick says the warming is not significant for 19 years on Hadcrut4.2. Hadcrut4.3 could be very slightly different.)
For Hadsst3: Since October 1994: CI from -0.015 to 1.722
For GISS: Since February 2000: CI from -0.060 to 1.326
Note that all of the above times, regardless of the source and with the exception of GISS, are longer than the 15 years which NOAA deemed necessary to “create a discrepancy with the expected present-day warming rate”.
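As a rough guide to how such confidence intervals are produced, here is a sketch of one common approach: an ordinary least-squares trend whose standard error is inflated for lag-1 autocorrelation of the residuals. This is not Nick Stokes’ actual code, and his exact criteria may differ; the synthetic example series and the per-decade scaling are our assumptions.

```python
import numpy as np

def trend_with_ci(anomalies, per=120):
    """OLS trend of a monthly series with an approximate 95% CI.

    The standard error is inflated for lag-1 autocorrelation of the residuals
    (the common AR(1)/Quenouille adjustment). `per` rescales the per-month
    slope (120 -> per decade, 1200 -> per century).
    """
    y = np.asarray(anomalies, dtype=float)
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)

    # naive standard error of the slope
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean())**2))

    # inflate for lag-1 autocorrelation of the residuals
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    se *= np.sqrt((1 + r1) / (1 - r1))

    half = 1.96 * se
    return slope * per, (slope - half) * per, (slope + half) * per

# Example with synthetic data (not any real data set):
rng = np.random.default_rng(0)
fake = 0.001 * np.arange(240) + rng.normal(0, 0.1, 240)
print(trend_with_ci(fake, per=120))   # trend and 95% CI per decade
```

If the lower bound of the interval is below zero, a zero slope cannot be ruled out, which is the sense in which the warming is “not statistically significant” above.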
Section 3
This section shows data about 2014 and other information in the form of a table. The table shows the five data sources along the top and repeats them further down so they remain visible at all times. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.
Down the column, are the following:
1. 13ra: This is the final ranking for 2013 on each data set.
2. 13a: Here I give the average anomaly for 2013.
3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and three have 1998 as the warmest year.
4. ano: This is the average of the monthly anomalies of the warmest year just above.
5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year. Note that this does not yet include records set so far in 2014 such as Hadsst3 in June, etc.
6. ano: This is the anomaly of the month just above.
7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0.
8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.
9. sy/m: This is the years and months for row 8. Depending on when the update was last done, the months may be off by one month.
10. McK: These are Dr. Ross McKitrick’s number of years for three of the data sets.
11. Jan: This is the January 2014 anomaly for that particular data set.
12. Feb: This is the February 2014 anomaly for that particular data set, etc.
20. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months. However if the data set itself gives that average, I may use their number. Sometimes the number in the third decimal place differs slightly, presumably due to all months not having the same number of days.
21. rnk: This is the rank that each particular data set would have if the anomaly above were to remain that way for the rest of the year. It may not, but think of it as an update 45 minutes into a game. Due to different base periods, the rank is more meaningful than the average anomaly.
| Source | UAH | RSS | Had4 | Sst3 | GISS |
|---|---|---|---|---|---|
| 1.13ra | 7th | 10th | 9th | 6th | 6th |
| 2.13a | 0.197 | 0.218 | 0.492 | 0.376 | 0.60 |
| 3.year | 1998 | 1998 | 2010 | 1998 | 2010 |
| 4.ano | 0.419 | 0.55 | 0.555 | 0.416 | 0.66 |
| 5.mon | Apr98 | Apr98 | Jan07 | Jul98 | Jan07 |
| 6.ano | 0.662 | 0.857 | 0.835 | 0.526 | 0.92 |
| 7.y/m | 9/9 | 18/1 | 9/9 | 0 | 10/0 |
| 8.sig | May96 | Nov92 | Mar97 | Oct94 | Feb00 |
| 9.sy/m | 18/5 | 21/11 | 17/7 | 20/0 | 14/8 |
| 10.McK | 16 | 26 | 19 | | |
| Source | UAH | RSS | Had4 | Sst3 | GISS |
| 11.Jan | 0.236 | 0.261 | 0.508 | 0.342 | 0.68 |
| 12.Feb | 0.127 | 0.161 | 0.305 | 0.314 | 0.42 |
| 13.Mar | 0.137 | 0.214 | 0.548 | 0.347 | 0.68 |
| 14.Apr | 0.184 | 0.251 | 0.658 | 0.478 | 0.71 |
| 15.May | 0.275 | 0.286 | 0.596 | 0.477 | 0.78 |
| 16.Jun | 0.279 | 0.345 | 0.619 | 0.563 | 0.61 |
| 17.Jul | 0.221 | 0.350 | 0.542 | 0.551 | 0.52 |
| 18.Aug | 0.117 | 0.193 | 0.667 | 0.644 | 0.69 |
| 19.Sep | 0.185 | 0.206 | 0.595 | 0.578 | 0.77 |
| Source | UAH | RSS | Had4 | Sst3 | GISS |
| 20.ave | 0.196 | 0.252 | 0.560 | 0.477 | 0.65 |
| 21.rnk | 8th | 7th | 1st | 1st | 3rd |
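Rows 20 and 21 can be reproduced with a few lines: average the year-to-date monthly anomalies and see where that average would fall among the annual means on record. A minimal sketch using the HadCRUT4 monthly values from the table above; since only the 2010 record (0.555) and the 2013 value (0.492) are listed here, the full set of annual means would be needed for ranks below first.

```python
def year_to_date_rank(monthly_so_far, past_annual_means):
    """Rank the current year-to-date average against past annual means.

    Returns (average, rank), where rank 1 means it would be the warmest
    year on record if the average held for the rest of the year.
    """
    avg = sum(monthly_so_far) / len(monthly_so_far)
    rank = 1 + sum(1 for a in past_annual_means if a > avg)
    return avg, rank

# HadCRUT4 monthly anomalies for 2014 from the table above:
had4_2014 = [0.508, 0.305, 0.548, 0.658, 0.596, 0.619, 0.542, 0.667, 0.595]
# Only the 2010 record and 2013 are listed above; a full list of annual
# means would be needed to distinguish ranks below 1st.
past_years = [0.555, 0.492]
print(year_to_date_rank(had4_2014, past_years))   # -> (~0.560, 1)
```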
If you wish to verify all of the latest anomalies, go to the following:
For UAH, version 5.5 was used since that is what WFT used, see: http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.5.txt
For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt
For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.3.0.0.monthly_ns_avg.txt
For Hadsst3, see: http://www.cru.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat
For GISS, see: http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
To see all points since January 2014 in the form of a graph, see the WFT graph below. Note that HadCRUT4 is the old version that has been discontinued. WFT does not show HadCRUT4.3 yet.

As you can see, all lines have been offset so they all start at the same place in January 2014. This makes it easy to compare January 2014 with the latest anomaly.
Appendix
In this part, we are summarizing data for each set separately.
RSS
The slope is flat since September 1, 1996 or 18 years, 1 month. (goes to September 30)
For RSS: There is no statistically significant warming since November 1992: CI from -0.001 to 1.816.
The RSS average anomaly so far for 2014 is 0.252. This would rank it as 7th place if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2013 was 0.218 and it is ranked 10th.
UAH
The slope is flat since January 2005 or 9 years, 9 months. (goes to September using version 5.5 according to WFT)
For UAH: There is no statistically significant warming since May 1996: CI from -0.024 to 2.272. (This is using version 5.6 according to Nick’s program.)
The UAH average anomaly so far for 2014 is 0.196. This would rank it as 8th place if it stayed this way. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.662. The anomaly in 2013 was 0.197 and it is ranked 7th.
Hadcrut4.3
The slope is flat since January 2005 or 9 years, 9 months. (goes to September)
For Hadcrut4: There is no statistically significant warming since March 1997: CI from -0.016 to 1.157.
The Hadcrut4 average anomaly so far for 2014 is 0.560. This would rank it as 1st place if it stayed this way. 2010 was the warmest at 0.555. The highest ever monthly anomaly was in January of 2007 when it reached 0.835. The anomaly in 2013 was 0.492 and it is ranked 9th.
HadSST3
For HadSST3, the slope is not flat for any period. For HadSST3: There is no statistically significant warming since October 1994: CI from -0.015 to 1.722.
The HadSST3 average anomaly so far for 2014 is 0.477. This would rank it as 1st place if it stayed this way. 1998 was the warmest at 0.416 prior to 2014. The highest ever monthly anomaly was in July of 1998 when it reached 0.526. This is also prior to 2014. The anomaly in 2013 was 0.376 and it is ranked 6th.
GISS
The slope is flat since October 2004 or an even 10 years. (goes to September)
For GISS: There is no statistically significant warming since February 2000: CI from -0.060 to 1.326.
The GISS average anomaly so far for 2014 is 0.65. This would rank it as third place if it stayed this way. 2010 was the warmest at 0.66, so the 2014 anomaly is only 0.01 colder at this point. The highest ever monthly anomaly was in January of 2007 when it reached 0.92. The anomaly in 2013 was 0.60 and it is ranked 6th.
Conclusion
At present, the average HadCRUT4.3 anomaly for the first 9 months of 2014 is 0.560, which is above the previous record of 0.555 set in 2010. With a September anomaly of 0.595, a new record for 2014 is a distinct possibility. However, can we trust any HadCRUT4 record, or might it simply be an artifact of questionable adjustments to the temperature record?
Anybody got a comment on the use of temperature, etc. records from the world’s Merchant Marine, Naval, Civil and Military Aviation logs?
Here’s hoping that the Senate now joins the House in voting to defund the IPCC, & that both houses add GISS & NCAR to the hit list.
It looks like Tim’s experience matches mine.
As a matter of fact, I’ve found that when you add more data the warming goes up.
Note, there are large numbers of stations in places that haven’t been added.
These could cool the record.
However, the changes will be slight and 95% of the time within the error bands
I predict that if “you all” found record sets that significantly cool the trend (except in the past) then those will be conveniently ignored or statistically manipulated to “your all’s” benefit. The reasons are psychological and grant/paycheck protection.
How useless are the error bands of a useless metric?
“…there are large numbers of stations in places that haven’t been added.”
Steve, no one believes this station shell game.
We really just found data from previously unknown stations? That’s unbelievable! (literally)
Where have these stations been hiding?
If we’ve had it all along, why wasn’t this data added before?
Why not add back the data from the huge number of cooler stations that have been deleted?
I repeat my earlier statement: ‘Finding’ lost climate data is the same as ‘finding’ lost ballots.
I’m extremely disappointed in you for buying into this fraud.
If you can truly defend this, you need to write a guest post here at WUWT explaining how this works.
This may have something to do with it. From the article:
“Land-based temperatures averaged 0.04°C below sea temperatures for the period 1950 to 1987 but after 1997 averaged 0.41°C above sea temperatures.”
And usually, it is missing land data that is found.
Werner,
Thanks for your work in this area and for this post. I have to believe that these data sets are polluted with a sort of confirmation bias, in the sense that finding what you look for is the rule; no one looks for cooler data.
Missing data found at the bottom of the climate models’ bucket, which are classic GIGO.
Meanwhile, let us ask ourselves a simple question: would Osborn’s career be worse, better or unchanged if AGW is BS? That goes toward the motivation to find the ‘right sort’ of missing data at a time when reasons are being sought for why temperatures have simply failed to match, in reality, the claims of these same models.
As for Chinese station data, I thought CRU’s Phil told us ‘the dog had eaten it’; has that dog passed it back out now?
Yawn. Isn’t HADCRUT considered irrelevant by now?
RSS UPDATE
RSS for October is 0.272. As a result, the 10 month average is 0.254. This would keep it in 7th place after 10 months. As for the period of no slope, I am not 100% sure, but I believe it will increase by one month to 18 years and two months. If someone can confirm that, please let us know.
I just found out that the time stays at 18 years and one month by the smallest of margins. From September 1996, the slope is an extremely small 1.72E-08 Celsius degrees per year. So it is negative from October 1996.
Yes Werner,
I agree that it now goes back to October 1996, but our calculations disagree after the 4th decimal! Probably not a noticeable difference to all but the most sensitive life forms.
I got the above number from a private source. However, I see that WFT gives a slope of 1.66795e-07 per year from September. Either way, it is positive but totally insignificant. RSS went up from 0.206 in September to 0.272 in October. If we assume that the rise was uniform over 31 days, then it would be 0.270 after October 30. Are you able to figure out whether an anomaly of 0.270 would have made the slope negative? If so, then the time for no positive slope could be 18 years and 2 months if October had 30 days instead of 31 days.
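A small sketch of how one could check this, assuming a complete monthly RSS series starting in January 1979 (so that September 1996 is index 212); the series itself is not reproduced here, and the trial value is the hypothetical 0.270 from the comment above.

```python
import numpy as np

def slope_from(anomalies, start):
    """Least-squares slope (per month) of anomalies[start:]."""
    y = np.asarray(anomalies[start:], dtype=float)
    return np.polyfit(np.arange(len(y)), y, 1)[0]

# `rss` would be the monthly RSS series from Jan 1979 onwards (not shown here);
# the index of September 1996 in that series is (1996 - 1979) * 12 + 8 = 212.
# trial = rss[:-1] + [0.270]       # replace October 2014 with the trial value
# print(slope_from(trial, 212))    # negative? then the flat period would extend
```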
Recently I got severely rebuked over a paper in which I discussed some aspects of solar activity, the rate of the Earth’s rotation (LOD) and temperatures.
I normally only look at the Northern Hemisphere, since the Southern is mostly ocean and there is a high uncertainty even for the most recent decades.
Temperature spectral components determine composition of the natural variability.
In this illustration
http://www.vukcevic.talktalk.net/LOD-HCT4.gif
the top graph shows the spectra for both variables (LOD & HADCRUT4) since 1850.
The lower graph shows what happens when the HADCRUT4 frequency is increased by 17.5% (denoted as HADCRUT4’’).
What does this tell us?
The largest contributor to the LOD’s decadal variability is considered to be the change of angular momentum in the rotation of the Earth’s inner liquid core, while for decadal temperature variability the oceans play a critical role.
Here we could speculate that the LOD and HADCRUT4 spectral variability is one and the same, but due to different rates of propagation through the liquid core (higher) and the oceans (lower) the two spectra are different but directly related.
Does this matter? At first instance probably not, but researchers have to be willing to probe into data beyond the ‘first instance’.
errata:Earth’s outer liquid core
For what it’s worth, the public could give a RA about Global Warming and Climate Change in relation to other priorities like the economy, especially when things like very cold snaps happen in mid-November….
View it and weep.
http://i60.tinypic.com/313qb15.gif
Ouch, that looks like a Polar Vortex excursion deep into the lower 48…..
So, have I got this right (keeping it simple for my simple mind) . . . .
Most of the land area of the planet is very sparsely measured, even now, and most of the measurements we do have are/have been subject to siting problems/reporting problems/station moves/TOBS etc, and have an accuracy of +/- 0.5C if we’re lucky . . .
Most land-based stations are in areas that have been significantly urbanised over the past 60 years (towns/cities/airports etc). No review of station quality was made (until WUWT SSP), and no adjustments made for UHI.
Until the 1950s sea surface temps were taken by hanging buckets over the sides of ships & sticking thermometers in them. After the 50’s we had a (very sparse) network of buoys, and since 2003 we’ve had ARGO, measuring down to 2000m (3000 buoys to cover 70% of the earth’s surface area for 11 years is still pretty sparse,).
Most pre-2003 measurements are, therefore, on shipping lanes or the fixed buoy network (ie near the coast?), leaving most of the (really quite big) oceans unmeasured.
So they split the earth into 1200 km ‘squares’, and average temps over each square that actually has temp. records, the majority (UHI-affected) readings causing (upward) adjustment of minority (rural) sites, even if they are nearly 1200km away.
1200km squares that have no record are either assigned a value based on readings from adjacent squares (GISS), or the global mean (HADCRU). These ‘infilled’ (made up) values may, in turn, be used to ‘infill’ the next 1200km square if it, too, has no records available, and so on.
They then ‘dropped’ 5/6 of the reporting stations (were the criteria for this published/checked?), just when you would have thought that we needed the data most, meaning that substantially more area is now covered by computer-generated data.
Some discontinued stations (‘zombies’) have somehow kept reporting (using ‘infilled’ data?)
All data is ‘reprocessed’ each month (daily for NCDC?) using a suite of computer algorithms to make adjustments automatically. The cumulative effect of this has been to cause past temperatures to drop, and more recent ones to rise.
Additionally, successive versions of each land dataset have progressively caused past temperatures to drop, and more recent ones rise still further.
Now, after 25 years and billions-o-bucks, they suddenly ‘find’ new (old) data, and use another (the same?) set of computer algorithms to create a dataset for China based on 18 stations, and using data from ‘adjacent countries’.
There are 4 main land/sea datasets created using these methods, using different reference periods, giving different (though similar) results.
There are 2 satellite datasets, that don’t measure ground temps (computing them from TOA readings), and which don’t agree with each other (although they are, mercifully, free of the data mangling algorithms used for the land/sea sets).
Then they tell us that they know the global mean temperatures going back to the 19th Century with no mention of any margins of error, and gleefully announce the one closest to their expectation (usually NCDC/GISS)
But they don’t tell you that ‘temperature’ isn’t the same as ‘heat’, so what they’re measuring isn’t necessarily meaningful anyhow (unless humidity is taken into account, which it isn’t).
I know I’m a layman, so correct me where I’m wrong, but isn’t this a bit, you know, nuts.
Like I said earlier – the whole thing seems completely shonky to me.
Shonky (UK slang) : Ramshackle. Held together with gaffa tape and glue, looking like it might fall to pieces at any moment. Made by Heath-Robinson with bad hangovers . . .
(OK, I made that up – the definition, not the word)
You’ve described the sorry industry of index manufacture pretty accurately. All that needs to be added is a dash of academic hubris in selling it to the politicians and the public.
This isn’t correct. The largest adjustment to the temperature record actually caused past temperatures to increase, i.e. the Bucket Model;
(Graph: Bob Tisdale – bobtisdale.wordpress.com)
http://wattsupwiththat.com/2013/05/25/historical-sea-surface-temperature-adjustmentscorrections-aka-the-bucket-model/
was used to increase the pre-1945 temperature record drastically:
[caption id="" align="alignnone" width="850"]
This adjustment was necessary because the rate of warming prior to 1945, i.e. before the influence of anthroprogenic CO2 was potentially consequential, exceeded the warming after 1945, as was demonstrated here:
http://wattsupwiththat.com/2014/03/29/when-did-anthropogenic-global-warming-begin/
Even with the highly arbitrary Bucket Model adjustments applied, Phil Jones noted in a 2010 interview that:
http://news.bbc.co.uk/2/hi/8511670.stm
Point being, if you eliminate the upward adjustments of the past, the natural warming that occurred during the first half of the 20th century exceeds the supposedly anthropogenically influenced warming that occurred during the second half of the 20th century.
Well, that’s why I like WUWT so much, you really do learn something new every day. Thanks.
In Animal Farm, after endless adjustments to figures, they ‘just wanted more food and less statistics’.
Since we have more food with more c02 and more warmth, we got the first and more important part, which is a damn sight better than the animals of Animal Farm got.
To compute the “global mean temperature” (which, of course, it isn’t) we take thousands of stations with millions of data points, many of them missing, and torture them statistically for years and years, and every year the early data gets cooler and the more recent data warmer.
To compute CO2 we take one direct measurement from one location and follow it.
Go figure.
Mr. Layman here.
I don’t mean this to sound as cynical as it may come off, but isn’t Man’s ability to take the Globe’s temperature not just in its infancy but still in the womb?
To me it seems like trying to get a regional value from an old or even a new reading is like putting a drop of water on a period printed with an old inkjet printer. You get a big blob but lose the period…and maybe blur the sentence. Sprinkle enough drops and the page can be claimed to say anything you want.
The efforts to read it may be sincere but how certain can anyone be?
HadSST3 update:
HadSST3 came in at 0.529 in October. This is a drop from 0.574 in September. However it is the fifth consecutive month that it was above the previous high, before 2014, of 0.526 in July 1998. The average rose to 0.482 which remains way above the previous record of 0.416 in 1998 so HadSST3 is guaranteed to set a new record in 2014 and there is no flat period at all.
CRU and GISS can wait until mid-November to release those HadSST3 results in a press release. By mid-November the lower 48 US states will be in an unusual polar vortex early deep-freeze, and thus the release will get laughed off by the public.
Nice article. But it’s just shady stuff.
I’ll never trust it.
More on HADCRUT4 and LOD (see also my post above )
Here HADCRUT4 Northern hemisphere is linearly and polynomially (square law) de-trended:
http://www.vukcevic.talktalk.net/HCT4-LOD.gif
From the 1890s to the mid-1980s the two de-trended results are virtually identical and concurrent with the negative LOD (Earth’s rotation rate of change) variability, but a divergence appears in the late 1980s and steadily increases.
Summer CET relationship to the sunspot number shows similar divergence appearing around 1990
http://www.vukcevic.talktalk.net/CET-JJAvsSSN.gif
(the LOD shows that the temperature fall in 1960s/70s was caused by natural variability and not by aerosols, as it was implied previously)
Since the LOD leads the temperature by about 3 years it can be concluded that the temperature decadal natural variability is strongly associated directly or indirectly with the LOD’s forcing causes.
Here I have demonstrated that the changes in the LOD are concurrent with the changes in the solar magnetic polarity.
In this article
NASA Study Goes to Earth’s Core for Climate Insights (J. Dickey)
http://www.nasa.gov/topics/earth/features/earth20110309.html
a leading JPL geodynamics scientist considers the relationship of global temperature to the LOD.
FMI added a new station in Helsinki; it shows 2 C higher temps than the older station in Kaisaniemi. They moved the Kittilä Pokka station to a warmer position. If this kind of manoeuvre happens all over the world, it is very far from science; it’s a way to do propaganda. If it won’t warm up, they’ll make warming.
IMPORTANT:
Osborn’s explanation as to why more data, i.e. recent data, warms the profile is a HUGE comment. His point is that the historical record was deficient in the Arctic regions, while the newest data has more, i.e. a higher proportion. Since the Arctic is “warming faster”, the temp anomaly goes up.
What he is saying is that the Arctic “missing” data would have “warmed” the record. As time goes by more warm Arctic temp data comes in, but that does not mean that the Arctic is necessarily a lot warmer now. If UEA were to drop the proportion of Arctic data back to that of the earlier days, his observation suggests that the temperatures of the past would be warmer, not cooler, than thought.
If infilling of “missing” data warms the record today, the suggestion is that infilling of missing OLD data could very well warm the OLD data.
There are several things to consider here. RSS shows that the region from 60 to 82.5 N is warming at the rate of 0.323 K/decade according to the sea ice page. If we assume a constant lapse rate, then the polar surface would warm just as fast.
Now let us suppose that 40% of the Arctic was covered in 1998 and the same 40% was covered in 2014. If an additional 20% of area were found for 1998, and an additional 20% of area were used in 2014, then the added area would presumably also be warming at 0.323 K/decade, which would cause a larger slope for the whole earth.
But suppose that an additional 20% was found for 1998, but an additional 50% was found for 2014, then the warming for the Arctic region would be much larger than 0.323 K/decade, so the global slope would also be larger. If that is the case, and if 2014 beats 1998, I would consider that to be like comparing apples and oranges.
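A toy calculation of the coverage effect being described (all numbers ours and purely illustrative): if the Arctic is warmer than the rest of the globe and only its coverage fraction changes between the two years, a mean taken over the covered area rises even though no region’s anomaly has changed.

```python
def covered_mean(regions):
    """Mean anomaly over covered area only.

    `regions`: list of (area_fraction, covered_fraction, anomaly) tuples.
    Only the covered part of each region contributes, so a region's
    effective weight grows as its coverage grows.
    """
    num = sum(a * c * t for a, c, t in regions)
    den = sum(a * c for a, c, t in regions)
    return num / den

# Toy numbers: the Arctic taken as roughly 4% of the globe and assumed warmer
# than the rest; only its coverage fraction changes between the two years.
rest, arctic = 0.96, 0.04
print(covered_mean([(rest, 0.85, 0.40), (arctic, 0.20, 1.00)]))  # 1998-style coverage
print(covered_mean([(rest, 0.85, 0.40), (arctic, 0.50, 1.00)]))  # 2014-style coverage
```

The second figure comes out higher than the first purely because more of the warm region is sampled, which is the apples-and-oranges concern raised above.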
Note that there is no way that RSS will beat 1998 in 2014 and this is comparing apples to apples.
That is how I see it anyway.
If we could go back to the 1920s and 1930s and find lots of thermometer readings, who knows what we may find? Especially if the Arctic was much warmer then. See the 2 postings at:
http://wattsupwiththat.com/2014/11/05/hadcrut4-adjustments-discovering-missing-data-or-reinterpreting-existing-data-now-includes-september-data/#comment-1781241
I guess I would ask the obvious question: If this were a new drug and we were “adjusting” the results, would the public buy it or run us out of town on a rail? I am a scientist, biochemistry and nuclear power. I read these papers and the “adjustments” and I am just stunned. What schools did they go to that ever implied that you could “adjust” data? I know you can postulate that the data was off, describe why and generate a data set and give your reasoning. That would be science. But the adjustment of data, and raw data at that, is fraud. Nothing else.
If adjustments are made for good reasons, that is one thing. However if people making the adjustments are known to have certain biases and if the adjustments always support those biases, that is troublesome.