Approximately 66% of global surface temperature data consists of estimated values

Summary of GHCN Adjustment-Model Effects on Temperature Data

Guest essay by John Goetz

As the debate over whether or not this year will be the hottest year ever burns on, it is worth revisiting a large part of the data used to make this determination: GHCN v3.

The charts in this post use the dataset downloaded at approximately 2:00 PM on 9/23/2015 from the GHCN FTP Site.

The monthly GHCN v3 temperature record used by GISS undergoes an adjustment process after quality-control checks are done. The adjustments are described at a high level here.

The adjustments are somewhat controversial, because they take presumably accurate raw data, run them through one or more mathematical models, and produce an estimate of what the temperature might have been under a given set of conditions. For example, the time of observation adjustment (TOB) takes a raw data point at, say, 7 AM, and produces an estimate of what the temperature might have been at midnight. The skill of that model is nearly impossible to determine on a monthly basis, but it is unlikely to be consistently producing a result that is accurate to the 1/100th of a degree that is stored in the record.
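
To make the time-of-observation effect concrete, here is a minimal toy simulation; it is not NOAA's actual TOB model, and every number in it is synthetic. It compares the monthly mean of daily (min + max) / 2 when a min/max thermometer is reset at midnight versus at 7 AM:

```python
import numpy as np

rng = np.random.default_rng(42)
N_DAYS = 3000  # simulate many days so the bias is not noise

# Synthetic hourly temperatures: a diurnal cycle (warmest ~3 PM,
# coldest ~3 AM) plus day-to-day weather noise, constant within a day.
hours = np.arange(N_DAYS * 24)
diurnal = 5.0 * np.cos(2 * np.pi * (hours % 24 - 15) / 24)
weather = np.repeat(rng.normal(10.0, 3.0, N_DAYS), 24)
temps = diurnal + weather

def mean_of_minmax(temps, reset_hour):
    """Mean of (daily min + daily max)/2 when the min/max thermometer
    is read and reset at `reset_hour` each day."""
    shifted = temps[reset_hour:]
    n_days = len(shifted) // 24
    days = shifted[: n_days * 24].reshape(n_days, 24)
    return np.mean((days.min(axis=1) + days.max(axis=1)) / 2)

midnight = mean_of_minmax(temps, 0)  # calendar-day reference mean
morning = mean_of_minmax(temps, 7)   # a 7 AM observer
print(f"midnight reset: {midnight:.3f} C")
print(f"7 AM reset:     {morning:.3f} C (bias {morning - midnight:+.3f} C)")
```

In this toy setup the 7 AM observer reads systematically cold, because an unusually cold morning can end up as the minimum of two adjacent observation days; an afternoon observer shows the opposite sign. The simulation only demonstrates that the bias is real and estimable in aggregate; it says nothing about whether any particular month's corrected value is accurate to hundredths of a degree.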

A simple case in point: the Berlin-Tempel station (GHCN ID 61710384000) began reporting temperatures in January, 1701 and continues to report them today. Through December, 1705 it was the only station in the GHCN record reporting temperatures. Forty-eight of the sixty possible months during that period reported an unflagged (passed quality-control checks) raw average temperature, and the remaining twelve months reported no temperature. Every one of those 48 months was adjusted downward by the models by exactly 0.14 C. In January, 1706 a second station was added to the network – De Bilt (GHCN ID 63306260000). For the next 37 years it reported a valid temperature every month, and in most of those months it was the only GHCN station reporting a temperature. The temperature for each one of those months was adjusted downward by exactly 0.03 C.
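
Anyone who wants to reproduce this check can compute the deltas directly from the two published files. The sketch below is mine, not the author's script; it assumes the fixed-width layout described in the GHCN v3 README (11-character station ID, 4-digit year, element code, then twelve 5-character values each followed by three flag characters, values in hundredths of a degree C with -9999 for missing) and illustrative local file names for the unadjusted (qcu) and adjusted (qca) mean-temperature files:

```python
# A sketch of checking the Berlin-Tempel deltas. Assumes the GHCN v3
# mean-temperature files have been downloaded and unpacked locally as
# ghcnm.tavg.qcu.dat (unadjusted) and ghcnm.tavg.qca.dat (adjusted).

def read_station(path, station_id):
    """Return {(year, month): (value, qcflag)} for one station."""
    data = {}
    with open(path) as f:
        for line in f:
            if line[0:11] != station_id or line[15:19] != "TAVG":
                continue
            year = int(line[11:15])
            for m in range(12):
                off = 19 + m * 8
                value = int(line[off:off + 5])   # hundredths of a degree C
                qcflag = line[off + 6]           # second of the 3 flag chars
                data[(year, m + 1)] = (value, qcflag)
    return data

STATION = "61710384000"  # Berlin-Tempel
raw = read_station("ghcnm.tavg.qcu.dat", STATION)
adj = read_station("ghcnm.tavg.qca.dat", STATION)

for key in sorted(raw):
    rv, rflag = raw[key]
    av, _ = adj.get(key, (-9999, " "))
    if rv == -9999 or rflag != " ":              # skip missing/flagged raw
        continue
    if av == -9999:
        print(key, "raw", rv, "-> discarded")
    else:
        print(key, "raw", rv, "adjusted", av, "delta", (av - rv) / 100.0)
```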

Is it possible that the models skillfully estimated the “correct” temperature at those two stations over the course of forty plus years using just two constants? Anything is possible, but it is highly unlikely.

How Much Raw Data is Available?

The following chart shows the amount of data that is available in the GHCN record for every month from January, 1700 to the present. The y-axis is the number of stations reporting data, so any point on the curve represents the number of measurements reported in the given month. In the chart, the green curve represents the number of raw, unflagged measurements and the purple curve represents the number of estimated measurements. The difference between the green and purple curves represents the number of raw measurements that are not changed by the adjustment models, meaning the difference between the estimated value and raw value is zero. The blue curve at the bottom represents the measurements where an unflagged raw value was discarded by the adjustment models and replaced with an invalid value (represented by -9999). The count of discarded raw data (blue curve) is not included in the total count represented by the green curve.
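
The bookkeeping behind these counts can be sketched as follows. This is my reconstruction of the described tallies, not the author's code, and it reuses the GHCN v3 layout assumptions from the sketch above. Note that, matching the text, a discarded value is not counted in the raw total; the same loop also yields the estimated/raw and discarded/raw percentages plotted further below:

```python
from collections import defaultdict

def load(path):
    """Map (station, year, month) -> (value, qcflag) for all TAVG lines."""
    table = {}
    with open(path) as f:
        for line in f:
            if line[15:19] != "TAVG":
                continue
            sid, year = line[0:11], int(line[11:15])
            for m in range(12):
                off = 19 + m * 8
                table[(sid, year, m + 1)] = (int(line[off:off + 5]),
                                             line[off + 6])
    return table

raw = load("ghcnm.tavg.qcu.dat")   # unadjusted
adj = load("ghcnm.tavg.qca.dat")   # adjusted

counts = defaultdict(lambda: [0, 0, 0])  # (year, month) -> [raw, est, disc]
for key, (rv, rflag) in raw.items():
    if rv == -9999 or rflag != " ":
        continue                         # missing or flagged raw value
    av = adj.get(key, (-9999, " "))[0]   # absent from adjusted = discarded
    c = counts[key[1:]]
    if av == -9999:
        c[2] += 1                        # discarded (excluded from raw count)
    else:
        c[0] += 1
        if av != rv:
            c[1] += 1                    # raw replaced with an estimate

for ym in sorted(counts):
    nraw, nest, ndisc = counts[ym]
    if nraw:
        print(ym, f"estimated {100 * nest / nraw:.1f}%",
              f"discarded {100 * ndisc / nraw:.1f}%")
```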

Number of Monthly Raw and Estimated GHCN Temperatures 1700 – Present

The second chart shows the same data as the first, but the start date is set to January 1, 1880. This is the start date for GISS analysis.

Number of Monthly Raw and Estimated GHCN Temperatures 1880 – Present

How Much of the Data is Modeled?

In the remainder of this post, “raw data” refers to data that passed the quality-control tests (unflagged). Flagged data is discarded by the models and replaced with an invalid value (-9999).

In the next chart the purple curve represents the percentage of measurements that are estimated (estimated / raw). The blue curve represents the percentage of discarded measurements relative to the raw measurements that were not discarded (discarded / raw). Prior to 1935, approximately 80% of the raw data was changed to an estimate, and from 1935 to 1990 there was a steady decline to about 40% of the data being estimated. In 1990 there was an upward spike to about 55%, followed by a steady decline to the present 30%. The blue curve at the bottom shows that approximately 7% to 8% of the raw data was discarded by the adjustment models, with the exception of a recent spike to 20%. (Yes, the two curves combine oddly enough to look like a silhouette of Homer Simpson on his back snoring.)

Percent Raw GHCN Data Replaced with Estimate or Discarded

The next chart shows the estimate percentages broken out by rural and non-rural (suburban and urban) stations. For most of the record, non-rural stations were estimated more frequently than rural stations. However, over the past 18 years they have had temperatures estimated at approximately the same rate.
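
Splitting the percentages by station class requires the station metadata. A sketch, building on the raw/adj dictionaries above: the GHCN v3 inventory (.inv) file carries a one-character population class per station (R = rural, S = small town, U = urban); the column position used here (index 73) is my reading of the v3 README and should be verified against your copy of the file:

```python
from collections import defaultdict

def load_popcls(path):
    """Map station ID -> population class ('R', 'S' or 'U')."""
    popcls = {}
    with open(path) as f:
        for line in f:
            popcls[line[0:11]] = line[73]   # POPCLS column, per v3 README
    return popcls

popcls = load_popcls("ghcnm.tavg.qca.inv")

split = defaultdict(lambda: [0, 0])         # class -> [raw, estimated]
for key, (rv, rflag) in raw.items():
    if rv == -9999 or rflag != " ":
        continue
    av = adj.get(key, (-9999, " "))[0]
    if av == -9999:
        continue                            # discarded, not part of this split
    cls = "rural" if popcls.get(key[0]) == "R" else "non-rural"
    split[cls][0] += 1
    split[cls][1] += av != rv               # True adds 1

for cls, (nraw, nest) in split.items():
    print(f"{cls}: {100 * nest / nraw:.1f}% estimated")
```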

Percent Rural and Urban (non-Rural) Raw GHCN Data Replaced with Estimate

The fifth chart shows the average change to the raw value due to the models replacing it with an estimated value. There are two curves shown in the chart. The red curve is the average change when not including measurements where the estimated value was equal to the raw value. It is possible, however, for the adjustment models to produce an estimate identical to the raw value – a change of zero. The blue curve allows for this possibility and represents all measurements, including those with no difference between the raw and estimated values. The trend lines for both are shown in the plot, and it is interesting to note that the slopes of the two are nearly identical.
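
Both averages and their trend lines can be computed from the same dictionaries; this is a sketch of my understanding of the chart's construction, not the author's code:

```python
import numpy as np
from collections import defaultdict

deltas = defaultdict(list)               # (year, month) -> adjustment deltas
for key, (rv, rflag) in raw.items():
    if rv == -9999 or rflag != " ":
        continue
    av = adj.get(key, (-9999, " "))[0]
    if av == -9999:
        continue                         # discarded values handled separately
    deltas[key[1:]].append(av - rv)      # hundredths of a degree C

months, incl_zero, excl_zero = [], [], []
for ym in sorted(deltas):
    d = np.array(deltas[ym])
    months.append(ym[0] + (ym[1] - 0.5) / 12)   # fractional year
    incl_zero.append(d.mean())                  # blue curve: all deltas
    nz = d[d != 0]                              # red curve: nonzero deltas
    excl_zero.append(nz.mean() if nz.size else 0.0)

for label, series in [("incl. zeros", incl_zero), ("excl. zeros", excl_zero)]:
    slope = np.polyfit(months, series, 1)[0]    # (C * 100) per year
    print(f"{label}: trend {slope:.4f} (C*100)/yr = {slope:.2f} C/century")
```

A unit note on the slopes quoted later: if the chart's x-axis is in months, a fitted slope of about 0.02 (in units of C * 100 per month) works out to roughly 0.24 C per century, which would be consistent with the quarter-degree-per-century figure given in the conclusion.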

Average Change in Degrees C * 100 When Estimate Replaces Raw Data

What About the Discarded Data?

Recall that the first two charts showed the number of raw measurements that were removed by the adjustment models (blue curve on both charts). No flags were present in the estimated data to indicate why the raw data were removed. The purple curve in the following chart shows the anomaly of the removed data in degrees C * 100 (1951 – 1980 baseline period). There is a slight upward trend from 1880 through 1948, a large jump upward from 1949 through 1950, and a moderate downward trend from 1951 to present. The blue curve is the number of measurements that were discarded by the models. Caution should be used in over-analyzing this particular chart because no gridding was done in calculating the anomaly, and prior to 1892 only a handful of measurements are represented by that data.
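
The discarded-data anomaly can be approximated as follows, again reusing the raw/adj dictionaries above. This is my sketch of the described calculation: a per-station 1951–1980 monthly climatology built from the raw values, then un-gridded monthly averages of the discarded values' departures, matching the no-gridding caveat:

```python
import numpy as np
from collections import defaultdict

clim_acc = defaultdict(list)              # (station, month) -> baseline values
for (sid, year, month), (rv, rflag) in raw.items():
    if 1951 <= year <= 1980 and rv != -9999 and rflag == " ":
        clim_acc[(sid, month)].append(rv)
clim = {k: np.mean(v) for k, v in clim_acc.items()}

anoms = defaultdict(list)                 # (year, month) -> anomalies (C*100)
for (sid, year, month), (rv, rflag) in raw.items():
    if rv == -9999 or rflag != " ":
        continue
    if adj.get((sid, year, month), (-9999, " "))[0] != -9999:
        continue                          # keep only discarded measurements
    base = clim.get((sid, month))
    if base is not None:                  # station needs a 1951-1980 baseline
        anoms[(year, month)].append(rv - base)

for ym in sorted(anoms):
    print(ym, f"mean anomaly {np.mean(anoms[ym]):.1f} (C*100),",
          f"n = {len(anoms[ym])}")
```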

Average Anomaly in Degrees C * 100 of Discarded GHCN Data

Conclusion

Overall, from 1880 to the present, approximately 66% of the data in the adjusted GHCN temperature record consists of estimated values produced by adjustment models, while 34% are raw values retained from direct measurements. The rural split is 60% estimated, 40% retained. The non-rural split is 68% estimated, 32% retained. Total non-rural measurements outnumber rural measurements by a factor of three.

The estimates produced by NOAA for the GHCN data introduce a warming trend of approximately a quarter degree C per century. Those estimates are produced at a slightly higher rate for non-rural stations than rural stations over most of the record. During the first 60 years of the record, measurements were estimated at a rate of about 75%, with the rate gradually dropping to 40% in the early 1990s, followed by a brief spike in the rate before resuming the drop to its present level.

Approximately 7% of the raw data is discarded. If this data were included as-is in the final record it would likely introduce a warming component from 1880 to 1950, followed by a cooling component from 1951 to the present.

Epilogue

The amount of estimation and its effects change over time. This is due to the addition of newer data that lengthens the time series used as input to the adjustment models. The following chart shows the percentage of measurements that are estimated (purple curves) and the percentage of discarded measurements (blue curves). The darker curves are generated from the data set as of 9/23/2015 (data is complete through 8/2015). The lighter curves are generated from the data set as of 6/27/2014 (data is complete through 5/2014). Clearly, fewer measurements were estimated in the current data set than in the past data set. However, more measurements from the early part of the record were discarded in the current data set.

Percent Raw GHCN Data Replaced with Estimate or Discarded 8/2015 versus 5/2014

A chart showing the average change to the raw data is not shown, because an overlay is virtually indistinguishable. However, the slope of the estimated data trend produced by the current data set is slightly greater than the past data set (0.0204 versus 0.0195). The reason that the slope of 0.0204 differs from the slope in the fifth chart above (blue curve) is that the comparison end month is May, 2014, whereas the chart above ends with August, 2015.

Note: the title was changed to better reflect the thrust of the article, the original title is now a sub headline. The guest essay line was also added shortly after publication, and a featured image added as the guest author did not provide these normal elements of publication at WUWT – Anthony

238 Comments
markstoval
September 24, 2015 4:17 pm

This is a wonderful report. Thanks much.
It would appear that this “data” set is darn near useless. It is mostly fudged up numbers that yield whatever “average” the minions need to produce for their paymasters.
And we should dismantle the industrialized west over this?

Svante Callendar
Reply to  markstoval
September 24, 2015 4:33 pm

markstoval.
Why do you think the adjusted data set “is damn near useless”?

markstoval
Reply to  Svante Callendar
September 24, 2015 4:46 pm

Why is the “data” set “darned near useless”?
Because 66% of the “data” is just plain made up. And the people making up the numbers have a job where showing CAGW is the objective. Biased, made-up numbers used for political purposes are not a data set.

Svante Callendar
Reply to  Svante Callendar
September 24, 2015 4:53 pm

markstoval.
I won’t comment on your conspiracy theory because there is no point discussing a claim made with no evidence.
The data is not made up, it is derived by adjusting the “raw” values. The reason for the adjustments is sound.

Slywolfe
Reply to  Svante Callendar
September 24, 2015 6:09 pm

Once data is adjusted, it is no longer “data”

Alx
Reply to  Svante Callendar
September 24, 2015 7:11 pm

“The data is not made up, it is derived by adjusting the “raw” values. The reason for the adjustments is sound.”
There are sound reasons to try many different hypothetical approaches. Every month there are new reasons and new ways to adjust/correct/massage the data. There is no solid evidence beyond those speculative “reasons” that the adjustments are valid.
Put another way, there were sound reasons in humanity’s past to believe the earth was flat.

PiperPaul
Reply to  Svante Callendar
September 24, 2015 7:47 pm

You betcha it’s ‘sound’. It’s also ‘fit for purpose’, and the purpose is to prop up a predetermined conclusion. Not making the raw data available makes things pretty obvious, don’t you think? I wonder if Gruber learned from The Team.

Steve Oregon
Reply to  Svante Callendar
September 24, 2015 9:07 pm

Yep once adjusted the data becomes conjecture.
As conjecture it is no longer scientific evidence. It is interpretive opinion.

markstoval
Reply to  Svante Callendar
September 25, 2015 1:01 am

@ Svante Callendar
“I won’t comment on your conspiracy theory because there is no point discussing a claim made with no evidence.”
I mentioned no conspiracy. You just made that up so that you could knock down a strawman. In other words, you are a liar.

climatereason
Editor
Reply to  Svante Callendar
September 25, 2015 1:33 am

John mentions the Berlin-Tempel station. In looking at early temperature and weather observations, this book is invaluable.
https://books.google.co.uk/books?id=O6jAo4m4L_gC&pg=PA10&lpg=PA10&dq=mannheim+palatine+thermometers&source=bl&ots=MrQt0bcAUN&sig=KJeS24aO5RAaSNgkEJ_5KYW6z1g&hl=en&sa=X&ved=0CDMQ6AEwA2oVChMI6Ya4kd6RyAIVQQ4aCh0–gDf#v=onepage&q=mannheim%20palatine%20thermometers&f=false
Readers might especially enjoy the chapter commencing page 8 ‘the emergence of organised meteorology.’
There were a number of attempts to set up organised networks from 1717 to 1727 covering Germany, London and other places.
From 1724 the Royal Society organised observations in Britain, North America, India and elsewhere. These noted a rapidly growing period of warmth from the deep cold of the LIA in 1695 (best seen in CET), which in Britain culminated in the 1730’s becoming the warmest decade until the 1990’s, a fact noted by Phil Jones, who observed in his 2006 paper that natural variability had been greater than he had hitherto realised when examining the juxtaposition of the cold 1690’s, the warm 1730’s and the catastrophic cold of 1741.
Then as now, there is little evidence of any conspiracy, which would have to be on a massive scale to be effective. There is, however, a lack of appreciation of natural variability, and data that is not always robust is used.
tonyb

Keith Willshaw
Reply to  Svante Callendar
September 25, 2015 2:04 am

If you ‘adjust data’ from a test of an engineered product you are engaging in a criminal act. Ask the ex-CEO of Volkswagen. The same applies to mining, petrochemical exploration or clinical trials.
Only in the field of climate science is adjusting the data allowed. This is an egregious insult to the thousands of men and women, largely unpaid volunteers, who spent their lives collecting this information in the days before automated instruments. At school (in the 60’s) I had a science teacher who took the readings several times a day from the station in the grounds regardless of the weather or how he felt. That this data, so carefully observed, can just be tossed aside in favour of a set of adjusted values is a disgrace.
Note that when the CRU at the University of East Anglia were requested to provide the unadjusted values recorded over decades they claimed to have lost it. They also refused to provide the code used for the adjustments. What we now have is next to useless and is utterly untrustworthy.
You do not have to believe me, they actually admit it on their web site.
http://www.cru.uea.ac.uk/cru/data/availability/
“Since the 1980s, we have merged the data we have received into existing series or begun new ones, so it is impossible to say if all stations within a particular country or if all of an individual record should be freely available. Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e. quality controlled and homogenized) data.”
So records that could be stored up to 1980 suddenly became impossible to archive, eh!
Can you imagine the reaction if the British Library decided to throw away all its books printed before the 1980’s on the grounds that it was inconvenient to keep them and that they had ‘improved’ texts available!
Welcome to the wonderful world of ‘Climate Science’ where we are expected to spend billions of pounds and trash our economy on the basis of conclusions reached from data that has been discarded.
Not even George Orwell thought of that one.

DaveS
Reply to  Svante Callendar
September 25, 2015 6:05 am


Surely the question is why NOAA aren’t held to account?

MarkW
Reply to  Svante Callendar
September 25, 2015 6:48 am

Unless you can present and defend your “adjustment” methodology, it is just made up.
To date, the so-called scientists refuse to do that.

Samuel C. Cogar
Reply to  Svante Callendar
September 25, 2015 7:02 am

Why do you think the adjusted data set “is damn near useless”?

My first reason for thinking so is that …. overall, from 1880 to the present, ….. 100% of all Interglacial Global Warming temperature increases have been “high-jacked” by the proponents of CO2 causing Anthropogenic Global Warming.
It is impossible for said proponents to determine the difference between Interglacial caused temperature increases and Anthropogenic caused temperature increases …. thus they claim all said increases as being anthropogenic to benefit themselves and justify their “junk science” claims.

ralfellis
Reply to  Svante Callendar
September 25, 2015 7:38 am

>> The reason for the adjustments is sound.
The reason for the adjustments is political manipulation.
It is simply not possible to have an honest and valid adjustment system that consistently makes the past cooler and the present warmer. This obvious bias is not an ‘adjustment’; it is data manipulation for economic or political gain.
Check the following ‘before and after’ graph, from Steven Goddard’s site: [image]

Tim Hammond
Reply to  Svante Callendar
September 25, 2015 10:48 am

Seriously?
You want to remake the world economy based on guesses?
This is not data, it’s literally guesses.
As an attempt to recreate temperatures it’s fine, but to make any kind of claim of accuracy is laughable.

Reply to  Svante Callendar
September 25, 2015 2:42 pm

Confirmation bias is a thing. M Stoval is making the assumption that the bias has favored adjustments which tell a warming story. Based on the vast amount of evidence for warming bias, I would say he is attributing correctly. Surface records cannot be taken seriously. I propose we decide on an appropriate instrument and siting regs and form our own reporting network, which will no doubt falsify the existing one.

George E. Smith
Reply to  Svante Callendar
September 25, 2015 11:35 pm

If it were down on paper, say in the form of a newspaper, then at least you would be able to use it in the bottom of a parrot’s cage, or your kitty litter box.
But if it is only in electronic form, you can’t even do that.
What possible earthly use is ANY set of numbers which don’t actually reflect anything that anybody ever observed or measured anywhere at any time, because nothing like it ever happened anywhere to be observed?
g

FAH
Reply to  markstoval
September 24, 2015 6:59 pm

Svante, It is not a conspiracy at work but the normal research funding process. Here is how it works.
With respect to bias, NASA/NOAA funding priorities could not be more biased. To anyone who has worked for any length of time based on winning competitive grants or contracts from Federal agencies, it is no mystery why so much climate research comes to alarmist conclusions no matter what the data actually says. The key to winning a grant or contract is to propose work that 1) you have demonstrated capability to do and 2) addresses what the sponsor wants. Gauging what the sponsor wants and targeting those wants is perhaps the most important determinant of your proposal. Thankfully I have long been involved in this in another area, but just for fun I thought I would look at the NOAA Broad Area Announcement (BAA). BAA’s are a typical request for research or development proposals from Federal Agencies.
The NOAA BAA (and others) can be found at the grants.gov site via a search
http://www.grants.gov/web/grants/search-grants.html?keywords=noaa
and then clicking on the link to NOAA-NFA-NFAPO-2014-2003949
The BAA spells out what NOAA wants by stating the given assumptions up front:
“Projected future climate-related changes include increased global temperatures, melting sea ice and glaciers, rising sea levels, increased frequency of extreme precipitation events, acidification of the oceans, modifications of growing seasons, changes in storm frequency and intensity, air quality, alterations in species’ ranges and migration patterns, earlier snowmelt, increased drought, and altered river flow volumes. Impacts from these changes are regionally diverse, and affect numerous sectors related to water, energy, transportation, forestry, tourism, fisheries, agriculture, and human health. A changing climate will alter the distribution of water resources and exacerbate human impacts on fisheries and marine ecosystems, which will result in such problems as overfishing, habitat destruction, pollution, changes in species distributions, and excess nutrients in coastal waters. Increased sea levels are expected to amplify the effects of other coastal hazards as ecosystem changes increase invasions of non-native species and decrease biodiversity. The direct impact of climate change on commerce, transportation, and the economy is evidenced by retreating sea ice in the Arctic, which allows the northward expansion of commercial fisheries and provides increased access for oil and gas development, commerce, and tourism.”
If one plans to put in a proposal, it better toe this line. No wonder so much research aims to identify alarming consequences of AGW, and does it no matter what.
However, that given, it is very difficult to communicate the reality of research funding to those without direct personal experience within its workings. For my part, I have over 30 years’ experience as a principal investigator doing research (not in climate science) and several terms serving as a government official responsible for research portfolios. It is difficult to convey the tremendously arcane, mundane, and irritating legal and bureaucratic overhead associated with the process. But it is the case that those who manage the funds and award successful proposals do so with significant guidance from the agency heads and the office of the president, if the political mojo is flowing that way. Remember Gina McCarthy’s famous statement that “there better not be any deniers here” when she took over the EPA. The rank and file program managers and researchers are not evil or malevolent, simply trying to do their jobs under the constraints they are given.
Those who claim conspiracy and have in mind the closed room setting of global agendas have no concept of how the government administers research. A conspiracy in that sense is impossible given the way the bureaucracies operate. However, program managers quickly learn the priorities of the agency within which they work and learn to operate within them.

Lady Gaiagaia
Reply to  FAH
September 24, 2015 9:22 pm

All too true. Sad but true. But there is also a conspiracy, as amply demonstrated by the Climategate emails, but already well known among real climatologists.

Brent Loken
Reply to  FAH
September 24, 2015 9:24 pm

FAH,
This is among the most sensible things I’ve seen written here on WUWT. I’ve noticed a lot of conspiracy theory type thinking creeping in and frankly it does nothing to advance the sceptic view. An organization or institution can be corrupt without any conspiracy – it is a complex web of incentives, ideologies and human psychological biases. Human behaviour is a complex system just like climate, most certainly much more complex.

M. Hovland
Reply to  FAH
September 24, 2015 11:12 pm

Whenever Volkswagen (VW) tampers with measured results, they are charged and fined, and the CEO has to resign. Whenever the ‘climate specialists’ (NOAA) tamper with measured results, it’s just another day in the office…

Reply to  FAH
September 25, 2015 1:04 am

@ FAH
I think your comment should be expanded upon and sent in for a post to this site. It is important for lay people to understand what the funding issues are and how the need to get funding drives the “consensus”, since the feds are funding “science” research. Please consider doing a full post on the issue.

rah
Reply to  FAH
September 25, 2015 5:01 am

So VW should not be held accountable?

Reply to  FAH
September 25, 2015 5:53 am

Seems institutions become “institutionalized”

Reply to  FAH
September 25, 2015 7:48 am

I agree I would like to hear more knowledgeable info about how the grant funding bureaucracy operates, without deliberate conspiracy, to nevertheless bias climate research. One of the main sticking points in arguing the skeptical position is that people don’t believe scientists would deliberately be biased or mislead. They don’t understand how it is a symptom of the system.

Silver ralph
Reply to  FAH
September 25, 2015 11:18 am

I’ve noticed a lot of conspiracy theory type thinking creeping in and frankly it does nothing to advance the sceptic view. An organization or institution can be corrupt without any conspiracy – it is a complex web of incentives, ideologies and human psychological biases.
_______________________________________________
Which is exactly how conspiracies are organised and run. Conspiracies start with high level influence, and then others jump on the bandwagon because they find it advantageous, or they find the results of not joining the bandwagon highly disadvantageous.
Ralph

Lady Gaiagaia
Reply to  FAH
September 25, 2015 11:23 am

It’s not a conspiracy theory but a conspiracy fact, as evinced by the conspirators’ own words in the Climategate emails and the testimony of scientists presumed to be in on the conspiracy by the conspirators. You can debate the conspirators’ motives, but the fact of the conspiracy is not in doubt.

Stronzo Bestiale
Reply to  FAH
September 27, 2015 8:38 am

A succinct and brilliant summation. Thank you.

MarkW
Reply to  markstoval
September 25, 2015 6:47 am

Even if the raw numbers were accurate to 0.001 of a degree, the idea that you could take those numbers and calculate the daily average to a few hundredths of a degree is ludicrous.
Then add in the fact that the records from 100 years ago were only recorded to the nearest degree C, throw in the many other well documented problems with the data, and you get a result that makes ludicrous look good.

Reply to  MarkW
September 25, 2015 7:05 am

Exactly. Chart average global temperature between the daily min and max anywhere in the world, and you have a stubbornly flat line. It is only by claiming an unachievable “post adjustment” precision that any “trend” appears. To say we can measure a 0.2 degree global temp delta between 1920 and 1984, for example, is beyond absurd.
It is like recording the thermostat settings in 1000 homes and claiming the average ambient indoor temperature of every household in the world has risen by 0.6 degrees.

Reply to  markstoval
September 29, 2015 6:05 am

Liars lying out of their liar holes………….

Proud Skeptic
September 24, 2015 4:22 pm

I think this is an important approach to exposing this science for what it really is.

Lady Gaiagaia
Reply to  Proud Skeptic
September 24, 2015 4:25 pm

Which is anti-scientific politics.

Dawtgtomis
Reply to  Lady Gaiagaia
September 24, 2015 8:45 pm

An agenda cloaked in a lab coat.

Lady Gaiagaia
Reply to  Lady Gaiagaia
September 24, 2015 8:52 pm

The white-frocked priesthood of High Druids.

George E. Smith
Reply to  Lady Gaiagaia
September 25, 2015 11:45 pm

Are you now being nit picky too??
In light of the modern common core math, where getting the right answer is not important but the method of seeking it is, I thought it would be axiomatic that if you do employ the proper method you do get the correct answer, which is surely the ONLY test of the correctness of the method.
g

George E. Smith
Reply to  Lady Gaiagaia
September 25, 2015 11:47 pm

Make that getting the correct answer.
g

Svante Callendar
September 24, 2015 4:32 pm

It is not surprising the “adjusted” data set contains adjustments. My guess is the number of adjustments will increase over time.
So what is the issue exactly?
Personally, I do not see any issue with it. All temperature measurements are estimates, even the “raw” ones.

willnitschke
Reply to  Svante Callendar
September 24, 2015 4:51 pm

They should not be thought of as ‘estimates’ but more as ‘models’. You change your model assumptions and you change your model trends.

Svante Callendar
Reply to  willnitschke
September 24, 2015 4:58 pm

willnitschke.
Hence the need for the adjustments to remove non-climatic changes. I doubt if any of the measurement stations has remained the same for the entire age of the data set. I do recall reading somewhere (can’t remember the reference) that a measurement station is expected to change in some way once every 10 years or so.

BFL
Reply to  willnitschke
September 24, 2015 7:37 pm

“I do recall reading somewhere (can’t remember the reference) that a measurement station is expected to change in some way once every 10 years or so.”
Mann! just imagine the error bar height with those assumptions/errr estimates, but then they are never shown, how convenient.

willnitschke
Reply to  willnitschke
September 24, 2015 7:41 pm

Only if your adjustments plausibly improve the data, rather than make it conform to assumptions about what the temperature should have been. There is a world of difference here. When your adjustments end up creating a certain trend that didn’t previously exist, you’ve built a model, not plausibly “improved” the data. If you find yourself in that situation, it’s more scientifically rational to remove your assumptions and let the data speak for itself. Errors will tend towards averaging out over time.

Lady Gaiagaia
Reply to  Svante Callendar
September 24, 2015 5:02 pm

Why do all the adjustments introduce a warming bias, even those supposedly intended to adjust for the UHI effect?

Lady Gaiagaia
Reply to  Lady Gaiagaia
September 24, 2015 5:03 pm

Except of course for adjustments to data from before the 1940s, where the bias is always toward cooling.

dbstealey
Reply to  Lady Gaiagaia
September 24, 2015 5:24 pm

Lady G,
Good question. But I don’t think harrytwinotter “Svante Callendar” will answer.

Svante Callendar
Reply to  Lady Gaiagaia
September 24, 2015 6:37 pm

Lady Gaiagaia.
“Why do all the adjustments introduce a warming bias, even those supposedly intended to adjust for the UHI effect?”
Do they? Do you have evidence for that?

Svante Callendar
Reply to  Lady Gaiagaia
September 24, 2015 6:39 pm

dbstealey.
Back in your box, rover.
I wonder sometimes about blogs that use attack dogs to disrupt discussions.

Lady Gaiagaia
Reply to  Lady Gaiagaia
September 24, 2015 7:31 pm

Svante Callendar,
Yes, I do. Anyone who has studied the adjustments comes to that unavoidable conclusion.
The so-called “surface record” is science fiction with a political objective. Just compare the older record as NCAR had it in the late 1970s with what it now purports to be. Compare the ’80s and ’90s as they happened with where they are now. It’s glaringly obvious.
Nor are the continuous “adjustments” in any way justified on the basis of science. Only politically.

Lady Gaiagaia
Reply to  Lady Gaiagaia
September 24, 2015 7:46 pm

One of many such studies finding consistent warming bias for recent decades:
http://hockeyschtick.blogspot.com/2015/05/new-paper-finds-large-warming-bias-in.html

Mike the Morlock
Reply to  Lady Gaiagaia
September 24, 2015 8:47 pm

Svante Callendar: “I wonder sometimes about blogs that use attack dogs to disrupt discussions”.
dbstealey is not an attack dog. Those are only found on warmist blogs.
(Sorry Anthony)
michael

Lady Gaiagaia
Reply to  Lady Gaiagaia
September 24, 2015 8:54 pm

Not to mention the subject of this post as an example of warming bias.

Svante Callendar
Reply to  Lady Gaiagaia
September 24, 2015 9:24 pm

Lady Gaiagaia.
“Yes, I do. Anyone who has studied the adjustments comes to that unavoidable conclusion.”
Please explain.
Also, that paper from Schtick does not say anything about a warming bias from station measurement which is what the GHCN-M is. Are you sure you have read the actual paper?

Lady Gaiagaia
Reply to  Lady Gaiagaia
September 24, 2015 9:27 pm

Yes, that and many more.
To see the bias, just look at “Steven Goddard’s” comparison of previous records of the early 20th century warming with the ongoing “adjustments”. Look at how NCAR saw the post-war cooling in 1975 and how it sees it now.
The warming bias is incontrovertible fact and a criminal conspiracy.

Editor
Reply to  Lady Gaiagaia
September 24, 2015 9:52 pm

Svante Callendar

I wonder sometimes about blogs that use attack dogs to disrupt discussions.

I wonder about people who use false names that are a play on deceased scientists. I like to think we’ve moved on since then.

jim
Reply to  Lady Gaiagaia
September 25, 2015 2:40 am

Here is the NOAA graph of the adjustments. It slopes strongly up from about 1920-1990:
http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

AndyG55
Reply to  Lady Gaiagaia
September 25, 2015 2:56 am

“I wonder sometimes about blogs that use attack dogs to disrupt discussions”
And I wonder about climate attack dogs sent to disrupt discussion, as you very obviously are.

Reply to  Lady Gaiagaia
September 25, 2015 6:27 am

“Svante Callendar” says:
Back in your box, rover. I wonder sometimes about blogs that use attack dogs to disrupt discussions.
On another thread (June 4, 2015 at 8:29 pm) “harrytwinotter” wrote:
dbstealey,
Back in your box. Don’t you ever tire of the “attack dog” role?

There are other “harrytwinotter” comments that use the same “back in your box” and “attack dog” phrasing.
What say you, “Svante”?

Reply to  Lady Gaiagaia
September 25, 2015 6:32 am

Looking at Jim’s graph, anyone care to explain why there is a systematic trend of correction to hotter values required?
What is the physical explanation for this effect in the adjustments?

George E. Smith
Reply to  Lady Gaiagaia
September 25, 2015 11:56 pm

If everybody just reported what their equipment recorded, and in addition accurately recorded whenever the equipment or other parameters changed, then anybody who wanted to make use of the information, including the information that the equipment or other conditions had changed, could take all such things into account when using the data for whatever purpose.
Reporting anything other than the equipment readings, plus information on that equipment or other conditions of the experiment, is just blatant fraud in my book, and I would fire anybody who did that.
“Mr. Mac”, the father of McDonnell Aircraft, used to say: ” We seldom fire anybody for making a mistake; but we invariably fire anybody who tries to cover up a mistake. ”
A philosophy to live by; sometimes called ” being nit picky. ”
g

Alx
Reply to  Svante Callendar
September 24, 2015 7:26 pm

The issue is overstating the results.
We have continually evolving adjustments producing unstable data-sets somehow leading to the unsupported assumption that the data-sets are becoming more accurate. It is foolish. Recursively re-estimating raw data does not lead to more accurate data. It simply cannot. It is a fallacy.
A fuzzy picture that is artificially sharpened does not produce more detail. There is never more detail than the original – claiming more detail and more accuracy is overstating the results, it is simply creating new detail that never existed.
Overstating results and making stuff up is pathological in climate science.

emsnews
Reply to  Svante Callendar
September 24, 2015 8:26 pm

Yeah, why not just invent numbers totally and ditch any attempt at mirroring reality?

Evan Jones
Editor
September 24, 2015 4:32 pm

Excellent, Brother John. (He’s a fellow station-surveyor, and anyone who has surveyed a station is my brother.)
TOBS is a valid adjustment (though I have no idea whether NOAA’s method is correct) so you must either use TOBS data for those — or drop ’em (I go for option b.) MMTS is a valid adjustment, though it is clear that NOAA is doing that wrong. It must include a “CRS adjustment” (sharply reducing CRS trend, esp. Tmax) as well, but it does not.
Then we need to adjust for . . . microsite! That can only be done correctly and reliably for stations whose microsite is known. That is a whopping 60% downward adjustment to Tmean trend (or else just drop the badly sited stations for the same effect).
This does not affect sea temp trends, of course, except where station radii overlaps ocean (350 km for Bad Haddy and 1200 km for Wicked uncle GISS).

Nick Stokes
Reply to  John Goetz
September 25, 2015 4:23 am

“we turn the data into an estimate”
It was an estimate anyway. That is the issue with TOBS. When you combine daily max and min temps to form a monthly average, you assume that those maxes and mins were recorded on consecutive, distinct days. They may not have been, and that creates a bias, with warm afternoons or cool mornings double counted (depending on TOB).
There is nothing special about the assumption made in “raw” data. It’s just based on not having any reason to think the min/max days aren’t consecutive. Now we do, and we can figure how often it would have happened.

Evan Jones
Editor
Reply to  John Goetz
September 26, 2015 9:47 am

I will add that I take a fundamental approach different than BEST’s. They adjust. I drop.
I have sympathy for BEST. I can drop. They can’t: I am looking at USHCN over a 30-year period; they are doing the entire GHCN. I can get (arguably) enough spots in my grids even when I drop 78% of initial sample. I got the data-rich CONUS. But BEST has Outer Mongolia.
Thing is, that dropping is a check-sum on adjusting. If you include only “pristine” stations, then you can look at a thinner but far more real signal, and you can see clearly what the adjustments are up to and whether they are correct.
In short, I deduce. BEST induces (they have no choice). But I think we have found the two factors unaccounted for in the GHCN adjustment procedures: Microsite and CRS issues. Furthermore, there is a “systematic” error in their dataset, and that makes a travesty of homogenization.

A C Osborn
Reply to  Evan Jones
September 25, 2015 3:10 am

Evan, didn’t you forget the “UHI” joke of an adjustment, which bears no relationship to the amount of UHI everyone knows exists, as they are told about it during every weather forecast?

Evan Jones
Editor
Reply to  A C Osborn
September 25, 2015 4:12 pm

The question is not whether UHI causes increased temperatures (offset). That is non-controversial for all sides. The question is how much it affects trends. Our findings are ambivalent. Gridded urban data shows more warming than non-gridded, but the well sited urban stations are very few, and the non-gridded, well sited urban trend is actually a bit lower than non-urban stations.
What is essential and decisive is not UHI. It is Microsite. Heat sink Effect — HSE. Well sited urban stations warm far slower than poorly sited urban stations. Same applies to non-urban.
Conclusion: HSE is the new UHI. You heard it here first. Microsite is where you will find your disparity, but not so much for mesosite (UHI).

September 24, 2015 4:34 pm

Computers are wonderful tools for recording and cataloging and recalling data.
But they are also dangerous tools for changing data and other things.
The internet?
In the past if one wanted to change something previously recorded in, say, an encyclopedia or almanac, they’d have to go round up all the copies and burn what they didn’t want known.
With the internet, a few keystrokes make that much easier. (Think Wikipedia.)

September 24, 2015 4:36 pm

I have to admit there is a certain beauty to it all: modeled temperature data for climate models. Never have to worry about reality. What could be sweeter for a climatologist?

Nick Stokes
September 24, 2015 5:03 pm

“Overall, from 1880 to the present, approximately 66% of the temperature data in the adjusted GHCN temperature data consists of estimated values produced by adjustment models,”
Well, yes. The adjusted set is … adjusted. I can’t see the point of this count. Once an adjustment is made, all the previous values are moved, since present is the reference. So it doesn’t calculate individual identified adjustments, just the propagating effects.
With US data, any change in time of obs will change all previous data. Elsewhere, any station move etc will, if identified, produce the same propagating effect. It is hardly implausible that more than 66% of stations have moved, or had an equipment change, over their time. In fact, the MMTS change alone would have caused a very large amount of past data to be adjusted.

Nick Stokes
Reply to  John Goetz
September 24, 2015 5:50 pm

The organizations who estimate the global temperature do so on the basis of estimates of the temperatures of regions. Not of thermometers. Thermometer readings are part of the evidence. And if the basis for relating those readings to the region has changed, then of course they should change their estimation procedure.
As I said, I thought the MMTS adjustments alone would have changed at least 50% of readings. So what should they do? Pretend the change made no difference? Climate didn’t change because they brought in MMTS.

Nick Stokes
Reply to  John Goetz
September 24, 2015 6:34 pm

“The public is fed information that is stamped with the label of authoritative certainty”
The public expects that people who actually know what they are doing will give the best estimate they can of global temperature. And they do. That involves detecting and correcting inhomogeneities.

lee
Reply to  John Goetz
September 24, 2015 7:36 pm

Nick Stokes. BOM has adjusted Carnarvon. The raw data says long term cooling, the homogenised data says long term warming. That seems to imply a failing station. With a budget of >$340m, why not replace the station? Or put in a temporary one and adjust for any step-change? The step-change is a constant.

willnitschke
Reply to  John Goetz
September 24, 2015 7:45 pm

There is little point in debating with Nick Stokes. He states complete nonsense with a high degree of chutzpah, hoping that he can steamroll over logic. Global temperature adjustment is the only field in science that takes the arrow of time and works it backwards. It’s idiocy to the nth degree, but to Nick Stokes it’s all fine so long as it gives him the results he wants.

Nick Stokes
Reply to  John Goetz
September 24, 2015 8:38 pm

“The raw data says long term cooling, the homogenised data says long term warming”
It’s frustrating when people rattle off this stuff with no links or backing.
Here is the GHCN Carnarvon adjusted and unadjusted. Both rising at the same rate, in fact in GHCN the adjustment is nil. The unadjusted will be the same as the BoM data.

mebbe
Reply to  John Goetz
September 24, 2015 8:53 pm

Nick Stokes September 24, 2015 at 6:34 pm
“The public is fed information that is stamped with the label of authoritative certainty”
The public expects that people who actually know what they are doing will give the best estimate they can of global temperature. And they do. That involves detecting and correcting inhomogeneities.
——————————————
How does the public know that these people know what they are doing?
Is your statement true for all publics; American, Lithuanian, Syrian?
Is the intuition of the public always so infallible?
Does the term ‘public’ include deviant members, who do not conform?
If so, what percentage of the public can be non-conforming without impacting the reliability of their good sense?
Are you a politician?

RD
Reply to  John Goetz
September 24, 2015 9:29 pm

@ Nick Stokes
You say …”climate didn’t change”
>>>>>>>>>>>>>>>>>>>>>>>>>>
Indeed, well said.

RD
Reply to  John Goetz
September 24, 2015 9:39 pm

“The public expects that people who actually know what they are doing will give the best estimate they can of global temperature.”
???????????????????????????????
I’ve seen the estimates…………………………..ROFL

Svante Callendar
Reply to  John Goetz
September 24, 2015 9:42 pm

John Goetz.
The objective of the adjustment process is to improve the accuracy of the measurements. So not doing that would be a disservice to the public.
You talk as if “actual” measurements are somehow a better representation of the temperature. This is an unfounded assumption on your part.

willnitschke
Reply to  John Goetz
September 24, 2015 10:10 pm

“The objective of the adjustment process is to improve the accuracy of the measurements.”
The strawman is to assume that adjustments always make the data better, when they might well make it worse. If your adjustments, say, reverse the trend found in the original data, that is a red flag that something has gone wrong with your alterations to the data. Adjustments are fine in theory, until too many red flags start waving. Then you are doing a great disservice to the public.

lee
Reply to  John Goetz
September 24, 2015 11:30 pm

Nick Stokes, I haven’t figured out how to post an image. But look at this please.
http://kenskingdom.files.wordpress.com/2014/07/carnarvon-tmin.jpg?w=450&h=256
I’ll agree the modern data seems to accord pretty well with the raw data.

Reply to  John Goetz
September 25, 2015 6:38 am

Stokes
The graph for unadjusted data is “quality controlled”. What happens in the “quality control” process?

Reply to  John Goetz
September 25, 2015 12:25 pm

@ Svante.. try adjusting your measurements in a physics class to make your experiment work… see how long it will take before you are in the dean’s office. That’s the most ridiculous statement I’ve ever read. Well, except for adjusting the original data then throwing the original data away in a landfill. What scientist in their right mind would do such a thing? The only reason I can think of is to commit fraud.
I could just as easily adjust the data to indicate a cooling world. In fact, it probably is. Let’s say the theory of AGW is correct. Then if the temperature is supposed to be what the IPCC says it is supposed to be and it isn’t, the concern should be that even with the massive amount of CO2 that has been released since 1998, without it, the temperatures would already be dangerously cold… so which is it? Is it getting colder or warmer? You’re going to be hard pressed to make a case for absolute warming.
Even with the adjustments, which in my view enhance global warming, the temperatures have fallen below the lowest projection of the models. Care to elaborate?

Evan Jones
Editor
Reply to  John Goetz
September 25, 2015 7:06 pm

The net effect of adjustments is to greatly and spuriously inflate the trends of land surface stations, Nick. I have walked the walk. That’s just the way it is.
This does not address the effects of Bad Haddy and Wicked Uncle GISS and their recent SST adjustments, which have come under heavy criticism and refutation.

Nick Stokes
Reply to  John Goetz
September 25, 2015 7:40 pm

“But look at this please.”
It’s very hard to work things out when they are so poorly referenced. Carnarvon’s climate data is here. Those look like minimum temperatures, though I can’t see that mentioned anywhere on kenskingdom.
OK, so why would minimum temps be so adjusted. Well, here is the ACORN station catalogue. The site was at the PO, by the sea until 1950. Then it moved to not sure where, then to its present location about 2km inland in 1974. Now Carnarvon is in a dry area, so 2km inland makes a big difference to minimum temperatures.

lee
Reply to  John Goetz
September 25, 2015 10:10 pm

Nick Stokes, apart from a step-change in the record, why the other adjustments?

Nick Stokes
Reply to  John Goetz
September 26, 2015 1:48 am

“why the other adjustments”
What other adjustments? Please specify.

Louis Hunt
Reply to  Nick Stokes
September 24, 2015 6:15 pm

Are we supposed to believe that the methods used to make adjustments for things like “time of obs” are an exact science? They can never be more than rough estimates. So the more temperature data that has to be adjusted, the greater the margin of error becomes. When a new hottest-day-ever occurs by one one-hundredth of a degree, it is laughable. With adjustments to 66% of the data, there’s no way the average temperature can be anywhere near that accurate.

Nick Stokes
Reply to  Louis Hunt
September 24, 2015 6:43 pm

“Are we supposed to believe that the methods used to make adjustments for things like “time of obs” are an exact science?”
Time of observation is entirely quantitative. The effects on the min/max average are clear-cut. And the changes of observation times are documented.
But what counts is the statistical effect of the changes, because they are used for averaging. There it is important to remove bias. Unbiased noise is greatly reduced by averaging.

willnitschke
Reply to  Louis Hunt
September 24, 2015 10:12 pm

Phrases such as “clear cut” and “documented” are weasel words, used when you cannot say they have been demonstrated as correct.

KTM
Reply to  Nick Stokes
September 24, 2015 7:00 pm

[image]
Coincidence?

Svante Callendar
Reply to  KTM
September 24, 2015 9:47 pm

KTM.
“Coincidence?”
Who knows? Steve Goddard’s stuff is so bad it is usually impossible to figure out what he is talking about.

Lady Gaiagaia
Reply to  KTM
September 24, 2015 10:19 pm

Hilarious!
Sad, but true.
What a scam.
Costing the world trillions in treasure and millions in lives.
Criminal mass murderers.

Reply to  KTM
September 25, 2015 1:11 am

“Who knows? Steve Goddard’s stuff is so bad it is usually impossible to figure out what he is talking about.”
Then you need help with your reading comprehension. The Goddard site is written so that even an 8th grader could follow. Yet you complain it is too hard for you.

Hugs
Reply to  KTM
September 25, 2015 5:45 am

“The Goddard site is written so that even an 8th grader could follow.”
Sure, but that doesn’t make it much better.
Goddard’s graphs are funny, or scary, but their value as science is nil before somebody reproduces them. And Goddard is somewhat pompously not explaining what he does; rather he just tells how everybody is included in the big conspiracy. Not convincing.
While I like Goddard’s excerpts from old newspapers, I don’t like his Excel ‘science’.

Alx
Reply to  Nick Stokes
September 24, 2015 7:42 pm

They are overstating the accuracy and certainty of the temperature data-sets.
Somehow “the science is settled” does not jibe with “we are still working on it.” We really do not know what the temperatures actually were; we only know what they are adjusted to, based on assumptions and the latest best hypotheses/guesses.
Now assumptions are only required when you do not know. They may be reasonable assumptions but they are still based on lack of knowledge. So who is going to tell the public about the level of assumptions and the fact we do not know enough and may never know enough to be certain of past temperature to hundredths of a degree. Who is going to tell the public the data-sets in essence are unproven hypothetical models of the temperature record.
Or is this one of those “They can’t handle the truth!” moments. Just tell them climate change leads the world to climate annihilation and leave it at that.

Reply to  Nick Stokes
September 24, 2015 8:23 pm

Nick writes “With US data, any change in time of obs will change all previous data.”
Where is the adjustment due to changes in thermometer response times?

Peter Miller
Reply to  Nick Stokes
September 25, 2015 2:06 am

And so it is just coincidence with: i) Obama going ecoloon in trying to hobble the U.S. economy by imposing new illegal environmental rules through the EPA, and ii) the potentially disastrous meeting of Paris-ites later this year, that GISS suddenly embarked on yet another unjustified set of temperature adjustments to make the past cooler and the present warmer?
Was this an executive order, or just climate bureaucrats trying to preserve their comfortable lifestyles?
Anyhow, that embarrassing Pause was eliminated by what can only be described as the dodgiest of statistical methodology, so bad that it might have even made the great Mann himself blush.

V. eng
Reply to  Nick Stokes
September 26, 2015 10:23 am

So you think the reported temperature series is not somehow anchored to the actual measured temperature but to some adjusted value. That must be so if an adjustment to a reading changes all previous reported temperatures. And if that is so, then any present error propagates into nearly the entire ‘data’ set, making the whole thing useless.
That ain’t science or even engineering.

Smart Rock
September 24, 2015 5:04 pm

There are six separate steps in the adjustment procedure, which end up showing a consistent increase in adjustment over time (bigger negative adjustments the further back in time it goes). It’s actually hard to believe that there isn’t some little routine in their programs that introduces a date factor. Otherwise, an outside observer with experience in dealing with slightly messy data from other fields of endeavour might expect that the adjustment vs time plot would be randomly spiky with no consistent trend.
Which is a polite way of saying: it looks like the data are fudged to enhance the warming story.

Reply to  Smart Rock
September 24, 2015 5:09 pm

Heaven forfend!

pippen kool
September 24, 2015 5:08 pm

“A chart showing the average change to the raw data is not shown, because an overlay is virtually indistinguishable. ”
So a whole article on “virtually indistinguishable”. Good one.

pippen kool
Reply to  John Goetz
September 24, 2015 6:16 pm

But your money graph shows little difference either.

Luke
September 24, 2015 5:21 pm

This is much ado about nothing.
From Judith Curry’ website:
“On balance the effect of adjustments is inconsequential.”
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/
From Berkeley Earth
“Having worked with many of the scientists in question, I can say with certainty that there is no grand conspiracy to artificially warm the earth; rather, scientists are doing their best to interpret large datasets with numerous biases such as station moves, instrument changes, time of observation changes, urban heat island biases, and other so-called inhomogenities that have occurred over the last 150 years.”
To educate yourself go to:
http://berkeleyearth.org/understanding-adjustments-temperature-data/

Reply to  Luke
September 24, 2015 5:28 pm

Luke,
B.E.S.T. is not credible. They only show what they want the public to see: [image]

Evan Jones
Editor
Reply to  dbstealey
September 25, 2015 7:16 pm

My complaints about BEST are 1.) that they do not account for microsite when they pairwise. They should pairwise with only Class 1/2 stations. That would reduce the trends by 50%, and, 2.) They do not address the defective CRS stations – they should not be increasing MMTS trends, they should be (hugely) reducing CRS trends.
Point 2 will have a decisive effect on the entire pre-MMTS record.

Luke
Reply to  John Goetz
September 24, 2015 5:51 pm

Your source?

Svante Callendar
Reply to  John Goetz
September 24, 2015 6:58 pm

John Goetz.
I would like to know how you derive that 0.25C per century figure as well.

willnitschke
Reply to  John Goetz
September 24, 2015 7:47 pm

He just told you how he derived it… (facepalm)

Svante Callendar
Reply to  John Goetz
September 24, 2015 9:50 pm

willnitschke.
If you think you know how the figure was derived, please share it with us then.

willnitschke
Reply to  John Goetz
September 24, 2015 10:14 pm

“my source is the GHCN unadjusted and adjusted data sets”
You can download both sets of data and subtract the differences between them if you don’t trust the claim.

Mike the Morlock
Reply to  John Goetz
September 24, 2015 10:34 pm

Svante Callendar September 24, 2015 at 9:50 pm
willnitschke.
If you think you know how the figure was derived, please share it with us then.
John Goetz September 24, 2015 at 6:20 pm
Luke, my source is the GHCN unadjusted and adjusted data sets, as spelled out in this post. Did you skip over the post and jump directly to the comments?
Talk less read more.
michael

Hugs
Reply to  John Goetz
September 25, 2015 6:02 am

That is not true. The GHCN adjustment models introduce a warming bias of approximately 0.25 C per century.

I don’t think ‘bias’ is the right word here. They introduce some warming to the trend, but knowing where the biases are is the hot potato. Adjustments are supposed to fix biases.

Reply to  John Goetz
September 25, 2015 7:00 am

Jim’s graph, posted above at 2:40 am on September 25, shows the graph of adjustments over time, as actually shown by NOAA.
The question is – what is the physical explanation for a systematic trend in the adjustments? Why will no-one answer that question?

Evan Jones
Editor
Reply to  John Goetz
September 25, 2015 7:19 pm

McIntyre shows a +0.46C/century adjustment to USHCN trends to 2006 (+0.14 to +0.60) over the raw data, using the USHCN1 dataset.

Evan Jones
Editor
Reply to  John Goetz
September 25, 2015 8:11 pm

The question is – what is the physical explanation for a systematic trend in the adjustments? Why will no-one answer that question?
Outliers, TOBS-bias, MMTS adjustment, station moves. Then (groan), homogenization.
MMTS is handled ass-backwards from how it should be. Outliers, TOBS-bias and moves are valid concerns, but I have no idea if HCN addresses them correctly. But failure to adjust the trends sharply downward for microsite is a fiasco, and the H-word makes it even worse than the straight average.
Raw data is bad. It warms too quickly. The raw trend must be adjusted (DOWN) for HSE, or else all stations with non-compliant microsite must be dropped.

knr
Reply to  Luke
September 25, 2015 1:33 am

In other news, snake oil salesmen tell us how effective snake oil is in application.
And it reminds us to ask what is stopping these same people from telling the public about the poor quality of the data that needs so many ‘adjustments’. Could it be that this would be a hard sell after spending so long telling the same people it is all ‘settled science’?
No ‘grand conspiracy’, true, but a lot of self-serving interest in an area which will lose a great deal if ‘the cause’ falls.

Reply to  Luke
September 25, 2015 12:49 pm

That’s like saying nothing. If they don’t know, then why are the rhetoric and the tone so certain that if we don’t stop right now there will be nothing but disaster? Why are people saying they need to invoke the RICO Act to shut critics of AGW up? I wouldn’t have thought there was collusion 20 years ago, and I thought the people saying so were a little off; well, AGW has convinced me that there is collusion after all. Are you aware of how many published contradictions have been made every time a valid point has been raised? Most of 2006 was spent arguing about CO2 and isotopes, and how long CO2 stays in the atmosphere. Now it seems that most of the certainty about CO2 on the part of CAGW was little more than guesswork, and wrong guesswork at that, made to support their cause.

Marcus
September 24, 2015 5:27 pm

This is absolutely, positively unfalsifiable, irrefutable proof that G.I.G.O. exists…

Marcus
Reply to  Marcus
September 24, 2015 5:28 pm

…Also known as the “Garbage In, Garbage Out” paradox!!!!

Evan Jones
Editor
Reply to  Marcus
September 24, 2015 7:01 pm

Garbage In, Gospel Out.

Brian Jones
September 24, 2015 5:29 pm

The most depressing part of all this is rounding all these figures to a hundredth or even a tenth of a degree and pretending that the results are meaningful in any way. They are estimates and should be treated as such. If you were doing this in a financial system and told your boss that the estimated cost of doing something was known to that level of precision, you would be laughed out of the room.

Svante Callendar
Reply to  John Goetz
September 24, 2015 9:53 pm

John Goetz.
“I have to agree that I find an adjustment of 0.03C applied to every month’s worth of data for decades a bit laughable.”
Why do you say that?
If that is the result from the algorithm, then to just ignore it would be irresponsible.

DaveS
Reply to  John Goetz
September 25, 2015 6:14 am

Svante Callendar
“If that is the result from the algorithm, then to just ignore it would be irresponsible.”
Indeed. To ignore such a laughable adjustment would indeed be irresponsible. Your faith in the magical algorithm is remarkable.

Peter Sable
Reply to  Brian Jones
September 24, 2015 6:56 pm

If you were doing this in a financial system and told your boss that this is the estimated cost of doing something to such a finite number you would be laughed out of the room.

As an engineer I estimated stuff all the time, including the cost of a project. Often 30% wasn’t that big of a deal. (Being as it’s software, 1.5x–2x is all too common on schedules…)
The rise since 1880 is 0.8degC. The adjustment is 0.25degC. I’m perfectly fine with calling the rise 0.65degC if you want. It’s still going up. That’s why the lukewarmer hypothesis is likely the most correct – That’s what the evidence is showing so far. A bit of warming that’s not that alarming, whether there’s adjustments or not.
The adjustments are 30%. The models are 100% to 200% off adjusted and unadjusted values. I’d worry far more about the models. Crying foul over the adjustments is almost nitpicking.
Peter

A C Osborn
Reply to  Peter Sable
September 25, 2015 3:22 am

You call yourself an engineer and can’t even subtract 0.25 from 0.80.
The earth is coming out of the LIA; of course it has been getting warmer.
The adjusted data is used to create headline news – the hottest-ever month, year or decade. It has nothing to do with science.

Reply to  Peter Sable
September 25, 2015 6:08 am

You are forgetting the uncertainty of the results, too. Take a look, for instance, at the error bars for 1900.
http://data.giss.nasa.gov/gistemp/updates_v3/ersst4vs3b/
http://data.giss.nasa.gov/gistemp/updates_v3/ersst4vs3b/v3b+v4_lrg.png

Reply to  Peter Sable
September 25, 2015 6:18 am

Ha. I just noticed the new values (right graph) for 1882 and 1900 are outside the old CI (left graph). So, the old CI was wrong. How do we know the new CI isn’t also wrong?
Some may say this is inconsequential nitpicking. But the big-brained experts claim to measure temperature differences from year to year with accuracy to hundredths of a degree. So, these small things matter, too.

Evan Jones
Editor
Reply to  Peter Sable
September 25, 2015 7:27 pm

The adjustments are 30%. The models are 100% to 200% off adjusted and unadjusted values. I’d worry far more about the models. Crying foul over the adjustments is almost nitpicking.
No it ain’t — because the trends need to be sharply adjusted downward (owing to the Heat Sink Effect from poor Microsite). It is off by ~60% from HSE alone. And then there is the fatally flawed CRS stationset to consider. “A thoid — skimmed right off the top.” (I know Bela. He didn’t offer you beans.)

Evan Jones
Editor
Reply to  Brian Jones
September 25, 2015 8:24 pm

Oh, it’s fine to have the calcs below the MoE — provided you include the MoE, of course. (Otherwise, it’s false precision.)
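As a concrete illustration of reporting a value with its MoE, a minimal sketch in Python (the readings are made-up numbers):

import statistics as stats

readings = [14.2, 14.8, 13.9, 14.5, 14.1, 14.7]     # deg C, hypothetical
mean = stats.mean(readings)
sem = stats.stdev(readings) / len(readings) ** 0.5  # standard error of the mean
moe = 1.96 * sem                                    # ~95% margin of error
print(f"{mean:.2f} +/- {moe:.2f} deg C")            # prints 14.37 +/- 0.28 deg C

The two decimal places are harmless once the "+/- 0.28" rides along with them; quoted alone, they would be false precision.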

Rex
September 24, 2015 5:38 pm

Never mind the estimates… what about the language? There is no way a mean of between 14 and 15 C can be described as ‘hot’. Anyway, I am now beginning to find climate scientists insufferable, strutting about suffused with self-importance. They may know something about climate, but it is clear that they know nothing about the collection, analysis, and interpretation of survey data, which is critical.

Luke
Reply to  Rex
September 24, 2015 10:29 pm

“they know nothing about the collection, analysis, and interpretation of survey data, which is critical.”
Really? Please enlighten us all by publishing your critique of their analyses in a peer-reviewed scientific journal.

Lady Gaiagaia
Reply to  Luke
September 24, 2015 10:58 pm

Peer review is a bad joke. In “climate (anti-)science” there is only pal review, as shown by the Climategate emails.

Mike the Morlock
Reply to  Luke
September 24, 2015 11:34 pm

“they know nothing about the collection, analysis, and interpretation of survey data, which is critical.”
Really? Please enlighten us all by publishing your critique of their analyses in a peer-reviewed scientific journal.
I think Rex should, but only after first establishing a fee you should pay; knowledge isn’t cheap, you know.
michael

Rex
Reply to  Luke
September 25, 2015 1:40 pm

I was wrong to describe the land-based matrix of temperature stations as a ‘survey’… it is a dog’s breakfast. Climate scientists seem to make the same mistake as some of my fellows in the survey analysis field, in which I’ve been involved for 45 years: they assume that the ‘survey error’ is the same as the statistical error. Ah, no.

Ed
Reply to  Rex
September 25, 2015 10:11 am

That temperature range can be described as HOT by the same process that characterizes a change in ocean pH from 8.60 to 8.58 as “THE OCEAN IS GETTING MORE ACIDIC!!” By that line of thinking, if I cut down the number of my adulterous affairs from 20 women to 19, I am being more faithful to my wife. Yay for me.

Latitude
September 24, 2015 6:18 pm

…and the other 34% is automatically retroactively adjusted every time they put in a new set of numbers
That makes the whole 100% fake.

Louis Hunt
September 24, 2015 6:38 pm

Surely there must be some thermometers that are highly accurate. Have they ever tested their adjustment algorithms against accurate thermometers to see how far off they are? For example, have they collected hourly temperatures for a year or so and then fed them into adjustment algorithms to see how close they are in their estimates? If they collected hourly data and then simulated a change in time of observation, they could see how well the adjustments compared to the actual temperature readings they collected for the new time. It would be an interesting experiment and would give some idea of how far off the estimates can be. I’m sure they must have already run some tests on their algorithms, but have they made the results public?
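That experiment is easy to sketch. Below, a synthetic hourly series stands in for the real station data; the diurnal shape, the day-to-day weather walk, and the reset hours are all invented for illustration:

import math, random

random.seed(0)
days = 365
# Day-to-day "weather" as a damped random walk, so warm and cold spells
# persist for several days -- that persistence is what creates TOB bias.
weather, w = [], 0.0
for _ in range(days):
    w = 0.9 * w + random.gauss(0, 1.0)
    weather.append(w)

# Hourly temperatures: diurnal sine (peak near 3 PM) + weather + noise.
temps = [15 + 8 * math.sin(2 * math.pi * (h % 24 - 9) / 24)
         + weather[h // 24] + random.gauss(0, 0.5)
         for h in range(24 * days)]

def mean_of_minmax(series, reset_hour):
    """Mean of (Tmax+Tmin)/2 over 24-hour windows ending at reset_hour."""
    vals, start = [], reset_hour
    while start + 24 <= len(series):
        window = series[start:start + 24]
        vals.append((max(window) + min(window)) / 2)
        start += 24
    return sum(vals) / len(vals)

baseline = mean_of_minmax(temps, 0)  # midnight observer as the reference
for hour in (7, 17):                 # morning and late-afternoon observers
    bias = mean_of_minmax(temps, hour) - baseline
    print(f"reset at {hour:02d}:00, apparent bias: {bias:+.3f} deg C")

On a series like this, an afternoon reset tends to read warm and a morning reset cool relative to the midnight reference, which is exactly the effect the TOB model is supposed to remove; running the same comparison on real hourly data would measure the model’s skill.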

Peter Sable
Reply to  Louis Hunt
September 24, 2015 7:01 pm

Surely there must be some thermometers that are highly accurate.

The ARGO thermometers are extremely accurate (+/- 0.001degC). Of course they adjusted the heck out of them, instead of adjusting the bucket measurements like they should have…
If you look at the Fluke Calibration website you can find calibrators that are accurate to +/- 0.001 degC.
Example: http://us.flukecal.com/products/temperature-calibration/calibration-baths/standard-calibration-baths/7008-7040-7037-7012-70?quicktabs_product_details=2
We’ll have to wait 50 years or so to get a decent trend out of the Argo data though.
Peter

Billy Liar
Reply to  Peter Sable
September 26, 2015 1:05 pm

The Argo thermometers supposedly have great precision. Whether they are accurate is anybody’s guess.

Reply to  Louis Hunt
September 24, 2015 7:04 pm

Yes. For example: http://judithcurry.com/2015/02/22/understanding-time-of-observation-bias/
There are also a number of side-by-side MMTS/CRS experiments. See this post for links and additional analysis: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/

Reply to  Zeke Hausfather
September 24, 2015 8:14 pm

When you assume station operators don’t know how to use min/max thermometers, while ignoring acres of asphalt appearing around your stations, you’re going to give yourself credibility problems.
The net adjustments are obviously wrong. NCDC has been reporting near-average Great Lakes temperatures concomitant with record Great Lakes ice coverage. The raw data do not have this disagreement with reality (I saw the ice, I stood on the ice, it was not computer-generated). There is additional corroboration from snow cover, numbers of 100-degree days reported, etc.
The data quality is way too low for this kind of analysis to be useful anyway; you really can’t even know the average annual temperature of the Earth to within a degree, or you wouldn’t have these gigantic adjustments to past data even since 2000 or 1983. You guys should just average your crappy data (the actual measurements, such as they are), report the result, and apologize for the smell, rather than mashing it into pleasing shapes, putting frills on it, and calling it civet at a huge markup.

KTM
Reply to  Zeke Hausfather
September 24, 2015 8:44 pm

Coincidence?

willnitschke
Reply to  Zeke Hausfather
September 24, 2015 10:18 pm

It was very cold yesterday where I was, so I went to my phone app and checked the current temperature, and then the temperatures of some neighbouring suburbs, and found variations of 1–3 C. It struck me how utterly idiotic it is for these ‘temperature modelers’ to be smearing their thermometer readings over great plots of land. These people really seem to be fools.

Lady Gaiagaia
Reply to  Zeke Hausfather
September 24, 2015 10:20 pm

Will,
Not fools but shameless birds feathering their nests.

Reply to  Zeke Hausfather
September 25, 2015 1:41 am

It’s obviously not a coincidence.
But nor is it proof of deliberate corruption of the data.
There’s a reason that real science uses double-blind trials.
Unwittingly, the “scientists” can bias the output because they know what it should be.
I see this as proof of shoddy work providing shoddy outputs from which shoddy policy decisions are made. But no-one wants to be rubbish.
They just are.

Reply to  M Courtney
September 25, 2015 10:39 am

Unwittingly, the “scientists” can bias the output because they know what it should be.

This is what I found when I dug deep enough: everything had some reference to model results indicating why its little piece of the puzzle fit the bigger picture.

Evan Jones
Editor
Reply to  Zeke Hausfather
September 25, 2015 8:42 pm

There are also a number of side-by-side MMTS/CRS experiments.
You said it; I didn’t.
And rather than adjust the seriously inaccurate CRS units to match the MMTS units, they do it the other way ’round. Yeah, I have no trouble with applying conversion offsets; I do that according to Menne (2009) estimates. But Menne now does a 15-year pairwise spread (Ooo, ick, and other comments) and then distorts the far more accurate MMTS record to match the inaccurate CRS record. Forget the engineers. That’s enough to make a wargame designer’s heart rebel.

KTM
Reply to  Louis Hunt
September 24, 2015 8:54 pm

http://wattsupwiththat.com/2015/06/14/despite-attempts-to-erase-it-globally-the-pause-still-exists-in-pristine-us-surface-temperature-data/
The USCRN was designed to collect accurate data from pristine sites, so that the measurements would not require any adjustments. Since it went into operation, zero warming has been measured.
Maybe at some point they’ll start adjusting USCRN to account for the air intake temperatures of aircraft flying nearby, as they did to ARGO?

Svante Callendar
Reply to  KTM
September 24, 2015 9:56 pm

KTM.
The US is not the world. Plus they have not been collecting the data long enough.

Reply to  KTM
September 24, 2015 10:26 pm

“Svante Callendar” says:
…they have not been collecting the data long enough.
And they will never collect data long enough. Because the data refutes the ‘dangerous man-made global warming’ narrative.

willnitschke
Reply to  KTM
September 24, 2015 10:30 pm

“This evidence suggests that much of the reported U.S. warming in the last 100+ years could be spurious, assuming that thermometer measurements made around 1880-1900 were largely free of spurious warming effects. This is a serious issue that NOAA needs to address in an open and transparent manner.”
http://www.drroyspencer.com/2012/08/spurious-warmth-in-noaas-ushcn-from-comparison-to-uscrn/

Evan Jones
Editor
Reply to  KTM
September 25, 2015 8:44 pm

The USCRN was designed to collect accurate data from pristine sites
“Pristine” my left ass. (But, yeah, that’s the claim.)

Reply to  Louis Hunt
September 25, 2015 7:02 am

“Louis Hunt September 24, 2015 at 6:38 pm
Surely there must be some thermometers that are highly accurate. Have they ever tested their adjustment algorithms against accurate thermometers to see how far off they are?”
There is probably sufficient data available from the US Climate Reference Network stations to run through the adjustment algorithms as a validation test. I doubt the results of that test would improve anyone’s confidence in the accuracy of the algorithms.

Evan Jones
Editor
Reply to  Gary Wescom
September 25, 2015 8:50 pm

There is probably sufficient data available from the US Climate Reference Network stations to run through the adjustment algorithms as a validation test.
Yes, CRN is so beautiful it makes a man weep. I’ve surveyed ~a dozen, and they are so Class 1 it hurts. Consistent equipment. Triple-redundant sensors. No TOBS issue. To be served raw. Sublime. Climate sushi. But it’s still just a child, and its lifetime is over a near-zero trend span. So there’s only so much we can garner.

Evan Jones
Editor
Reply to  Louis Hunt
September 25, 2015 8:31 pm

Surely there must be some thermometers that are highly accurate. Have they ever tested their adjustment algorithms against accurate thermometers to see how far off they are?
Why, yes. That is the meat and drink of Anthony’s siting project. It is, in fact, the basis of all my comments.
Within the USHCN, ~22% are “valid” (from 1979 to 2008): well sited (vitally important), not moved, and with no significant TOBS bias. (Urban stations are fine — provided, always, that they are well sited.)
There is, however, the CRS problem to consider. ALL of those stations are in need of trend adjustment. A cooling adjustment. A big one. A CRS unit carries its own heat sink on its back no matter how well it is sited.

Richard M
September 24, 2015 6:48 pm

In medical research, bias was found to affect results approximately 100% of the time. There is no doubt that these people are biased, in which case there can be no doubt their results are incorrect. This has been shown over and over again. It does not matter how honest the scientists are or how hard they try to avoid bias.
Using poorly maintained, non-calibrated, poorly sited devices is bad enough. But guessing at the real signal, by those who truly believe it must show more warming, is clearly going to show more warming than actually occurred.

September 24, 2015 6:49 pm

Reblogged this on Public Secrets and commented:
And yet we should take drastic and extreme economy- and liberty-killing measures… based on guesswork. Right.

Evan Jones
Editor
Reply to  Phineas Fahrquar
September 25, 2015 8:52 pm

If it were halfway decent guesswork, I wouldn’t mind so much.

Mervyn
September 24, 2015 7:20 pm

The reality is that propagandists like Obama don’t give a damn about science.
Scientists are deluding themselves that this climate change controversy is about climate science and climate data. It is not. It is all about politics, and the once-in-a-lifetime opportunity for the United Nations, once and for all, to crush democracy and capitalism and impose the ambitious, arrogant and unscrupulous ideology of environmentalism.
If people think I’m talking “pie-in-the-sky” stuff, consider who the United Nations has now put in charge of Human Rights – Saudi Arabia, which is currently planning on crucifying a Shia muslim who dared speak out against the government. How many people would have thought that possible?
Obama is driving the UN’s agenda for the ‘New World Order” in which the facade of democracy will still exist but in reality, governments will be enforcing the UN’s agenda … a UN made up of unelected people imposing their view of the world on everyone else.
If we had wanted that, we could easily have left Hitler to take over the world. Back then, we had Sir Winston Churchill, thank God. Who do we have today?

September 24, 2015 7:36 pm

It’s worse than it looks: this is monthly data. In NCDC’s Global Summary of the Day data set, slightly more than half of the records (72 million out of ~125 million) come from stations that took nearly a full year of data that year (greater than 360 samples per year).

Reply to  micro6500
September 24, 2015 7:40 pm

But, while you can’t monitor global temperature, you can measure a reasonably good rate of change at those individual locations and determine what happened there. Since 1940 it’s 0.0 F +/- 0.1 F or, if you carry a few more decimal points, it cools slightly more at night than it warmed the previous day.

September 24, 2015 7:46 pm

It’s no longer appropriate to refer to published temperature records as “data,” they are now “models.”
Someone notify the AP Style Guide!

SAMURAI
September 24, 2015 8:38 pm

“(Yes, the two curves combine oddly enough to look like a silhouette of Homer Simpson on his back snoring.)”
DOH!!!!
The Homer Simpson Constant… Don’t you love it so…
This explains why land-based temp data are so inaccurate and spurious, and why ONLY satellite, radiosonde, and un-tampered ARGO global temp data should be used in climate science.
The pro-CAGW Homer Simpson Constant fudge factors of land-based temp data, and now the Karl 2015 ARGO ocean temp data fudge factor, are the ONLY things keeping the CAGW hypothesis on life support…
Satellite data show there hasn’t been a global warming trend for almost TWO decades, despite 30% of ALL manmade CO2 emissions since 1750 occurring over JUST the last 20 years:
http://www.woodfortrees.org/plot/rss/from:1996.6/plot/rss/from:1996.6/trend/plot/esrl-co2/from:1996.6/normalise/trend/plot/esrl-co2/from:1996.6/normalise
And there are such HUGE discrepancies (2+ standard deviations) between CAGW model projections vs. reality, that the CAGW hypothesis is already a disconfirmed hypothesis under the rules of the Scientific Method….
CAGW will go down as one of the biggest and most expensive scandals in human history…
CAGW is despicable.

Reply to  SAMURAI
September 25, 2015 12:55 pm

+100… very well said.

Evan Jones
Editor
Reply to  SAMURAI
September 25, 2015 8:54 pm

There is no CAGW. But some AGW, I think. We are in a negative PDO now. Should be cooling, but is essentially flat. (That’s the measurable extent of AGW, such as it is. Not so much, but not nothing, either.)

mebbe
September 24, 2015 8:40 pm

If a general circulation model with 75,000 surface cells is seeded with data from 6,000 reporting stations, it is understandable that one would have to get creative with the meagre measurements available.
Not to mention the many vertical layers and the 30 minute time increments.

601nan
September 24, 2015 8:46 pm

The GHCN V3 “data” are useless at best and, most probably, fraudulent.

Svante Callendar
September 24, 2015 8:53 pm

“The monthly GHCN V3 temperature record that is used by GISS undergoes an adjustment process after quality-control checks are done. The adjustments that are done are described at a high-level here.”
Some questions:
– is the GHCN-M data set used by GISS?
– the description link given is for the USHCN. Is this description relevant to GHCN-M?

KTM
September 24, 2015 8:56 pm

Some people have no problem doing the usual song and dance about how adjustments are necessary/proper/irrelevant. But when you drill down to the adjustments being applied to individual stations, the hand-waving falls apart and the hand-wringing begins.

BFL
September 24, 2015 10:02 pm

With all the adjustments/estimates/modeling over the decades, the temp error ranges must look like those of the multiple computer model runs for future temperatures versus CO2. Ahh, sooo, now I see why those model runs appear okay to climastrologists: if the temp “data” error bands were overlaid onto the model runs, they would overlap!

ohflow
September 24, 2015 10:24 pm

It’s mostly claimed that the adjustments to raw data add more cooling to the data than warming, but I can’t find a graph to match that assertion. Why are they saying this?

Llanite
Reply to  ohflow
September 24, 2015 11:19 pm

Because what the adjustment does is ‘cool’ the past to make it appear as though the slope of the increase is steeper. Their ‘adjustments’ are anchored in the present, so to make apparent global warming fit their models they have to decrease past temperatures, to make it appear as though warming has occurred at the rate they predict.

ohflow
Reply to  Llanite
September 24, 2015 11:52 pm

Is there a graph showing this? Like a single graph, not fifteen.

Lady Gaiagaia
Reply to  ohflow
September 24, 2015 11:23 pm

The totally bogus adjustments warm recent decades but cool more distant decades. On balance there is cooling, but it’s all bogus in the interests of promoting the Warmunistas’ agenda.

Lady Gaiagaia
Reply to  Lady Gaiagaia
September 24, 2015 11:26 pm

Whether BEST, HadCRU, GISS or NOAA, they are all rent-seeking, greedy, self-serving, trough-feeding swine, enemies of humanity who are destroying the good name of science built up over centuries.

Mike the Morlock
Reply to  Lady Gaiagaia
September 24, 2015 11:47 pm

Lady Gaiagaia, ahem, others provide you with wonderful bacon, chops and ham. What, dear friend, has “Swine” done to you that you speak of them in such a disparaging manner? Heavens, I’m just beside myself, snacking here on sausage.
michael 🙂

Patrick
September 24, 2015 10:41 pm

All this talk of global average temperature, adjusted since the 1800s, is just proof that the science behind “climastrology” is shonky!

September 24, 2015 11:36 pm

So 66 percent of the temperature data are estimates and that is a big problem. It is an even bigger problem because the data only covers approximately 15 percent of the Earth’s surface. There are virtually no stations for some 85 percent of the oceans or continents.

Walt D.
Reply to  Tim Ball
September 25, 2015 2:37 pm

Tim: A huge problem is that the oceans are not warming. If 2/3 of the Earth’s surface is not warming at all then the land would have to warm by 6C to reach the catastrophic 2C global average.
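That 6 C figure is just an area-weighted average solved for the land term; a one-line check in Python (using the comment’s round 2/3–1/3 surface split):

ocean_frac, land_frac = 2 / 3, 1 / 3        # rough ocean/land surface fractions
print(ocean_frac * 0.0 + land_frac * 6.0)   # = 2.0 deg C global average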

Koba
September 25, 2015 12:13 am

A truly shocking statistic that most of the reported temperatures are fictitious and we all know that the global warming models are much hotter than reality.

paqyfelyc
September 25, 2015 12:19 am

thanks, this required a lot of work

Somebody
September 25, 2015 12:54 am

The only way to check whether the adjustments are correct is to pick a certain time and place for a certain station adjustment, take your time machine, go to that time and place, measure better, and compare. The pesky experiment pseudo-scientists hate so much. Since that is not possible, I invoke Newton’s flaming laser sword.

knr
September 25, 2015 1:02 am

Although there are many ways you can quite rightly mock climate ‘science’, you have to give credit where it is due: the fact that they have managed to build so much on so poor a foundation of ‘better than nothing’ does show some skill.
It is therefore a real shame that they cannot show the same skill or effort when it comes to following good scientific practice in their work. Although, to be fair, if they did they might get the ‘wrong type of right results’, which would be no good at all for their careers and would certainly not meet any political ‘needs’.

Svante Callendar
September 25, 2015 1:54 am

John Goetz or Nick Stokes.
One thing that does look intriguing is the “sawtooth” pattern in chart 5, the one showing the average adjustments over time.
Are they monthly average values?
If they were averaged annually the sawtooth pattern might disappear.
To me it looks like most of the adjustments are for Time of Observation (TOB) bias, considering how many of the GHCN-M stations are in the United States, which tended to use volunteer observers before the stations were automated.

Reply to  Svante Callendar
September 25, 2015 5:31 am

I noticed that, too. I think it is probably, as you say, monthly averages.
I believe it may illustrate something that Walter Dnes showed in a post here last year (second graph) though only for USHCN:
http://wattsupwiththat.com/2014/08/23/ushcn-monthly-temperature-adjustments/
In recent years, the months December through April are adjusted much more than the other months are. I suppose it is only coincidence that the reigning CO2-temperature hypothesis says winter should warm faster than summer does… but I digress. 🙂
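If chart 5 is indeed plotting monthly values, the annual-averaging suggestion above is easy to try; a minimal sketch, with a made-up monthly series (a slow drift plus a 12-month cycle) standing in for the real adjustments:

import math

# 20 years of monthly values: slow drift plus a seasonal sawtooth.
monthly = [0.001 * m + 0.1 * math.sin(2 * math.pi * m / 12)
           for m in range(12 * 20)]
annual = [sum(monthly[y * 12:(y + 1) * 12]) / 12  # calendar-year means
          for y in range(len(monthly) // 12)]

def wiggle(series):
    """Mean absolute step-to-step change -- a crude sawtooth detector."""
    return sum(abs(b - a) for a, b in zip(series, series[1:])) / (len(series) - 1)

print(round(wiggle(monthly), 4))  # dominated by the seasonal cycle
print(round(wiggle(annual), 4))   # the cycle averages out; only the drift remains

If the sawtooth in the real chart survives annual averaging, it is not a simple seasonal artifact.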

mothcatcher
September 25, 2015 2:00 am

I also am very reluctant to believe that there is a conspiracy, at least a conscious one, to adjust the data in a particular direction, and I can see why some of the people who work on this (e.g. the BEST guys) get a bit upset at the tone of the comments, and may as a result refuse to engage with the doubters. Please, folks, try to discuss these things on topic, at least.
However, essays like those from John Goetz here are not without value, because they remind us of the frequency and the volume of adjustments and estimates that are made – and this has consequences for our understanding of the headlines we see. For me, it just goes to underline the very large general uncertainty there is about temperature trends (though I accept that the keepers of the adjustments do believe that they are reducing that uncertainty) and pushes me towards the satellite observations. Sure, there are adjustments and estimates there too, but they are of a different (and, importantly, much more readily auditable) nature to those in the surface record.

Old Forge
September 25, 2015 2:35 am

To summarise the comments so far:
A Comment: ‘66% adjustments – the recent past always warmer and the longer-term past always cooler. Suspicious.’
SC/NS: ‘Adjustments have to be made, and they’re consistent because they’re scientific.’
A Comment: ‘Consistent, yes. But agenda-driven/on message – present warmer, past cooler.’
SC/NS: ‘Can you substantiate that?’
A Comment: ‘Yes – look at X, Y, Z.’
SC/NS: ‘I can’t look at such stuff, it’s rubbish.’
A Comment: ‘Well it’s not as rubbish as your adjusted temperature estimates!!’
SC/NS: ‘Adjustments have to be made, and they’re consistent because they’re scientific.’
[…]
Am I missing something?
NB – didn’t James Hansen include an extra 30 years’ worth of ‘data’ for Antarctica, 30 years before the first met station opened there? Wasn’t the single station he used outside the Antarctic circle and, oddly enough, adjusted to show the whole South Polar region as 2 degrees cooler than the present?

angech
September 25, 2015 2:43 am

Happy to see this presentation, but disappointed in the layout and some of the assertions/assumptions.
It has great potential to be redone and presented more clearly, though not by me; I am a critic and suffer from lack of follow-through.
Pleased to see Nick and Zeke commenting here. They often come on board when their particular viewpoints are challenged, usually in Nick’s case to say ‘forget that, look at this’.
We seem to be missing Mosher, who has been commenting at depth on a similar issue of the record accuracy at J Curry’s > 100 matches.
His claim?
“That is why you can for example pick 110 PRISTINE sites in the US
(CRN) and predict the rest of the country: Including
100s of other pristine sites ( RCRN) and 1000’s of “bad” sites.
What’s it tell you when you can start with 60 samples and get
one time series… then add 300 and get the same,,, then add
3000 and get the same…. then add 30000 and get the same?
whats that tell you about sampling?
Whats it tell you when you can pick 5000 and then predict any
other 5000 or 10000?”
I replied: what it tells anyone with common sense is that all the sites have been linked to each other by an algorithm and are no longer individual raw or individual site-modified data, but data that has been homogenized to fit in with every adjacent site.
This is not something to be proud of.
This is scientifically very, very wrong.
Any true set of recordings makes allowance for the fact that temperatures vary from minute to minute and from site to site, and that, due to known weather variations, sites do not have to match each other in step.
What Mosher alludes to is pure chicanery.
Any set of sample sites that agrees this perfectly means they are not real temperature recordings in any form.
It is the Cowtan and Way kriging experience all over again.
You must be able to pick sites that do not agree with each other in any sample.
That is what weather temperature measurement is all about.
When you link everything together so it all moves in step, whatever sample you take, you do not have real measurements.
Try it on the raw data Steven. See if they all move the same way whether you use 2 or 50,000.
I will guarantee they don’t.
Take your modified data and prove they all link perfectly.
I guarantee they do as well. I have your word for it.
And what do you call your data?
Well not data anymore.
Sorry for cross-threading, but the absolute ability of any sample of the data to agree with all other samples means the claim that 66% of the values are estimated is incorrect: 100% of the data is estimated and homogenised.
As Nick Stokes said September 24, 2015 at 6:34 pm
“The public expects that people who actually know what they are doing will give the best estimate they can of global temperature. And they do. That involves detecting and correcting inhomogeneities.”
I.e., none of the data we get is real; it is all correlated and homogenized, as per Mosher’s stunning observation.
Surely people here and elsewhere can follow up on this admission of data being so modified that all real variations have been removed.

Peta in Cumbria
September 25, 2015 3:05 am

Sums up the ‘conspiracy’ nearly perfectly…

As she sings:
“Everyone’s a super-hero”
“Everyone’s a Captain Kirk”
“With orders to identify, classify etc”
“Floating in the summer sky – something’s out there”
“Something’s here from somewhere else”
“99 ministers meet, worry worry super scurry”
“The President is on the line…”
Just brilliant, and she’s very pretty.

jeanparisot
September 25, 2015 3:44 am

While I can accept adjusting individual data points in a well-documented manner, and using them for local purposes, I have a problem with using the local adjustments in a larger dataset unless those specific adjustments were applied universally. The averaging functions of the large datasets should incorporate the rationale for the adjustments in their error calculations.
In other words, the adjusted dataset is weather, the raw data is climate.

A C Osborn
September 25, 2015 4:21 am

One thing that is missing from this study is the number of sites that now have decades of “estimated” data which never existed. There was no raw data; the estimated data has been added to lengthen the historic records, or to fill in where data was simply missing.
Of course, the estimated data is lower than the average temperatures for that area.

richard verney
September 25, 2015 5:19 am

One of the real problems here is spatial coverage, and station drop-out (there has been a substantial decline in stations in recent years).
The plot shows how little global coverage there was in the 1880s. Even today, only about 2,500 stations are used. Can one truly ‘estimate’ global temperatures from just a few hundred, or even 3,000, stations? The globe is a big place and the spatial coverage is poor. This is not simply a number issue, but a density issue. It is not the case that in 1880 there were, say, about 300 stations equally positioned throughout the globe, and that today there are about 2,500 equally distributed. There are very large tracts of planet Earth where there are no stations, or just a handful.
The temperature record has now become so horribly bastardised (for a number of reasons) that we are left reviewing the veracity and probative value of the adjustments/homogenisation rather than the underlying data itself.
Personally, I consider the spatial coverage to be so poor and the margins of error so high that we cannot say anything of value about global temperatures.
Probably all that can be said about temperatures is, as a generalisation, that there is much year-to-year variability and that the 1880s, 1930s and today are all warm periods, but that due to limitations of the data (and its true error margins) we are unable to say on a global basis whether today is warmer than the 1880s or the 1930s; as far as the US is concerned, it is likely cooler today than it was in the 1930s.
Obviously we can say that it is warmer today than it was at the lows of the LIA, but we are unable to demonstrate from the thermometer record an increase in the rate of warming in the modern period over and above the rate of warming seen in the circa 1880s–1900s and the circa 1920s–1940s (indeed, I recall that Phil Jones of CRU acknowledged that there was no statistical difference in the rates of warming during these three periods).
Of course, the thermometer record can at best tell us something about temperature, but not about the cause of any rise in temperature, and the land temperature record cannot tell us anything about changes in energy.
The whole thing is simply too coarse to be useful; it has little scientific value.

geronimo
September 25, 2015 5:27 am

Let me say first of all that I’m taking it that the adjustments to temperature readings taken between 20 and 120 years ago are made in good faith. However, suppose it was put to a jury that these people had told politicians that the world was warming and that great sacrifices would have to be made to stop future catastrophes; that the politicians had taken them at their word and brought in policies to combat climate change that resulted in an increase in energy costs, coal miners thrown out of work, and industries decamping to more energy-friendly nations, policies generally frowned upon by the voters.
Would it not seem reasonable to conclude that their jobs and livelihoods depended upon their being correct in their diagnosis and prognosis, and, if it were then shown that they had systematically changed previous data to exaggerate the warming in the 20th century, that they were doing this to save their faces and jobs?

Billy Liar
Reply to  geronimo
September 26, 2015 1:33 pm

I’m waiting for the class action suit.

Solomon Green
September 25, 2015 5:43 am

I note that Nick Stokes has not replied to Lee {9/24@11.30}.
I am confused. I would like Mr. Stokes (if he has not retired hurt) to explain: “The organizations who estimate the global temperature do so on the basis of estimates of the temperatures of regions. Not of thermometers. Thermometer readings are part of the evidence.”
If thermometer readings are only “part of the evidence” what are the other parts? And how can these contribute to any estimate of temperature if they are not based on measurement? And if they are based on measurement how are those measurements derived without thermometer readings?
By the way, I understand the need for TOB adjustments when estimating Tmean as (Tmax+Tmin)/2, but for at least ten years it has been possible to measure Tmean accurately as a continuous function over any 24-hour period, and more than one station that I have seen is doing this.
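The two statistics genuinely differ whenever the diurnal shape is asymmetric, which is worth making concrete; a minimal Python sketch with an invented hourly profile:

import math

# One day of hourly temperatures: a flat night with a sharp afternoon peak.
hourly = [10 + 12 * math.exp(-((h - 15) ** 2) / 18) for h in range(24)]

minmax_mean = (max(hourly) + min(hourly)) / 2  # what a min/max thermometer yields
true_mean = sum(hourly) / len(hourly)          # what continuous logging yields
print(round(minmax_mean, 2), round(true_mean, 2))  # the midrange reads high here

A min/max pair cannot see that asymmetry, which is one reason continuous logging, where available, is the better measurement.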

lee
Reply to  Solomon Green
September 25, 2015 8:15 pm

Extract of report into BoM-
‘The Forum noted that the extent to which the development of the ACORN-SAT dataset from
the raw data could be automated was likely to be limited, and that the process might better be
described as a supervised process in which the roles of metadata and other information required
some level of expertise and operator intervention. ‘
http://www.bom.gov.au/climate/change/acorn-sat/documents/2015_TAF_report.pdf
So “operator intervention” – AKA best guess.

September 25, 2015 6:38 am

John Goetz, can you please just provide the trend of the data which is still raw, not adjusted?
This is the temperature data WE should use and start promoting as the real land temperature record. There is an issue with respect to gridding a reduced database, but it will be the truth.
It is clear that the adjusted data is, “wink, wink, nudge, nudge”, used by the climate scientists to keep their theory alive. Maybe it is not a conspiracy, but there is a lot of winking going on whenever a new Karl et al. 2015-style paper comes along with a new adjustment-increasing proposal.
The point is: what is the real temperature increase? Is this theory true or not? That is what is important. John’s data above shows that 0.4 C of the trend is just an artifact of the winking process.
Throw out the adjusted temperatures. Facts are more important.

Walt D.
September 25, 2015 8:05 am

When all is said and done, changing the data does not change the temperature of what is being measured. Again, the ocean temperatures at and around the ARGO buoys have not suddenly jumped, and the ARGO buoy thermometers have not suddenly lost their accuracy.
All you do when you tamper with the data is increase the difference between the data and reality.

September 25, 2015 8:38 am

First, if CAGW were real, they wouldn’t have to adjust the data to prove warming; and second, if CAGW were real we wouldn’t still be arguing about it since 1998. Any reasonable person could see the results. I think we have seen the results, and CAGW is unreasonable: it didn’t and isn’t happening. When the temps have fallen out of the lowest projection, how is it reasonable to think that CO2 controls temperature?

Matt G
September 25, 2015 8:39 am

To estimate 66% of the data from a coverage that has declined from 0.2% to 0.1% of the planet’s surface is a disgrace for anybody who says it is better than satellite data.
Could you imagine the uproar if the satellite data covered only the tropics and polar zones, and the rest of the world was estimated?
There is little doubt this data set has deteriorated from bad to worse and now relies mainly on confirmation-bias modeled temperatures for the majority of the data.
When proper data was used and not adjusted for models or infilled from land to ocean surface, GISS in particular resembled something a bit more realistic.
The corrected data shows a 0.4 c artifact, especially since 2001, and another artifact from shifting the anomalies between the 1940s and 1980. With this shift corrected, it resembles the ERSST global surface temperatures between the 1940s and 1980. There is now still a 0.8 c difference between the early 1900s and the recent period (like HADCRUT), whereas before GISS had suddenly and dishonestly been changed to a ~1.3 c difference.
http://i772.photobucket.com/albums/yy8/SciMattG/GISS-corrected2_zpssymskhge.png

Walt D.
Reply to  Matt G
September 25, 2015 9:36 am

The real problem is not manipulated data, but a lack of adequate data.
There are just not enough data to estimate what they are trying to estimate.
Using estimates instead of actual data produces smoothing. The histogram of estimated values is different from the histogram of actual values. There is also the problem of bias – systematic over-estimation or under-estimation.
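That smoothing is easy to demonstrate: values infilled as averages of neighbours have a visibly narrower spread than the values they replace. A minimal sketch with synthetic numbers standing in for station data:

import random
import statistics as stats

random.seed(1)
actual = [random.gauss(15, 2) for _ in range(1000)]  # "true" station values

# Estimate each interior value as the mean of its two neighbours (crude infilling).
infilled = [(actual[i - 1] + actual[i + 1]) / 2 for i in range(1, len(actual) - 1)]

print("stdev of actual values:  ", round(stats.stdev(actual), 2))    # ~2.0
print("stdev of infilled values:", round(stats.stdev(infilled), 2))  # ~1.4

The infilled histogram is narrower by roughly a factor of sqrt(2), exactly the compression of the histogram described above.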

Matt G
Reply to  Walt D.
September 25, 2015 11:48 am

Lack of samples has certainly been part of the problem, but they have deliberately reduced station numbers because they thought coverage was adequate enough. The fewer samples are used, the easier it is to introduce bias when they are changed to somewhere else or sample numbers are changed. HADCRUT4 did this in the last version by introducing an extra 100+ stations in the NH and 400+ in the SH. Why reduce thousands of stations and then add hundreds? No reason other than to show a warming bias, and this change is reflected between versions 3 and 4. Cooling numerous stations significantly over the earliest decades, turning an overall cooling trend into a warming one, is not caused by lack of adequate data.
Using estimates only produces smoothing depending on how it is done. If it is done like GISS, ocean surface temperatures help smooth the data because of their slow response. Infilling from land temperatures over the ocean surface reduces the smoothing and increases warm bias during warm periods. Estimates don’t cool past station data and warm recent data; that is due to biased human adjustments.
The best method is not to use estimates at all and only use quality stations that don’t need estimating. If a station’s data are missing for any month, then it should be handled the way HADCRUT does it: they omit it while keeping the rest of the stations unaffected.

September 25, 2015 8:52 am

How about a post that shows the temperature history of only non-adjusted data, only adjusted data, and only discarded data?

Mike Smith
September 25, 2015 9:00 am

To dismiss all adjustments to raw data is to throw out the baby with the bathwater. However, it seems to me that we have two massive problems with the adjustments made to the global temperature record:
1. The adjustments are very heavily skewed in one direction, creating an obvious appearance of bias (whether or not such bias is real). The limited transparency concerning the basis of the adjustments, and the use of invalid statistical methods in some cases, has only made matters worse.
2. The magnitude of the adjustments is very large: almost as large as the warming signal being studied.
These simple facts make the surface temperature record untrustworthy to the point of uselessness.
Policy decisions which involve economic impacts measured in trillions of dollars can only reasonably be made based on the far more reliable satellite data record.

September 25, 2015 9:24 am

John
‘The skill of that model is nearly impossible to determine on a monthly basis, but it is unlikely to be consistently producing a result that is accurate to the 1/100th degree that is stored in the record.”
WRONG.
1. The skill of the model has been tested many times and the results are published.
2. NOBODY claims ACCURACY to 1/100th of a degree. That is NOT TRUE.
It’s like this.
Suppose I weigh you with a scale that has a precision of 1 lb.
200, 201, 200 are the measurements I record.
Now, I ask you to PREDICT or ESTIMATE what a PERFECT scale would record.
The best prediction is 200.333333333333333333333333333333333333333.
That DOES NOT MEAN I am claiming to KNOW your weight to within 1/100th of a pound or whatever.
It means THIS:
IF you recorded the weight with a perfect scale, the estimate of 200 1/3 pounds would MINIMIZE the error of prediction.
Get it? So when we adjust for TOBS and say 74.76478 F, we are saying THAT ESTIMATE minimizes the error of PREDICTION.
That is why in the tests of TOBS models the error of prediction is recorded. It’s typically around 0.25 F.
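The claim that the mean minimizes the prediction error is easy to verify numerically; a minimal Python sketch using the weights above:

readings = [200, 201, 200]

def sum_sq_err(estimate):
    """Sum of squared prediction errors for a candidate estimate."""
    return sum((r - estimate) ** 2 for r in readings)

# Scan candidate estimates in 1/1000 lb steps and keep the best one.
candidates = [200 + k / 1000 for k in range(1001)]
best = min(candidates, key=sum_sq_err)
print(best)  # 200.333 -- the sample mean, to the scan's resolution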

Reply to  Steven Mosher
September 26, 2015 12:33 pm

Mosher,
Your example sums up “Climate Science” practice very well. If your scale is accurate to 1 pound, the average of 200, 201, and 200 is 200, and you simply do not have any more information than that to report. Anything past the decimal point is unknown, and reporting such numbers is false, nothing more, nothing less. Historical temperature records are not accurate enough to discuss any digits past the decimal point, but you guys claim to know the average temp of the Earth in 1880 to .00 accuracy, or at least, you permit the media to publish such rubbish.

Billy Liar
Reply to  Steven Mosher
September 26, 2015 1:46 pm

You could use the scalpel on the outlier, 201, and call it 200.

Reply to  Billy Liar
September 26, 2015 8:17 pm

What? You cannot call it anything other than 200. An instrument accurate to pounds cannot report any information on fractions of pounds; how fundamental is this to science???

September 25, 2015 9:27 am

“The next chart shows the estimate percentages broken out by rural and non-rural (suburban and urban) stations. For most of the record, non-rural stations were estimated more frequently than rural stations. However, over the past 18 years they have had temperatures estimated at approximately the same rate.”
If you are using the GHCN metadata for urban and rural, STOP!
That data is:
A) old
B) wrong
That is why no one uses it.

angech2014
Reply to  Steven Mosher
September 25, 2015 8:24 pm

“That data is old.”
You mean real data?
Thank God.
No, you mean old modified data which has not got your new adjustments in it.
Oh, well. I was hoping.
“Wrong?”
Wrong to use old data?
All data is old; the older the better, usually, except for modified, homogenised rubbish.
One moment you argue for it; the next you dismiss it, once you have changed it.
What fantastic logic.

Reply to  Steven Mosher
September 26, 2015 8:20 pm

You are the CAGW equivalent of Willis, ardent and yet un-schooled….

Reply to  Michael Moon
September 26, 2015 8:24 pm

Harsh, yet justified.
Although, actually, Willis might be a little more schooled than Steven, though equally ardent.
In both cases, I’m reminded of the Restoration comedy Puritan character, Zeal-of-the-Land.

September 25, 2015 11:11 am

“Svante Callendar” September 24, 2015 at 6:39 pm:
Back in your box, rover. I wonder sometimes about blogs that use attack dogs to disrupt discussions.
On another thread (June 4, 2015 at 8:29 pm) “harrytwinotter” wrote:
dbstealey,
Back in your box. Don’t you ever tire of the “attack dog” role?

There are several other comments by “harrytwinotter” that post the same “attack dog” and “back in your box” comments. But “Svante Callendar” never replied to my observation.
So how about it, “Svante”? Are you a sockpuppet?

September 25, 2015 2:22 pm

Why is average temperature a meaningful statistic?
If it is meaningful, would a warming of +0.5 degrees C. matter to ordinary people (not climate gamers or politicians)?
If +0.5 degrees C. did matter, would it be good news, or bad news?
If it was bad news, would humans be able to reverse +0.5 degrees of warming?
Until these questions are answered, the collection of average temperature data appears to be mainly a waste of taxpayers’ money.
Debates over the temperature “adjustments” bog down “deniers” in climate minutia, where they will have little influence on the climate change “scam”.
From my own point of view, based on evidence and logic, and speaking on behalf of humans, animals and green plants:
– Slight warming since 1880 is good news.
– More CO2 in the air since 1880 is good news,
– Even more warming in the future would be better news, and
– Even more CO2 in the air in the future would be better news.
Average temperature is not a measurement.
It is a statistic that can be compiled in hundreds of different ways.
No one on Earth lives in the average temperature.
Therefore, no one on Earth should care about the average temperature.
Average temperature is mainly a propaganda tool used by leftists to scare people, with the ultimate goal of gaining political power.
This effort is 99% politics and 1% science.

Walt D.
Reply to  Richard Greene
September 25, 2015 7:07 pm

“Senator Iselin, you need to pick one number and stick to it.”
This effort is 97% politics and 3% science.

prjindigo
September 25, 2015 3:11 pm

Lemme just remind you of something important: if you increase the resolution of the model by a factor of 2, then 99% of the data is faked.

Rico L
September 26, 2015 9:47 pm

Vic Reeves: “88.2% of statistics are made up on the spot.” There was never a better take on statistics.

October 3, 2015 11:30 am

Reblogged this on Climate Collections and commented:
Outstanding review of GHCN treatment of historical data.
Executive Summary: Overall, from 1880 to the present, approximately 66% of the temperature data in the adjusted GHCN temperature data consists of estimated values produced by adjustment models, while 34% of the data are raw values retained from direct measurements. The rural split is 60% estimated, 40% retained. The non-rural split is 68% estimated, 32% retained. Total non-rural measurements outpace rural measurements by a factor of 3x.
The estimates produced by NOAA for the GHCN data introduce a warming trend of approximately a quarter degree C per century. Those estimates are produced at a slightly higher rate for non-rural stations than rural stations over most of the record. During the first 60 years of the record, measurements were estimated at a rate of about 75%, with the rate gradually dropping to 40% by the early 1990s, followed by a brief spike in the rate before resuming its drop to the present level.
Approximately 7% of the raw data is discarded. If this data were included as-is in the final record it would likely introduce a warming component from 1880 to 1950, followed by a cooling component from 1951 to the present.