Practicing the Dark Art of Temperature Trend Adjustment

Did Federal Climate Scientists Fudge Temperature Data to Make It Warmer?

Ronald Bailey of Reason Magazine writes:

The NCDC also notes that all the changes to the record have gone through peer review and have been published in reputable journals. The skeptics, in turn, claim that a pro-warming confirmation bias is widespread among orthodox climate scientists, tainting the peer review process. Via email, Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record.

But he believes that the researchers have likely succumbed to this confirmation bias in their temperature analyses. In other words, he thinks the NCDC’s scientists do not question the results of their adjustment procedures because they report the trend the researchers expect to find. Watts wants the center’s algorithms, computer coding, temperature records, and so forth to be checked by researchers outside the climate science establishment.

Clearly, replication by independent researchers would add confidence to the NCDC results. In the meantime, if the Heller episode proves nothing else, it proves that we can continue to expect confirmation bias to pervade nearly every aspect of the climate change debate.

Read it all here: http://reason.com/archives/2014/07/03/did-federal-climate-scientists-fudge-tem

Jimmy Haigh.

The warm is turning…

“The NCDC also notes that all the changes to the record have gone through peer review and have been published in reputable journals. ”
The one paper I tried to check was behind a pay wall.

GeologyJim

There used to be 9000+ reporting stations in the global network. Then the Soviet Union collapsed, economies flattened, priorities changed, and now the network is on the order of 3000 stations.
Those stations retained are disproportionately sited where lots of people live (cities, airports, etc) and they are disproportionately affected by Urban Heat Island issues. Many high latitude and high altitude stations (generally colder) disappeared from the network.
No amount of averaging, gridding, massaging, extrapolating, or “in-filling missing data” can negate these network changes.
The network is trending warmer because the reporting stations are in warmer locations than in the past.
And the historical high temperatures are still from the 1930s-1940s.
NASA-NOAA-HADCRUT-GHCN adjusted data are inherently biased.

Latitude

why don’t they just come right out and say it…..
15 years ago when they said they knew what they were doing, it was accurate….
..they were lying through their teeth

JimS

I wonder how many reporting stations are located in Antarctica?

An audit would seem to be in order …

Ivan

“Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record”
What exactly is the evidence for this claim? If the adjustments go on permanently and they always cool the past and warm the present, it seems that the null hypothesis should be that they are doing this on purpose. Especially when we keep in mind the Climategate correspondence and their open deliberations about how to “eliminate the blip” of 1940. It seems that they are not only eliminating the warming blip of 1920-1940, but the cooling blip of 1940-1970 as well.

Alan Robertson

“…if the Heller episode proves nothing else, it proves that we can continue to expect confirmation bias to pervade nearly every aspect of the climate change debate…” as framed by government-sponsored researchers and spokesmen.
————-
fixed

D. Cohen

Why do I never, ever hear or read anything about possible errors in the temperature-adjustment process, or in the parameters it uses? It always sounds as if this adjustment is “infinitely accurate”. In other science and engineering fields, where data are significantly contaminated by both random error and biases, adjusting the data to eliminate the biases is usually a bad idea: the error that comes from not knowing the bias to infinite accuracy, when added to the overall error budget for the data, ends up making the adjusted data less rather than more accurate.
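D. Cohen’s error-budget point can be made concrete with a toy calculation. This is a hedged sketch with made-up numbers (none of these figures come from NCDC or any real station): when the bias estimate itself is uncertain, that uncertainty adds in quadrature to the random error of the raw reading, assuming the two errors are independent.

```python
# Toy error budget: subtracting an uncertain bias estimate from a raw
# reading adds the estimate's variance to the adjusted value's variance
# (independent errors assumed). All numbers are illustrative only.
raw_sigma = 0.5    # random error of a raw reading, in degrees
bias_sigma = 0.4   # uncertainty of the estimated bias, in degrees

# adjusted = raw - bias_estimate, so the variances add:
adjusted_sigma = (raw_sigma**2 + bias_sigma**2) ** 0.5

print(round(adjusted_sigma, 3))  # 0.64: worse than the 0.5 raw error alone
```

The adjusted reading only comes out *more* accurate if the bias removed is large relative to the uncertainty in estimating it, which is exactly the question the comment is raising.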

Latitude

“…he does not think that NCDC researchers are intentionally distorting the record”…
In other words…no one noticed?
You have to notice…..by the time someone gets a paper published using today’s data…..the data’s changed….when they go back to check it….their paper is wrong….by the time they re-write….go back and check….it’s changed again
wash…rinse….repeat……their paper would never be right

How can the time of observation change the temperature for a day?

kramer

Seems to me that a correct way to see if the Earth is warming or cooling would be to just record the raw data of rural stations.

RH

It’s worse than just confirmation bias. The reviewers who disagree with the AGW POV know better than to open their mouths. At best, they will say they have no opinion.

richardscourtney

Latitude:
Your post at July 4, 2014 at 1:02 pm says in total

“…he does not think that NCDC researchers are intentionally distorting the record”…
In other words…no one noticed?
You have to notice…..by the time someone gets a paper published using today’s data…..the data’s changed….when they go back to check it….their paper is wrong….by the time they re-write….go back and check….it’s changed again
wash…rinse….repeat……their paper would never be right

And that is precisely how our paper discussing the ‘adjustments’ was prevented from publication; see here.
Richard

DirkH

“Via email, Berkeley Earth researcher Zeke Hausfather notes that Berkeley Earth’s breakpoint method finds “U.S. temperature records nearly identical to the NCDC ones (and quite different from the raw data), despite using different methodologies and many more station records with no infilling or dropouts in recent years.” ”
Ah, how the supreme genius of warmist scientists reveals the true nature of things, undeterred by the attempts of raw data to misguide them! Let it be told; the Earth is warming while your thermometer lies to you, human!

“Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record”
I agree with Anthony. I think the change-point algorithms were accepted due to ignorance of natural climate cycles like the Pacific Decadal Oscillation, which was only named in 1997, so the adjusted data fed their confirmation bias, prompting researchers’ failure to critically analyze the gross distortions that lowered most of the high temperatures in the 30s and 40s, as discussed here http://landscapesandcycles.net/why-unwarranted-temperature-adjustments-.html

highflight5643

Consider that each station covers 20 square miles; with the Earth being 196,939,000 square miles, some 3,000 stations cover about 0.03% of the surface…now that’s some coverage. NOT! Satellite data can be “adjusted” as well.
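The coverage arithmetic is easy to check. A quick sketch using the figures assumed in the comment (the ~3,000-station count comes from GeologyJim’s earlier comment; the 20 square miles per station is the commenter’s own hypothetical):

```python
# Fraction of Earth's surface "covered" by the station network, using the
# hypothetical figures from the comment above.
stations = 3000
area_per_station_sq_mi = 20
earth_surface_sq_mi = 196_939_000

coverage = stations * area_per_station_sq_mi / earth_surface_sq_mi
print(f"{coverage:.2%}")  # 0.03%
```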

ckb42

“Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record”
I agree with regards to the NCDC. I am very reluctant to attribute something to malevolence when incompetence would explain it just as well.

J Martin

It is either fraud or incompetence.
Either way they should be dismissed and have to look for new employment elsewhere.

Latitude

richardscourtney says:
July 4, 2014 at 1:11 pm
====
Richard…..exactly
Say we’ve been going out every year ..for 30 years…counting manatees….
…and every year, we count more manatees
but every year someone was jiggling with the numbers making the first few years numbers bigger…making the trend look like there’s less manatees each year
someone would definitely notice!…………….we did, that’s a true story

Latitude

Did Federal Climate Scientists Fudge Temperature Data to Make It Warmer?
…absolutely

rgbatduke

It is really extraordinary how given the UHI effect the adjustments in temperature always seem to raise the temperature of the present compared to the past, on average, when one expects precisely the opposite to be the dominant trend.
A second thing to note is that statistical angels fear to tread the long, dark path to data adjustment and infilling, because it always presumes knowledge that, in fact, you almost certainly do not have. Furthermore, all of the adjustments you make come with a substantial cost in the probable error of the “improved” estimate. Of course this never matters in climate science because the uncertainty in the “anomaly” computed is almost never presented, in part because it would then have to be added to the uncertainty in the actual global average temperature that the anomaly is supposedly referenced to. The noise, in fact, exceeds the signal by more than a factor of two everywhere but the satellite era.
There are other “interesting” things — just about exactly half of the state high temperature records were set in a single decade, and it wasn’t the last ten years. Guess which decade it was?
rgb

A C Osborn

Anthony, please apologise to NCDC immediately.
Your mate STEVE McINTYRE confirms NCDC’s TOB algorithm is working correctly and you are all wrong.
He has advised Paul Homewood to retract his work.
See
http://notalotofpeopleknowthat.wordpress.com/2014/07/01/temperature-adjustments-in-alabama-2/#comment-26224
REPLY: But I’m not disputing the TOBs adjustment, but rather a lot of the infilling of data from surrounding stations that have been compromised. And to be precise, Steve hasn’t actually checked the code that is running on NCDC’s computers. He’s done his own calculation, probably in R, but that isn’t the same as the code that runs at NCDC. That’s what I’d like to see evaluated by an external review. – Anthony

Russ R.

Dale Hartz,
“How can the time of observation change the temperature for a day?”
If recordings are made at the time of day when temperatures are highest (mid-afternoon), daily highs can be “double-counted”, biasing averages higher.
Consider the following example. A thermometer shows 3 things… the current temp, plus the maximum and the minimum since the thermometer was last reset.
An observer records all three at 3pm on Monday. Max = 80F, Min = 65F, Current = 75F. He resets the max and min, and returns the next day at 3pm. Tuesday was cooler, a high of only 70F, but the max since the last reading was 75F from when the thermometer was last reset on Monday. The average of the two days Tmax readings is 77.5F, when it should be 75F.
If temperatures were measured at 9am instead, around the middle of the day’s temperature range, there would be much less likelihood of double-counting, and the high-bias would be reduced.
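Russ R.’s worked example can be put in code. This is a minimal sketch of the double-counting mechanism only, using the hypothetical readings above (80F Monday high, 75F just after the 3pm reset, 70F true Tuesday high), not any agency’s actual TOB algorithm:

```python
# Afternoon time-of-observation bias: a max-registering thermometer reset
# at 3pm still "remembers" Monday's late-afternoon warmth on Tuesday.
monday_true_high = 80.0
carryover_after_reset = 75.0   # temp just after Monday's 3pm reset
tuesday_true_high = 70.0

# Tuesday's recorded max is whichever is higher: the true Tuesday high or
# the carryover left on the instrument after Monday's reset.
tuesday_recorded_high = max(tuesday_true_high, carryover_after_reset)

true_avg_tmax = (monday_true_high + tuesday_true_high) / 2          # 75.0
recorded_avg_tmax = (monday_true_high + tuesday_recorded_high) / 2  # 77.5
print(true_avg_tmax, recorded_avg_tmax)  # 75.0 77.5
```

The 2.5F discrepancy appears only because Tuesday was cooler than the carryover; on a day warmer than the carryover, the recorded and true maxima coincide.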

mjc

” Russ R. says:
July 4, 2014 at 1:55 pm
An observer records all three at 3pm on Monday. Max = 80F, Min = 65F, Current = 75F. He resets the max and min, and returns the next day at 3pm. Tuesday was cooler, a high of only 70F, but the max since the last reading was 75F from when the thermometer was last reset on Monday. The average of the two days Tmax readings is 77.5F, when it should be 75F.
If temperatures were measured at 9am instead, around the middle of the day’s temperature range, there would be much less likelihood of double-counting, and the high-bias would be reduced.”
There’s only one BIG problem with that idea…that ASSumes that EVERY observer is not properly resetting the equipment after every observation. And that also assumes every thermometer is the same type.
In other words, it is just one big mass of assumptions all leaning towards the incompetence of the person making the observations.

highflight56433

Also consider most of these stations are at an airport that accumulates heat as airline traffic peaks coincidentally with the warm part of the day. No influence of course… 🙂

David in Cal

IMHO a better way to calculate the average change in temperature from year to year would be to average temperature changes, rather than temperatures. That is, the average temperature change from year (n-1) to year n would be measured as the average, over all usable weather stations, of (average temperature for year n minus average temperature for year n-1). A weather station would be usable for a given year only when the station’s readings for that year and the prior year were both available and comparable. This method eliminates the need for infilling. When a station is re-sited, just one temperature change would be left out of the average; all other years would be used without the need for adjustment. Stations going through a period of UHI effect would be left out, rather than some guess being used as to the magnitude of the UHI effect.
I’m an actuary, not a climatologist. However, this seems to me to be a more sound method of measuring the average change in temperature by year.
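David in Cal’s scheme is straightforward to sketch. This is a hypothetical implementation of the idea as described, with invented station records, not any agency’s code: each year’s change is averaged only over stations reporting in both that year and the previous one, so dropouts and new stations never require infilling.

```python
# Average year-over-year temperature change, using only stations that
# report in both years; dropouts and new stations need no infilling.
def avg_yearly_change(stations, year):
    """Mean of (T[year] - T[year-1]) over stations with both years."""
    diffs = [rec[year] - rec[year - 1]
             for rec in stations
             if year in rec and (year - 1) in rec]
    return sum(diffs) / len(diffs) if diffs else None

stations = [
    {2000: 10.0, 2001: 10.3, 2002: 10.1},  # full record
    {2000: 12.0, 2001: 12.1},              # drops out after 2001
    {2001: 9.0, 2002: 9.4},                # comes online in 2001
]

print(avg_yearly_change(stations, 2001))  # (0.3 + 0.1) / 2, about 0.2
print(avg_yearly_change(stations, 2002))  # (-0.2 + 0.4) / 2, about 0.1
```

A yearly series built this way can then be cumulated into an index; the trade-off is that station-composition changes shift which stations contribute to each year’s figure.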

darwin

“Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record”
Seriously? Is there any doubt that these people have been “informed” that it’s in their best interest to support global warming?
They may not want to distort the record, but I do not doubt for one minute that the records are being intentionally distorted.
After all the lies, distortions, climategate emails, refusals to debate and attacks on skeptics what person in their right mind would give them the benefit of the doubt?

Gary Pearse

I suppose this is a naive question. When they use infilling of nearby stations to give a data point to a defunct station, do they evaluate comparisons of the surrounding stations with this station during a period when all stations were operating satisfactorily? This is the way I would have done it and assumed that it was the way it has been done. Anyone?

Graham

Russ R says
An observer records all three at 3pm on Monday. Max = 80F, Min = 65F, Current = 75F. He resets the max and min, and returns the next day at 3pm. Tuesday was cooler, a high of only 70F, but the max since the last reading was 75F from when the thermometer was last reset on Monday. The average of the two days Tmax readings is 77.5F, when it should be 75F
So you have an example showing a warm bias when Tuesday is cooler. Repeat it with Tuesday warmer and you will get a cool bias, so over time the average shows no TOB. All we have is a lag, which shouldn’t impact the average. Adjust those numbers and you will introduce a bias reflected in the algorithm you use to adjust with.

pokerguy

“Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that researchers are intentionally distorting the record”
Smart Anthony. While I know you’d be less than shocked if it turns out they are, this is the way to play it in th

A data set in which 25% of the observations are produced by imputation (filling in missing values mathematically) is rather marginal. Personally, I wouldn’t expect to be able to make any useful claims based on a data set of that kind.

Willis Eschenbach

Ivan says:
July 4, 2014 at 12:47 pm

“Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record”

What exactly is the evidence for this claim? If the adjustments go on permanently and they always cool the past and warm the present, it seems that the null hypothesis should be that they are doing this on purpose.

Never ascribe to malice what can be explained by confirmation bias …
w.

Coke and Pepsi should be regulated like the coal industry for all the CO2 their products release into the atmosphere. People are ingesting an official government pollutant CO2 contained in their products. There should be a warning label on their packaging listing the dangers.
Coke and Pepsi should be forced to reduce the amount of carbonation in their products.

Mindert Eiting

GeologyJim: Agree. However, those 6000 stations did not disappear randomly. The best predictor for station drop-out is the correlation of its time series with the time series of its latitude region: the lower the correlation, the higher the drop-out risk. Repeat my computations if you want. This shows that stations were dropped on purpose because of their history. The procedure of dropping dissident stations creates an artificial signal out of noise.

Even if the adjustments are justified, applying arbitrary adjustments of similar magnitude to the global warming trend you claim is occurring is awfully dodgy.
It’s like measuring a kid’s height to see how fast they are growing, but putting a few books under their feet if you measure in the afternoon, to compensate for the fact that people get a bit shorter during the day as their spine compresses.
If the rate they are growing is similar to the height of the stack of books under the kid’s feet, then you have to ask: is the growth you measure down to the stack of books rather than the real rate they are growing?

Katherine

It’s one thing to infill if only a few days in a month are missing; it’s something else altogether if entire years are missing. Defunct stations should just be dropped.
And what’s with warming the data for rural stations to match the warming in urban stations to supposedly offset the UHI effect? That’s counterintuitive—unless intuition demands a warming trend.

Genghis

Having played with the Max/min thermometers I can verify that there is a warming effect if the measurements are taken at the hottest part of the day, because the max couldn’t be set lower than the current hot temperature. It is just a piece of metal in the vacuum tube that is adjusted with a magnet. It double counts the high temp.
The same holds true if the measurements are taken at the coolest parts of the day, except that it biases cold.
The proper time to read one of those thermometers is probably at midnight, and that isn’t going to happen.
So to summarize, afternoon readings in the summer will bias high and morning readings in the winter will bias low. It also turns out that high temperatures tend to be more variable than low temps so overall there is a warming bias.
But here is the problem, in order to properly eliminate the bias the actual high or low needs to be known. If the day to day temps are the same or rising, there is no bias (hot bias anyway) regardless of the time of observation.
What the computer algorithm does is look for nearby stations and check if it is a rising or falling trend and if it is a falling trend it adjusts the raw temp down.
The problem that they have discovered now though is that 40% of the stations are zombies, just infilled data points, which in itself isn’t a show stopper, but what is happening is that the accuracy is deteriorating because of station loss. Computer truncation and subtle programmer constants are starting to dominate the output. I am also seeing a regression to the mean and the modelers are fighting that hard.

Leigh

_Jim says: July 4, 2014 at 12:45 pm
An audit would seem to be in order …
And I say good luck with that, Jim.
Here’s just a little heads-up on the sort of wall you will hit when you demand an audit of your NCDC.
Be under no illusions, Jim: that wall will be ably manned by their peers (pals).
Do not for one instant think that this fraud of data adjustment is isolated to America.
Everybody from “fellow” scientists to laymen like myself can see exactly what they are doing.
If they had nothing to hide, the information and explanation would be freely available.
What they are doing is not bordering on criminal.
It is criminal.
When governments use the altered data supplied by their respective keepers of temperature records to form and spend billions in their budgets, it is simply fraud.
I believe that, sooner rather than later, one of these fraudsters will go down in a civil court.
Which will open the flood gates to criminal prosecutions.
The two links below will give you an idea of the lengths they will go to, Jim, simply to maintain the fraud.
http://joannenova.com.au/2014/06/dont-miss-jennifer-marohasy-speaking-in-sydney-wednesday/
http://joannenova.com.au/2014/06/australian-bom-neutral-adjustments-increase-minima-trends-up-60/

Bill Illis

The TOBs adjustment continues to grow every day. How is that possible? The first paper published on TOBs was in 1854, and it was fully fixed by the NCDC/Weather Bureau in 1890, in 1909, in 1954, in 1970, in 1977, in 1983, in 1986, and in 2003, yet it continues to change every day.

Bill Illis

Or let’s put it this way: what happened in the last few months that changed the temperature recordings in 1903?

“Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record”
People have a huge ability to delude themselves that what they are doing is ethical, correct, scientific and accurate when they truly believe that what they are doing is in the interests of a Great Moral Cause. It still requires conscious effort, however, and is therefore intentional. It may be confirmation bias, it may be incompetence, it may be well intended, but it is still intentional. It is indistinguishable from malevolence in its effects.

mjc

” Genghis says:
July 4, 2014 at 3:28 pm
Having played with the Max/min thermometers I can verify that there is a warming effect if the measurements are taken at the hottest part of the day, because the max couldn’t be set lower than the current hot temperature. It is just a piece of metal in the vacuum tube that is adjusted with a magnet. It double counts the high temp.
The same holds true if the measurements are taken at the coolest parts of the day, except that it biases cold.
The proper time to read one of those thermometers is probably at midnight, and that isn’t going to happen.
So to summarize, afternoon readings in the summer will bias high and morning readings in the winter will bias low. It also turns out that high temperatures tend to be more variable than low temps so overall there is a warming bias.
But here is the problem, in order to properly eliminate the bias the actual high or low needs to be known. If the day to day temps are the same or rising, there is no bias (hot bias anyway) regardless of the time of observation.”
There is only one maximum per 24hr period…now if that period doesn’t coincide with a calendar ‘day’ then so be it. If the high temp on Tuesday was at 3:01 pm on Monday and the temps are being checked every day at 3:00 pm, then that is the reporting period and the ACTUAL high temperature for that period…it’s not complicated, really.

jaffa68

“Watts wants the center’s algorithms, computer coding, temperature records, and so forth to be checked by researchers outside the climate science establishment.”
They’ve got decades invested in this; why should they hand over all their data when you just want to find something wrong with it?

Leigh

Riddle me this one, Bill.
Here in Australia our Bureau of Meteorology flatly refuses to use temperature data from before 1910.
Why?
Because they deem it to be unreliable.
Yet the UN and their rubber stamp for the fraud, the IPCC, use our temperature records and believe they are fine dating back to the 1860s.

Thing is, I just don’t see how the infilling cannot exacerbate the good site/bad site problem, with the bias going to the side that has the larger numbers, which at the moment appears to be the bad sites (and these too will vary from lightly bad to bloody terrible).

To break it into simple numbers: take a sample area which should have 20 stations. 10 of them are zombies. 3 of the remainder are good and seven are varying degrees of bad. Leaving out the zombies, say the good are averaging a 0.1 degree temp increase per decade, and the bad a 0.3 (these are hypothetical figures). Averaged out, that gives a 0.24 degree increase. Now add the zombies, based on infilling from their neighbors. The probability of each of those neighbors being a bad station is 0.7, or a 70% chance. And the more stations they use for infilling, the less likely that data will match the good sites. Let’s say they use the nearest 3 stations. For a zombie to provide ‘good station’ data, it would have to be surrounded by three good stations, which is highly unlikely (I think something like the magic 97% chance 🙂 that it won’t be). So the zombies are not even going to give the 3:7 good:bad ratio, but most likely all be some degree of bad.

So instead of 7/10 bad (70%) you now have 17 (or if you really got lucky, 16) out of 20 bad (85% to 80%). Of course some of that bad is ameliorated a little by good data, and I could work it out if I had the patience, but the average for the area simply has to be a higher increase. Or am I getting this all wrong, and the algorerythms toss out all the ‘bad’ stations?
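One number in the thought experiment above can be checked exactly. A small sketch using the same hypothetical setup (10 real stations, 3 of them good; each zombie infilled from 3 neighbours, modelled here as a random 3-of-10 draw from the real stations):

```python
# Probability that all 3 neighbours used to infill a zombie station are
# good, when 3 of the 10 real stations are good (hypothetical figures
# from the comment above).
from math import comb

n_good, n_real, k = 3, 10, 3
p_all_good = comb(n_good, k) / comb(n_real, k)

print(p_all_good)  # 1/120, i.e. well over 99% of zombies inherit at
                   # least one bad neighbour under this model
```

Note that random-draw infilling would still keep the *expected* zombie trend at the mix of its neighbours; the question the comment raises is whether real neighbour selection is anywhere near that unbiased.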

Latitude

Bill Illis says:
July 4, 2014 at 3:50 pm
Or let’s put it this way, what happened in the last few months that changed the temperature recordings in 1903.
=========
….a fatwa

Genghis

mjc says:
July 4, 2014 at 4:05 pm
“There is only one maximum per 24hr period…now if that period doesn’t coincide with a calendar ‘day’ then so be it. If the high temp on Tuesday was at 3:01 pm on Monday and the temps are being checked every day at 3:00 pm, then that is the reporting period and the ACTUAL high temperature for that period…it’s not complicated, really.”
Let’s do a quick test to see if your math skills are up to par. Day one has a high of 30 and a low of zero and is checked at 3 pm, so the high is 30 and the average temp is 15. Sometime after 3 pm the temperature plummets to zero and stays at zero. The next day the thermometer will still indicate that the high was 30 and the low was zero. Clearly that is an error.
It incorrectly indicates the same maximum for two 24-hour periods.

sunshinehours1,
Actually NCDC makes all their papers available for free. You can find all the USHCN ones here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/

RH

jaffa says:
July 4, 2014 at 4:05 pm
““Watts wants the center’s algorithms, computer coding, temperature records, and so forth to be checked by researchers outside the climate science establishment.”
They’ve got decades of invested in this, why should they hand over all their data when you just want to find something wrong with it?”
Because “they” work for the U.S. public. It isn’t their data, it is ours.