Did Federal Climate Scientists Fudge Temperature Data to Make It Warmer?
Ronald Bailey of Reason Magazine writes:
The NCDC also notes that all the changes to the record have gone through peer review and have been published in reputable journals. The skeptics, in turn, claim that a pro-warming confirmation bias is widespread among orthodox climate scientists, tainting the peer review process. Via email, Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record.
But he believes that the researchers have likely succumbed to this confirmation bias in their temperature analyses. In other words, he thinks the NCDC's scientists do not question the results of their adjustment procedures because they report the trend the researchers expect to find. Watts wants the center's algorithms, computer coding, temperature records, and so forth to be checked by researchers outside the climate science establishment.
Clearly, replication by independent researchers would add confidence to the NCDC results. In the meantime, if the Heller episode proves nothing else, it is that we can continue to expect confirmation bias to pervade nearly every aspect of the climate change debate.
Read it all here: http://reason.com/archives/2014/07/03/did-federal-climate-scientists-fudge-tem
Also consider that most of these stations are at airports, which accumulate heat as airline traffic peaks coincident with the warm part of the day. No influence, of course… 🙂
IMHO a better way to calculate the average change in temperature from year to year would be to average temperature changes, rather than temperatures. That is, the average temperature change from year (n−1) to year n would be measured as the average, over all usable weather stations, of (average temperature for year n minus average temperature for year n−1). A weather station would be usable for a given year only when the station's readings for that year and the prior year were available and comparable. This method eliminates the need for infilling. When a station is re-sited, just one temperature change would be left out of the average; all other years would be used without the need for adjustment. Stations going through a period of UHI effect would be left out, rather than some guess being used as to the magnitude of the UHI effect.
I’m an actuary, not a climatologist. However, this seems to me to be a more sound method of measuring the average change in temperature by year.
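The proposal is straightforward to sketch in code. Here is a minimal Python illustration of averaging first differences over usable stations only; station names and figures are invented, and a real implementation would need the comparability checks the comment mentions:

```python
# Minimal sketch of "average the changes, not the temperatures".
# Annual mean temperatures per station; a missing year is simply absent.
# Station names and numbers are hypothetical.

def mean_annual_change(stations, year):
    """Average year-over-year change across stations usable for `year`.

    A station is usable only if it has readings for both `year` and
    `year - 1`; everything else is left out, so no infilling is needed.
    """
    changes = [
        temps[year] - temps[year - 1]
        for temps in stations.values()
        if year in temps and (year - 1) in temps
    ]
    return sum(changes) / len(changes) if changes else None

stations = {
    "A": {2000: 12.1, 2001: 12.3, 2002: 12.2},
    "B": {2000: 15.0, 2002: 15.4},  # re-sited: 2001 reading unusable
}
print(mean_annual_change(stations, 2001))  # ≈0.2, from station A alone
print(mean_annual_change(stations, 2002))  # B still lacks a comparable 2001
```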
“Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record”
Seriously? Is there any doubt that these people have been “informed” that it’s in their best interest to support global warming?
They may not want to distort the record, but I do not doubt for one minute that the records are being intentionally distorted.
After all the lies, distortions, Climategate emails, refusals to debate, and attacks on skeptics, what person in their right mind would give them the benefit of the doubt?
I suppose this is a naive question. When they use infilling of nearby stations to give a data point to a defunct station, do they evaluate comparisons of the surrounding stations with this station during a period when all stations were operating satisfactorily? This is the way I would have done it and assumed that it was the way it has been done. Anyone?
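Not knowing NCDC's exact procedure, here is a hedged Python sketch of the overlap approach the comment describes: calibrate each neighbour against the target over a period when all stations were operating, then infill from the calibrated neighbours. All station data below are invented:

```python
# Sketch of overlap-based infilling: calibrate each neighbour against the
# target station over a period when all were operating, then use the
# calibrated neighbours to estimate the target's missing years.
# All station data below are invented.

def estimate_offsets(target, neighbours, overlap_years):
    """Mean (target - neighbour) difference over the overlap period."""
    offsets = {}
    for name, series in neighbours.items():
        diffs = [target[y] - series[y] for y in overlap_years
                 if y in target and y in series]
        offsets[name] = sum(diffs) / len(diffs)
    return offsets

def infill(neighbours, offsets, year):
    """Average of offset-corrected neighbour readings for a missing year."""
    estimates = [series[year] + offsets[name]
                 for name, series in neighbours.items() if year in series]
    return sum(estimates) / len(estimates)

target = {1990: 10.0, 1991: 10.2, 1992: 10.1}  # goes defunct after 1992
neighbours = {
    "N1": {1990: 11.0, 1991: 11.1, 1992: 11.0, 1993: 11.3},
    "N2": {1990: 9.1, 1991: 9.4, 1992: 9.2, 1993: 9.5},
}

offsets = estimate_offsets(target, neighbours, range(1990, 1993))
print(infill(neighbours, offsets, 1993))  # estimate for the defunct station
```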
Russ R says
An observer records all three at 3 pm on Monday: Max = 80F, Min = 65F, Current = 75F. He resets the max and min, and returns the next day at 3 pm. Tuesday was cooler, with a high of only 70F, but the max since the last reading was 75F, from when the thermometer was reset on Monday. The average of the two days' Tmax readings is 77.5F, when it should be 75F.
So you have an example showing a warm bias when Tuesday is cooler. Repeat it with a warmer Tuesday and you will get a cool bias, so over time the average shows no TOB. All we have is a lag, which shouldn't impact the average. Adjust those numbers and you will introduce a bias reflecting whatever algorithm you use to adjust with.
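Whether the warm and cool cases really cancel is easy to test. Below is a rough Python simulation of a once-a-day read-and-reset max thermometer; the daily cycle and variability figures are invented, and the point is only to let readers compare observation hours for themselves:

```python
import math
import random

# Rough simulation of a max-register thermometer read and reset once a
# day, comparing recorded mean Tmax against true mean Tmax for different
# observation hours. All parameters are invented for illustration.

def tmax_bias(reset_hour, days=20000, seed=0):
    rng = random.Random(seed)
    register = -999.0            # max since last reset
    recorded, true = [], []
    base = 0.0
    for _ in range(days):
        base += rng.gauss(0, 3.0)                             # day-to-day drift
        temps = [base + 10 * math.sin(math.pi * (h - 9) / 12)  # peak near 3 pm
                 for h in range(24)]
        true.append(max(temps))
        for h, t in enumerate(temps):
            register = max(register, t)
            if h == reset_hour:
                recorded.append(register)   # observer logs the max...
                register = t                # ...and resets to the current temp
    return sum(recorded) / len(recorded) - sum(true) / len(true)

for hour in (7, 15, 23):
    print(f"observation at {hour:02d}:00 -> mean Tmax bias {tmax_bias(hour):+.2f}")
```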
“Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record”
Smart, Anthony. While I know you’d be less than shocked if it turns out they are, this is the way to play it.
A data set in which 25% of the observations are produced by imputation (filling in missing values mathematically) is rather marginal. Personally, I wouldn’t expect to be able to make any useful claims based on a data set of that kind.
Ivan says:
July 4, 2014 at 12:47 pm
Never ascribe to malice what can be explained by confirmation bias …
w.
Coke and Pepsi should be regulated like the coal industry for all the CO2 their products release into the atmosphere. People are ingesting an official government pollutant, CO2, in their products. There should be a warning label on their packaging listing the dangers.
Coke and Pepsi should be forced to reduce the amount of carbonation in their products.
GeologyJim: Agree. However, those 6000 stations did not disappear randomly. The best predictor of station drop-out is the correlation of its time series with the time series of its latitude region: the lower the correlation, the higher the drop-out risk. Repeat my computations if you want. This shows that stations were dropped on purpose because of their history. The procedure of dropping dissident stations creates an artificial signal out of noise.
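The computations themselves are not shown, so here is only a synthetic Python sketch of what such a test looks like; the selection effect is deliberately built into the fake data, so this illustrates the claimed signature rather than verifying it:

```python
import random
import statistics as stats

# Synthetic version of the commenter's test: does a station's correlation
# with its regional mean predict drop-out? The selection effect is built
# in here on purpose; the claim is that real station histories show it.

def corr(x, y):
    mx, my = stats.fmean(x), stats.fmean(y)
    cov = stats.fmean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return cov / (stats.pstdev(x) * stats.pstdev(y))

rng = random.Random(1)
regional = [rng.gauss(0, 1) for _ in range(50)]   # regional yearly anomalies

stations = []
for _ in range(200):
    noise = rng.choice([0.3, 1.0, 3.0])           # station siting quality
    series = [r + rng.gauss(0, noise) for r in regional]
    c = corr(series, regional)
    dropped = rng.random() < (1 - c)              # hypothesised selection
    stations.append((c, dropped))

kept = [c for c, d in stations if not d]
gone = [c for c, d in stations if d]
print(f"mean correlation, kept stations:    {stats.fmean(kept):.2f}")
print(f"mean correlation, dropped stations: {stats.fmean(gone):.2f}")
```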
Even if the adjustments are justified, applying arbitrary adjustments of similar magnitude to the global warming trend you claim is occurring is awfully dodgy.
It’s like measuring a kid’s height to see how fast they are growing, but putting a few books under their feet if you measure their height in the afternoon, to compensate for the fact that people get a bit shorter during the day as their spines compress.
If the rate they are growing is similar to the height of the stack of books you put under the kid’s feet, then you have to ask: is the growth you measure down to the stack of books rather than the real rate they are growing?
It’s one thing to infill if only a few days in a month are missing; it’s something else altogether if entire years are missing. Defunct stations should just be dropped.
And what’s with warming the data for rural stations to match the warming in urban stations to supposedly offset the UHI effect? That’s counterintuitive—unless intuition demands a warming trend.
Having played with the Max/min thermometers I can verify that there is a warming effect if the measurements are taken at the hottest part of the day, because the max couldn’t be set lower than the current hot temperature. It is just a piece of metal in the vacuum tube that is adjusted with a magnet. It double counts the high temp.
The same holds true if the measurements are taken at the coolest parts of the day, except that it biases cold.
The proper time to read one of those thermometers is probably at midnight, and that isn’t going to happen.
So to summarize, afternoon readings in the summer will bias high and morning readings in the winter will bias low. It also turns out that high temperatures tend to be more variable than low temps so overall there is a warming bias.
But here is the problem: in order to properly eliminate the bias, the actual high or low needs to be known. If the day-to-day temps are the same or rising, there is no bias (hot bias, anyway) regardless of the time of observation.
What the computer algorithm does is look at nearby stations and check whether the trend is rising or falling; if it is falling, it adjusts the raw temp down.
The problem they have discovered now, though, is that 40% of the stations are zombies, just infilled data points, which in itself isn’t a show stopper; but the accuracy is deteriorating because of station loss. Computer truncation and subtle programmer constants are starting to dominate the output. I am also seeing a regression to the mean, and the modelers are fighting that hard.
_Jim says: July 4, 2014 at 12:45 pm
An audit would seem to be in order …
And I say good luck with that, Jim.
Here’s just a little heads-up on what sort of wall you will hit when you demand an audit of your NCDC.
Be under no illusions, Jim: that wall will be ably manned by their peers (pals).
Do not for one instant think that this fraud of data adjustment is isolated to America.
Everybody from “fellow” scientists to laymen like myself can see exactly what they are doing.
If they had nothing to hide, the information and explanation would be freely available.
What they are doing is not bordering on criminal.
It is criminal.
When governments use that altered data, supplied by their respective keepers of temperature records, to form budgets and spend billions, it is simply fraud.
I believe that sooner rather than later, one of these fraudsters will go down in a civil court.
Which will open the floodgates to criminal prosecutions.
The two links below will give you an idea of the lengths they will go to, Jim, simply to maintain the fraud.
http://joannenova.com.au/2014/06/dont-miss-jennifer-marohasy-speaking-in-sydney-wednesday/
http://joannenova.com.au/2014/06/australian-bom-neutral-adjustments-increase-minima-trends-up-60/
The TOBs adjustment continues to grow every day. How is that possible? The first paper published on TOBs was in 1854, and the problem was fully fixed by the NCDC/Weather Bureau in 1890, in 1909, in 1954, in 1970, in 1977, in 1983, in 1986, and in 2003, yet it continues to change every day.
Or let’s put it this way: what happened in the last few months that changed the temperature recordings in 1903?
“Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record”
People have a huge ability to delude themselves that what they are doing is ethical, correct, scientific and accurate when they truly believe that what they are doing is in the interests of a Great Moral Cause. It still requires conscious effort, however, and is therefore intentional. It may be confirmation bias, it may be incompetence, it may be well intended, but it is still intentional. It is indistinguishable from malevolence in its effects.
“Genghis says:
July 4, 2014 at 3:28 pm
Having played with the Max/min thermometers I can verify that there is a warming effect if the measurements are taken at the hottest part of the day, because the max couldn’t be set lower than the current hot temperature. It is just a piece of metal in the vacuum tube that is adjusted with a magnet. It double counts the high temp.
The same holds true if the measurements are taken at the coolest parts of the day, except that it biases cold.
The proper time to read one of those thermometers is probably at midnight, and that isn’t going to happen.
So to summarize, afternoon readings in the summer will bias high and morning readings in the winter will bias low. It also turns out that high temperatures tend to be more variable than low temps so overall there is a warming bias.
But here is the problem: in order to properly eliminate the bias, the actual high or low needs to be known. If the day-to-day temps are the same or rising, there is no bias (hot bias, anyway) regardless of the time of observation.”
There is only one maximum per 24hr period…now if that period doesn’t coincide with a calendar ‘day’ then so be it. If the high temp on Tuesday was at 3:01 pm on Monday and the temps are being checked every day at 3:00 pm, then that is the reporting period and the ACTUAL high temperature for that period…it’s not complicated, really.
“Watts wants the center’s algorithms, computer coding, temperature records, and so forth to be checked by researchers outside the climate science establishment.”
They’ve got decades invested in this; why should they hand over all their data when you just want to find something wrong with it?
Riddle me this one, Bill.
Here in Australia our Bureau of Meteorology flatly refuses to use temperature data pre 1910.
Why?
Because they deem it to be unreliable.
Yet the UN, and their rubber stamp for the fraud, the IPCC, use our temperature records and believe they are fine dating back to the 1860s.
Thing is, I just don’t see how the infilling cannot exacerbate the good site/bad site problem, with the bias going to the side that has larger numbers – which at the moment appears to be bad sites (and these too will vary from lightly bad to bloody terrible).

To break it into simple numbers: take a sample area which should have 20 stations. 10 of them are zombies. Of the remainder, 3 are good and 7 are varying degrees of bad. Leaving out the zombies, the good stations average a 0.1 degree temperature increase per decade, and the bad a 0.3 (these are hypothetical figures). Averaged out, that gives a 0.24 degree increase.

Now add the zombies, based on infilling from their neighbors. The probability of each of those neighbors being a bad station is 0.7, or a 70% chance. And the more stations they use for infilling, the less likely that data will match the good sites. Let’s say they use the nearest 3 stations. For a zombie to provide ‘good station’ data, it would have to be surrounded by the three good stations, which is highly unlikely (I think something like the magic 97% chance 🙂 that it won’t be). So the zombies are not even going to give the 3:7 good-to-bad ratio, but will most likely all be some degree of bad.

So instead of 7/10 bad (70%) you now have 17 (or, if you really got lucky, 16) out of 20 bad (85% to 80%). Of course some of that bad is ameliorated a little by good data, and I could work it out if I had the patience – but the average for the area simply has to show a higher increase. Or am I getting this all wrong, and the algorerythms toss out all the ‘bad’ stations?
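A few lines of Python make the hypothetical arithmetic above concrete; all figures are the comment’s own (note the real-station average works out to 0.24):

```python
from math import comb
import statistics as stats

# The comment's hypothetical area: 3 good stations at +0.1/decade, 7 bad
# at +0.3/decade, and 10 zombies each infilled from 3 real neighbours.

real = [0.1] * 3 + [0.3] * 7
print(f"average over real stations: {stats.fmean(real):.2f} deg/decade")

# Chance a zombie is infilled from the three good stations only:
p_all_good = comb(3, 3) / comb(10, 3)
print(f"P(zombie sees only good neighbours) = {p_all_good:.2%}")  # ~0.8%

# On average a zombie's 3 neighbours include 3 * (3/10) = 0.9 good
# stations, so infilled values mostly inherit the bad-station trend.
print(f"expected good neighbours per zombie: {3 * 3 / 10:.1f} of 3")
```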
Bill Illis says:
July 4, 2014 at 3:50 pm
Or let’s put it this way: what happened in the last few months that changed the temperature recordings in 1903?
=========
….a fatwa
mjc says:
July 4, 2014 at 4:05 pm
“There is only one maximum per 24hr period…now if that period doesn’t coincide with a calendar ‘day’ then so be it. If the high temp on Tuesday was at 3:01 pm on Monday and the temps are being checked every day at 3:00 pm, then that is the reporting period and the ACTUAL high temperature for that period…it’s not complicated, really.”
Let’s do a quick test to see if your math skills are up to par. Day one has a high of 30 and a low of zero and is checked at 3 pm; the average temp is 15 and the high is 30. Sometime after 3 pm the temperature plummets to zero and stays there. The next day the thermometer will indicate that the high was 30 and the low was zero. Clearly that is an error.
It incorrectly indicates the same maximum for two 24-hour periods.
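That two-day example reproduces directly in code; the 30/0 temperatures are the comment’s hypothetical figures:

```python
# Genghis's example in miniature: a max register read and reset at 3 pm
# carries day one's spike into day two's reading.

day1 = [0] * 15 + [30] + [0] * 8   # spike to 30 right at the 3 pm reading
day2 = [0] * 24                    # day two never leaves zero

register, recorded = -99, []
for hour, temp in enumerate(day1 + day2):
    register = max(register, temp)
    if hour % 24 == 15:            # the daily 3 pm observation
        recorded.append(register)
        register = temp            # reset to the current reading
print(recorded)                    # [30, 30]: the spike is counted twice
```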
sunshinehours1,
Actually NCDC makes all their papers available for free. You can find all the USHCN ones here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/
jaffa says:
July 4, 2014 at 4:05 pm
““Watts wants the center’s algorithms, computer coding, temperature records, and so forth to be checked by researchers outside the climate science establishment.”
They’ve got decades invested in this; why should they hand over all their data when you just want to find something wrong with it?”
Because “they” work for the U.S. public. It isn’t their data, it is ours.