Practicing the Dark Art of Temperature Trend Adjustment

Did Federal Climate Scientists Fudge Temperature Data to Make It Warmer?

Ronald Bailey of Reason Magazine writes:

The NCDC also notes that all the changes to the record have gone through peer review and have been published in reputable journals. The skeptics, in turn, claim that a pro-warming confirmation bias is widespread among orthodox climate scientists, tainting the peer review process. Via email, Anthony Watts - proprietor of Watts Up With That, a website popular with climate change skeptics - tells me that he does not think that NCDC researchers are intentionally distorting the record.

But he believes that the researchers have likely succumbed to this confirmation bias in their temperature analyses. In other words, he thinks the NCDC’s scientists do not question the results of their adjustment procedures because they report the trend the researchers expect to find. Watts wants the center’s algorithms, computer coding, temperature records, and so forth to be checked by researchers outside the climate science establishment.

Clearly, replication by independent researchers would add confidence to the NCDC results. In the meantime, if the Heller episode proves nothing else, it is that we can continue to expect confirmation bias to pervade nearly every aspect of the climate change debate.

Read it all here: http://reason.com/archives/2014/07/03/did-federal-climate-scientists-fudge-tem

July 4, 2014 12:26 pm

The warm is turning…

July 4, 2014 12:36 pm

“The NCDC also notes that all the changes to the record have gone through peer review and have been published in reputable journals. ”
The one paper I tried to check was behind a pay wall.

GeologyJim
July 4, 2014 12:42 pm

There used to be 9000+ reporting stations in the global network. Then the Soviet Union collapsed, economies flattened, priorities changed, and now the network is on the order of 3000 stations.
Those stations retained are disproportionately sited where lots of people live (cities, airports, etc) and they are disproportionately affected by Urban Heat Island issues. Many high latitude and high altitude stations (generally colder) disappeared from the network.
No amount of averaging, gridding, massaging, extrapolating, or “in-filling missing data” can negate these network changes.
The network is trending warmer because the reporting stations are in warmer locations than in the past.
And the historical high temperatures are still from the 1930s-1940s
NASA-NOAA-HADCRUT-GHCN adjusted data are inherently biased.

Latitude
July 4, 2014 12:45 pm

why don’t they just come right out and say it…..
15 years ago when they said they knew what they were doing, it was accurate….
..they were lying through their teeth

JimS
July 4, 2014 12:45 pm

I wonder how many reporting stations are located in Antarctica?

July 4, 2014 12:45 pm

An audit would seem to be in order …

Ivan
July 4, 2014 12:47 pm

“Anthony Watts - proprietor of Watts Up With That, a website popular with climate change skeptics - tells me that he does not think that NCDC researchers are intentionally distorting the record”
What exactly is the evidence for this claim? If the adjustments are going on permanently and they always cool the past and warm the present, it seems that the null hypothesis should be that they are doing this on purpose. Especially when we have in mind the Climate-gate correspondence and their open deliberations about how to “eliminate the blip” of 1940. It seems that they are not only eliminating the warming blip of 1920-1940, but also the cooling blip of 1940-1970.

Alan Robertson
July 4, 2014 12:48 pm

“…if the Heller episode proves nothing else, it is that we can continue to expect confirmation bias to pervade nearly every aspect of the climate change debate…” as framed by government sponsored researchers and spokesmen.
————-
fixed

D. Cohen
July 4, 2014 1:00 pm

Why do I never, ever hear or read anything about possible errors in the temperature-adjustment process or the parameters used in the temperature-adjustment process? Always it sounds as if this adjustment is “infinitely accurate”. In other science and engineering fields where there is data that is significantly contaminated by both random error and biases, adjusting the data to eliminate the biases is usually a bad idea because the error that comes from not knowing to infinite accuracy the bias, when added to the overall error budget for the data, ends up making the adjusted data less rather than more accurate.

Latitude
July 4, 2014 1:02 pm

“he does not think that NCDC researchers are intentionally distorting the record”…
In other words…no one noticed?
You have to notice…..by the time someone gets a paper published using today’s data…..the data’s changed….when they go back to check it….their paper is wrong….by the time they re-write….go back and check….it’s changed again
wash…rinse….repeat……their paper would never be right

July 4, 2014 1:04 pm

How can the time of observation change the temperature for a day?

July 4, 2014 1:05 pm

Seems to me that a correct way to see if the Earth is warming or cooling would be to just record the raw data of rural stations.

RH
July 4, 2014 1:09 pm

It’s worse than just confirmation bias. The reviewers who disagree with the AGW POV know better than to open their mouths. At best, they will say they have no opinion.

richardscourtney
July 4, 2014 1:11 pm

Latitude:
Your post at July 4, 2014 at 1:02 pm says in total

“he does not think that NCDC researchers are intentionally distorting the record”…
In other words…no one noticed?
You have to notice…..by the time someone gets a paper published using today’s data…..the data’s changed….when they go back to check it….their paper is wrong….by the time they re-write….go back and check….it’s changed again
wash…rinse….repeat……their paper would never be right

And that is precisely how our paper discussing the ‘adjustments’ was prevented from publication; see here.
Richard

DirkH
July 4, 2014 1:11 pm

“Via email, Berkeley Earth researcher Zeke Hausfather notes that Berkeley Earth’s breakpoint method finds “U.S. temperature records nearly identical to the NCDC ones (and quite different from the raw data), despite using different methodologies and many more station records with no infilling or dropouts in recent years.” ”
Ah, how the supreme genius of warmist scientists reveals the true nature of things, undeterred by the attempts of raw data to misguide them! Let it be told; the Earth is warming while your thermometer lies to you, human!

July 4, 2014 1:13 pm

“Anthony Watts - proprietor of Watts Up With That, a website popular with climate change skeptics - tells me that he does not think that NCDC researchers are intentionally distorting the record”
I agree with Anthony. I think the change-point algorithms were accepted due to ignorance of natural climate cycles like the Pacific Decadal Oscillation, which was only named in 1997, so the adjusted data fed their confirmation bias, prompting researchers’ failure to critically analyze the gross distortions that lowered most of the high temperatures in the 30s and 40s as discussed here http://landscapesandcycles.net/why-unwarranted-temperature-adjustments-.html

highflight5643
July 4, 2014 1:24 pm

Consider that each station covers 20 square miles; with the earth being 196,939,000 square miles, those stations are covering 0.03% of the surface…now that’s some coverage. NOT! Satellite data can be “adjusted” as well.

ckb
Editor
July 4, 2014 1:27 pm

“Anthony Watts - proprietor of Watts Up With That, a website popular with climate change skeptics - tells me that he does not think that NCDC researchers are intentionally distorting the record”
I agree with regards to the NCDC. I am very reluctant to attribute something to malevolence when incompetence would explain it just as well.

J Martin
July 4, 2014 1:30 pm

It is either fraud or incompetence.
Either way they should be fired / dismissed and have to look for new employment elsewhere.

Latitude
July 4, 2014 1:31 pm

richardscourtney says:
July 4, 2014 at 1:11 pm
====
Richard…..exactly
Say we’ve been going out every year ..for 30 years…counting manatees….
…and every year, we count more manatees
but every year someone was jiggling with the numbers making the first few years numbers bigger…making the trend look like there’s less manatees each year
someone would definitely notice!…………….we did, that’s a true story

Latitude
July 4, 2014 1:32 pm

Did Federal Climate Scientists Fudge Temperature Data to Make It Warmer?
…absolutely

rgbatduke
July 4, 2014 1:33 pm

It is really extraordinary how, given the UHI effect, the adjustments in temperature always seem to raise the temperature of the present compared to the past, on average, when one expects precisely the opposite to be the dominant trend.
A second thing to note is that statistical angels fear to tread the long, dark path to data adjustment and infilling, because it always presumes knowledge that, in fact, you almost certainly do not have. Furthermore, all of the adjustments you make come with a substantial cost in the probable error of the “improved” estimate. Of course this never matters in climate science because the uncertainty in the “anomaly” computed is almost never presented, in part because it would then have to be added to the uncertainty in the actual global average temperature that the anomaly is supposedly referenced to. The noise, in fact, exceeds the signal by more than a factor of two everywhere but the satellite era.
There are other “interesting” things — just about exactly half of the state high temperature records were set in a single decade, and it wasn’t the last ten years. Guess which decade it was?
rgb

A C Osborn
July 4, 2014 1:42 pm

Anthony, please apologise to NCDC immediately.
Your mate STEVE McINTYRE confirms NCDC’s TOB algorithm is working correctly and you are all wrong.
He has advised Paul Homewood to retract his work.
See
http://notalotofpeopleknowthat.wordpress.com/2014/07/01/temperature-adjustments-in-alabama-2/#comment-26224
REPLY: But I’m not disputing the TOBs adjustment, but rather a lot of the infilling of data from surrounding stations that have been compromised. And to be precise, Steve hasn’t actually checked the code that is running in NCDC’s computers. He’s done his own calculation, probably in R, but that isn’t the same as the code that runs at NCDC. That’s what I’d like to see evaluated by an external review. – Anthony

Russ R.
July 4, 2014 1:55 pm

Dale Hartz,
“How can the time of observation change the temperature for a day?”
If recordings are made at the time of day when temperatures are highest (mid-afternoon), daily highs can be “double-counted”, biasing averages higher.
Consider the following example. A thermometer shows 3 things… the current temp, and the maximum and minimum since the thermometer was last reset.
An observer records all three at 3pm on Monday. Max = 80F, Min = 65F, Current = 75F. He resets the max and min, and returns the next day at 3pm. Tuesday was cooler, a high of only 70F, but the max since the last reading was 75F from when the thermometer was last reset on Monday. The average of the two days Tmax readings is 77.5F, when it should be 75F.
If temperatures were measured at 9am instead, around the middle of the day’s temperature range, there would be much less likelihood of double-counting, and the high-bias would be reduced.
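
For anyone who wants to see the arithmetic of that carry-over effect, here is a minimal simulation sketch (Python). It assumes the thermometer is read and reset mid-afternoon, when the current temperature sits only a few degrees below the day's high; the 5 F offset, the noise level, and the function name are illustrative, not anything taken from NCDC.

```python
import random

def simulate_afternoon_reset_bias(days=10000, seed=0):
    """Toy sketch of the double-counting effect described above. Assumes the
    thermometer is read and reset mid-afternoon, when the current temperature
    is still within a few degrees of that day's maximum. Illustration only,
    not NCDC's actual TOBs procedure; all numbers are made up."""
    random.seed(seed)
    true_max = [75.0 + random.gauss(0.0, 8.0) for _ in range(days)]   # true daily highs, deg F
    temp_at_reset = [t - 5.0 for t in true_max]                       # assumed ~5 F below the high

    recorded_max = []
    for day in range(1, days):
        # The max register cannot fall below the temperature at yesterday's
        # reset, so a cool day after a warm afternoon gets an inflated Tmax.
        recorded_max.append(max(true_max[day], temp_at_reset[day - 1]))

    return (sum(recorded_max) / len(recorded_max)
            - sum(true_max[1:]) / (days - 1))

print(f"Average warm bias in recorded Tmax: {simulate_afternoon_reset_bias():+.2f} F")
```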

mjc
July 4, 2014 2:08 pm

” Russ R. says:
July 4, 2014 at 1:55 pm
An observer records all three at 3pm on Monday. Max = 80F, Min = 65F, Current = 75F. He resets the max and min, and returns the next day at 3pm. Tuesday was cooler, a high of only 70F, but the max since the last reading was 75F from when the thermometer was last reset on Monday. The average of the two days Tmax readings is 77.5F, when it should be 75F.
If temperatures were measured at 9am instead, around the middle of the day’s temperature range, there would be much less likelihood of double-counting, and the high-bias would be reduced.”
There’s only one BIG problem with that idea…that ASSumes that EVERY observer is not properly resetting the equipment after every observation. And that also assumes every thermometer is the same type.
In other words, it is just one big mass of assumptions all leaning towards the incompetence of the person making the observations.

highflight56433
July 4, 2014 2:12 pm

Also consider that most of these stations are at airports that accumulate heat as airline traffic peaks coincidentally with the warm part of the day. No influence of course… 🙂

David in Cal
July 4, 2014 2:29 pm

IMHO a better way to calculate the average change in temperature from year to year would be to average temperature changes, rather than temperatures. That is, the average temperature change from year (n-1) to year n would be measured as the average, over all usable weather stations, of (average temperature for year n minus average temperature for year n-1). A weather station would be usable for a given year only when the station’s readings for that year and the prior year were available and were comparable. This method eliminates the need for infilling. When a station is re-sited, just one temperature change would be left out of the average. All other years would be used without the need for adjustment. Stations going through a period of UHI effect would be left out, rather than some guess being used as to the magnitude of the UHI effect.
I’m an actuary, not a climatologist. However, this seems to me to be a more sound method of measuring the average change in temperature by year.
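
A minimal sketch of how that first-difference averaging might look in code, assuming each station's record is simply a list of yearly means with None for missing years; the station names and values are invented for illustration.

```python
from typing import Dict, List, Optional

def mean_yearly_changes(stations: Dict[str, List[Optional[float]]]) -> List[Optional[float]]:
    """Sketch of the first-difference idea described above: for each pair of
    consecutive years, average (T[n] - T[n-1]) over the stations that report
    both years, instead of averaging absolute temperatures. The data layout
    (a list of yearly means per station, None for missing years) and the
    station names below are hypothetical; no infilling is performed."""
    n_years = max(len(series) for series in stations.values())
    changes: List[Optional[float]] = []
    for year in range(1, n_years):
        diffs = [series[year] - series[year - 1]
                 for series in stations.values()
                 if year < len(series)
                 and series[year] is not None
                 and series[year - 1] is not None]
        changes.append(sum(diffs) / len(diffs) if diffs else None)
    return changes

# Example: two stations, one with a gap; the gap years simply drop out of the average.
print(mean_yearly_changes({
    "rural_a": [10.0, 10.2, 10.1, 10.4],
    "rural_b": [12.0, None, 12.3, 12.5],
}))
```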

darwin
July 4, 2014 2:30 pm

“Anthony Watts - proprietor of Watts Up With That, a website popular with climate change skeptics - tells me that he does not think that NCDC researchers are intentionally distorting the record”
Seriously? Is there any doubt that these people have been “informed” that it’s in their best interest to support global warming?
They may not want to distort the record, but I do not doubt for one minute that the records are being intentionally distorted.
After all the lies, distortions, climategate emails, refusals to debate and attacks on skeptics what person in their right mind would give them the benefit of the doubt?

Gary Pearse
July 4, 2014 2:31 pm

I suppose this is a naive question. When they use infilling of nearby stations to give a data point to a defunct station, do they evaluate comparisons of the surrounding stations with this station during a period when all stations were operating satisfactorily? This is the way I would have done it and assumed that it was the way it has been done. Anyone?

Graham
July 4, 2014 2:33 pm

Russ R says
An observer records all three at 3pm on Monday. Max = 80F, Min = 65F, Current = 75F. He resets the max and min, and returns the next day at 3pm. Tuesday was cooler, a high of only 70F, but the max since the last reading was 75F from when the thermometer was last reset on Monday. The average of the two days Tmax readings is 77.5F, when it should be 75F
So you have an example showing a warm bias when Tuesday is cooler. Repeat it for a Tuesday that is warmer and you will get a cool bias, so over time the average shows no TOB. All we have is a lag, which shouldn’t impact the average. Adjust those numbers and you will introduce a bias reflected by the algorithm you use to adjust with.

pokerguy
July 4, 2014 2:37 pm

“Anthony Watts - proprietor of Watts Up With That, a website popular with climate change skeptics - tells me that he does not think that researchers are intentionally distorting the record
Smart Anthony. While I know you’d be less than shocked if it turns out they are, this is the way to play it in th

July 4, 2014 2:38 pm

A data set in which 25% of the observations are produced by imputation (filling in missing values mathematically) is rather marginal. Personally, I wouldn’t expect to be able to make any useful claims based on a data set of that kind.

Editor
July 4, 2014 2:44 pm

Ivan says:
July 4, 2014 at 12:47 pm

“Anthony Watts - proprietor of Watts Up With That, a website popular with climate change skeptics - tells me that he does not think that NCDC researchers are intentionally distorting the record”

What exactly is the evidence for this claim? If the adjustments are going on permanently and they always cool the past and warm the present, it seems that the null hypothesis should be that they are doing this on purpose.

Never ascribe to malice what can be explained by confirmation bias …
w.

July 4, 2014 2:49 pm

Coke and Pepsi should be regulated like the coal industry for all the CO2 their products release into the atmosphere. People are ingesting an official government pollutant CO2 contained in their products. There should be a warning label on their packaging listing the dangers.
Coke and Pepsi should be forced to reduce the amount of carbonation in their products.

Mindert Eiting
July 4, 2014 2:55 pm

GeologyJim: Agree. However, those 6000 stations did not disappear randomly. The best predictor for station drop-out is the correlation of its time series with the time series of its latitude region: the lower the correlation, the higher the drop-out risk. Repeat my computations if you want. This shows that stations were dropped on purpose because of their history. The procedure of dropping dissident stations creates an artificial signal out of noise.

Admin
July 4, 2014 3:00 pm

Even if the adjustments are justified, applying arbitrary adjustments which are of similar magnitude to the global warming trend you claim is occurring is awfully dodgy.
It’s like measuring a kid’s height to see how fast they are growing, but putting a few books under their feet if you measure their height in the afternoon, to compensate for the fact that people get a bit shorter during the day as their spine compresses.
If the rate they are growing is similar to the height of the stack of books you put under the kid’s feet, then you have to ask: is the growth you measure down to the stack of books rather than the real rate they are growing?

Katherine
July 4, 2014 3:26 pm

It’s one thing to infill if only a few days in a month are missing; it’s something else altogether if entire years are missing. Defunct stations should just be dropped.
And what’s with warming the data for rural stations to match the warming in urban stations to supposedly offset the UHI effect? That’s counterintuitive - unless intuition demands a warming trend.

Genghis
July 4, 2014 3:28 pm

Having played with the Max/min thermometers I can verify that there is a warming effect if the measurements are taken at the hottest part of the day, because the max couldn’t be set lower than the current hot temperature. It is just a piece of metal in the vacuum tube that is adjusted with a magnet. It double counts the high temp.
The same holds true if the measurements are taken at the coolest parts of the day, except that it biases cold.
The proper time to read one of those thermometers is probably at midnight, and that isn’t going to happen.
So to summarize, afternoon readings in the summer will bias high and morning readings in the winter will bias low. It also turns out that high temperatures tend to be more variable than low temps so overall there is a warming bias.
But here is the problem, in order to properly eliminate the bias the actual high or low needs to be known. If the day to day temps are the same or rising, there is no bias (hot bias anyway) regardless of the time of observation.
What the computer algorithm does is look for nearby stations and check if it is a rising or falling trend and if it is a falling trend it adjusts the raw temp down.
The problem that they have discovered now though is that 40% of the stations are zombies, just infilled data points, which in itself isn’t a show stopper, but what is happening is that the accuracy is deteriorating because of station loss. Computer truncation and subtle programmer constants are starting to dominate the output. I am also seeing a regression to the mean and the modelers are fighting that hard.

Leigh
July 4, 2014 3:47 pm

_Jim says: July 4, 2014 at 12:45 pm
An audit would seem to be in order …
And I say good luck with that Jim.
Here’s just a little heads up on what sort of wall you will hit when you demand an audit of your NCDC.
Be under no illusions Jim, that wall will be ably manned by their peers(pals).
Do not for one instant think that this fraud of data adjustment is isolated in America.
Everybody from “fellow”scientists to laymen like myself can see exactly what they are doing.
If they had nothing to hide, the information and explanation would be freely available.
What they are doing is not bordering on criminal.
It is criminal.
When governments use that altered data, supplied by their respective keepers of temperature records, to form budgets and spend billions, it is simply fraud.
I believe sooner rather than later, one of these fraudsters will go down in a civil court.
Which will open the flood gates to criminal prosecutions.
The two links will give you an idea as to the lengths they will go to Jim.
Simply to maintain the fraud.
http://joannenova.com.au/2014/06/dont-miss-jennifer-marohasy-speaking-in-sydney-wednesday/
http://joannenova.com.au/2014/06/australian-bom-neutral-adjustments-increase-minima-trends-up-60/

Bill Illis
July 4, 2014 3:48 pm

The TOBs adjustment continues to grow every day. How is that possible? The first paper published on TOBs was 1854 and was fully fixed by the NCDC/Weather Bureau in 1890, in 1909, in 1954, in 1970, in 1977, in 1983, in 1986, in 2003, yet it continues to change every day.

Bill Illis
July 4, 2014 3:50 pm

Or let’s put it this way: what happened in the last few months that changed the temperature recordings in 1903?

July 4, 2014 3:54 pm

“Anthony Watts - proprietor of Watts Up With That, a website popular with climate change skeptics - tells me that he does not think that NCDC researchers are intentionally distorting the record”
People have a huge ability to delude themselves that what they are doing is ethical, correct, scientific and accurate when they truly believe that what they are doing is in the interests of a Great Moral Cause. It still requires conscious effort, however, and is therefore intentional. It may be confirmation bias, it may be incompetence, it may be well intended, but it is still intentional. It is indistinguishable from malevolence in its effects.

mjc
July 4, 2014 4:05 pm

” Genghis says:
July 4, 2014 at 3:28 pm
Having played with the Max/min thermometers I can verify that there is a warming effect if the measurements are taken at the hottest part of the day, because the max couldn’t be set lower than the current hot temperature. It is just a piece of metal in the vacuum tube that is adjusted with a magnet. It double counts the high temp.
The same holds true if the measurements are taken at the coolest parts of the day, except that it biases cold.
The proper time to read one of those thermometers is probably at midnight, and that isn’t going to happen.
So to summarize, afternoon readings in the summer will bias high and morning readings in the winter will bias low. It also turns out that high temperatures tend to be more variable than low temps so overall there is a warming bias.
But here is the problem, in order to properly eliminate the bias the actual high or low needs to be known. If the day to day temps are the same or rising, there is no bias (hot bias anyway) regardless of the time of observation.”
There is only one maximum per 24hr period…now if that period doesn’t coincide with a calendar ‘day’ then so be it. If the high temp on Tuesday was at 3:01 pm on Monday and the temps are being checked every day at 3:00 pm, then that is the reporting period and the ACTUAL high temperature for that period…it’s not complicated, really.

jaffa
July 4, 2014 4:05 pm

“Watts wants the center’s algorithms, computer coding, temperature records, and so forth to be checked by researchers outside the climate science establishment.”
They’ve got decades invested in this, why should they hand over all their data when you just want to find something wrong with it?

Leigh
July 4, 2014 4:11 pm

Riddle me this one Bill.
Here in Australia our Bureau of Meteorology flatly refuses to use temperature data pre 1910.
Why?
Because they deem it to be unreliable.
Yet the UN, and their rubber stamp for the fraud, the IPCC, use our temperature records and believe they are fine dating back to the 1860s.

July 4, 2014 4:39 pm

Thing is, I just don’t see how the infilling cannot exacerbate the good site/bad site problem, with bias going to the side that has larger numbers – which at the moment appears to be bad sites (and these too will vary from lightly bad to bloody terrible).

To break it into simple numbers: take a sample area which should have 20 stations. 10 of them are zombies. 3 of the remainder are good and seven varying degrees of bad. Leaving out the zombies, the good are averaging a 0.1 temp increase per decade, and the bad a 0.3 (these are hypothetical figures). Averaged out that gives a 0.219 degree increase. Now add the zombies – based on infilling from their neighbors. The probability of each of those neighbors being a bad station is 0.7, or a 70% chance. And the more stations they use for infilling, the less likely that data will match the good sites. Let’s say they use the nearest 3 stations. For the zombie to provide ‘good station’ data, it would have to be surrounded by the three good stations, which is highly unlikely (I think something like the magic 97% chance 🙂 that it won’t be). So the zombies are not even going to give the 3:7 good : bad ratio, but most likely all some degree of bad.

So instead of 7/10 bad (70%) you now have 17 (or if you really got lucky 16) out of 20 (85% to 80%) bad – of course some of that bad is ameliorated a little by good data, and I could work it out if I had the patience – but the average for the area simply has to be a higher increase. Or am I getting this all wrong, and the algorerythms toss out all the ‘bad’ stations?
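
A quick check of the probability reasoning in that comment, using its hypothetical 3 good / 7 bad split and three infill neighbours. Treating the neighbours as three distinct stations drawn from the ten real ones pushes the figure slightly above the "97%" quoted, but the point stands either way.

```python
from math import comb

# Quick check of the probability reasoning above, using the hypothetical
# 3 good / 7 bad split and 3 infill neighbours from the comment. Assumes the
# neighbours are three distinct stations drawn from the ten real ones.
n_good, n_bad, k = 3, 7, 3
n = n_good + n_bad

p_bad_source = n_bad / n                    # chance any single infill source is "bad"
p_all_good = comb(n_good, k) / comb(n, k)   # chance a zombie is infilled from good stations only

print(f"P(a given infill source is bad) = {p_bad_source:.0%}")
print(f"P(all 3 sources are good)       = {p_all_good:.1%}")
print(f"P(at least one bad source)      = {1 - p_all_good:.1%}")
```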

Latitude
July 4, 2014 4:53 pm

Bill Illis says:
July 4, 2014 at 3:50 pm
Or let’s put it this way: what happened in the last few months that changed the temperature recordings in 1903?
=========
….a fatwa

Genghis
July 4, 2014 4:54 pm

mjc says:
July 4, 2014 at 4:05 pm
“There is only one maximum per 24hr period…now if that period doesn’t coincide with a calendar ‘day’ then so be it. If the high temp on Tuesday was at 3:01 pm on Monday and the temps are being checked every day at 3:00 pm, then that is the reporting period and the ACTUAL high temperature for that period…it’s not complicated, really.”
Let’s do a quick test to see if your math skills are up to par. Day one has a high of 30 and a low of zero and is checked at 3 pm. The average of the high and low temps is 15. Sometime after 3 pm the temperature plummets and drops to zero and stays at zero. The next day the thermometer will indicate that the high was 30 and the low was zero. Clearly that is an error.
It incorrectly indicates one maximum for two 24 hour periods.

July 4, 2014 4:56 pm

sunshinehours1,
Actually NCDC makes all their papers available for free. You can find all the USHCN ones here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/

RH
July 4, 2014 5:03 pm

jaffa says:
July 4, 2014 at 4:05 pm
“ā€œWatts wants the centerā€™s algorithms, computer coding, temperature records, and so forth to be checked by researchers outside the climate science establishment.ā€
Theyā€™ve got decades of invested in this, why should they hand over all their data when you just want to find something wrong with it?”
Because “they” work for the U.S. public. It isn’t their data, it is ours.

Jimbo
July 4, 2014 5:19 pm

Clearly, replication by independent researchers would add confidence to the NCDC results. In the meantime, if the Heller episode proves nothing else, it is that we can continue to expect confirmation bias to pervade nearly every aspect of the climate change debate.

And group think caused by lavish funding.

Mike T
July 4, 2014 5:21 pm

From the linked article: “They’ve clarified a lot this way. For example, simply shifting from liquid-in-glass thermometers to electronic maximum-minimum temperature systems led to an average drop in maximum temperatures of about 0.4°C and to an average rise in minimum temperatures of 0.3°C”. This is the opposite of the real effect. Electronic sensors generally read higher than liquid-in-glass thermometers for the maximum and usually, lower for minimum thermometers.

Jimbo
July 4, 2014 5:24 pm

If NASA can crash a probe by accident onto Mars due to a simple error, then confirmation bias is possible. If your funding thrives on continued global warming, confirmation bias is possible. Money is at the root of all eeeeeeevil and pal review. Sorry, but there it is.

R. Shearer
July 4, 2014 5:26 pm

I suppose NCDC’s Tom Karl misrepresented his academic credentials because of confirmation bias.

July 4, 2014 5:29 pm

GeologyJim,
There is still a 10,000+ station reporting network.
http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Figures/global-land-TAVG-Counts.pdf
You can download the data here: http://berkeleyearth.org/data

mjc
July 4, 2014 5:43 pm

” Genghis says:
July 4, 2014 at 4:54 pm
mjc says:
July 4, 2014 at 4:05 pm
“There is only one maximum per 24hr period…now if that period doesn’t coincide with a calendar ‘day’ then so be it. If the high temp on Tuesday was at 3:01 pm on Monday and the temps are being checked every day at 3:00 pm, then that is the reporting period and the ACTUAL high temperature for that period…it’s not complicated, really.”
Let’s do a quick test to see if your math skills are up to par. Day one has a high of 30 and a low of zero and is checked at 3 pm. The average of the high and low temps is 15. Sometime after 3 pm the temperature plummets and drops to zero and stays at zero. The next day the thermometer will indicate that the high was 30 and the low was zero. Clearly that is an error.
It incorrectly indicates one maximum for two 24 hour periods."
Not if the check is 3pm, every day. The reporting period and ‘day’ don’t match, sure, but for THAT period, the high was what was recorded at the START of that period. The 24 hr period is 3pm to 2:59 pm and as long as that remains consistent there is only ONE maximum temp recorded in THAT 24 hr period. It doesn’t matter what time of day that occurs. If the period is consistent, then the min/max will be consistent, for that location and that period. The only time an ‘adjustment’ will need to be made, then is when the period is changed.
What law states that an observation period and ‘day’ have to align?
And without knowing how long and how consistent any non-alignment was, applying an arbitrary ‘adjustment’ is just about the worst thing you can do to protect data integrity.

Jantar
July 4, 2014 5:48 pm

Genghis says:
“……Let’s do a quick test to see if your math skills are up to par. Day one has a high of 30 and a low of zero and is checked at 3 pm. The average of the high and low temps is 15. Sometime after 3 pm the temperature plummets and drops to zero and stays at zero. The next day the thermometer will indicate that the high was 30 and the low was zero. Clearly that is an error.”

Clearly that is NOT an error. The average temperature for these records is taken as (Tmax + Tmin)/2, and on both of these days the Tmax and Tmin are the same.
It would only be an error if we were taking a true average, reading the temperature every single minute and dividing the total by 1440; then the result would be in error.
However, using your reasoning, the same could be said if a warm front arrived shortly before the temperature was taken and, being typically slow moving, held the temperature up for most of the day until a cold front arrives just before the next reading is taken, which causes the temperature to plummet. Day 2 would now show COLDER than the true average.
As long as we are using a simple average of (Tmax + Tmin)/2 then no correction is needed, and any uncertainty should be shown by the error bars.

JeffC
July 4, 2014 6:10 pm

adjusted raw data is no longer good data … mixing infilled data points with raw and adjusted data is like mixing a certain % of dog crap into your vanilla ice cream … the question is always how small a % can you go and still eat it ?
They don’t have any data prior to the satellites … period … they have a bunch of numbers that have been beaten into uselessness for the purpose of studying climate change or global temps …
You should be hammering that point home Anthony …

Eeyore Rifkin
July 4, 2014 6:44 pm

Goddard destroyed TOBS with his what’s-2-times-zero argument (in reference to 100 degree days, which can’t be double counted when they simply don’t occur). The temptation to do TOBS station by station is understandable, but given a large data set the assumption that the errors cancel out is probably wisest.

Genghis
July 4, 2014 6:47 pm

mjc says:
July 4, 2014 at 5:43 pm
“If the period is consistent, then the min/max will be consistent, for that location and that period. The only time an ‘adjustment’ will need to be made, then is when the period is changed.”
That would be correct if all of the other locations were measured at 3 pm too, but they aren’t. Now you have to decide which record to adjust and you are back to exactly the same problem.

Genghis
July 4, 2014 7:11 pm

Jantar says:
July 4, 2014 at 5:48 pm
“Clearly that is NOT an error. The average temperature for these records is taken as (Tmax + Tmin)/2, and on both of these days the Tmax and Tmin are the same.”
Let’s say we have two identical thermometers side by side. Thermometer A gets read in the morning and thermometer B gets read in the afternoon. On April 2nd thermometer A gets read in the morning and April 1st’s high temperature gets entered into the record. In the afternoon, thermometer B gets read; is it the high temperature for April 1st or April 2nd? Could thermometer A’s reading be in error?
How do you determine the absolute temperature for that day based on those two high and low temperature records at the exact same place? More importantly, what is the procedure that you use to solve that puzzle day after day?

July 4, 2014 7:21 pm

The question is when does it stop. When do the adjustments stop cooling the past?
To meet the theory’s predictions, temperatures have to rise 2.5C in the next 86 years.
Will temperatures increase that much or will the NCDC of 2099 cool the past by another 2.5C?
Sounds ridiculous, doesn’t it? But nobody is stopping them from cooling the past right now at the same rate per year that would be required to cool the past by 2.5C by 2100.
They have to be stopped or everyone will have 2 windmills in their backyard and solar panels on their roof while their backyard has real temperatures that are no different than 1903.

Genghis
July 4, 2014 7:49 pm

Bill Illis says:
July 4, 2014 at 7:21 pm
“The question is when does it stop. When do the adjustments stop cooling the past?”
When the past says exactly what they want it to say.
What is really funny is that a good argument can be made for all of the adjustments.

July 4, 2014 8:48 pm

Any technician who adjusts “data” does not know what “data” is. These individuals are technicians, doing as they are told. No professional engineer nor any genuine scientist would ever ever “adjust” data. Data is all you have to work with, if you adjust it, you have nothing…

July 4, 2014 9:10 pm

This proposal is too wimpy. The US has an Information Quality Act that mandates all government agencies maintain data quality.
Now is the time for Congress to insist upon an official audit of temperature data, its generation, collection, processing, adjustment and reporting.

Eliza
July 4, 2014 10:00 pm

This posting just encourages the warmists and the NCDC. They will NEVER, NEVER change or admit to any wrongdoing. All the evidence (not only NCDC but ALL the evidence), I repeat, adds up to just plain intentional fraud and fabrication to meet an agenda. You still don’t get it, Mr Watts.

July 4, 2014 10:09 pm

July 4, 2014 at 1:11 pm | DirkH says
———
😉 No matter what ‘data’ is fed into the ‘algorithm’ … a hockey stick is achieved!

July 4, 2014 10:17 pm

July 4, 2014 at 3:26 pm | Katherine says:

And what’s with warming the data for rural stations to match the warming in urban stations to supposedly offset the UHI effect? That’s counterintuitive - unless intuition demands a warming trend.

Indeed, cooling the urban stations to conform to the rural data would seem more pertinent, after all, it will help sort out the UHI bias that (supposedly) doesn’t happen.

July 4, 2014 10:32 pm

Mr. Bailey, you say:

Once all the calculating is done, the 2009 study concludes, the new adjusted data suggests that the “trend in maximum temperature is 0.064°C per decade, and the trend in minimum temperature is 0.075°C per decade” for the continental U.S. since 1895. The NCDC folks never rest in their search for greater precision. This year they recalculated the historical temperatures, this time by adjusting data in each of the 344 climate divisions into which the coterminous U.S. is divvied up. They now report a temperature trend of 0.067°C per decade.

Since its inception, NCDC has done numerous “adjustments” of the raw climate data. How many adjustments have been made for all the “confounding factors” you mention (TOBS, urban encroachments, “upgrades” to MMTS (which have their own set of issues), “lazy” reader artifacts, etc)? If you can obtain this as an absolute number, I, for one, would love to know: how many of those adjustments resulted in a warmer modern period, and how many resulted in a cooler modern day? Assuming NCDC’s (or NOAA’s) intent was the relentless pursuit of “greater precision”, as you say, we should expect they made just as many corrections that resulted in a warmer past / cooler modern day as in a cooler past / warmer modern day.
The revelations over the last 10 or so years have suggested quite the opposite: the adjustments we’ve been privy to (and I doubt they are as transparent as you think) have shown a predilection for “warming” the present and, if I may say, stirring the pot at every possible opportunity. Considering that these adjustments are merely mathematical and statistical corrections of former inaccuracy, there is no reason to expect one-sidedness, is there?
People who provide cursory historical “analyses” of these government institutions without providing any of their own data are not contributing much to the overall understanding of what NOAA has been doing, or whether they are reliable, unbiased proprietors of climate data. From what I can see, they’ve done little but fudge the data.
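
The "no reason to expect one-sidedness" point amounts to a simple sign test; here is a sketch with placeholder counts, since the actual tally of warming versus cooling adjustments is not given in the article.

```python
from math import comb

def prob_at_least_k_warming(n_adjustments: int, k_warming: int) -> float:
    """Sketch of the sign-test reasoning above: if each adjustment were an
    unbiased correction, it would be equally likely to warm or cool the modern
    period, so the count of warming adjustments should look binomial(n, 0.5).
    The counts used below are placeholders; the actual tally is not public."""
    total = sum(comb(n_adjustments, k) for k in range(k_warming, n_adjustments + 1))
    return total / 2 ** n_adjustments

# Hypothetical example: 14 of 16 documented adjustment steps warm the present.
print(f"P(>= 14 of 16 warm the present | unbiased) = {prob_at_least_k_warming(16, 14):.4f}")
```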

norah4you
July 4, 2014 11:11 pm

Sweden has a good, maybe the worst, example of adjustments to “fix” a warmer trend.
One of the most frightening examples of “adjustments” can be found here in Sweden. From Forskning och Framsteg (a Swedish science magazine): “Our reconstruction of winter and spring temperature variations over the last half millennium is shown in Figure 1. Measurements since 1860 are corrected for the artificial warming caused by the city of Stockholm’s growth, so the curve shows the more natural change.” Source: Forskning och Framsteg No. 5, 2008, “500 års väder” (500 years of weather).
Looking at other countries, US included, I have seen the same strange behavior among those who call themselves Scientists…..

July 4, 2014 11:13 pm

“The NCDC folks never rest in their search for greater precision.”
If this was intended ironically, I apologize for my conclusion above… though I would still like to know how many of NCDC’s adjustments actually favor a neutral or cooling trend.

richard verney
July 5, 2014 12:43 am

The claim of peer review is very often hollow. It adds nothing that the adjustments may have been reviewed by other climate scientists who know little of the issues raised.
The simple point, when considering significant trends in data and the collation of such data, is whether it has been peer reviewed by statisticians.
The idea that one can have approximately 40% of the data not actual but rather constructed, and for that not to be an issue, is absurd.
In fact we know there is a problem with the data merely from the fact that it is continually being adjusted. If there was only one, or perhaps two, adjustments then there may be valid reasons. But when one adjusts the same data a dozen or so times, it shows that one does not know what the adjustment should be. It shows that later adjustments are adding to the need to revisit and remake adjustments that were made earlier, thereby suggesting some positive feedback loop in the adjustments being made.
When you make one adjustment, you create a margin of error. Of course, the adjustment may be made with the view that it is ‘correcting’, but there is a risk that your adjustment is not correcting. When you make 10 such adjustments, you create the possibility of 10 errors; with 1,000, the possibility of 1,000 errors, etc. Hopefully some errors will cancel out, but that need not by necessity be the case, especially when they are based upon the same algorithm which has an underlying flaw.
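
A rough numerical illustration of that error-budget point, assuming for simplicity that each adjustment carries an independent uncertainty of 0.05 C (an illustrative figure, not an NCDC number). The comment's own caveat applies: errors produced by one shared, flawed algorithm would be correlated and could compound even faster than this.

```python
from math import sqrt

# Minimal sketch of the error-budget point above: if each adjustment carries
# its own uncertainty, the combined uncertainty of the adjusted series grows
# with the number of adjustments. The 0.05 C figure is illustrative, not a
# published NCDC value, and independence between adjustments is assumed.
def combined_sigma(per_adjustment_sigma: float, n_adjustments: int) -> float:
    return sqrt(n_adjustments) * per_adjustment_sigma   # root-sum-of-squares

for n in (1, 2, 10, 100):
    print(f"{n:3d} independent adjustments -> combined sigma ~ {combined_sigma(0.05, n):.2f} C")
```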
The fact that this data set diverges from the satellite data record also adds weight to the fact that there is a problem with the ‘adjusted’ data set.
Personally, I would ditch the land based data set post 1979, and I would reassemble it up to 1979. There would be better station coverage if only the record up to 1979 were used, and to some extent the effects of UHI may be lessened. You are likely to get a cleaner data set.
After 1979, use the satellite record. It has better spatial coverage, and is meant to use our most sophisticated and advanced measuring technology. It should be a better record (although it has some issues of its own).
Definitely do not splice the satellite record post 1979 onto the land based record up to 1979.

richard verney
July 5, 2014 1:03 am

Bill Illis says:
July 4, 2014 at 3:48 pm
The TOBs adjustment continues to grow every day. How is that possible? The first paper published on TOBs was 1854 and was fully fixed by the NCDC/Weather Bureau in 1890, in 1909, in 1954, in 1970, in 1977, in 1983, in 1986, in 2003, yet it continues to change every day.
////////////////////
I had not read Bill’s comment when I posted my earlier comment.
I consider this to be one of the most material points. I can understand someone saying that we need to make an adjustment for TOB, or for a station move, or an instrument change, etc. But if we knew what we were doing, that would be a one-off adjustment: the adjustment is made, and the issue has been ‘corrected’.
But the problem is that we are making adjustments to old records not just once or twice, but sometimes a dozen or so times. Just ask yourselves: how many times has the 1930s data undergone an adjustment? This establishes that we do not know what we are doing, period. This is very clear when one superimposes trends based upon the record as it was drawn over the years. If climate scientists cannot see that there is an issue there, well, it says a lot about their abilities and way of thinking.
It would be interesting to know whether a difference would result if we were to adjust modern data to bring it in line with old data collection, or old data to bring it in line with modern data collection.

KenB
July 5, 2014 1:09 am

It is imperative to act immediately, and as it is the USA-based temperature record that needs to be audited, your Senate should set up a Senate-backed inquiry into the validity and data processing of the historic and present temperature record(s), and into the effect of using the present rationale whereby algorithms are allowed to change past and present temperatures.
The Senate should ensure that an audit team is assembled to assist in examining and auditing the various contributing agencies.
As to the physical makeup of the professional audit team, I am sure that Steve at Climate Audit could be trusted to set up an unbiased team of auditors with the qualifications and experience to get to the nub of this problem.
The audit team would report back on the level of co-operation and transparency of the organisations involved, and if necessary the Senate would have the power to subpoena key staff and executives to ensure data and internal directives are properly produced for audit and evidence adduced to get answers on the who what and why, that created the present errors and or omissions, bias or whatever.
No one should object to this precautionary audit, on the principle that the American people must have absolute confidence when trillions of American taxpayers’ money have been, and will be, expended, and the end product must be an absolute and trustworthy historic temperature record. Once this confidence and trust is affirmed, politicians can then make the sort of energy and policy decisions that might be expected to flow from that information.
Sceptic blogs might like to propose leaders and members for the audit team with a public review by way of Senate oversight.
Lastly as this is the most urgent problem facing human society as some say, the audit team and support staff should be funded by equal contributions from current budgets of the organisations involved, the issues of trust and confidence are all important considering the economic and social considerations underlying these issues and it is not one where the agencies themselves should be trusted with simply relying on internal audits. There is too much at stake!
It is the United States historic temperature record, and the people of the United states must have confidence in all the decisions that build on that record – it does not belong to any organisation or particular political persuasion of government and the methods must be open and transparent to the American people and the Senate is the appropriate body to represent those people.
Over to you, we also need this sorted out as it appears our Australian records have suffered similar “adjusting” and manipulation and loss of confidence.

July 5, 2014 1:29 am

The Japanese IBUKU climate satellite data completely negates the claim that carbon dioxide in the atmosphere is coming from humans. The IBUKU results confirm that the CO2 in the atmosphere is a result of temperature induced and moisture induced releases from mainly equatorial high vegetation regions. In personal communication with Professor Richard Lindzen I find he agrees with IBUKU.

policycritic
July 5, 2014 1:35 am

Latitude says:
July 4, 2014 at 1:02 pm
You have to notice…..by the time someone gets a paper published using today’s data…..the data’s changed….when they go back to check it….their paper is wrong….by the time they re-write….go back and check….it’s changed again

American science at its finest. With excuses. We only get huffy when we think people in other countries are doing it. Then we’re vociferously anti-GIGO.

J Martin
July 5, 2014 1:51 am

Never ascribe to malice what can be explained by confirmation bias …
Willis. Inspired.
+1

Randy
July 5, 2014 2:52 am

Personally I think it is much more likely that all of this is purposeful than not. We are told catastrophe is undeniable, we are told there has been in-depth debate, all the questions are answered. The only thing left is for deniers to shut up and severely limit the first world, and keep the third world on its knees forever.

Editor
July 5, 2014 3:02 am

Terri Jackson says:
July 5, 2014 at 1:29 am

The Japanese IBUKU climate satellite data completely negates the claim that carbon dioxide in the atmosphere is coming from humans. The IBUKU results confirm that the CO2 in the atmosphere is a result of temperature induced and moisture induced releases from mainly equatorial high vegetation regions. In personal communication with Professor Richard Lindzen I find he agrees with IBUKU.

Thanks, Terri. I went to your website, and to the Japanese IBUKU website, and dug deep on Google, and nowhere did I find the actual data. Oh, there’s lots of pretty pictures, but where can I find the actual values, month by month, of CO2 fluxes of the various regions?
Because until I see the numbers and analyze them myself … it’s just pretty pictures, none of which show net annual fluxes, and I don’t make claims based on pretty pictures.
Finally, I can’t see how you get from the IBUKU data to saying that their pretty pictures “completely negates the claim that carbon dioxide in the atmosphere is coming from humans”. To me, it shows the opposite - as best as I can tell from their cruddy pictures, all of the identified locations of positive net CO2 flows are areas of high human density … I doubt very much that that is a coincidence, but of course without the data I would never make a sweeping statement such as yours.
Any assistance in finding the data gladly accepted …
w.

Editor
July 5, 2014 3:08 am

OK, Terri, I located the IBUKU data and mapped it up. Here are the results:

As you can see, where there are concentrations of humans, we get CO2. Some is from biomass burning, some is from fossil fuel burning, some is from cement production.
Now, you can certainly make the case that this shows that humans in the developing world are a major contributor to CO2, and that the common meme that the developed nations are to blame is not true … but you can’t make the case that

The Japanese IBUKU climate satellite data completely negates the claim that carbon dioxide in the atmosphere is coming from humans.

as you state above.
Best regards, and thanks for the pointer to the IBUKU dataset,
w.

peter azlac
July 5, 2014 3:26 am

Does anyone know how many stations NCDC/GISS/CRU etc. have where there was a reasonable overlap period between the use of the traditional glass thermometers and the MMTS thermometer replacements, and what the analyses of the overlaps show? This seems to me to be the very minimum action that should have been taken, and if it was, there should be little need for all the BEST etc. adjustments; they should have been done for the individual stations.

July 5, 2014 5:23 am

Mike T says:
July 4, 2014 at 5:21 pm
From the linked article: “They’ve clarified a lot this way. For example, simply shifting from liquid-in-glass thermometers to electronic maximum-minimum temperature systems led to an average drop in maximum temperatures of about 0.4°C and to an average rise in minimum temperatures of 0.3°C”. This is the opposite of the real effect. Electronic sensors generally read higher than liquid-in-glass thermometers for the maximum and usually, lower for minimum thermometers.
________________________________________________________________________
The only reason a well-maintained electronic temperature min-max would be different from liquid in glass is a continuous recording feature. Both the liquid-in-glass and electronic thermometers should read the true temperature in some reasonably tight range above and below. That’s assuming the manufacturing processes were in statistical control. If there is any real difference between the two, then I’d suggest poor calibrations, drift corrections or differences in housings. Surely all this was addressed before we put these out.
PS. How does one make past data more precise? Precision and accuracy are pretty much determined at the times of construction and measurement.

Mike T
Reply to  Bob Greene
July 5, 2014 6:38 am

Bob, I’ve addressed the difference between liquid-in-glass and electronic sensors elsewhere, but to reiterate, it’s my feeling that mercurial maximum thermometers aren’t as sensitive as electronic probes. In other words, around the time of TMax, there is inertia in the mercury which prevents it getting to the same temp as the more sensitive probe. Also, due to contraction of the mercury in the column (as opposed to the bulb), mercurial max thermometers read slightly lower the next day. Usually the previous day’s TMax is used, but some stations may only read the Max at 0900, when it’s reset, along with the min thermometer. So it could be 0.2 to 0.4 degrees lower than the electronic probe; assuming the station has both types, the probe is the “official” temperature.

North of 43 and south of 44
July 5, 2014 5:53 am

Dale Hartz says:
July 4, 2014 at 1:04 pm
How can the time of observation change the temperature for a day?
_____________________________________________________
Midnight today where you are isn’t midnight one time zone or more to either side of you, and so forth; maybe they want the temperatures all taken at the same time? Since they aren’t, maybe they try to adjust them?
“I always adjust my temperatures to match the nearest city,” say the screencaps for my town on weather.com. Check yours; they are always adjusting things. I hope they use that technique when confronted with a speeding ticket ;).

Solomon Green
July 5, 2014 5:59 am

A C Osborn forgot to explain that Steve McIntyre actually wrote in the same post, when referring to TOBS, "Yes, overall it slightly increases the trend, but this has a rational explanation based on historical observation times. Again I don’t see a big or even small issue."
As Anthony indicates, there is perhaps/probably more (unintentional) bias from infilling than from TOBS.
I like David in Cal’s approach. But then actuaries are trained to examine the raw data and determine how to make best use of it while introducing the minimum of error. They are also trained to reject spurious accuracy. As a fellow professional I have always felt that all long-term temperature forecasts based on climate science models are invalid because they assume the data to be more accurate than is justified.
David in Cal’s approach would obviate the need for messy infilling, TOBs etc and would at least provide a sounder basis for the trend lines that litter so many charts. The trend lines are fairly worthless except to tell us what has happened in the past and what might happen in the future if, by some miracle the trend continued unabated.
Lorenz found that limiting his readings on only twelve variables to three rather than six decimal places so significantly affected his longer term (more than three months) forecasts that they were worthless. From the standard literature I once calculated that there were not twelve but more than forty variables that could affect climate. Enough said?

peter azlac
July 5, 2014 6:16 am

Willis
Can you present your Ibuki data in the same map format as that given by JAXA, so that we can see the European values more clearly?
http://global.jaxa.jp/projects/sat/gosat/topics.html#topics1840

Ron C.
July 5, 2014 6:20 am

How about a statistical analysis of land surface temperatures where each site is treated as a distinct microclimate? I have always been uncomfortable with the adjusting, anomalizing and homogenizing of land surface temperature readings in order to get global mean temperatures and trends. Years ago I came upon Richard Wakefield’s work on Canadian stations, in which he analyzed the trend longitudinally in each station, and then compared the trends. This approach respects the reality of distinct microclimates and reveals any more global patterns based upon similarities in the individual trends. It is actually the differences between microclimates that inform, so IMO averaging and homogenizing is the wrong way to go.
In Richard’s study he found that in most locations over the last 100 years, extreme Tmaxs (>+30C) were less frequent and extreme Tmins (<-20C) were less frequent. Monthly Tmax was in a mild lower trend, while Tmin was strongly trending higher, resulting in a warming monthly average in most locations. Also, Winters were milder, Springs earlier and Autumns later. His conclusion: What's not to like?
Now I have found that in July 2011, Lubos Motl did a similar analysis of HADCRUT3. He worked with the raw data from 5000+ stations with an average history of 77 years. He calculated for each station the trend for each month of the year over the station lifetime. The results are revealing. The average station had a warming trend of +0.75C/century +/- 2.35C/century. That value is similar to other GMT calculations, but the variability shows how much homogenization there has been. In fact 30% of the 5000+ locations experienced cooling trends.
Conclusions:
"If the rate of the warming in the coming 77 years or so were analogous to the previous 77 years, a given place XY would still have a 30% probability that it will cool down – judging by the linear regression – in those future 77 years! However, it's also conceivable that the noise is so substantial and the sensitivity is so low that once the weather stations add 100 years to their record, 70% of them will actually show a cooling trend.
Isn't it remarkable? There is nothing "global" about the warming we have seen in the recent century or so. The warming vs cooling depends on the place (as well as the month, as I mentioned) and the warming places only have a 2-to-1 majority while the cooling places are a sizable minority.
Of course, if you calculate the change of the global mean temperature, you get a positive sign – you had to get one of the signs because the exact zero result is infinitely unlikely. But the actual change of the global mean temperature in the last 77 years (in average) is so tiny that the place-dependent noise still safely beats the "global warming trend", yielding an ambiguous sign of the temperature trend that depends on the place."
http://motls.blogspot.ca/2011/07/hadcrut3-30-of-stations-recorded.html
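For readers who want to try the per-station approach Ron C. describes, here is a minimal sketch (not Motl's actual code). It assumes a hypothetical pandas DataFrame `df` with columns `station`, `year`, `month` and `temp`, fits a linear trend per station per calendar month, and reports the spread of trends and the fraction of cooling records:

```python
# Minimal sketch of a per-station, per-month trend analysis in the spirit of the
# Motl/Wakefield work described above. The DataFrame layout is assumed, not real.
import numpy as np
import pandas as pd

def station_month_trends(df, min_years=20):
    """Return one linear trend (deg C per century) per station per calendar month."""
    rows = []
    for (stn, mon), g in df.dropna(subset=["temp"]).groupby(["station", "month"]):
        if g["year"].nunique() < min_years:          # skip very short records
            continue
        slope_per_year = np.polyfit(g["year"], g["temp"], 1)[0]
        rows.append({"station": stn, "month": mon,
                     "trend_c_per_century": 100.0 * slope_per_year})
    return pd.DataFrame(rows)

# Usage (df loaded from station files elsewhere):
# trends = station_month_trends(df)
# print(trends["trend_c_per_century"].mean(), trends["trend_c_per_century"].std())
# print("fraction of cooling records:", (trends["trend_c_per_century"] < 0).mean())
```

Looking at the whole distribution of trends, rather than a single homogenized mean, is what lets you see how large a minority of cooling records there is.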

Owen
July 5, 2014 6:21 am

IF it is confirmation bias that has resulted in the distorted weather data, then they are incompetent. Fire them! Start holding these clowns accountable and maybe, just maybe, we will end up with an honest weather service for once.

Latitude
July 5, 2014 6:21 am

IBUKI?…
Chiefio penned this already: "Japanese Satellites say 3rd World Owes CO2 Reparations to The West"… don't you guys remember?
http://chiefio.wordpress.com/2011/10/31/japanese-satellites-say-3rd-world-owes-co2-reparations-to-the-west/

July 5, 2014 6:45 am

I’ll give NOAA credit for something. Their new webpage allows graphing Min/Max and Avg. And it goes back to 1895. And the mapping is good.
Even with TOBS and all the other adjustments it shows that the decade with the hottest Julys was the 1930s.
http://www.ncdc.noaa.gov/cag/time-series/us/110/00/tmax/1/07/1895-2014?base_prd=true&firstbaseyear=1901&lastbaseyear=2000
And it shows the classic UHI signature of higher minimums in the present.
http://www.ncdc.noaa.gov/cag/time-series/us/110/00/tmin/1/07/1895-2014?base_prd=true&firstbaseyear=1901&lastbaseyear=2000

jim2
July 5, 2014 7:38 am

It has come out in the discussion of the Luling station that instrumental failures at stations are not included in the station records. This greatly complicates the effort to find good stations. It will probably take either a government funded effort, not likely, or crowd sourcing to thoroughly investigate each station, including trying to find people who know about station integrity.

catweazle666
July 5, 2014 8:04 am

The NCDC also notes that all the changes to the record have gone through peer review and have been published in reputable journals.
So was “Mann’s ‘Nature’ Trick”, which is why, in the field of climate science at any rate, “peer review” is now more commonly understood to mean “pal review”.

July 5, 2014 8:30 am

Mike T says:
July 5, 2014 at 6:38 am
Bob, I've addressed the difference between liquid-in-glass and electronic sensors elsewhere, but to reiterate, it's my feeling that mercurial maximum thermometers aren't as sensitive as electronic probes. In other words, around the time of TMax, there is inertia in the mercury which prevents it getting to the same temp as the more sensitive probe. Also, due to contraction of the mercury in the column (as opposed to the bulb), mercurial max thermometers read slightly lower the next day. Usually the previous day's TMax is used, but some stations may only read the Max at 0900, when it's reset along with the min thermometer. So it could be 0.2 to 0.4 degrees lower than the electronic probe; assuming the station has both types, the probe is the "official" temperature.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The mass differences may well allow the electronic thermometer to react faster to transients. I don't have any data on response times of the LIG or MMTS thermometers. Mercury contracting in the glass before the bulb? The capillary is much more insulated than the reservoir (bulb), and it is the thermal expansion and contraction in the bulb that moves the mercury in the capillary. How else could I stick the bulb of a thermometer into an oven at 110 °C that is in a room at 25 °C, read the oven temperature, and have the glass outside the oven stay close to room temperature? There is a delay in a mercury thermometer going from a sudden high to a low because you have to cool the mass in the bulb, but I assure you it is not 24 hours. Electronic thermometers also have some delay. The resolution of the Nimbus (MMTS) is 0.1 °F and the accuracy is about 0.3 °F (span dependent). So how do you see with confidence that 0.2° difference?
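To illustrate the response-time argument (and only that; the time constants below are assumptions, not measured values for LIG or MMTS instruments), here is a toy first-order-lag model driven by a short synthetic temperature spike:

```python
# Toy first-order-lag sensor model; both time constants are assumed for illustration.
import numpy as np

def sensor_response(true_temp, dt_s, tau_s):
    """First-order lag: dT_sensor/dt = (T_true - T_sensor) / tau, forward Euler."""
    out = np.empty_like(true_temp)
    out[0] = true_temp[0]
    for i in range(1, len(true_temp)):
        out[i] = out[i - 1] + (dt_s / tau_s) * (true_temp[i] - out[i - 1])
    return out

dt = 10.0                                  # seconds between samples
t = np.arange(0, 3600, dt)                 # one hour
spike = 2.0 * np.exp(-((t - 1800.0) / 120.0) ** 2)   # ~2 C spike, a few minutes wide
true_temp = 30.0 + spike

fast = sensor_response(true_temp, dt, tau_s=30.0)    # assumed fast (electronic-like) sensor
slow = sensor_response(true_temp, dt, tau_s=240.0)   # assumed slow (LIG-like) sensor
print(f"true max {true_temp.max():.2f} C, fast sensor {fast.max():.2f} C, "
      f"slow sensor {slow.max():.2f} C")
```

The faster sensor records a daily maximum much closer to the true peak of the spike; whether real instruments differ by 0.2 °C is exactly the empirical question being argued here.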

Ron C.
July 5, 2014 9:00 am

Further to records of Tmax, a comment from some time ago:
ā–  Temperature is measured continuously and logged every 5 minutes, ensuring a true capture of Tmax/Tmin
That is why it is hotter in 2014 than in the 1930s… they were not measuring Tmaxes every five minutes in the '30s. I have downloaded the Oklahoma City hourly records daily since June 22nd, and never was the highest hourly reading what was recorded as the maximum of the day; the recorded maximum was consistently two degrees Fahrenheit greater than the highest HOUR, but evidently they count 5-minute mini-microbursts of heat today instead. I guess hourly averages are not even hot enough for them (yeah, blame it on CO2). That, by itself, invalidates all records being recorded today to me, I don't care how sophisticated their instruments are… the recording methods themselves have changed, and anyone can see it in the "3-Day Climate History" hourly readouts given for every city on their pages. Don't believe me? See for yourself what is going on in the maximums. Minimums rarely show this effect, since cold is the absence of thermal energy; heat can spike up for a few minutes in a way that cold readings seldom do.
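The sampling-interval point is easy to demonstrate with a synthetic day: take the same underlying temperature trace and compare the maximum of 5-minute samples with the maximum of top-of-the-hour readings. Everything below is made-up data, purely to show the mechanism:

```python
# Sketch: daily Tmax from 5-minute samples vs from top-of-the-hour readings,
# using a synthetic day with a few short warm excursions (made-up numbers).
import numpy as np

rng = np.random.default_rng(0)
t_min = np.arange(0, 24 * 60, 5)                                     # minutes since midnight
diurnal = 25.0 + 8.0 * np.sin((t_min - 9 * 60) * np.pi / (12 * 60))  # smooth daily cycle

# Add three ~10-minute warm gusts in the afternoon, away from the top of the hour.
gusts = np.zeros_like(diurnal)
candidates = t_min[(t_min > 12 * 60) & (t_min < 17 * 60) & (t_min % 60 != 0)]
for centre in rng.choice(candidates, 3, replace=False):
    gusts += 1.5 * np.exp(-((t_min - centre) / 10.0) ** 2)
temp = diurnal + gusts

tmax_5min   = temp.max()
tmax_hourly = temp[t_min % 60 == 0].max()
print(f"Tmax from 5-minute samples: {tmax_5min:.1f} C, "
      f"from top-of-the-hour readings: {tmax_hourly:.1f} C")
```

Whether this effect explains the Oklahoma City observations is a separate question; the sketch only shows that finer sampling can only raise, never lower, the recorded daily maximum.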

kadaka (KD Knoebel)
July 5, 2014 9:49 am

From Zeke Hausfather on July 4, 2014 at 5:29 pm:

There is still a 10,000+ station reporting network.
http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Figures/global-land-TAVG-Counts.pdf

Besides that file sloppily notifying viewers it originated from a WinDoze machine as "C:\Users\ROBERT~1\AppData\Local\Temp\tp69a51b4e_ec25_42c2_9f81_af39b86c036d.ps", the graph needlessly stuffed into a PDF shows a recent precipitous drop to only about 600 stations "Within Region".
The BEST site sidebar says:

Temperature Stations in Region
Active Stations: 17,444
Former Stations: 18,863

Inside the global land mean dataset it states:

% This analysis was run on 12-Oct-2013 00:45:15

% The current region is characterized by:
% Latitude Range: -90.00 to 83.46
% Longitude Range: -180.00 to 180.00
% Area: 145555387.95 km^2
% Percent of global land area: 98.926 %
% Approximate number of temperature stations: 36307
% Approximate number of obeservations: 14472170

It is interesting to note they have enough temperature station coverage to account for Antarctica including the pole which even satellites don’t cover.
That original graph does show a recent step down to about 10,000 stations before the precipitous drop to about 600. The sidebar does not indicate either of those.
As BEST is apparently keeping note of former and active stations, it would be helpful if the dataset could identify how many stations went into a particular month’s “global” mean. That lone 1743 monthly entry, for November, has a story worth telling, as does the smattering of “global” entries between April 1744 and April 1745 before the big nothingness until 1750. As it stands, it effectively overstates the reliability of the entries.
The precipitous drop could be an artifact of an automatically generated graph, i.e. a programming error, which apparently no one at BEST bothered to catch with a quick visual double-check before publishing.
And the “latest” dataset is 9 months old. Not only can’t BEST be bothered to do timely updates, they didn’t even set up automatic updates.
All in all, BEST is clearly not a dataset suitable for serious work, more at the level of a group hobby.
I shall stop using it for even informal comparisons.

Editor
July 5, 2014 10:01 am

peter azlac says:
July 5, 2014 at 6:16 am

Willis
Can you present your Ibuki data in the same map format as that given by JAXA so that we can see the European values more clearly.
http://global.jaxa.jp/projects/sat/gosat/topics.html#topics1840

Not sure what you mean by “in the same manner”, Peter.
w.

Ron C.
July 5, 2014 10:04 am

David in Cal says:
July 4, 2014 at 2:29 pm
“IMHO a better way to calculate the average change in temperature from year to year would be to average temperature changes, rather than temperatures.”
David, you are right. See my comment above at 6:20am for a link to a study that does what you propose.

Frank
July 5, 2014 10:16 am

“Clearly, replication by independent researchers would add confidence to the NCDC results.” What was BEST – if not replication by independent researchers who have been publicly critical of some aspects of the consensus?
The problem is that the historical record contains many examples of clear inhomogeneity (breakpoints) in the data that on average resulted in reporting of somewhat cooler temperatures going forward. The number of breakpoints being detected is surprisingly large – about one per decade at the average station (if my memory is correct). Only a small fraction can be due to understood phenomena like changes in TOB. When those breakpoints are corrected, 20th-century warming increases substantially (0.2 degC?). Should all these breakpoints be corrected? We don't know for sure, because we don't know what causes them: a) A sudden change to new reporting conditions? b) A gradual deterioration of conditions (bias) that is corrected by maintenance? c) Some combination of both.
It would be nice if Zeke and others would acknowledge that without metadata, they can't prove that breakpoint correction results in an improved temperature record. Breakpoint correction is an untested hypothesis, not a proven theory. They test breakpoint correction algorithms against artificial data containing known artifacts, but they don't know the cause of most of the breakpoints in the record. Comparing one station to neighbors, which also have an average of a [corrected] breakpoint per decade, also seems problematic. Getting accurate trends from such data is extremely difficult. Modesty seems more appropriate than hubris under these conditions.
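For readers unfamiliar with what "breakpoint correction" does in practice, here is a deliberately naive sketch (it is not NCDC's pairwise homogenization algorithm): find the single split that maximizes the difference in segment means, then shift the earlier segment to line up with the later one. On a synthetic series where the break is a known artificial step the adjustment recovers the underlying flat trend; on real data whose breaks have unknown causes, as Frank says, the same operation is a hypothesis rather than a demonstrated correction.

```python
# Naive single-breakpoint detection and adjustment on a synthetic annual series.
# Illustration only; real homogenization compares each station against neighbours
# and handles multiple breaks.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2000)
temp = rng.normal(0.0, 0.3, years.size)   # flat "true" climate, weather noise only
temp[60:] -= 0.5                          # pretend a 1960 station change added a -0.5 C step

# Find the split that maximizes the difference of segment means.
scores = [abs(temp[:i].mean() - temp[i:].mean()) for i in range(10, years.size - 10)]
brk = 10 + int(np.argmax(scores))
step = temp[brk:].mean() - temp[:brk].mean()

adjusted = temp.copy()
adjusted[:brk] += step                    # align the earlier segment with the recent one

print(f"detected break at {years[brk]}, estimated step {step:+.2f} C")
print(f"raw trend:      {np.polyfit(years, temp, 1)[0] * 100:+.2f} C/century")
print(f"adjusted trend: {np.polyfit(years, adjusted, 1)[0] * 100:+.2f} C/century")
```

The raw series shows a spurious cooling trend created entirely by the step; the adjusted series recovers roughly zero trend. The open question in Frank's comment is whether real-world breaks of unknown origin should all be treated the same way.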

richard verney
July 5, 2014 10:57 am

Willis Eschenbach says:
July 5, 2014 at 3:08 am
///////////////////
Willis
Your post looks significant, but the map makes detailed interpretation almost impossible. Can it be rescaled with the globe split into, say, four quarters? I would like to see each country in detail, or at any rate each continent. We can then compare that with population figures and per capita emissions.
My glance at your map suggests that the US is one of the lowest emitters, and Europe looks to be one of the highest. That certainly conflicts with per capita emissions (set out at the top of this article) and population data.
Australia seems to suffer least from distortion (due to its centred position), such that one can clearly identify the largest cities, and yet high emissions do not seem to correlate well with the highest density of population. In the south east (Melbourne, Canberra, Sydney etc.), CO2 is even blue, i.e. negative. The Northern Territory is sparsely populated (even its capital city, Darwin, has a population of less than 150,000), and yet the Northern Territory appears to have the highest levels of CO2.
Looking at Australia (this is not cherry picked; rather, it is the only country that is not distorted and whose geography I can readily identify), your assertion "As you can see, where there are concentrations of humans, we get CO2" is not well supported.
I would certainly like to see this important data presented in a clearer form. Thanks for your trouble.

July 5, 2014 12:04 pm

@Ivan at July 4, 12:47 pm

"Anthony Watts - proprietor of Watts Up With That, a website popular with climate change skeptics - tells me that he does not think that NCDC researchers are intentionally distorting the record"

What is exactly the evidence for this claim?
The statement: “he does not think that NCDC researchers are intentionally distorting” requires no evidence. It is in fact the Null Hypothesis. But it is a hypothesis worthy of testing scientifically.
The following statements, by contrast, would require evidence:
"he thinks that NCDC researchers are intentionally distorting"
"he KNOWS that NCDC researchers are not intentionally distorting"

Ron C.
July 5, 2014 12:09 pm

Frank says:
July 5, 2014 at 10:16 am
I have heard that breakpoint techniques, like BEST's scalpel, have the effect of amplifying whatever trend is already in the dataset. And since surface temperatures have been warming since the LIA, more warming is what you will get.

Editor
July 5, 2014 12:10 pm

richard verney says:
July 5, 2014 at 10:57 am

Willis Eschenbach says:
July 5, 2014 at 3:08 am
///////////////////
Willis
Your post looks significant, but the map makes detailed interpretation almost impossible. Can it be rescaled with the globe split into, say, four quarters? I would like to see each country in detail, or at any rate each continent. We can then compare that with population figures and per capita emissions.

Thanks, Richard. Can do, but it might take a while. Things are kinda crazy around here, and getting busier. I'm speaking next week at the ICCC-9 climate conference, and I have no idea what I'll say …

My glance at your map suggests that the US is one of the lowest emitters, and Europe looks to be one of the highest. That certainly conflicts with per capita emissions (set out at the top of this article) and population data.

You're looking at two very different measurements: emissions per capita and emissions per square metre. From memory, population density in Europe is an order of magnitude above that of the US.

Australia seems to suffer least from distortion (due to its centred position), such that one can clearly identify the largest cities, and yet high emissions do not seem to correlate well with the highest density of population. In the south east (Melbourne, Canberra, Sydney etc.), CO2 is even blue, i.e. negative. The Northern Territory is sparsely populated (even its capital city, Darwin, has a population of less than 150,000), and yet the Northern Territory appears to have the highest levels of CO2.
Looking at Australia (this is not cherry picked; rather, it is the only country that is not distorted and whose geography I can readily identify), your assertion "As you can see, where there are concentrations of humans, we get CO2" is not well supported.

First, you've misinterpreted what I said. I didn't say that where humans are is the ONLY source of CO2; there are obviously others.
Second, Australia has about half the population of California spread over an area the size of the US. Nowhere are there significant concentrations of people compared to, say, eastern China …

I would certainly like to see this important data presented in a clearer form. Thanks for your trouble

I'll likely write up a post on this … but I have an unsolved problem with their data. It is said to be in units of g/m2/day, with a global average of 0.026. Multiplying this by 5.11E14 (square metres of earth surface), dividing by 10^15 (grams to gigatonnes), and multiplying by 365.25 (days/year) gives us about 4.8 gigatonnes of carbon emitted per year … which is about half the conventional estimate.
Of course, that may just be inaccuracy in the IBUKI data, and it may be because there's only one year of data, but I want to look into it a bit further. And I want to do a country-by-country analysis, but that's gonna take some code writing … could be a while. So many drummers … so little time.
My best to you,
w.
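Willis's unit conversion is easy to reproduce. The snippet below only restates the arithmetic he describes above; the 0.026 g/m²/day figure is his quoted global average from the IBUKI data, not something independently verified here:

```python
# Reproducing the back-of-the-envelope conversion described above.
mean_flux_g_m2_day = 0.026     # quoted global-average flux (g C per m^2 per day)
earth_surface_m2   = 5.11e14   # total surface area of the Earth (m^2)
days_per_year      = 365.25
g_per_gigatonne    = 1e15

gtc_per_year = mean_flux_g_m2_day * earth_surface_m2 * days_per_year / g_per_gigatonne
print(f"{gtc_per_year:.2f} GtC/yr")   # about 4.85, roughly half the conventional estimate
```

The arithmetic itself checks out, so the factor-of-two gap has to come from the data or its documentation, which is the open question Willis flags.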

July 5, 2014 12:22 pm

<a href="http://wattsupwiththat.com/2014/07/04/practicing-the-dark-art-of-temperature-trend-adjustment/#comment-1676385"D. Cohen, the reason you don’t read about “possible errors in the temperature-adjustment process” is climate scientists assume the central limit theory applies to the systematic error in the global temperature measurements. So, they spend a lot of time trying to estimate an average bias. When that bias is subtracted, it’s assumed the residual error is normally distributed around zero, and so averages to near zero.
There’s no substantial basis for that assumption, and it’s never discussed in any detail at all in the literature. But once in awhile one reads an author saying that ‘systematic error is as often positive as negative, and so tends to zero,’ or something to that effect. It’s a very convenient hand-waving argument, it’s widely accepted in the field, it’s never been tested or demonstrated, and it lets workers in the field go on to do superficially quantitative but substantively meaningless work.
Eventually that body of work, and the portentous conclusion-mongering it has allowed, will be put apart from science as a recognized elaboration of nonsense, and become a prime object of sociological study about how culture and its pressures can cause scientists themselves (not all, fortunately) to set aside the plain and obvious methods of science and nevertheless call their work science.
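Pat Frank's distinction is easy to illustrate numerically: independent random error shrinks roughly as 1/sqrt(N) when many readings are averaged, but a shared systematic bias passes through the average untouched. A sketch with made-up numbers:

```python
# Random error averages down with station count; a shared systematic bias does not.
import numpy as np

rng = np.random.default_rng(42)
true_value = 15.0
n_stations = 1000

random_err     = rng.normal(0.0, 0.5, n_stations)   # independent, zero-mean noise
systematic_err = 0.3                                 # same unknown bias at every station

readings = true_value + random_err + systematic_err
err_of_mean = readings.mean() - true_value
print(f"mean of {n_stations} readings: {readings.mean():.3f} (true value {true_value})")
print(f"error of the mean: {err_of_mean:+.3f} "
      f"(expected random part ~{0.5 / np.sqrt(n_stations):.3f}, "
      f"systematic part {systematic_err:+.3f})")
```

Whether the real network's systematic errors cancel in this way is precisely the untested assumption the comment above objects to.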

July 5, 2014 12:28 pm

@sunshinehours1 at 6:45 am
I'll give NOAA credit for something. Their new webpage allows graphing Min/Max and Avg. And it goes back to 1895.
I beg to differ. The panel options "Display Base Period" and "Display Trend", each with inputs for years, are very deceptive. The result, regardless of inputs, is the trend from 1895 to 2014.
Input average temperatures for the period 1998 to 2010, and use these years for the options as well.
The plotted trend is clearly counter to the plotted data. What is shown is NOT wrong (the legend says in the fine print that it is 1895-2014), but neither is it what you asked for using the input parameters THEY PROVIDED. That makes it deceptive.
http://www.ncdc.noaa.gov/cag/time-series/us/110/00/tmin/1/07/1895-2014?base_prd=true&firstbaseyear=1901&lastbaseyear=2000
http://www.ncdc.noaa.gov/cag/time-series/us/110/00/tmin/1/07/1895-2014?base_prd=true&firstbaseyear=1998&lastbaseyear=2010
Changing the parameters in the URL doesn’t make any difference.

July 5, 2014 12:32 pm

Richard, the FOIA link to your email, as provided in your submission, no longer works. Your email can be found at: http://di2.nu/foia/1069630979.txt

richardscourtney
July 5, 2014 12:44 pm

Pat Frank:
Thank you for the information in your post at July 5, 2014 at 12:32 pm, which says

Richard, the FOIA link to your email, as provided in your submission, no longer works. Your email can be found at: http://di2.nu/foia/1069630979.txt

However, the Parliamentary Submission is a Hansard record so is permanent, the Submission says the email is part of the Climategate leak, and the email is included as Appendix A of the Submission.
Richard

July 5, 2014 12:52 pm

@Willis Eschenbach at 3:08 am
Is there an easy way to make the plot with 0 deg Longitude as the center?
If the connection of CO2 to population is being made, then putting the populous portion of Eurasia on the edge of the plot is not as good as rotating it 180 degrees toward the center.
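If the gridded data are on a regular 0-360° longitude grid, re-centering the plot on 0° longitude is usually just a roll of the array along the longitude axis. A sketch with stand-in data (the array names and grid spacing are assumptions, not Willis's actual variables):

```python
# Sketch: rotate a lat/lon field so 0 deg longitude sits at the centre of the map.
# `field` stands in for the gridded CO2 data; shape (nlat, nlon) on a 0..360 grid.
import numpy as np

nlat, nlon = 90, 180
lons = np.linspace(0.0, 360.0, nlon, endpoint=False)
field = np.random.rand(nlat, nlon)            # placeholder for the real gridded values

shift = nlon // 2                             # move the 180 deg meridian out to the edges
field_centred = np.roll(field, shift, axis=1)
lons_centred = np.roll(((lons + 180.0) % 360.0) - 180.0, shift)  # runs -180..178, 0 in the middle
# Plotting field_centred against lons_centred puts Europe and Africa at the map centre.
```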

July 5, 2014 2:11 pm

Stephen Rasey … the NOAA panel works. Unfortunately the panel ignores the settings saved in URLs, so you have to change the panel manually even if the URL has the panel settings.

richard verney
July 5, 2014 6:03 pm

Willis Eschenbach says:
July 5, 2014 at 12:10 pm
////////////////////////
Willis
Thanks. Please take your time.
I am not suggesting that your conclusion is wrong, and I am extremely sceptical that Terri Jackson’s assertion is correct. That would seem a stretch to me, and I would certainly wish to see something extremely compelling before being led towards his conclusion.
I consider that it would make an interesting article, and you could shed a lot of light on it, depending upon the quality of the data. Obviously you raise a legitimate issue, but estimates can often be wildly wrong; still, that wrong…?
If you can get a proper handle on the data, perhaps this could be linked with IR data (i.e., the data on DWR and OLR) to see whether there is any correlation between 'hot' and 'cold' spots of CO2 and DWR and/or OLR. It will also cast light on the extent to which CO2 is a well-mixed gas, although that is to some extent a matter of subjective interpretation, depending upon what one means by the term "well mixed".
PS. I recall that you posted an article (probably this year) on DWR and OLR data and your take on it, so much of the groundwork for such a comparison has probably been done.
PPS. A long time ago, you uploaded an index of your articles. I don't think that it has been updated (although I may be wrong on that), and given that you have written so many articles, which I suspect many would like to revisit from time to time as and when related issues arise, it would be good to have an updated index. I would suggest that it be sorted in three ways: by subject matter, alphabetically, and chronologically. Having three index listings would make it easy for people to track down what they are looking for.
As you are a main contributor to WUWT in recent years, I consider that it would be extremely useful if WUWT had your articles easily referenced and searchable, since they are a valuable source of information (especially since you have developed a tendency to link your data).

peter azlac
July 6, 2014 12:00 am

“Willis Eschenbach says:
July 5, 2014 at 10:01 am
peter azlac says:
July 5, 2014 at 6:16 am
Willis
Can you present your Ibuki data in the same map format as that given by JAXA so that we can see the European values more clearly.
http://global.jaxa.jp/projects/sat/gosat/topics.html#topics1840
Not sure what you mean by "in the same manner", Peter."
Willis, note that I did not use the word "manner" but "map format", of which there are many:
http://en.wikipedia.org/wiki/List_of_map_projections
Your map is great if you do not live in Europe, but for those of us who do, your use of what looks like a Lambert cylindrical equal-area projection has the effect of compressing Europe such that it is not possible to see which countries are the main emitters, or whether emissions are related to latitude. In addition, the view you have chosen cuts Europe in half, which is no doubt fine for your purpose. The reference I gave to the JAXA map shows all regions clearly, as would the use of the Robinson, Natural Earth or Van der Grinten projections.
This is not a criticism, just a request for clarity for us in Europe, since the EU is leading the charge on 'climate reparations' at our cost - the Ibuki data for the EU may well by now be obsolete, as they have so far succeeded in moving a large part of our CO2-emitting industry to the USA (Texas) as well as to India and China for steel and aluminium. However, I see from your post today that you give the source of your code, so we can now make our own maps. One comment on today's post: how much do you think the emissions of CO2 from the oceans have affected the measurements at Mauna Loa and in the Arctic area, if you can compute that from the JAXA data?
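For anyone wanting to redraw the map in one of the projections mentioned above, a minimal matplotlib/cartopy sketch follows. The Robinson projection is just one of the options peter azlac lists, and `lons`, `lats`, `co2` are stand-ins for the gridded IBUKI values, not the actual dataset:

```python
# Minimal sketch: plot a gridded field on a Robinson projection with cartopy.
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

lons = np.linspace(-180.0, 180.0, 181)
lats = np.linspace(-90.0, 90.0, 91)
co2 = np.random.rand(lats.size, lons.size)        # placeholder for the IBUKI grid

ax = plt.axes(projection=ccrs.Robinson(central_longitude=0))
ax.coastlines()
mesh = ax.pcolormesh(lons, lats, co2, transform=ccrs.PlateCarree(),
                     cmap="RdBu_r", shading="auto")
plt.colorbar(mesh, orientation="horizontal", label="CO2 flux (arbitrary units)")
plt.show()
```

Swapping `ccrs.Robinson` for another cartopy projection class changes only the one constructor call, so comparing the projections Peter lists is straightforward.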

July 6, 2014 9:29 am

“PS. How does one make past data more precise? Precision and accuracy are pretty much determined at the times of construction and measurement.”
Bob Greene has said it all. If you need better data, get a better instrument. There is just no other way. “Zombie stations,” are you kidding me?
This is all politics now. Senator Inhofe, where are you? How about a Senate hearing where he gets Morano to quiz these clowns on C-SPAN…