Did Federal Climate Scientists Fudge Temperature Data to Make It Warmer?
Ronald Bailey of Reason Magazine writes:
The NCDC also notes that all the changes to the record have gone through peer review and have been published in reputable journals. The skeptics, in turn, claim that a pro-warming confirmation bias is widespread among orthodox climate scientists, tainting the peer review process. Via email, Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record.
But he believes that the researchers have likely succumbed to this confirmation bias in their temperature analyses. In other words, he thinks the NCDC's scientists do not question the results of their adjustment procedures because they report the trend the researchers expect to find. Watts wants the center's algorithms, computer coding, temperature records, and so forth to be checked by researchers outside the climate science establishment.
Clearly, replication by independent researchers would add confidence to the NCDC results. In the meantime, if the Heller episode proves nothing else, it is that we can continue to expect confirmation bias to pervade nearly every aspect of the climate change debate.
Read it all here: http://reason.com/archives/2014/07/03/did-federal-climate-scientists-fudge-tem
American science at its finest. With excuses. We only get huffy when we think people in other countries are doing it. Then we’re vociferously anti-GIGO.
Never ascribe to malice what can be explained by confirmation bias …
Willis. Inspired.
+1
Personally I think it is much more likely that all of this is purposeful than not. We are told catastrophe is undeniable, we are told there has been in-depth debate and that all the questions are answered. The only thing left is for deniers to shut up and severely limit the first world, and keep the third world on its knees forever.
Terri Jackson says:
July 5, 2014 at 1:29 am
Thanks, Terri. I went to your website, and to the Japanese IBUKU website, and dug deep on Google, and nowhere did I find the actual data. Oh, there’s lots of pretty pictures, but where can I find the actual values, month by month, of CO2 fluxes of the various regions?
Because until I see the numbers and analyze them myself … it’s just pretty pictures, none of which show net annual fluxes, and I don’t make claims based on pretty pictures.
Finally, I can’t see how you get from the IBUKU data to saying that their pretty pictures “completely negates the claim that carbon dioxide in the atmosphere is coming from humans”. To me, it shows the opposite—as best as I can tell from their cruddy pictures, all of the identified locations of positive net CO2 flows are areas of high human density … I doubt very much that that is a coincidence, but of course without the data I would never make a sweeping statement such as yours.
Any assistance in finding the data gladly accepted …
w.
OK, Terri, I located the IBUKU data and mapped it up. Here are the results:

As you can see, where there are concentrations of humans, we get CO2. Some is from biomass burning, some is from fossil fuel burning, some is from cement production.
Now, you can certainly make the case that this shows that humans in the developing world are a major contributor to CO2, and that the common meme that the developed nations are to blame is not true … but you can't make the case that it "completely negates the claim that carbon dioxide in the atmosphere is coming from humans",
as you state above.
Best regards, and thanks for the pointer to the IBUKU dataset,
w.
Does anyone know how many stations NCDC/GISS/CRU etc. have where there was a reasonable overlap period between the use of the traditional glass thermometers and the MMTS thermometer replacements, and what the analyses of the overlaps show? This seems to me the very minimum action that should have been taken; if it was, there should be little need for all the BEST etc. adjustments, which should have been done for the individual stations.
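For illustration only, here is a minimal sketch of the kind of overlap analysis being asked about, using made-up side-by-side readings rather than any real station data; the per-station adjustment would simply be the mean paired difference over the overlap period:

```python
import numpy as np

# Hypothetical side-by-side daily Tmax readings (deg C) during an overlap
# period when both instruments were operating at the same station.
rng = np.random.default_rng(0)
truth = 20 + 8 * np.sin(np.linspace(0, 2 * np.pi, 365))    # seasonal cycle
lig = truth + rng.normal(0.0, 0.3, truth.size)             # liquid-in-glass
mmts = truth - 0.4 + rng.normal(0.0, 0.3, truth.size)      # electronic, with an assumed -0.4 C bias

# The per-station adjustment is the mean paired difference over the overlap.
offset = np.mean(mmts - lig)
stderr = np.std(mmts - lig, ddof=1) / np.sqrt(truth.size)
print(f"MMTS minus LIG offset: {offset:+.2f} C (standard error {stderr:.2f} C)")
```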
Mike T says:
July 4, 2014 at 5:21 pm
From the linked article: “They’ve clarified a lot this way. For example, simply shifting from liquid-in-glass thermometers to electronic maximum-minimum temperature systems led to an average drop in maximum temperatures of about 0.4°C and to an average rise in minimum temperatures of 0.3°C”. This is the opposite of the real effect. Electronic sensors generally read higher than liquid-in-glass thermometers for the maximum, and usually lower for the minimum.
________________________________________________________________________
The only reason a well-maintained electronic temperature min-max would be different from liquid in glass is a continuous recording feature. Both the liquid-in-glass and electronic thermometers should read the true temperature in some reasonably tight range above and below. That’s assuming the manufacturing processes were in statistical control. If there is any real difference between the two, then I’d suggest poor calibrations, drift corrections or differences in housings. Surely all this was addressed before we put these out.
PS. How does one make past data more precise? Precision and accuracy are pretty much determined at the times of construction and measurement.
Bob, I’ve addressed the difference between liquid-in-glass and electronic sensors elsewhere, but to reiterate, it’s my feeling that mercurial maximum thermometers aren’t as sensitive as electronic probes. In other words, around the time of TMax, there is inertia in the mercury which prevents it getting to the same temp as the more sensitive probe. Also, due to contraction of the mercury in the column (as opposed to the bulb), mercurial max thermometers read slightly lower the next day. Usually, the previous day’s TMax is used, but some stations may only read the Max at 0900, when it’s reset along with the min thermometer. So it could be 0.2 to 0.4 degrees lower than the electronic probe; assuming the station has both types, the probe is the “official” temperature.
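To make the sensitivity argument concrete, here is a rough, purely illustrative simulation: a brief afternoon temperature spike passed through two first-order sensor responses, one fast (standing in for an electronic probe) and one slow (standing in for a sluggish mercury maximum thermometer). The time constants are assumptions chosen for illustration, not measured values:

```python
import numpy as np

def lagged_max(air_temp, dt, tau):
    """Peak reading of a first-order sensor with time constant tau (seconds)."""
    reading = air_temp[0]
    peak = reading
    for t in air_temp[1:]:
        reading += (t - reading) * dt / tau   # simple first-order lag
        peak = max(peak, reading)
    return peak

dt = 10.0                                              # 10-second time steps
t = np.arange(0, 3600, dt)                             # one hour around the daily maximum
air = 35.0 + 1.0 * np.exp(-((t - 1800) / 300) ** 2)    # brief 1 C spike on a 35 C afternoon

print("true peak    :", round(air.max(), 2))
print("fast probe   :", round(lagged_max(air, dt, tau=30.0), 2))    # assumed ~30 s time constant
print("slow LIG max :", round(lagged_max(air, dt, tau=300.0), 2))   # assumed ~300 s time constant
```

The faster sensor captures more of the short spike, so it records a higher Tmax for the same air temperature history.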
Dale Hartz says:
July 4, 2014 at 1:04 pm
How can the time of observation change the temperature for a day?
_____________________________________________________
Midnight today where you are isn’t midnight one time zone (or further) either side of you, and so forth. Maybe they want the temperatures all taken at the same time? Since they aren’t, maybe they try to adjust them?
“I always adjust my temperatures to match the nearest city,” say the screencaps for my town on weather.com. Check yours; they are always adjusting things. I hope they use that technique when confronted with a speeding ticket ;).
A C Osborn forgot to explain that Steve McIntyre actually wrote in the same post, when referring to TOBS: “Yes, overall it slightly increases the trend, but this has a rational explanation based on historical observation times. Again I don’t see a big or even small issue.”
As Anthony indicates there is perhaps/probably more bias (unintentional) from infilling than from TOBS.
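For readers unfamiliar with the mechanism, here is a back-of-the-envelope simulation of the time-of-observation effect using synthetic hourly temperatures (the diurnal cycle and day-to-day variability are assumed values): resetting a maximum thermometer at 17:00 lets one hot afternoon be counted on two successive days, producing a warm bias relative to midnight-to-midnight days:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 1000
hours = np.arange(24)
diurnal = -5 * np.cos(2 * np.pi * (hours - 3) / 24)    # coolest ~03:00, warmest ~15:00
base = 20 + rng.normal(0, 4, n_days)                   # day-to-day weather variation
temps = base[:, None] + diurnal[None, :]               # hourly temperatures, shape (days, 24)
flat = temps.ravel()

# Convention 1: calendar days, midnight to midnight.
tmax_midnight = temps.max(axis=1)

# Convention 2: the observer reads and resets the max thermometer at 17:00,
# attributing the value to the day of observation; a hot late afternoon can
# therefore be counted on two successive days.
tmax_afternoon = np.array([flat[d * 24 - 7 : d * 24 + 17].max() for d in range(1, n_days)])

print("mean Tmax, midnight obs :", round(tmax_midnight[1:].mean(), 2))
print("mean Tmax, 17:00 obs    :", round(tmax_afternoon.mean(), 2))
print("apparent warm bias      :", round(tmax_afternoon.mean() - tmax_midnight[1:].mean(), 2))
```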
I like David in Cal’s approach. But then actuaries are trained to examine the raw data and determine how to make best use of it while introducing the minimum of error. They are also trained to reject spurious accuracy. As a fellow professional, I have always felt that all long-term temperature forecasts based on climate science models are invalid because they assume the data to be more accurate than is justified.
David in Cal’s approach would obviate the need for messy infilling, TOBS, etc., and would at least provide a sounder basis for the trend lines that litter so many charts. The trend lines are fairly worthless except to tell us what has happened in the past and what might happen in the future if, by some miracle, the trend continued unabated.
Lorenz found that limiting his readings on only twelve variables to three rather than six decimal places so significantly affected his longer term (more than three months) forecasts that they were worthless. From the standard literature I once calculated that there were not twelve but more than forty variables that could affect climate. Enough said?
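The Lorenz point is easy to reproduce in miniature. The sketch below integrates the Lorenz-63 system (with a simple forward-Euler step, adequate only for illustration) from two initial states that differ only by rounding to three decimal places; the separation grows until the two forecasts are effectively unrelated:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (illustration only)."""
    x, y, z = state
    dxdt = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * dxdt

full = np.array([1.234567, 2.345678, 20.345678])   # "six decimal place" start
rounded = np.round(full, 3)                        # the three-decimal restart

for step in range(4001):
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(full - rounded):8.4f}")
    full = lorenz_step(full)
    rounded = lorenz_step(rounded)
```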
Willis
Can you present your Ibuki data in the same map format as that given by JAXA, so that we can see the European values more clearly?
http://global.jaxa.jp/projects/sat/gosat/topics.html#topics1840
How about a statistical analysis of land surface temperatures where each site is treated as a distinct microclimate. I have always been uncomfortable with the adjusting, anomalizing and homogenizing of land surface temperature readings in order to get global mean temperatures and trends. Years ago I came upon Richard Wakefield’s work on Canadian stations in which he analyzed the trend longitudinally in each station, and then compared the trends. This approach respects the reality of distinct microclimates and reveals any more global patterns based upon similarities in the individual trends. It is actually the differences between microclimates that inform, so IMO averaging and homogenizing is the wrong way to go.
In Richard’s study he found that in most locations over the last 100 years, extreme Tmaxs (>+30C) were less frequent and extreme Tmins (<-20C) were less frequent. Monthly Tmax was in a mild lower trend, while Tmin was strongly trending higher, resulting in a warming monthly average in most locations. Also, winters were milder, springs earlier and autumns later. His conclusion: What's not to like?
Now I have found that in July 2011, Lubos Motl did a similar analysis of HADCRUT3. He worked with the raw data from 5000+ stations with an average history of 77 years. He calculated for each station the trend for each month of the year over the station lifetime. The results are revealing. The average station had a warming trend of +0.75C/century +/- 2.35C/century. That value is similar to other GMT calculations, but the variability shows how much homogenization there has been. In fact 30% of the 5000+ locations experienced cooling trends.
Conclusions:
"If the rate of the warming in the coming 77 years or so were analogous to the previous 77 years, a given place XY would still have a 30% probability that it will cool down – judging by the linear regression – in those future 77 years! However, it's also conceivable that the noise is so substantial and the sensitivity is so low that once the weather stations add 100 years to their record, 70% of them will actually show a cooling trend.
Isn't it remarkable? There is nothing "global" about the warming we have seen in the recent century or so. The warming vs cooling depends on the place (as well as the month, as I mentioned) and the warming places only have a 2-to-1 majority while the cooling places are a sizable minority.
Of course, if you calculate the change of the global mean temperature, you get a positive sign – you had to get one of the signs because the exact zero result is infinitely unlikely. But the actual change of the global mean temperature in the last 77 years (in average) is so tiny that the place-dependent noise still safely beats the "global warming trend", yielding an ambiguous sign of the temperature trend that depends on the place."
http://motls.blogspot.ca/2011/07/hadcrut3-30-of-stations-recorded.html
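The mechanics of Motl's per-station analysis are straightforward to sketch. The toy example below fits an ordinary least-squares trend to each synthetic station record and reports the mean, spread, and cooling fraction; the noise level is an arbitrary assumption, so the numbers are not meant to reproduce his figures:

```python
import numpy as np

rng = np.random.default_rng(2)
n_stations, n_years = 5000, 77
years = np.arange(n_years)

# Synthetic annual anomalies: a small shared trend plus large station-level
# noise. Both values are arbitrary assumptions for the sake of the sketch.
true_trend = 0.75 / 100.0                              # deg C per year
data = true_trend * years + rng.normal(0, 1.5, (n_stations, n_years))

# Ordinary least-squares slope for each station, converted to C/century.
x = years - years.mean()
slopes = (data @ x) / (x @ x) * 100.0

print(f"mean trend    : {slopes.mean():+.2f} C/century")
print(f"spread (std)  : {slopes.std():.2f} C/century")
print(f"cooling share : {100 * (slopes < 0).mean():.0f}% of stations")
```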
IF it is Confirmation Bias that has resulted in the distorted weather data then they are incompetent. Fire them ! Start holding these clowns accountable and maybe, just maybe, we will end up with an honest weather service for once.
IBUKU?…..
Chiefio penned this already………Japanese Satellites say 3rd World Owes CO2 Reparations to The West…..don’t you guys remember?
http://chiefio.wordpress.com/2011/10/31/japanese-satellites-say-3rd-world-owes-co2-reparations-to-the-west/
I’ll give NOAA credit for something. Their new webpage allows graphing Min/Max and Avg. And it goes back to 1895. And the mapping is good.
Even with TOBS and all the other adjustments, it shows that the decade with the hottest Julys was the 1930s.
http://www.ncdc.noaa.gov/cag/time-series/us/110/00/tmax/1/07/1895-2014?base_prd=true&firstbaseyear=1901&lastbaseyear=2000
And it shows the classic UHI signature of higher minimums in the present.
http://www.ncdc.noaa.gov/cag/time-series/us/110/00/tmin/1/07/1895-2014?base_prd=true&firstbaseyear=1901&lastbaseyear=2000
It has come out in the discussion of the Luling station that instrumental failures at stations are not included in the station records. This greatly complicates the effort to find good stations. It will probably take either a government funded effort, not likely, or crowd sourcing to thoroughly investigate each station, including trying to find people who know about station integrity.
The NCDC also notes that all the changes to the record have gone through peer review and have been published in reputable journals.
So was “Mann’s ‘Nature’ Trick”, which is why, in the field of climate science at any rate, “peer review” is now more commonly understood to mean “pal review”.
Mike T says:
July 5, 2014 at 6:38 am
Bob, I’ve addressed the difference between liquid-in-glass and electronic sensors elsewhere, but to reiterate, it’s my feeling that mercurial maximum thermometers aren’t as sensitive as electronic probes. In other words, around the time of TMax, there is inertia in the mercury which prevents it getting to the same temp as the more sensitive probe. Also, due to contraction of the mercury in the column (as opposed to the bulb), mercurial max thermometers read slightly lower the next day. Usually, the previous day’s TMax is used, but some stations may only read the Max at 0900, when it’s reset along with the min thermometer. So it could be 0.2 to 0.4 degrees lower than the electronic probe; assuming the station has both types, the probe is the “official” temperature.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The mass differences may well allow the electronic thermometer to react faster to transients. I don’t have any data on response times of the LIG or MMTS thermometers. Mercury contracting in the glass before the bulb? The capillary is much more insulated than the reservoir (bulb), and it is the thermal expansion and contraction in the bulb that moves the mercury in the capillary. How else could I stick the bulb of a thermometer in an oven at 110°C that is in a room at 25°C, read the oven temperature, and have the glass outside the oven closer to room temperature? There is a delay in mercury thermometers going from a sudden high to low because you cool the mass in the bulb, but I assure you it is not 24 hours. Electronic thermometers also have some delay. The resolution of the Nimbus (MMTS) is 0.1°F and the accuracy is about 0.3°F (span dependent). So, how do you see that 0.2° difference with confidence?
Further to records of Tmax, a comment from some time ago:
■ Temperature is measured continuously and logged every 5 minutes, ensuring a true capture of Tmax/Tmin
That is why it is hotter in 2014 than in the 1930s: they were not measuring Tmax every five minutes in the ’30s. I have downloaded the Oklahoma City hourly records daily since June 22nd, and never was the highest hourly reading what was recorded as the maximum of the day; the recorded maximum was consistently two degrees Fahrenheit greater than that of the highest HOUR. Evidently they count 5-minute mini-microbursts of heat today instead. I guess hourly averages are not even hot enough for them (yeah, blame it on CO2). That, by itself, invalidates all records being recorded today to me; I don’t care how sophisticated their instruments are, the recording methods themselves have changed, and anyone can see it in the “3-Day Climate History”, the hourly readouts given for every city on their pages. Don’t believe me? See for yourself what is going on in the maximums. Minimums rarely show this effect, for cold is the absence of thermal energy, not the energy itself, which can peak up for a few minutes far more than cold readings do.
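The sampling point is easy to check in principle: the hourly readings are a subset of the 5-minute readings, so finer sampling can only raise (never lower) the recorded daily maximum. A small synthetic example, with an assumed diurnal cycle plus short-lived fluctuations:

```python
import numpy as np

rng = np.random.default_rng(3)
minutes = np.arange(0, 24 * 60, 5)                               # one day at 5-minute resolution
diurnal = 25 + 8 * np.cos(2 * np.pi * (minutes / 60 - 15) / 24)  # warmest near 15:00
temps = diurnal + rng.normal(0, 0.6, minutes.size)               # short-lived fluctuations

tmax_5min = temps.max()
tmax_hourly = temps[::12].max()          # keep only the on-the-hour samples

print("Tmax from 5-minute samples :", round(tmax_5min, 1))
print("Tmax from hourly samples   :", round(tmax_hourly, 1))
# The finer series can never record a lower maximum, because the hourly
# samples are a subset of the 5-minute samples.
```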
From Zeke Hausfather on July 4, 2014 at 5:29 pm:
Besides that file sloppily notifying viewers it originated from a WinDoze machine as “C:\Users\ROBERT~1\AppData\Local\Temp\tp69a51b4e_ec25_42c2_9f81_af39b86c036d.ps”, the graph, needlessly stuffed into a pdf, shows there was a recent precipitous drop to only about 600 stations “Within Region”.
The BEST site sidebar says:
Inside the global land mean dataset it states:
It is interesting to note they have enough temperature station coverage to account for Antarctica, including the pole, which even the satellites don’t cover.
That original graph does show a recent step down to about 10,000 stations before the precipitous drop to about 600. The sidebar does not indicate either of those.
As BEST is apparently keeping note of former and active stations, it would be helpful if the dataset could identify how many stations went into a particular month’s “global” mean. That lone 1743 monthly entry, for November, has a story worth telling, as does the smattering of “global” entries between April 1744 and April 1745 before the big nothingness until 1750. As it stands, it effectively overstates the reliability of the entries.
The precipitous drop could be a programming issue with an automatically generated graph; if so, it is a programming error that apparently no one at BEST bothered to catch with a quick visual double-check before publishing.
And the “latest” dataset is 9 months old. Not only can’t BEST be bothered to do timely updates, they didn’t even set up automatic updates.
All in all, BEST is clearly not a dataset suitable for serious work, more at the level of a group hobby.
I shall stop using it for even informal comparisons.
peter azlac says:
July 5, 2014 at 6:16 am
Not sure what you mean by “in the same manner”, Peter.
w.
David in Cal says:
July 4, 2014 at 2:29 pm
“IMHO a better way to calculate the average change in temperature from year to year would be to average temperature changes, rather than temperatures.”
David, you are right. See my comment above at 6:20am for a link to a study that does what you propose.
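For what it’s worth, here is a minimal sketch of the idea on synthetic data (all numbers assumed): when cold stations drop out of the network, averaging absolute temperatures produces a spurious warm step, while averaging each station’s year-over-year changes and then accumulating them does not:

```python
import numpy as np

rng = np.random.default_rng(4)
n_stations, n_years = 50, 60
years = np.arange(n_years)

# Synthetic station records: very different baseline climates, the same
# modest shared trend (+1 C/century, an assumed value), plus weather noise.
baselines = rng.uniform(-5, 25, n_stations)[:, None]
series = baselines + 0.01 * years + rng.normal(0, 0.3, (n_stations, n_years))

# Suppose the ten coldest stations stop reporting halfway through the record.
coldest = np.argsort(baselines.ravel())[:10]
series[coldest, n_years // 2:] = np.nan

# Naive approach: average the absolute temperatures reported each year.
# The dropout of cold stations produces a spurious warm step.
naive = np.nanmean(series, axis=0)

# First-difference approach: average each station's year-over-year change,
# then accumulate. Baselines cancel, so dropouts create no false step.
diffs = np.diff(series, axis=1)
anomaly = np.concatenate([[0.0], np.cumsum(np.nanmean(diffs, axis=0))])

halfway_step = lambda x: x[n_years // 2:].mean() - x[:n_years // 2].mean()
print("apparent warming, naive averaging   :", round(halfway_step(naive), 2), "C")
print("apparent warming, first differences :", round(halfway_step(anomaly), 2), "C")
```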
Zeke Hausfather wrote: “Actually NCDC makes all their papers available for free. You can find all the USHCN ones here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/”
Zeke, that isn’t fully true; Vose 2014 isn’t there. Judith Curry recently complained about it being behind the AMS paywall.
She wrote:
“A new paper has been published by the NOAA group that is unfortunately behind paywall: Improved Historical Temperature and Precipitation Time Series for U.S. Climate Divisions (you can read the abstract on the link). ”
http://journals.ametsoc.org/doi/abs/10.1175/JAMC-D-13-0248.1
“Clearly, replication by independent researchers would add confidence to the NCDC results.” What was BEST – if not replication by independent researchers who have been publicly critical of some aspects of the consensus?
The problem is that the historical record contains many examples of clear inhomogeneity (breakpoints) in the data that on average resulted in reporting of somewhat cooler temperatures going forward. The number of breakpoints being detected is surprisingly large – about one per decade at the average station (if my memory is correct). Only a small fraction can be due to understood phenomena like changes in TOB. When those breakpoints are corrected, 20th-century warming increases substantially (0.2 degC?). Should all these breakpoints be corrected? We don’t know for sure, because we don’t know what causes them: a) a sudden change to new reporting conditions; b) a gradual deterioration of conditions (bias) that is corrected by maintenance; or c) some combination of both.
It would be nice if Zeke and others would acknowledge that without metadata, they can’t prove that breakpoint correction results in an improved temperature record. Breakpoint correction is an untested hypothesis, not a proven theory. They test breakpoint correction algorithms against artificial data containing known artifacts, but they don’t know the cause of most of the breakpoints in the record. Comparing one station to neighbors, which also have an average of a [corrected] breakpoint per decade, also seems problematic. Getting accurate trends from such data is extremely difficult. Modesty seems more appropriate than hubris under these conditions.
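As a purely illustrative sketch of the neighbor-difference idea behind pairwise homogenization (not NCDC's actual algorithm), the toy example below differences a synthetic target station against a neighbor composite and locates a single step change by maximizing a two-sample t statistic; with real data the hard part is exactly what is argued above, deciding whether such a break should be corrected at all:

```python
import numpy as np

rng = np.random.default_rng(5)
n_years = 80

# Synthetic annual anomalies: target and neighbors share the regional signal,
# but the target picks up an abrupt -0.5 C shift (say, a station move) in year 45.
regional = np.cumsum(rng.normal(0, 0.15, n_years))       # shared low-frequency wiggle
neighbors = regional + rng.normal(0, 0.2, (10, n_years))
target = regional + rng.normal(0, 0.2, n_years)
target[45:] -= 0.5                                       # the inhomogeneity to detect

# Target minus the neighbor composite removes the shared climate signal,
# leaving noise plus any station-specific break.
diff = target - neighbors.mean(axis=0)

def t_stat(k):
    """Two-sample t statistic for a split of the difference series at year k."""
    a, b = diff[:k], diff[k:]
    pooled = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return abs(a.mean() - b.mean()) / pooled

best = max(range(5, n_years - 5), key=t_stat)
print("estimated break year :", best)
print("estimated step size  :", round(diff[best:].mean() - diff[:best].mean(), 2), "C")
```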
Willis Eschenbach says:
July 5, 2014 at 3:08 am
///////////////////
Willis
Your post looks significant, but the map makes detailed interpretation almost impossible. Can it be rescaled with the globe split into, say, four quarters? I would like to see each country in detail, or at any rate each continent. We can then compare that with population figures and per capita emissions.
My glance at your map suggests that the US is one of the lowest emitters, and Europe looks to be one of the highest. That certainly conflicts with per capita emissions (set out at the top of this article) and population data.
Australia seems to suffer least from distortion (due to its centred position), such that one can clearly identify the largest cities, and yet high emissions do not seem to correlate well with the highest density of population. In the south east (Melbourne, Canberra, Sydney etc.), the CO2 is even blue, i.e., negative. The Northern Territory is sparsely populated (even its capital city, Darwin, has a population of less than 150,000), and yet the Northern Territory appears to have the highest levels of CO2.
Looking at Australia (this is not cherry-picked; rather, it is the only country that is not distorted and whose geography I can readily identify), your assertion “As you can see, where there are concentrations of humans, we get CO2” is not well supported.
I would certainly like to see this important data presented in a clearer form. Thanks for your trouble.
@Jan Frykestig at July 4, 12:47 pm
What is exactly the evidence for this claim?
The statement: “he does not think that NCDC researchers are intentionally distorting” requires no evidence. It is in fact the Null Hypothesis. But it is a hypothesis worthy of testing scientifically.
The following statements require evidence:
“he thinks that NCDC researchers are intentionally distorting”
“he KNOWS that NCDC researchers are not intentionally distorting”