Did Federal Climate Scientists Fudge Temperature Data to Make It Warmer?
Ronald Bailey of Reason Magazine writes:
The NCDC also notes that all the changes to the record have gone through peer review and have been published in reputable journals. The skeptics, in turn, claim that a pro-warming confirmation bias is widespread among orthodox climate scientists, tainting the peer review process. Via email, Anthony Watts—proprietor of Watts Up With That, a website popular with climate change skeptics—tells me that he does not think that NCDC researchers are intentionally distorting the record.
But he believes that the researchers have likely succumbed to this confirmation bias in their temperature analyses. In other words, he thinks the NCDC’s scientists do not question the results of their adjustment procedures because they report the trend the researchers expect to find. Watts wants the center’s algorithms, computer code, temperature records, and so forth to be checked by researchers outside the climate science establishment.
Clearly, replication by independent researchers would add confidence to the NCDC results. In the meantime, if the Heller episode proves nothing else, it is that we can continue to expect confirmation bias to pervade nearly every aspect of the climate change debate.
Read it all here: http://reason.com/archives/2014/07/03/did-federal-climate-scientists-fudge-tem
And group think caused by lavish funding.
From the linked article: “They’ve clarified a lot this way. For example, simply shifting from liquid-in-glass thermometers to electronic maximum-minimum temperature systems led to an average drop in maximum temperatures of about 0.4°C and to an average rise in minimum temperatures of 0.3°C”. This is the opposite of the real effect. Electronic sensors generally read higher than liquid-in-glass thermometers for the maximum and usually, lower for minimum thermometers.
If NASA can crash a probe by accident onto Mars due to a simple error, then confirmation bias is possible. If your funding thrives on continued global warming, confirmation bias is possible. Money is at the root of all eeeeeeevil and pal review. Sorry, but there it is.
I suppose NCDC’s Tom Karl misrepresented his academic credentials because of confirmation bias.
GeologyJim,
There is still a 10,000+ station reporting network.
http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Figures/global-land-TAVG-Counts.pdf
You can download the data here: http://berkeleyearth.org/data
” Genghis says:
July 4, 2014 at 4:54 pm
mjc says:
July 4, 2014 at 4:05 pm
“There is only one maximum per 24hr period…now if that period doesn’t coincide with a calendar ‘day’ then so be it. If the high temp on Tuesday was at 3:01 pm on Monday and the temps are being checked every day at 3:00 pm, then that is the reporting period and the ACTUAL high temperature for that period…it’s not complicated, really.”
Let’s do a quick test to see if your math skills are up to par. Day one has a high of 30 and a low of zero and is checked at 3 pm; the average temperature is 15. Sometime after 3 pm the temperature plummets to zero and stays at zero. The next day the thermometer will indicate that the high was 30 and the low was zero. Clearly that is an error.
It incorrectly indicates one maximum for two 24 hour periods.”
Not if the check is 3pm, every day. The reporting period and ‘day’ don’t match, sure, but for THAT period, the high was what was recorded at the START of that period. The 24 hr period is 3pm to 2:59 pm and as long as that remains consistent there is only ONE maximum temp recorded in THAT 24 hr period. It doesn’t matter what time of day that occurs. If the period is consistent, then the min/max will be consistent, for that location and that period. The only time an ‘adjustment’ will need to be made, then is when the period is changed.
What law states that an observation period and ‘day’ have to align?
And without knowing how long and how consistent any non-alignment was, applying an arbitrary ‘adjustment’ is just about the worst thing you can do to protect data integrity.
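Genghis’s carry-over example can be sketched numerically. The following is a hypothetical illustration only (the hourly temperatures, the register-reset behaviour, and the observation hours are assumed for the sake of the sketch, not actual station data): a max/min thermometer’s registers reset to the *current* temperature at each observation, so a 3 pm reading taken during a hot afternoon carries that high into the next period’s record.

```python
# Toy model of a max/min thermometer read once per day (illustrative only).
# After each observation the max and min registers reset to the current temp.

def simulate(temps_by_hour, obs_hour):
    """temps_by_hour: list of (hour, temp) pairs over several days.
    Returns the (max, min) pair recorded at each daily observation."""
    records = []
    cur_max = cur_min = temps_by_hour[0][1]
    for hour, temp in temps_by_hour:
        cur_max = max(cur_max, temp)
        cur_min = min(cur_min, temp)
        if hour % 24 == obs_hour:
            records.append((cur_max, cur_min))
            cur_max = cur_min = temp  # registers reset to current temperature
    return records

# Day 1: cold morning, warm spell of 30 through mid-afternoon, then cold.
# Day 2: zero all day (Genghis's "temperature plummets and stays" case).
day1 = [0] * 8 + [30] * 8 + [0] * 8
day2 = [0] * 24
temps = list(enumerate(day1 + day2))

print(simulate(temps, obs_hour=15))  # [(30, 0), (30, 0)] - yesterday's 30 repeats
print(simulate(temps, obs_hour=23))  # [(30, 0), (0, 0)]  - evening reset avoids it
```

With a 3 pm reset the register still reads 30 at observation time, so day 2 reports a high of 30 even though it never rose above zero after the reset; with a late-evening reset the carry-over disappears. This is the mechanism the TOBS debate in this thread is about.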
Genghis says:
“…Let’s do a quick test to see if your math skills are up to par. Day one has a high of 30 and a low of zero and is checked at 3 pm; the average temperature is 15. Sometime after 3 pm the temperature plummets to zero and stays at zero. The next day the thermometer will indicate that the high was 30 and the low was zero. Clearly that is an error.”
Clearly that is NOT an error. The average temperature for these records is taken as (Tmax + Tmin)/2, and on both of these days the Tmax and Tmin are the same.
The result would only be in error if we were taking a true average, reading the temperature every single minute and dividing the total by 1440.
However, by your reasoning, the same could be said if a warm front arrived shortly before the temperature was taken and, being typically slow moving, held the temperature up for most of the day until a cold front arrived just before the next reading and caused the temperature to plummet. Day 2 would then show COLDER than the true average.
As long as we are using a simple average of (Tmax + Tmin)/2, no correction is needed, and any uncertainty should be shown by the error bars.
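Jantar’s point can be illustrated with a toy example (the hourly numbers below are hypothetical, not real observations): the recorded midpoint (Tmax + Tmin)/2 is not a true time average, and on a skewed day the two differ substantially. That difference is what the error bars have to absorb.

```python
# A day that is cold for 20 hours with a brief 4-hour warm spell (toy data).
hourly = [0.0] * 20 + [30.0] * 4

midpoint = (max(hourly) + min(hourly)) / 2   # the standard (Tmax + Tmin)/2 record
true_mean = sum(hourly) / len(hourly)        # the "every minute" style average

print(midpoint)   # 15.0
print(true_mean)  # 5.0
```

The midpoint overstates this particular day’s true average by 10 degrees, yet as long as every day in the record is reduced the same way, the convention is at least internally consistent.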
adjusted raw data is no longer good data … mixing infilled data points with raw and adjusted data is like mixing a certain % of dog crap into your vanilla ice cream … the question is always how small a % can you go and still eat it ?
They don’t have any data prior to the satellites … period … they have a bunch of numbers that have been beaten into uselessness for the purpose of studying climate change or global temps …
You should be hammering that point home Anthony …
Goddard destroyed TOBS with his what’s-2-times-zero argument (in reference to 100 degree days, which can’t be double counted when they simply don’t occur). The temptation to do TOBS station by station is understandable, but given a large data set the assumption that the errors cancel out is probably wisest.
mjc says:
July 4, 2014 at 5:43 pm
” If the period is consistent, then the min/max will be consistent, for that location and that period. The only time an ‘adjustment’ will need to be made, then is when the period is changed.”
That would be correct if all of the other locations were measured at 3 pm too, they aren’t. Now you have to decide which record to adjust and you are back to exactly the same problem.
Jantar says:
July 4, 2014 at 5:48 pm
“Clearly that is NOT an error. The average temperature for these records is taken as (Tmax + Tmin)/2, and on both of these days the Tmax and Tmin are the same.”
Let’s say we have two identical thermometers side by side. Thermometer A gets read in the morning and thermometer B gets read in the afternoon. On April 2nd thermometer A gets read in the morning and April 1st’s high temperature gets entered into the record. In the afternoon thermometer B gets read; is its reading the high temperature for April 1st or April 2nd? Could thermometer A’s reading be in error?
How do you determine the actual temperature for that day based on those two high and low temperature records at the exact same place? More importantly, what procedure do you use to solve that puzzle day after day?
The question is when does it stop. When do the adjustments stop cooling the past?
To meet the theory’s predictions, temperatures have to rise 2.5C in the next 86 years.
Will temperatures increase that much or will the NCDC of 2099 cool the past by another 2.5C?
Sounds ridiculous, doesn’t it? But nobody is stopping them from cooling the past right now at the same rate per year that would be required to cool it by 2.5C by 2100.
They have to be stopped or everyone will have 2 windmills in their backyard and solar panels on their roof while their backyard has real temperatures that are no different than 1903.
Bill Illis says:
July 4, 2014 at 7:21 pm
“The question is when does it stop. When do the adjustments stop cooling the past?”
When the past says exactly what they want it to say.
What is really funny, is that a good argument can be made for all of the adjustments.
Any technician who adjusts “data” does not know what “data” is. These individuals are technicians, doing as they are told. No professional engineer or genuine scientist would ever “adjust” data. Data is all you have to work with; if you adjust it, you have nothing…
This proposal is too wimpy. The US has an Information Quality Act that mandates all government agencies maintain data quality.
Now is the time for Congress to insist upon an official audit of temperature data, its generation, collection, processing, adjustment and reporting.
This posting just encourages the warmists and the NCDC. They will NEVER, NEVER change or admit to any wrongdoing. All the evidence (not only NCDC but ALL the evidence), I repeat, adds up to just plain intentional fraud and fabrication to meet an agenda. You still don’t get it, Mr Watts.
July 4, 2014 at 1:11 pm | DirkH says
———
😉 No matter what ‘data’ is fed into the ‘algorithm’ … a hockey stick is achieved !
July 4, 2014 at 3:26 pm | Katherine says:
Indeed, cooling the urban stations to conform to the rural data would seem more pertinent; after all, it would help sort out the UHI bias that (supposedly) doesn’t happen.
Mr. Bailey, you say:
Since its inception, NCDC has made numerous “adjustments” to the raw climate data. How many adjustments have been made for all the “confounding factors” you mention (TOBS, urban encroachment, “upgrades” to MMTS, which have their own set of issues, “lazy” reader artifacts, etc.)? If you can obtain this as an absolute number, I, for one, would love to know: how many of those adjustments resulted in a warmer modern period, and how many resulted in a cooler one? Assuming NCDC’s (or NOAA’s) intent was the relentless pursuit of “greater precision”, as you say, we should expect just as many corrections that resulted in a warmer past / cooler modern day as in a cooler past / warmer modern day.
The revelations over the last 10 or so years have suggested quite the opposite: the ratio of adjustments we’ve been privy to (and I doubt they are as transparent as you think) have shown a predilection for “warming” the present, and, if I may say, stirring the pot on every possible opportunity. Considering that these adjustments are merely mathematical and statistical corrections of former inaccuracy, there is no reason to expect one-sidedness, is there?
People who provide cursory historical “analyses” of these government institutions without providing any of their own data are not contributing much to the overall understanding of what NOAA has been doing, or whether they are reliable, unbiased proprietors of climate data. From what I can see, they’ve done little but fudge the data.
Sweden has perhaps the worst example of adjustments made to “fix” a warmer trend.
The most frightening example can be found here in Sweden. From Forskning och Framsteg (a Swedish science magazine, translated from the original Swedish): “Our reconstruction of winter and spring temperature variations over half a millennium is shown in Figure 1. Measurements after 1860 are corrected for the artificial warming caused by the growth of the city of Stockholm, so that the curve shows the more natural changes.” Source: Forskning och Framsteg No. 5, 2008, 500 års väder (English: 500 years of weather).
Looking at other countries, US included, I have seen the same strange behavior among those who call themselves Scientists…..
“The NCDC folks never rest in their search for greater precision.”
If this was intended ironically, I apologize for my conclusion above… though I would still like to know how many of NCDC’s adjustments actually favor a neutral or cooling trend.
The claim of peer review is very often hollow. It adds nothing that the adjustments may have been reviewed by other climate scientists who know little of the issues raised.
The simple point, when considering significant trends in data and the collation of such data, is whether it has been peer reviewed by statisticians.
The idea that one can have approximately 40% of the data not actual but rather constructed, and not be an issue, is absurd.
In fact we know there is a problem with the data merely from the fact that it is continually being adjusted. If there was only one, or perhaps two, adjustments then there may be valid reasons. But when one adjusts the same data a dozen or so times, it shows that one does not know what the adjustment should be. It shows that later adjustments are adding to the need to revisit and remake adjustments that were made earlier, thereby suggesting some positive feedback loop in the adjustments being made.
When you make one adjustment, you create a margin of error. Of course, the adjustment may be made with the view that it is ‘correcting’, but there is a risk that it is not. When you make 10 such adjustments, you create the possibility of 10 errors; with 1,000, the possibility of 1,000 errors, and so on. Hopefully, some errors will cancel out, but that need not be the case, especially when they are based upon the same algorithm with an underlying flaw.
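The cancellation point above can be sketched with a toy Monte Carlo simulation (illustrative only; the error sizes and the shared bias are assumed numbers, not properties of any real adjustment procedure): independent errors partially cancel as they accumulate, while errors that share a common flaw add up systematically.

```python
# Toy model: total error after n adjustments, each carrying some error.
# A shared "bias" term stands in for a common algorithmic flaw (assumed value).
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def total_error(n, bias=0.0, spread=0.1):
    # each adjustment's error = shared bias + an independent random part
    return sum(bias + random.gauss(0, spread) for _ in range(n))

trials = 2000
indep = [abs(total_error(12)) for _ in range(trials)]             # no shared flaw
biased = [abs(total_error(12, bias=0.05)) for _ in range(trials)]  # common flaw

print(sum(indep) / trials)   # stays small: independent errors largely cancel
print(sum(biased) / trials)  # systematically larger: roughly 12 * 0.05 plus noise
```

The point is not the specific numbers but the shape of the result: a dozen independent adjustment errors tend to wash out, whereas a dozen adjustments sharing the same underlying flaw drift in one direction.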
The fact that this data set diverges from the satellite data record also adds weight to the fact that there is a problem with the ‘adjusted’ data set.
Personally, I would ditch the land based data set post 1979, and I would reassemble it up to the period of 1979. There would be better station coverage if the record up to 1979 was used, and to some extent, the effects of UHI may be lessened. You are likely to get a cleaner data set.
After 1979, use the satellite record. It has better spatial coverage and uses our most sophisticated and advanced measuring technology. It should be a better record (although it has some issues of its own).
Definitely do not splice the satellite record post 1979 onto the land based record up to 1979.
Bill Illis says:
July 4, 2014 at 3:48 pm
The TOBS adjustment continues to grow every day. How is that possible? The first paper published on TOBS was in 1854, and the issue was supposedly fully fixed by the NCDC/Weather Bureau in 1890, in 1909, in 1954, in 1970, in 1977, in 1983, in 1986, and in 2003, yet it continues to change every day.
////////////////////
I had not read Bill’s comment when I posted my earlier comment.
I consider this to be one of the most material points. I can understand someone saying that we need to make an adjustment for TOBS, or for a station move, or for an instrument change, etc. But if we knew what we were doing, that would be a one-off adjustment: the adjustment is made, and the issue has been ‘corrected’.
But the problem is that we are making adjustments to old records not just once or twice, but sometimes a dozen or so times. Just ask yourselves: how many times has the 1930s data undergone an adjustment? This establishes that we do not know what we are doing, period. This is very clear when one superimposes trends based upon the record as it was drawn over the years. If climate scientists cannot see that there is an issue there, well, it says a lot about their abilities and way of thinking.
It would be interesting to know whether a difference would result if we were to adjust modern data to bring it in line with old data collection, or old data to bring it in line with modern data collection.
It is imperative to act immediately, and as it is US-based temperature data that needs to be audited, your Senate should set up a Senate-backed inquiry into the validity and data processing of the historic and present temperature record(s).
The inquiry should examine the effect of using the present rationale, whereby algorithms are allowed to change past and present temperatures.
The Senate should ensure that an audit team is assembled to assist in examining and auditing the various contributing agencies.
As to the physical makeup of the professional audit team, I am sure that Steve at Climate Audit could be trusted to set up an unbiased team of auditors with the qualifications and experience to get to the nub of this problem.
The audit team would report back on the level of co-operation and transparency of the organisations involved, and if necessary the Senate would have the power to subpoena key staff and executives to ensure data and internal directives are properly produced for audit and evidence adduced to get answers on the who what and why, that created the present errors and or omissions, bias or whatever.
No one should object to this precautionary audit, on the principle that the American people must have absolute confidence when trillions of American taxpayers’ dollars have been, and will be, expended; the end product must be an absolute and trustworthy historic temperature record. Once this confidence and trust is affirmed, politicians can then make the sort of energy and policy decisions that might be expected to flow from that information.
Sceptic blogs might like to propose leaders and members for the audit team with a public review by way of Senate oversight.
Lastly, as this is the most urgent problem facing human society, as some say, the audit team and support staff should be funded by equal contributions from the current budgets of the organisations involved. The issues of trust and confidence are all-important given the economic and social considerations underlying these issues, and this is not a case where the agencies should be trusted to rely simply on internal audits. There is too much at stake!
It is the United States historic temperature record, and the people of the United States must have confidence in all the decisions that build on that record. It does not belong to any organisation or particular political persuasion of government, and the methods must be open and transparent to the American people; the Senate is the appropriate body to represent those people.
Over to you, we also need this sorted out as it appears our Australian records have suffered similar “adjusting” and manipulation and loss of confidence.
The Japanese IBUKI climate satellite data completely negates the claim that carbon dioxide in the atmosphere is coming from humans. The IBUKI results confirm that the CO2 in the atmosphere is a result of temperature-induced and moisture-induced releases from mainly equatorial high-vegetation regions. In personal communication with Professor Richard Lindzen, I find he agrees with IBUKI.