Siting related temperature bias at the Dutch Meteorological Institute (KNMI)

“This is a complicating factor for the homogenization of daily temperature series.”

WUWT first reported on the issues with the KNMI De Bilt weather station in October 2009 here. About that time, Dr. Pielke Sr. and I were given access to a KNMI preliminary report on the siting and subsequent bias problems at De Bilt, but we decided to wait until the final report was available before saying anything about it. Those waiting on GMU/Wegman take note – universities move like molasses, chill. Why is this station important? It just so happens that KNMI De Bilt is the only station in the Netherlands used for NASA GISTEMP, and now it has been shown to have problems related to siting.

The issue became front-page news in the Netherlands, partly because a private meteorological firm had challenged the KNMI climate data on the grounds that it consistently read too warm:

Weather specialists from the Wageningen-based Meteo Consult have been expressing their distrust for years, because the KNMI figures at De Bilt were always a bit warmer than at Cabau, 16 km away, where there is also a KNMI thermometer. According to Meteo Consult, the positions of the two sites could not explain the temperature difference of, on average, half a degree Celsius. Nor was it taken into account that De Bilt is located in more built-up, and probably therefore warmer, surroundings than Cabau, near IJsselstein.

KNMI-DeBilt_GISS

Above: GISS Temperature plot for De Bilt KNMI – notice the step function. Click for source data.
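
For readers who want to poke at the series themselves, here is a rough sketch of one way to locate such a step. It assumes the station's annual means have already been downloaded into NumPy arrays `years` and `temps` (placeholder names, not part of any GISS tool); for each candidate split year it simply compares the mean of the record before and after the split.

```python
# A minimal sketch of locating the largest step in an annual series.
# `years` and `temps` are assumed to be NumPy arrays of equal length.
import numpy as np

def largest_step(years: np.ndarray, temps: np.ndarray, min_len: int = 10):
    """Return (split_year, jump) where the before/after difference in means is largest."""
    best_year, best_jump = None, 0.0
    for i in range(min_len, len(years) - min_len):
        jump = temps[i:].mean() - temps[:i].mean()   # mean after split minus mean before
        if abs(jump) > abs(best_jump):
            best_year, best_jump = int(years[i]), float(jump)
    return best_year, best_jump
```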

To KNMI’s credit, they have conducted an exhaustive parallel study to determine the magnitude of effects from siting changes. The results and their conclusion show clearly that siting does matter.

Dr. Roger Pielke Sr. writes:

Important New Report “Parallel Air Temperature Measurements At The KNMI observatory In De Bilt (the Netherlands) May 2003 – June 2005” By Theo Brandsma

There is an important, much-needed addition to the scientific literature which adds to our conclusions in

Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. Christy, and R.A. Pielke Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. J. Geophys. Res., in press. Copyright (2011) American Geophysical Union.

that the siting of climate reference stations does matter in terms of long-term temperature trends and anomalies. This new report is

Parallel air temperature measurements at the KNMI observatory in De Bilt (the Netherlands) May 2003 – June 2005

(16 MB PDF from KNMI – alternate, faster download here KNMI_DeBilt_WR2011-01)

The summary reads (emphasis added):

Air temperature measurements at the KNMI observatory in De Bilt are important mainly because the observatory has a long and relatively homogeneous record and because its observations often serve as an indicator of changes in climate for the Netherlands as a whole. Among other things, relocations of the temperature measurement sites and (gradual) changes in the surroundings influence the measurements. To improve the homogeneity of the long-term temperature record and to study the representativeness of the current measurements, a parallel experiment was carried out at the observatory of KNMI in De Bilt from May 2003 through June 2005.

Five sites at the KNMI observatory, including the (at that time) operational site WMO 06 260 (further denoted as DB260), were equipped with identical (operational) instruments for measuring temperature and wind speed at a height of 1.5 m (see Figure 1.1 for an overview of the sites). The instruments were calibrated every half-year and the calibration curves were used to correct the data to minimize instrumental errors. With the measurements at the Test4 site (the operational site since 25 September 2008) as a reference, the temperature differences between the sites were studied in connection with the local wind speed, its inter-site differences, and the operationally measured weather variables at the KNMI observatory. In September/October 2004 the area west of the operational site DB260 was renovated and made into a landscaped park. From 1999 onwards that area had slowly transformed from grassland into a neglected area with bushes (wasteland). The parallel measurements provided the opportunity to study the impact of this new inhomogeneity in detail.

The results show that changes in the surroundings complicate or impede the use of present-day parallel measurements for correcting for site changes in the past. For instance, the (vertical) growth of the bushes in the wasteland area west of DB260 caused increasing temperature differences between the operational site DB260 and four neighboring stations. The effects were most clearly visible in the dry summer of 2003, when the mean monthly maximum temperatures at DB260 were up to about 0.4C higher than those at the reference Test4. This increase was more than counteracted by a decrease in the mean monthly minimum temperature of up to 0.6C. After the renovation of the wasteland area, the temperature differences between DB260 and Test4 became close to zero (< 0.1C). The comparison of DB260 with four neighboring stations showed that the renovation restored, to some extent, the temperatures of the old situation before 1999. However, the land use west of DB260 has been changed permanently (no longer grassland as in the period 1951-1999, but landscaped parkland with ponds). Therefore, operational measurements at DB260 became problematic and KNMI decided to move the operational site to the Test4 site in September 2008. The Test4 site is the most open of the five sites studied in the report.

The results increase our understanding of inter-site temperature differences. One of the most important causes of these differences is the difference in sheltering between sites. Sheltering stimulates the build-up of a night-time stable boundary layer, decreases the outgoing long-wave radiation, causes a screen to be in the shade in the hours just after sunrise and before sunset, and increases the radiation error of screens due to decreased natural ventilation. Depending on the degree and nature of the sheltering, the net effect of sheltering on temperatures may be a temperature increase or decrease. DB260 is a sheltered site where the net effect is a decrease of the mean temperature (before the renovation). The former historical site Test1 is an example of a site where the net effect is a temperature increase. The monthly mean minimum temperature at Test1 is up to 1.2C higher than the reference, and the maximum temperature is up to 0.5C higher than that at Test4. The mean temperature at Test1 is, however, only slightly higher than the mean at Test4. This is caused by the relatively low temperatures in the hours after sunrise and before sunset, when the screen at Test1 is in the shade. Both the Test1 and Test4 locations are probably unaffected by the renovation.

The renovation of the wasteland area causes not only a shift in the location of the probability density function (pdf) of the daily temperature differences but also a change in its shape. This means that for the homogenization of daily temperature series it is not sufficient to correct only the mean.
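
That last point is worth a concrete illustration. A mean-only correction shifts every day by the same amount, whereas a distribution-aware adjustment such as quantile mapping also corrects the spread and the extremes. The sketch below is purely illustrative (it is not the method of the KNMI report); it assumes two overlapping daily series, `biased` and `reference`, are available as NumPy arrays.

```python
# Illustrative sketch only: quantile mapping corrects the whole distribution
# of a daily series, not just its mean.
import numpy as np

def quantile_map(biased: np.ndarray, reference: np.ndarray,
                 values: np.ndarray, n_quantiles: int = 100) -> np.ndarray:
    """Map `values` from the biased distribution onto the reference distribution."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    src = np.quantile(biased, q)        # quantiles of the biased series
    dst = np.quantile(reference, q)     # matching quantiles of the reference
    return np.interp(values, src, dst)  # piecewise-linear transfer function

# A mean-only correction would instead be:
#   values - biased.mean() + reference.mean()
# which leaves the shape of the pdf (variance, skew, extremes) unchanged.
```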

We showed that the magnitude of the inter-site temperature differences strongly depends on wind speed and cloudiness. In general, the temperature differences increase with decreasing wind speed and decreasing cloudiness. Site changes directly affect wind speed because they are usually accompanied by changes in sheltering. Some effects, like the build-up and (partial) breakdown of the stable boundary layer near the surface, are highly non-linear processes and therefore difficult to model. The fact that these processes are mostly active at low wind speeds (< 1.0 m/s at 1.5 m) further complicates the modeling. Regular cup anemometers are not well suited to measuring low wind speeds. Operationally, these anemometers have a threshold wind speed of about 0.5 m/s, and this threshold often increases with the length of time the anemometer has been in the field. In addition, anemometers are mostly situated at a height of 10 m. During night-time stable conditions the correlation between wind speed at 10 m and wind speed at screen height is weak. This complicates the homogenization of daily temperature series.
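
The wind-speed and cloudiness dependence described in the summary can be checked directly once parallel data are in hand. Here is a minimal sketch; the column names (`dT` for the site temperature difference, `wind` for screen-height wind speed in m/s, `cloud` for cloud cover in okta) are hypothetical placeholders, not the names used in the KNMI files.

```python
# Sketch: bin inter-site temperature differences by wind speed and cloudiness.
import pandas as pd

def mean_dT_by_conditions(df: pd.DataFrame) -> pd.DataFrame:
    """Mean temperature difference in wind-speed x cloudiness bins."""
    wind_bins = pd.cut(df["wind"], bins=[0, 1, 2, 4, 8, 20],
                       labels=["<1", "1-2", "2-4", "4-8", ">8"])
    cloud_bins = pd.cut(df["cloud"], bins=[-0.5, 2.5, 5.5, 8.5],
                        labels=["clear", "partly", "overcast"])
    return df.groupby([wind_bins, cloud_bins], observed=True)["dT"].mean().unstack()

# Expectation from the report: the largest differences should sit in the
# low-wind (<1 m/s), clear-sky cell, where the stable boundary layer forms.
```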

38 Comments
Noelene
May 18, 2011 9:23 am

typo
Why is this station importnat?

May 18, 2011 9:35 am

“The effects were most clearly visible in the dry summer of 2003, when the mean monthly maximum temperatures at DB260 were up to about 0.4C larger than those at the reference Test4. This increase was more than counteracted by a decrease in the mean monthly minimum temperature of up to 0.6C.”
No way!!! We’re told it’s climate change that increases the extremes!!! Could this possibly be a partial explanation of warmcold?

Latitude
May 18, 2011 9:39 am

There’s that night time temperature increase again…………

RockyRoad
May 18, 2011 9:44 am

No way! Siting can’t have an impact. Put that thermometer under a bushel basket, or on top of the tar-covered roof–same results!
/sarc off

Kev-in-Uk
May 18, 2011 9:55 am

But ANY scientist would actually know and appreciate that siting is important – but it’s only the skeptical ones (real scientists) who voiced concern and didn’t buy into the ‘it’s been properly adjusted’ BS as promoted by Mr and Mrs C Arbonscam and their deeply passionate disciples!

feet2thefire
May 18, 2011 10:10 am

Only about 1,000 to go now…

May 18, 2011 10:23 am

Kev-in-Uk says:
May 18, 2011 at 9:55 am
But ANY scientist would actually know and appreciate that siting is important –
===============================================
You mean when, where and how we observe things may influence the results of the observation? Someone needs to alert the schools and have them start to teach that in their science classes!!!

stephen richards
May 18, 2011 10:27 am

Vindication !!

Editor
May 18, 2011 10:46 am

I wonder how Meteo Consult will take to NCDC’s latest adjustment of De Bilt in GHCN v3: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/6/63306260000.gif (Update: well it was here when I posted this, but as of 23 May the directories have been removed)

Shevva
May 18, 2011 10:55 am

30 years, it’s a start, but why is it private individuals and private companies that have to do this work?
Gambling’s legal over here in the UK, so I’ll start a book, although someone else will have to work out the odds on which pro-AGW site will try to debunk this and what method they will use:
RC 40-1
Skeptical Science 10-1
Open Mind 4-1
NASA will have to comment as it’s used by them, I guess.
Straw man, incorrect understanding of the science, pea under the thimble, completely ignore?
To stay on topic, though, I’ll let the experts at CA, mosh, Anthony and the brains around here comment first. Given the time it’s taken them, I hope they double-checked everything.

David O.
May 18, 2011 11:03 am

Peterson figured all of that out simply by counting light bulbs from space.

Kev-in-Uk
May 18, 2011 11:50 am

James Sexton says:
May 18, 2011 at 10:23 am
LOL – absolutely, Sir! Trouble is, unless it’s spewed out by a computer/smartphone or some d*ckhead celebrity whilst puking on grub worms in the jungle, they just won’t listen!

Paul in Sweden
May 18, 2011 11:50 am

Observational data integrity only CLOUDs model conclusions. /sarc

Jim
May 18, 2011 12:14 pm

Ahhhhhhh, so that must be the reason why the Arctic ice sheets are rapidly melting……
I suppose that recent sailing trip around the Arctic was made not because the ice has disappeared, but simply because it hadn’t occurred to anyone to do it before?

Ken Harvey
May 18, 2011 12:43 pm

The first (of many) requirements for valid statistics is not access to Excel or OpenOffice, but data that is as near factual as is practical. In the absence of that first of many requirements, no valid conclusions can be drawn. That is not an hypothesis but an absolutely self-evident truth.

sky
May 18, 2011 1:38 pm

As if the daily data uncertainties arising from micrositing issues aren’t bad enough, they pale in comparison to what is done by GHCN and GISS in “homogenizing” the monthly and annual means. The “step function” evident in the De Bilt record from the 50’s to the 80’s–which proves to be quite coherent with other European stations–virtually disappears and the entire yearly series is given a strong positive trend. Ever-changing micrositing issues add only a non-descript “noise” to the record. I wish that they were all that stood in the way of obtaining reliable century-long records.

P Walker
May 18, 2011 2:12 pm

Noelene,
The importance of the site is mentioned in the first paragraph. Try reading it.

Ian George
May 18, 2011 2:39 pm

This used to be my favourite station to show my friends how ‘cleaning up’ helps ‘global warming’.
Here’s the homogenized data.
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=633062600003&data_set=2&num_neighbors=1

Eric Anderson
May 18, 2011 3:34 pm

Jim, what are you talking about?

Theo Goodwin
May 18, 2011 3:37 pm

“Sheltering stimulates the build up of a night-time stable boundary layer, decreases the outgoing long-wave radiation, causes a screen to be in the shade in the hours just after sunrise and before sunset, and increases the radiation error of screens due to decreased natural ventilation.”
I hope this makes front page in some major American newspapers. These issues have been downright squelched by Big Warming and the MSM. However, they are incredibly important.
A clause from above reads: “decreases the outgoing long-wave radiation.” Aha! There it is: proof that Earth should not be treated as a black body. There are a gazillion things on Earth that are not GHGs but that decrease outgoing long-wave radiation.
For example, suppose I am a bit of long-wave radiation and I am at the bottom of Wall Street at dusk. My street runs east-west and is illuminated by the Sun from dawn to dusk. Yet when it is time for me to escape and I look up, I cannot even see the sky. All I can see is the skyscraper across the street that I am about to hit full-speed, head-on. After I bounce off it, all I can see is the skyscraper that I came from and that I am about to hit full-speed, head-on. So, how the heck do I get to space? Notice, no GHG has a role in this puzzle. All rational solutions will be acknowledged in a later post.
Because there are a bazillion bits of radiation like me, don’t we have some effect on our environment? Don’t we make Wall Street warmer or at least cause higher readings on Geiger counters? After all, some of us are not eventually escaping to space. Some of us end our existence there.

Kev-in-Uk
May 18, 2011 4:18 pm

sky says:
May 18, 2011 at 1:38 pm
Well said.
Unfortunately, we, the general public, can only be drip-fed the homogenised, sanitised, adjusted data that suits those who produce (or fund?) it.
Worse, it is somewhat doubtful that all original raw data exists in a readily accessible form (even as paper records?) – certainly, as far as I am aware, there is no complete CRU record. The list-of-adjustments ‘record’ is probably even less ‘existent’, if indeed it ever was ‘recorded’!!
It makes a mockery of the early work done by record keepers that their records, whether good or bad in quality, have been essentially lost or corrupted forever. Even the bad records can have scientific use (for example, when compared to nearby stations, to show how the errors occurred!) and should be retained!

nomnom
May 18, 2011 4:56 pm

so there’s a good chance the recent warming has been underestimated

JFD
May 18, 2011 5:55 pm

I did a study of the De Bilt data set a couple of months ago. Using a linear curve fit to De Bilt (or any data set with breaks or steep step functions) is an incorrect procedure. The data can be broken down into segments and analysed with a linear fit or some other curve fitting method but a linear fit across steep step functions is not proper. Using a correct approach may not change the three-year siting study by KNMI, but it could/should reveal the problems with using a linear fit in determining trend values.
Below I am pasting my study without any changes. I believe that it contains insights that I have not seen before in the global warming word wars. The study was posted by email to a discussion group that I am a member of. We know each other, so there is a friendly tone to it.
start paste
INSIGHT INTO ATMOSPHERE TEMPERATURE CHARTS
If you have been keeping up with global warming, you have seen many graphs showing either temperature data or temperature anomaly data over long periods of time. The trend line is usually up, denoting global warming. Interpreting data and graphs seems straight forward, but in actuality, sometimes the truth is not as it appears to be. The Deniers always want to be able to get the raw data and check the methods used by the Non-Deniers to “adjust” the data. The Non-Deniers don’t want to release the data or the methods, even if Freedom of Information requests are made. This refusal is very clearly against the scientific rule that data must be verified and the method must produce the same answers when used by others. You have read me lament many times that I like raw data, unadjusted by humans. There is a reason for that and it is pretty simple – humans can make mistakes in arithmetic and use the wrong approach to problems. I like to use my own approaches and methods to see if I arrive at the same conclusion. My approaches and methods are not necessarily better, but if I don’t get the same answer or draw the same conclusions as the authors then I start searching for reasons why not.
I am going to lay out several graphs using the same data set (1900 to 2010) from KNMI for De Bilt, Netherlands. KNMI is the Koninklijk Nederlands Meteorologisch Instituut (Royal Netherlands Meteorological Institute) located at De Bilt. The data set is attached, click on Sheet 1. It is degrees C times 100 so they can use three decimal places. Using three decimal points for atmosphere temperature seems kinda strange to me, but when in Rome do as the Romans do.
Just below is the De Bilt dataset charted in Excel, it is also Graph 1 in the attachment to the e-mail.
Your eyes probably see the “Global Warming” starting in the mid 1980s. This is not exactly a hockey stick but it definitely goes up, doesn’t it? Let’s add a trendline and see what it shows.
It shows about 1.1 C rise in 110 years or about 1C per century or .1C per decade. Sounds familiar, doesn’t it?
Let’s do what I do routinely and that is look for break points. Essentially all kinds of processing facilities have break points where something gets too large to build the same way, so one must be careful to not design something that is too large to build using standard techniques. Likewise project economics often show a break point as recovery increases. Break points in data are not unusual.
Now look again at the first chart and see if you can spot any break points. If you can’t then look below at next two charts and you can see what I see. Here is the first:
I have restricted the data to 1901 to 1987 because I see a break point in 1988. Let’s add a trendline to this chart. Here it is:
As you can see there is an uptrend of .41C in 86 years or ~ .048C per decade.
Now let’s look at the 1988 to 2010 chart:
There is a clear cycling in the graph but there may also be a slight increase in trend during this period. Let’s add the trendline and see what we get:
The change in temperature is .16C in 22 years or .072C per decade. This trend is somewhat higher than the .048C per decade from 1901 to 1987, but given the easily observed cyclic nature of the data, a polynomial fit may reflect the trend better. Below is a 5th-order polynomial fit of the data:
Your eyes can see that the polynomial fitting of the data is obviously very good. It shows an overall cooling trend from 1988 and a sharp down trend the last 4 years. We know from simply being alive that we have been in a cooling period during the last 4 years.
Okay, what I have demonstrated is that one can easily misuse and misinterpret even very good data, such as the KNMI De Bilt set. One has to be on the lookout for break points in data. One cannot extend trend (or correlation) lines across break points without distorting the answer or conclusion. What happened in the year ending in 1988 when the average De Bilt temperature increased from 8.88C to 10.37C (or 1.49C in one year)? There are a few potential reasons and some explanations:
1. A natural cycle in climate could have gone more positive. Since the uptrend lasted for 18 years, one would first check the AMO (Atlantic Multidecadal Oscillation). The AO (Arctic Oscillation) should also be checked.
2. The method of handling the data could have been changed, such as using Hansen’s homogenization of temperature measurements over a length up to 1200km. This doesn’t seem too likely to me since it would mean that the current cooling cycle is even steeper than one would think.
3. The measurement station could have been moved during 1988. This didn’t happen. It has been many decades since the station was last moved.
4. An Urban Heating Island could have been built near the measurement station. This didn’t happen. Recent photographs show a wide open space all the way around the De Bilt station.
5. There could be a conspiracy of adding a fudged number to the data starting in 1987 when Hansen first testified before Congress. This would mean either that KNMI got scared in 2007 and removed the “fudge” or the recent cooling trend is even steeper than one would think. I doubt a conspiracy and completely doubt that the Dutch would add a fudge by themselves.
Let’s look at the AMO, go to:
http://www.esrl.noaa.gov/psd/data/correlation/amon.us.data
This is the short, unsmoothed data file starting in 1948. There is a warming change starting about 1988 and continuing until 2007, when it started dropping. The change in the AMO was noticeable, but not exactly sharp, in 1988; however, a change in the Atlantic currents could possibly be the cause of the observed warming trend. Realistically, the change in 1988 would have to be from an abnormally low temperature to an abnormally high temperature to account for the De Bilt data set change of 1.49C in one year. The AO index is a daily value that is averaged over 3-month periods for yearly index purposes. Thus, the AO should have finer-grained data for 1987-1988. The AO index may be found at:
http://www.atmos.colostate.edu/ao/Data/AO_SATindex_JFM_Jan1851March1997.ascii
For ease, I have pasted the relevant 1987-1988 data below:
1.9870000e+03 -1.0263523e+00
1.9873333e+03 2.8507547e-01
1.9876667e+03 -9.6188910e-01
1.9880000e+03 5.1969111e-01
1.9883333e+03 -3.0814385e-01
1.9886667e+03 -2.5720335e-01
As you can see there was a sharp drop from +.285 to -.962 in the third quarter of 1987. This was followed by a sharp increase to +.512 in the first quarter of 1988. Thus, the anomalous change in 1988 is (-.962 to +.512) = 1.47, almost exactly the 1.49C change in the De Bilt data for 1988. Bingo, we have our answer to the break point in the De Bilt historical data. Was this really a change in the earth’s climate or was there data manipulation by the Non-Deniers? Beats me, but without more work I would put my money on a blocking high. My technical curiosity currently doesn’t extend to checking other data sets to see if this anomaly is present in them. I have too much work to do to overcome the anomalous global cooling damage to my shrubs and plants this winter.
One thing with absolute certainty is that carbon dioxide could not be the cause of the De Bilt step discontinuity between 1987 and 1988. Carbon dioxide in the atmosphere simply does not and cannot act in this manner. Likewise, carbon dioxide in the atmosphere could not be the cause of the step discontinuity between 2009 and 2010.
I hope that this illustration of handling data via charts and graphs is helpful. The climatological literature is full of improper data graphs and resulting conclusions.
I leave you with one final thought. Earth’s climate is known from geology to be highly variable over time. If the sudden rise of 1.49C in 1988 was the result of a natural climate variation then a sudden drop of 1.49C is also possible. De Bilt had a 1.37C drop in 2010. What goes up comes down, for whatever reason.
Once again, I have numerically debunked carbon dioxide as being the root cause of global warming and pointed out an error in procedure by the Non-Deniers. The UAH lower-troposphere satellite anomaly for January 2011, relative to the 1981 to 2010 reference period, was minus 0.01 degrees C. Mother Nature has proven me correct.
end paste
Well, the graphs did not copy but the results from the graphs are shown, so my conclusions should still be clear to readers.
JFD
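
JFD’s break-point argument is easy to check for yourself. A minimal sketch is below, assuming the annual De Bilt means are already loaded into NumPy arrays `years` and `temps` (placeholder names); the 1988 break year is taken from the comment above, not derived here.

```python
# Sketch: a single straight line fitted across a step change exaggerates the
# trend compared with fitting the segments on either side of the break.
import numpy as np

def decadal_trend(years: np.ndarray, temps: np.ndarray) -> float:
    """Ordinary least-squares slope, expressed in deg C per decade."""
    slope, _intercept = np.polyfit(years, temps, 1)
    return 10.0 * slope

def segmented_trends(years: np.ndarray, temps: np.ndarray, break_year: int = 1988):
    before = years < break_year
    return (decadal_trend(years, temps),                   # single trend across the break
            decadal_trend(years[before], temps[before]),   # pre-break segment
            decadal_trend(years[~before], temps[~before])) # post-break segment

# Usage (with `years` and `temps` loaded from the annual series):
# whole, pre, post = segmented_trends(years, temps)
```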

Pamela Gray
May 18, 2011 6:05 pm

Wow. Am I the only one who thought of the study potential and grant money available measuring the differences between gnat ass #1, #2, #3, and #4? A half of a degree difference really makes me want to change my clothes. Does it do that for you? Let’s hope that Anthony’s study, along with this one, will shed light on the silly idea of measuring temperature trends that are essentially, no greater than the difference between ass #1 and ass #4.

Geoff Sherrington
May 18, 2011 6:24 pm

Sky – so you remain confident that an acceptable solution is eventually possible?

May 18, 2011 7:46 pm

This is a timely posting as just yesterday I uploaded a page to my site detailing the history of the BoM’s official recording location since 1897 for the Western Australia capital city of Perth.
http://www.waclimate.net/perth-temperature-history.html
The top of the page deals with claims by the Bureau of Meteorology that Perth has just endured record maximum summer temperatures. It demonstrates a .1C difference between the BoM’s method of averaging rounded monthly temperatures and a more sensible method of averaging the actual temperatures recorded on each day.
The rest of the page shows what happened when Perth’s temperature record moved from one location (61 metres elevation, on the outskirts of the city), used from 1897 to 1967, to another location (19 metres elevation, two kilometres away in the middle of the city), used from 1968 to 1992. Both locations contributed to the records of the one station, Perth Regional Office (9034), which is the benchmark site against which all modern temperature readings are compared for this city.
Perth Regional Office saw a sudden mean temperature increase of around 0.8C as of 1967.
Location 9034 was closed in 1992 when a new official Perth temperature site (Perth Metro – 9225) was set up four kilometres to the north (elevation 25 metres). Average maxima immediately increased in “Perth” by about .4C – thus the claims of record maximum temperatures by the BoM. However, average minima immediately fell by about 1.6C.
If you have a look at Perth’s historic temperature records and locations, you’ll be left wondering why the BoM wants to make any claims about comparative historic records in this capital city beyond those at its current (stationary) location established in 1992.
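
The rounding effect Chris describes at the top of his comment is easy to demonstrate with made-up numbers. The toy below isolates just the rounding part of the difference (it is not the BoM’s actual data or procedure, and month-length weighting adds a further effect not shown here).

```python
# Toy illustration only: averaging monthly means that were first rounded can
# differ from averaging the raw daily readings and rounding once at the end.
import numpy as np

rng = np.random.default_rng(0)
daily = 30.0 + rng.normal(0.0, 3.0, size=(3, 31))     # 3 invented summer "months"

rounded_monthly = np.round(daily.mean(axis=1), 1)      # round each monthly mean first
summer_from_rounded = rounded_monthly.mean()           # then average the rounded values
summer_from_daily = daily.mean()                       # average every daily value directly

print(f"from rounded monthly means: {summer_from_rounded:.3f}")
print(f"from raw daily values:      {summer_from_daily:.3f}")
# How large the gap gets depends on how and where the rounding is done.
```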

Layne Blanchard
May 18, 2011 8:14 pm

“We showed that the magnitude of the inter-site temperature differences strongly depends on wind speed and cloudiness. In general the temperature differences increase with decreasing wind speed and decreasing cloudiness. Site changes directly affect wind speed because they are usually accompanied by changes in sheltering.”
I can imagine, due to the particular arrangement of shelter, that the direction of wind may also be important. Wind from one direction having a wholly different effect than that from another.

Phil
May 18, 2011 8:57 pm

Sensor readings are suspect when sensors are placed in the boundary layer. Why should this engineering principle not apply to climate science?

sky
May 18, 2011 11:06 pm

Geoff Sherrington says:
May 18, 2011 at 6:24 pm
The “acceptable solution” is to work exclusively with long, intact, raw records from vetted small-town stations. Unfortunately this narrows the geographic coverage drastically and is vulnerable to charges of “cherry-picking” from bureaucratic minds intent on using ALL the data, no matter how corrupt. Working this way obtains wider uncertainty bands, but it circumvents the most egregious systematic bias.

sky
May 18, 2011 11:35 pm

Kev-in-Uk says:
May 18, 2011 at 4:18 pm
In the USA, Matthew Fontaine Maury’s 19th-century visionary plan of establishing a network of small-town stations for reliable monitoring of temperatures produced a national treasure that is being destroyed version by version in USHCN. Analysts may be permitted to do what they will with data compilations, but archivists have no right to manufacture/invent data by altering what was recorded by instruments. That should be made a federal crime.

May 19, 2011 2:02 am

Tonight NASA is hard at work:
http://oi55.tinypic.com/or822a.jpg
Creating serious ideas:
http://oi51.tinypic.com/20ho86e.jpg
[maybe ask Josh to join in? ~ac]

Alexander Vissers
May 19, 2011 4:25 am

The “De Bilt” site is next to / part of the Dutch Royal Meteorological Institute KNMI, of which Buys Ballot was the first governor and which has the longest record in the Netherlands. It has great historical and cultural value. Temperature readings are not biased they just are what they are, it is the conclusions drawn that are biased. Of course siting issues play a major role in an over-100-year record: in this time span the nearby IJsselmeer (lake) resulted from damming off the Zuiderzee (sea), and one third of the area of the former Zuiderzee was turned into new land (the Flevoland polder). In this centre of one of the most densely populated countries in the world, large surfaces of roads have been covered with tarmac, and the nearby city of Utrecht grew to several times its original size over the time span of the readings. It is not the temperature or the temperature readings that are inaccurate or biased or wrong; it is the assumption that the local data series is a useful data set for “global” warming analysis that is naïve and biased. It is refreshing that, despite the Dutch government’s CO2 doctrine and related restricted freedom of speech for the institute’s staff, such a report has been published.

May 19, 2011 6:09 am

The De Bilt record goes back much further than where NASA cuts it off. It’s one of only two very long-running thermometer records that actually form a little local hockey stick:
http://oi47.tinypic.com/2zgt4ly.jpg
Luckily, nobody’s ever heard of De Bilt.
The rest of the old records don’t show a recent spike:
http://oi52.tinypic.com/2agnous.jpg
“It is good to be learned in the things that are hidden from the wise and the intellectual ones of the world but are revealed, as if by nature, to the poor and simple, to women and little children.” – Vincent van Gogh (letter to Theo van Gogh, 1878)

May 19, 2011 8:07 am

This particular theme is probably repeated all over the world. It takes me back to the incident that first aroused my suspicions about the CAGW scam, when a Greenpeace activist and avid supporter of one G Monbiot attacked me (in the Guardian of London’s CIF, which I rarely read now) for asking whether standards existed for siting and reading instruments.
It’s quite timely that Chris Gillham posted yesterday about similar problems with the Perth (Aus) site(s).
That faint roaring noise we can hear in the background is the climate shibboleths falling, one after the other and in rapid succession.

May 19, 2011 10:43 am

JFD says: May 18, 2011 at 5:55 pm
I did a study of the De Bilt data set a couple of months ago. Using a linear curve fit to De Bilt (or any data set with breaks or steep step functions) is an incorrect procedure. The data can be broken down into segments and analysed with a linear fit or some other curve fitting method but a linear fit across steep step functions is not proper.
I disagree. A linear regression calculation is not a ‘curve fit’ like a polynomial or a running mean would be – it is simply a statistical method of determining the long term average trend of data over a given period. While I admire your detailed analysis and your general conclusions, I question both the usefulness of measuring a trend over a short time interval and also whether it is possible to draw meaningful conclusions on causes, a point you make yourself when you draw up a list only to express some doubt that it is possible to decide between them (but my bet is also on the AMO).
Eyeballing the temperature chart in the original article, there does indeed appear to have been a sudden anomalous drop in temperature that lasted (with the usual annual ups-and-downs) from the mid 1950s to the late 1980s. This then reversed around 1990, and carried on very much as it had before the mid-1950s. But, instead of seeing that as simply an extended low temperature period that subsequently corrected itself around 1990, the eye is instead easily tricked into seeing an ‘alarming’ climb all the way from the mid-1950s up to the present day.
To illustrate this psychological effect, I downloaded exactly the same data as you did and generated the following chart, containing three different linear trends for three different time spans as follows:
http://www.thetruthaboutclimatechange.org/KNMIDeBilt.html
1. The blue linear trend line over the full 130 year period shows a statistically insignificant fall of 0.06degC per century.
2. The grey linear trend line over the first 75 years period to 1955 shows a statistically insignificant rise of 0.1degC per century.
3. The red linear trend line for the 55 years from 1956 to 2010 shows a very alarming rise of 3.5degC per century.
The shorter red linear trend is clearly the anomaly, an artifact of the chosen method of analysis.
This demonstrates in a dramatic way how easy it is to get fooled into false alarmism by selecting a short term trend while ignoring the full long term linear trend of the data series. And, more importantly, an exactly similar thing has happened to the world temperature series, as is shown in the following chart:
http://www.thetruthaboutclimatechange.org/temps.html
The official world temperature data shows a decidedly un-alarming long term average trend of 0.41degC per century. But superimposed on this is a roughly sinusoidal cycle of around 67 years duration with an amplitude of plus and minus 0.25degC. Climate alarmism gathered pace during the last 35 years of the 20th century precisely because from 1964 to 1998 the world was on the upward swing of this cycle. The temperature rise over that period was 0.83degC. That’s equivalent to an alarming rise of about 2.4degC per century, quite close to the 3degC rise that some CAGW enthusiasts had been predicting from CO2 theory by 2100, but quite atypical when compared with the 160 year temperature trend of only 0.41degC.
During the 1980s and 1990s, climate alarmists began increasingly to believe that this late 20th century high rate of temperature rise was the clear ‘fingerprint’ of global warming they had been expecting. They assumed that it was set to continue at that rate until 2100. Disappointingly for them, the temperature rise tailed off subsequently and it now looks like it will be heading back towards maintaining a long term un-alarming average of around 0.41degC.
If this does happen, as most skeptics expect, the next decade will doubtless see the end of the alarmist bandwagon. If not, the skeptics will be the ones having to eat humble pie. It’s as simple as that.
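
Mark’s description of the global series – a modest linear trend with a roughly 67-year cycle on top – can be expressed as a simple model. The sketch below is purely illustrative; the 67-year period and the starting guesses are taken from the comment above, and `years` and `temps` are assumed placeholder arrays holding an annual series.

```python
# Sketch: fit a linear trend plus a fixed-period sinusoid to an annual series.
import numpy as np
from scipy.optimize import curve_fit

def trend_plus_cycle(t, a, b, amp, phase, period=67.0):
    """Linear trend (a + b*t) plus a sinusoid with the given period in years."""
    return a + b * t + amp * np.sin(2.0 * np.pi * (t - phase) / period)

# Usage (curve_fit leaves `period` at its default, fitting a, b, amp, phase):
# params, _ = curve_fit(trend_plus_cycle, years, temps,
#                       p0=[temps.mean(), 0.004, 0.25, 1940.0])
# 100 * params[1] gives the underlying trend in deg C per century;
# params[2] is the amplitude of the multidecadal cycle.
```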

sky
May 19, 2011 3:39 pm

Alexander Vissers says:
May 19, 2011 at 4:25 am
“Temperature readings are not biased they just are what they are, it is the conclusions drawn that are biased. ”
You’re entirely correct. The assumption that a particular station record is representative of temperature variations over a much larger area than just the instrument site proves unwarranted in many cases. The systematic bias that I refer to is relative to temperatures in that larger area.

May 20, 2011 1:31 am

KNMI immediately corrected the De Bilt temperature data as soon as Meteo Consult raised the alarm. Although the whole matter left KNMI red-faced, they generally deliver reliable data, particularly compared with the GISS temperature data shown in this article. The suggestion that KNMI played false to increase the De Bilt temperature can be ignored: the mean monthly temperature in De Bilt did not show any increase during the last 14 years; the linear trend was zero. See:
http://www.klimaatgek.nl/cms/?De_feiten:Temperatuur_De_Bilt

John Brookes
May 23, 2011 5:26 am

Chris Gillham, I liked your article about the Perth temperature records. It must be very difficult to manage the transition from one weather station to another. I’m struck by a few things in Perth’s recent weather history. Firstly, we don’t seem to get the same really hot days we used to. Days over 42C are few and far between. Secondly, we do get a lot of very warm nights in summer. The winters should be colder, since it gets freezing when it doesn’t rain, and it doesn’t rain much these days. (Freezing in Perth is any time the minimum temperature drops below 5C (~40F) – we don’t really have cold weather).
I guess the important thing is to get as broad a range of weather stations as possible, so that errors here cancel out errors there, giving a reliable average.