Little Ice Age Thermometers – History and Reliability

[Image: thermometers of the 19th century. Source: Wikipedia]

This is an excerpt of a larger document by Tony Brown (Tony B on WUWT and other blogs) that he will be happy to make available. He writes:

Some months ago I passed you a preliminary study of mine into historic temperatures and their accuracy. The latest version is attached. It seems to me that taking an extract from the start to the end of Section 2 would be highly relevant to the debate. I have come to the reluctant conclusion that we cannot rely on the historic record for anything other than a general direction of travel (we are probably a little warmer than we were 350 years ago). It is quite impossible to parse the data to tenths of a degree.

As I say, if people would like to read the rest of the document I will email it to them but would be grateful for any comments, especially if they have a particular knowledge of the topic. My address is tonyAtclimatereason dot com.

Part Two – The Rumsfeld Factor

“There are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don’t know. But there are also unknown unknowns; there are things we do not know we don’t know.”

Foreword

Of course, Donald Rumsfeld was not specifically referring to climate science back in 2002, yet there can be few other disciplines so riven with uncertainties from top to bottom that are still able to attract voluble proponents enthusiastically promoting the latest findings as incontrovertible facts to a world largely unable to question the work of scientists.

This article, the second in a three-part series entitled ‘Little Ice Age Thermometers – History and Reliability’, examines some of the many uncertainties in climate science and discusses the means by which our modern temperature records, surface and sea, have come about, as they are a useful proxy for many of the other measurements we are now asked to take for granted, such as sea levels and Arctic ice, which will be examined in Part 3 of this series.

This article assumes the reader has some general knowledge of the subject without being an expert, and hopes they will be prepared to undertake the work necessary to come to a better understanding of this complex subject by examining, with an open mind, some of the referenced material contained here.

To help the reader understand how we arrived at this stage, it would be useful to read Part 1 in this series, which dealt with the basics by examining the development of the means of recording weather events. This took us on a journey from the thermoscope of the Ancient Greeks through to the creation of the Stevenson screen, by way of Romans, Russianized Vikings and 1,200-year-old Byzantine accounts of great icebergs hitting the walls of Constantinople, with a large dose of contemporary scientific studies mixed in.

http://wattsupwiththat.com/2009/11/14/little-ice-age-thermometers-%e2%80%93-history-and-reliability/

In this second article we re-examine related events concerning the 1850/1880 CRU/GISS temperature records, pay particular attention to the reliability of the readings that have become the basis of our modern climate industry, examine those who carried out the original observations, and look at the circumstances under which the data were collected.

This should help answer the question as to whether the temperature readings, viewed through the harsh prism of history and context, are a reliable record from which governments can draw definitive conclusions that will affect global policy.

Section One – Unknown Knowns and Known Unknowns

There are many factors influencing the production of accurate records; some of the key ones were identified in Part 1 of this series, and others will be introduced here for the first time.

Until very recently, before the widespread advent of automatic weather stations (AWS) in the 1980s and of digital recording some years later, obtaining an accurate manual reading from an instrument was highly problematic, being reliant on numerous variable factors, any of which might cause concern over the reliability of the end result.

The skill and diligence of the observer were of course paramount, as were the quality of the instrumentation and the use of a consistent methodology, but this did not prevent the numerous variables conspiring to make the end result, an accurate daily temperature reading, very difficult to obtain. Indeed, the errors inherent in taking measurements are often greater than the amounts being measured.

Many of these basic concerns can be seen in a contemporary description, from a 1903 book, of how temperature recordings of the time were handled. The “Handbook of Climatology” by Dr Julius von Hann (b. 23 March 1839, d. 1 October 1921) contains the sometimes acerbic observations of this Austrian, who is considered the ‘Father of Meteorology’.

The book touches on many fascinating aspects of the science of climatology at the time, although here we will restrict ourselves to observations on land temperatures. (It can be read in a number of formats, listed on the left of the page at the link below.)

http://www.archive.org/details/pt1hanhdbookofcli00hannuoft

This material is taken from Chapter 6, which describes how mean daily temperatures are taken:

“If the mean is derived from frequent observations made during the daytime only, as is still often the case, the resulting mean is too high…a station whose mean is obtained in this way seems much warmer with reference to other stations than it really is and erroneous conclusions are therefore drawn on its climate, thus (for example) the mean annual temperature of Rome was given as 16.4°C by a seemingly trustworthy Italian authority, while it is really 15.5°C.”

That readings should have been routinely taken in this manner as late as the 1900s, even in major European centers, is somewhat surprising.

There are numerous veiled criticisms in this vein:

“…the means derived from the daily extremes (max and min readings) also give values which are somewhat too high, the difference being about 0.4°C in the majority of climates throughout the year.”

Other complaints made by Dr von Hann include this comment concerning the manner in which temperatures are observed:

“…the combination of (readings at) 8am, 2pm, and 8pm, which has unfortunately become quite generally adopted, is not satisfactory because the mean of (8 + 2 + 8) divided by 3 is much too high in summer.”

And: “…observation hours which do not vary are always much to be preferred.”

That the British, and presumably those countries influenced by them, had habits of which he did not approve demonstrates the inconsistency of methodology between countries, cultures and amateurs/professionals.

The book was published more than 20 years after the establishment of the US weather service, the year from which James Hansen commenced his global GISS records (see Section Five).

Dr von Hann seems to have pinpointed the remarkable coincidence, also observed by this author in an article referenced later, whereby GISS started recording just as a sharp downturn in temperatures came about, thereby exaggerating the subsequent upturn. Von Hann gives the reading in Washington DC in January 1880 as 5.5°C, and a year later 2.4°C.

https://i0.wp.com/data.giss.nasa.gov/gistemp/graphs/Fig.A.gif?resize=600%2C402

http://data.giss.nasa.gov/gistemp/graphs/

It seems that Dr von Hann did not like the habit of believing that results are so accurate that they can be parsed to fractions of a degree (a practice that continues to this day), and he makes the point that even long observations of monthly means are untrustworthy in regions where they vary greatly year by year.

It is on the Urban Heat Island effect (UHI) that his observations become especially pertinent to modern-day conditions. UHI is tackled in Section Three, but briefly, he observed that temperatures were routinely around 1 degree C (around 1.8F) higher in cities than in rural areas, and he cites the United States, where differences of 2.8 to 15 degrees F are noted between the voluntary observers in rural areas and the paid ones in the adjacent cities. He makes the perhaps obvious suggestion that stations would be better placed near cities than in them.

A listing of the many factors likely to affect the accuracy of readings, including UHI, is given in Section Two, ‘Compendium of Uncertainties’, although no doubt many as yet ‘unknown unknowns’ can be added to it.

Section Two – Compendium of Uncertainties

This is an appropriate stage at which to provide a compendium of the various factors that might impact the accuracy of the basic data, in this case the land thermometer reading.

A recognisable thermometer became available in the 1650s, and instrumental records commenced around then in Italy and Britain. The most famous is the series from which the Central England Temperature (CET) record derives, which commenced in 1659. The thermometer rapidly evolved into an expensive precision scientific instrument (within the context of its inherent limitations).

The device spread quickly around the developed world. Monarchs and universities considered it prestigious (and scientifically valuable) to take temperatures as one of a number of weather parameters, and we have many of their records to this day. Because of the cost and prestige factors, in its early days such instruments were generally used by trained observers.

* The nature of its construction and materials meant that its inherent accuracy was no better than approximately plus or minus 1 degree F (around 0.5 degrees C), an accuracy concern that prevailed until relatively recently (see Note 2).

* The limited accuracy of the instrument could be compounded by the methods by which it was housed and subsequently read. There was a vogue in the early days for placing instruments in unheated north-facing rooms, and even in metal cages attached to the north wall of a building, often on the first floor. This is far higher, and under very different conditions, than current standards permit, so like-for-like comparison is not possible without considerable adjustments. ‘Adjustments’ is a favourite word in the Climate Science Dictionary.

* The construction of the magnifying glass, through which the indicator mark of the instrument would be read, was often imperfect and caused distortions.

* Similarly, depending on the coarseness of the graduations of the temperature gauge, exact readings might be compromised.

* These last two factors are important considerations, as readings might be rounded up or down to the nearest whole degree; the sketch below illustrates the consequences.
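A minimal sketch of the rounding problem, with invented figures: when a run of true values happens to sit on the same side of the half-degree mark, every reading rounds the same way, and the error does not average out.

```python
# A minimal sketch (invented figures): rounding to the nearest whole
# degree adds up to +/-0.5 degrees of quantisation error per reading,
# and that error only averages away if it is independent from day to day.
readings = [15.3, 15.4, 15.2, 15.4, 15.3]   # hypothetical true values
rounded = [round(r) for r in readings]       # what the observer writes down

print(sum(readings) / len(readings))         # 15.32 (true mean)
print(sum(rounded) / len(rounded))           # 15.0  (a 0.32 degree bias)
```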

* The invention of the minimum/maximum thermometer in 1780 by James Six was an important milestone.

http://brunelleschi.imss.fi.it/museum/esim.asp?c=410041

Up to this time, measurements were supposed to be taken at least three times in a 24-hour period, so until this device became universally adopted neither the actual maximum nor the actual minimum temperature would necessarily be recorded, unless it coincided with the exact time of a reading. However, measurements were still taken routinely at various inconsistent times of the day, even into the 20th century, and mean temperatures were calculated in different manners, so in practice the instrument took many years to come into common use. Modern analysis always assumes it was reset each day and that one day’s reading was not carried over to the following one.
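To see why that reset assumption matters, here is a minimal sketch with synthetic figures: the index marker of a Six-type instrument only ever moves upward, so if the observer forgets to shake it down, one warm day’s maximum masquerades as the maximum of every following day.

```python
# A minimal sketch of max-marker carry-over in a Six-type max/min
# thermometer (the daily highs are invented; the point is the reset).
daily_highs = [18.0, 25.0, 17.0, 16.0]  # true maxima on four successive days

def recorded_maxima(highs, reset_each_day=True):
    records, marker = [], float("-inf")
    for t in highs:
        marker = max(marker, t)          # the index marker only moves up
        records.append(marker)
        if reset_each_day:
            marker = float("-inf")       # observer shakes the marker down
    return records

print(recorded_maxima(daily_highs, True))   # [18.0, 25.0, 17.0, 16.0]
print(recorded_maxima(daily_highs, False))  # [18.0, 25.0, 25.0, 25.0]
```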

* The Urban Heat Island effect (UHI) had the potential to greatly distort the readings of individual stations. (See Section Three.)

* The creation of the Stevenson screen in the latter years of the 19th Century helped to standardize the diverse conditions under which readings were taken, but was not in universal use until the first or second decade of the 20th century.

* Methodology remained inconsistent, and various crucial factors such as a standardized height for a properly calibrated and screened instrument are a relatively modern innovation.

* Whilst modern automatic weather stations, introduced from around 1980, have removed many of the human frailties, and their instrumentation has the potential to be generally accurate, to this day problems arise with inappropriate siting, some examples of which can be seen at this site:

http://www.surfacestations.org/odd_sites.htm

and here;

http://wattsupwiththat.com/2011/01/16/the-past-is-not-what-it-used-to-be-gw-tiger-tale/#more-31814

Extract: “Why are the stations so close to artificial heat sources? Well, fifty or more years ago, all the readings were taken manually by volunteer observers once a day. Some volunteers were not about to walk the length of a football field to do so. Even as automatic reporting stations were introduced, the stations had to be close to buildings so the data cable could be run to the display. Even though the originally specified maximum cable distance was 1/4 mile, most automated COOP observer MMTS sensors ended up within 10 meters (33 feet) of the building, mostly due to the inability of the NWS to trench under driveways and sidewalks which acted as barriers to putting the temperature sensor in open spaces.” (So what with Stevenson screens and the AWS, recording devices had a tendency over the years to move closer to buildings, which are sources of artificial heat.)

* The majority of readings were taken in the Northern Hemisphere and records are biased towards this.

* Many records of the time are incomplete for a variety of reasons, war and the death of the incumbent observer being but two, and these data omissions may be ‘interpolated’ (another favourite word of Climate Science) and then ‘adjusted’ many years later by modern computer methods. This is an unfortunate exercise, as temperatures can vary greatly day by day, and inventing figures does not mean they are the correct invented figures, as Dr von Hann observed over a century ago.

* There is a very small number of stations worldwide, and their numbers and locations continually fluctuate, making like-for-like comparison difficult.

* Instruments were fixed at inconsistent heights. Readings change considerably even with modest height changes.

* How tall was the observer, and were they viewing the instrument straight on or at an angle? Was it an alcohol or a mercury instrument, whose menisci curve in opposite directions? How often did the observer change? Were they trained?

* Was there moisture or snow on the instrument?

* Was data ‘invented’ by the observer due to inclement weather or their other duties?

* Were the measurements translated accurately from one scale to another, for example from Réaumur degrees to Fahrenheit and then to Centigrade?

* What was the defined accuracy of the thermometer claimed by the manufacturer (likely to be around plus or minus 1 degree)? Was it subsequently recalibrated to maintain this accuracy level? Were subsequent thermometers at the same location bought from a different maker with different standards?

* Were the thermometers properly screened?

* Was there an adequate free flow of air round the bulb or was it restricted by fixing the instrument to a wall or the screen?

* Was the instrument set above bare ground, grass, tarmac, stone-all of which will affect results?

* Did it subsequently become affected by shade from the growth of trees or removal of an object which allowed more wind or the sun to reach an unscreened thermometer?

* Was the instrument always in the same location? There is a history of instruments migrating from the microclimate of a field in one part of a city to a warm airport many miles away, representing a completely different microclimate.

* Many of the cities with the longest temperature records, generally in Europe and North America, were industrialising rapidly as the thermometer came into widespread use in the 1700s. Smog caused by the burning of coal, wood, and later gas became increasingly widespread. Sunshine levels in the UK are said to be 40% higher now than during the worst years in London, which culminated in the killer smog of 1952 and prompted the various Clean Air Acts. Pollutants are said to create a cooling effect, and it is easy to understand that foggy urban areas were likely to be substantially cooler than if the sun was shining. Inversions caused by these layers of air would also have helped to create temperatures that were vastly different, mostly lower, than they might otherwise have been. What effect this had on the overall temperature record over the centuries, in the many cities that smog affected to a greater or lesser degree, is impossible to calculate, but it must have been significant.

Perversely, smog became a tourist attraction and many artists flocked to great cities such as London to observe and paint the effects it caused.

Waterloo Bridge, London, in 1900, by Monet, showing chimneys and smog: http://www.artnet.com/Magazine/features/nkarlins/karlins7-7-04.asp

An account of the historical development of smog is given here:

http://en.wikipedia.org/wiki/Pollution

* Dr von Hann also expressed a number of other concerns, already cited in the previous section. Blogger Adam Soereg echoes those reservations with his modern-day take on those 1903 observations:

“Between 1780 and 1870, Hungarian sites observed the outdoor temperature at 7-14-21h, 6-14-22h or 8-15-22h Local Time, depending on location. How can anyone compare these early readings with contemporary climatological data? (The National Met. Service defines the daily mean temperature as the average of 24 hourly observations.) The average annual difference between 7-14-21h LT and 24-hr readings, calculated from over a million automatic measurements, is -0.283°C. This old technique causes a warm bias, which is most pronounced in early summer (-0.6°C in June) and negligible in late winter/early spring. Monthly adjustments are within 0.0 and -0.6°C. The accuracy of these adjustments is different in each month; the 1-sigma standard error varies between 0.109 and 0.182°C. Instead of a single value, we can only define an interval for each historical monthly and annual mean.”
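The warm bias Soereg quantifies can be reproduced in miniature. The sketch below assumes an idealised sinusoidal diurnal cycle peaking at 14h (the 15°C mean and 5°C amplitude are invented figures): because the 7-14-21h scheme samples the afternoon peak but never the cool pre-dawn hours, its mean reads high. A pure sinusoid exaggerates the effect relative to Soereg’s real-data figure of 0.283°C, which is itself the point: the size of the bias depends on the shape of the local diurnal curve.

```python
# A minimal sketch of the fixed-hour warm bias von Hann and Soereg
# describe, assuming an idealised sinusoidal diurnal cycle (figures invented).
import math

def temp_at(hour, mean=15.0, amplitude=5.0, peak_hour=14):
    return mean + amplitude * math.cos(2 * math.pi * (hour - peak_hour) / 24)

true_mean = sum(temp_at(h) for h in range(24)) / 24       # 24-hour average
fixed_mean = sum(temp_at(h) for h in (7, 14, 21)) / 3     # 7-14-21h scheme

print(f"24-hour mean:  {true_mean:.2f} C")
print(f"7-14-21h mean: {fixed_mean:.2f} C")
print(f"warm bias:     {fixed_mean - true_mean:+.2f} C")
```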

Summary:

Thermometers were only designed to approximately measure the microclimate immediately around them, but the relationship of the readings to their original microclimate often became lost as the decades and centuries passed, and thermometers migrated from cool open fields to warm airports many miles away. Throughout the history of instrumental records one can readily see the growing need for a more consistent methodology that would enable measurements to be compiled on a like-for-like basis. The invention of the Stevenson screen and the maximum/minimum thermometer were arguably the first steps towards the standardisation of readings. However, these events coincided with the thermometer becoming a cheap, mass-produced item instead of a precision scientific instrument of such cost and prestige that it was generally only used by a qualified observer.

Consequently, what could previously be considered a ‘scientific’ observation (with all the numerous caveats) lost part of that status once the great expansion of the weather station network commenced in the 1880s. Less skilled observers, using a cheaper product in different circumstances, were almost certain to come up with figures inconsistent with those of their predecessors, and comparisons became difficult, even into modern times. As was commented on earlier, full-time paid professionals often had different standards from volunteers, or those paid a retainer.

Consistent reliability of readings, through quality of instrumentation and methodology (height, screening, correct times of observation, etc.), could not be guaranteed until the advent of the Automatic Weather Station in the 1980s, but even then some of these have arguably been compromised by concerns over siting.

None of this is to say that many original observations were not made with great diligence and skill, just that there are so many variable parameters affecting the accuracy of a reading that a direct comparison with today’s values is impossible. To believe we have a highly accurate database of even individual records that can be parsed to fractions of a degree is an illusion, and this uncertainty is multiplied many times when considering the accuracy of a ‘global’ temperature.

I will leave it to Dr Floor Anthoni to sum up the preceding information, in a somewhat tongue-in-cheek manner, in a short article on temperature reading errors:

http://www.seafriends.org.nz/issues/global/climate3.htm#Ocean_temperature_measurement

“Suppose we have stations with the finest thermometers inside the most standard Stevenson screens and located in rural areas, away from urban disturbances; then surely, readings must always be accurate? They are not, for various reasons:

Readings are done by humans. It involves going out in the rain, snow and sleet to the remotely placed weather station. There the finely scaled thermometers must be read to within 0.1 degrees, with fogging spectacles and suchlike. The data must be written up with a pen that won’t work on soggy paper, etc. So shortcuts are taken.

  • Let’s skip today because it is much like yesterday and we’ll use those figures instead.

  • John is sick and no-one else can do it today

  • Who will do it during the summer holidays?

  • The broken thermometer has still not been replaced.

  • Etc. (Added by Anthony: there are polar bears outside and I don’t want to risk my life for a temperature reading. See: Fabricating Temperatures on the DEW Line)

There can also be a bias caused by the time at which the reading is done. Air warms up during the day and is warmest a couple of hours after mid-day. During the night it cools, and it is coolest just before dawn. So in the morning one reads the maximum of the previous day and the minimum of today. Are these two noted down for the same date? In the afternoon the reading shows today’s maximum and today’s minimum.

In addition to these problems, there are more serious ones related to location:

  • Temperature decreases with height at the standard lapse rate of 0.6ºC per 100m altitude, but this is not always true.
  • Stations located near the sea measure sea temperature during sea winds and land temperature during land winds, with usually a large difference between them. What do we want to measure? Air temperature over land or sea temperature?

The upshot of all this is that a large number of sites and observations are needed to even out reading errors, but one can never truly correct for UHI, altitude and distance to the sea.”
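The altitude point in Anthoni’s list is easy to put numbers on. A minimal sketch, assuming the textbook constant lapse rate of 0.6°C per 100 m (which, as he notes, is not always true, so the ‘correction’ is itself approximate):

```python
# A minimal sketch of the standard lapse-rate altitude adjustment
# (assumes a constant 0.0065 C/m; real lapse rates vary with weather,
# season and inversions, so this is an idealisation).
LAPSE_RATE_C_PER_M = 0.0065

def reduce_to_sea_level(t_station_c, altitude_m):
    """Estimate the equivalent sea-level temperature for a station reading."""
    return t_station_c + LAPSE_RATE_C_PER_M * altitude_m

# Example: a 12.0 C reading at a hypothetical 450 m station maps to
# roughly 14.9 C at sea level, an adjustment far larger than the
# tenth-of-a-degree precision claimed for historical records.
print(reduce_to_sea_level(12.0, 450.0))
```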

Technical references:

The 1903 book by Dr von Hann, already referenced above, is linked below.

http://www.archive.org/details/pt1hanhdbookofcli00hannuoft

Citing it again provides the opportunity to comment that exactly the same concerns over accuracy and context that he expressed over a century ago are still of great relevance today, as the four subsequent papers demonstrate.

This interesting article with useful illustrations provides a practical tutorial on the accuracy of the thermometer.

http://pugshoes.blogspot.com/2010/10/metrology.html

Extracts and discussions from a new peer-reviewed paper on uncertainties in the global record.

http://noconsensus.wordpress.com/2011/01/20/what-evidence-for-unprecedented-warming/#more-11278

This 2005 paper is headed ‘Uncertainty estimates in regional and global observed temperature changes – a new dataset from 1850’.

http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT3_accepted.pdf

Phil Jones, amongst others, was involved in this attempt to make comparisons between modern and historic temperature readings:

http://www.springerlink.com/content/g111046235jnv572/

51 Comments
Jeff Carlson
May 23, 2011 7:56 am

I would think we could build all future automatic sites with solar power and wireless data transmission to allow for optimal siting … the technology is now off the shelf and may actually be cheaper to install than cabled data and power installations … (trenching can be expensive for long distances)

Ken Smith
May 23, 2011 8:02 am

“If we can find out how far the understanding can extend its view; how far it has faculties to attain certainty; and in what cases it can only judge and guess, we may learn to content ourselves with what is attainable by us in this state.”
John Locke, quoted in William Lawhead, _Voyage of Discovery_, 1st ed., p. 296.
Discovering the limitations of our knowledge (and that of our experts) is the first step toward enlightenment.

Curiousgeorge
May 23, 2011 8:02 am

It might be of use for readers to also review the applicable ISO standards. http://www.iso.org/iso/iso_catalogue/catalogue_ics/catalogue_ics_browse.htm?ICS1=17&ICS2=200&ICS3=20 . Unfortunately, these standards did not exist until relatively recently, or perhaps we could have a bit more faith in antique records of temperature.

Latitude
May 23, 2011 8:14 am

Tony, are you saying that +300 year old thermometers were not accurate to 1/100th of a degree? and that it’s just a coincidence that thermometers were invented right at the end of the LIA when temperatures started getting back to “normal”……………/snarc

Spen
May 23, 2011 8:34 am

Question.
If the accuracy of temperature readings is +/-0.5 deg., is the average of any set of readings (say 10 or 10 million) also restricted to +/-0.5 deg.?

Gary Pearse
May 23, 2011 8:35 am

I trust the old records were also adjusted for standard time. Prior to the 1880s, the time of high noon varied from place to place by 1 minute per 18 km of east-west difference in distance. It was the spread of railway systems that made standardisation essential and, although a few others also sought to resolve the problems created for railway time schedules, Sandford Fleming, builder of the Canadian Pacific Railway, convened an international meeting at which the present 24 time zones centred on Greenwich were internationally adopted.
http://bing.search.sympatico.ca/?q=invention%20of%20standard%20time&mkt=en-ca&setLang=en-CA
The “heat of the day”, therefore is 22 minutes later in Toronto than in Ottawa.

P.F.
May 23, 2011 8:49 am

Are precise temperature data from a few hundred years ago all that relevant or necessary, except in the context of local events? For the larger picture, some sort of smoothed reference is more important than wildly fluctuating daily recordings covering only a small amount of territory. Sea level strikes me as a more reasonable measure of past climate conditions and the present trend on a global scale.
I’ve found the work of Rhodes Fairbridge in the early 1970s and the more recent work by Bilal Haq particularly useful in getting a sense of what used to be. I compare that to the current state and I’m able to decide if global warming is even occurring or if it’s something about which to be concerned. (Results: we’re cooler now than during most of the Late Holocene and what rise there was in the 1990s was well within natural variations of the period. That forms the anchor of my AGW skepticism and no alarmist has been able to refute the results.)

Theo Goodwin
May 23, 2011 9:03 am

According to Warmista theory, CO2 in the atmosphere acts like a blanket and slows Earth’s cooling but cannot heat Earth. So, new daytime highs cannot be the result of warming caused by CO2. So, Warmista cannot say that new daytime highs are caused by CO2 and they must agree that new highs are caused by something other than CO2. Therefore, Warmista must adjust downwards temperature readings of new highs to reflect the fact that they could not have been caused by CO2. Warmista must adjust downward all historical data of new highs taken from the decades when, according to them, manmade CO2 had become the primary driver of climate change.
So, when recording temperatures, Warmista cannot use daytime high readings that exceed readings from a decade
Doesn’t this mean that Warmista must adjust downward all daytime high readings that exceed readings from decades before CO2

Tom T
May 23, 2011 9:07 am

I have never understood why we ever thought we could use a system set up to measure local weather to measure global climate. It is like using a one-foot ruler to measure the distance from L.A. to New York.

oeman50
May 23, 2011 9:10 am

I have long maintained (to any one who will listen) that any discussion of temperature when applied to climate should contain error bars or a +/- tolerance. That’s so when it is claimed “this was the hottest month/year/decade, etc.” by 0.01 C, people will know this is just a claim that cannot be backed up with data. When your tolerance is +/- 0.5 C before you even do any calculations, that will tell you the 0.01 C delta T is not significant. This is a “known known.”
I remember the first time I was required to do an error analysis on data gathered during a chemistry lab. What an eye opener! You mean I have to put all of the errors into the equations, even multiplying them? Yes, I had to, and all of the errors quickly added up. I did not like it because this implied my lab work was not as good as I wanted it to be. Hmmm, sounds familiar…….

tallbloke
May 23, 2011 9:11 am

Nice read, thanks Tony!

Latitude
May 23, 2011 9:29 am

oeman50 says:
May 23, 2011 at 9:10 am
=================================
You can’t put error bars in this…………….
The error is larger than the temperatures they claim.

John F. Hultquist
May 23, 2011 9:29 am

tonyb,
I only had time for a fast look at this. It seems fascinating. Some parts, of course, I have read as you earlier posted results of your historical searching. So, I will (later) give it the attention it deserves. As the day has warmed enough to work outside I need to go – I’m way behind from being chased indoors by cold, wind, and rain — in Washington State east of the Cascade Mountains.

May 23, 2011 9:35 am

Thanks Tony, you have put the errors that can arise in the technical aspect of measuring temperature into perspective. Anybody who has ever made the same item a great number of times knows that variations from specs wander in, then wander out again. No action, even reading a dial or a gauge, is easy or simple to standardise over multiple readings over time. I don’t believe accounts of temperatures recorded to within an accuracy of 1%, and I believe the supposed warming over most land masses during the past century is within the bounds of noise.

Mike M
May 23, 2011 9:37 am

“There are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don’t know. But there are also unknown unknowns; there are things we do not know we don’t know.”
Hold on Donald, in tribute to Murphy you have to include – things that we don’t know that we know.

JLawson
May 23, 2011 9:38 am

I’ve seen alarmists arguing that it’s possible to take temperature readings that are up to 3-4c off and by homogenizing the data over multiple sites it’s possible to get an accurate temperature for the whole area.
My response is “WTF?” You CAN’T mix in bad data with good, run it through a metaphorical blender, and not have it affect the output!
Maybe it’s just my time doing geodetic survey, but if you’ve got crap readings to start with you can’t ‘correct’ them into something you can use for high precision surveying. To take a temperature record rife with errors, insist it’s accurate down to a tenth of a degree, and then insist that computer models made with that data are accurate enough to force a complete restructuring of the world’s economy is just plain idiocy.

Ryan Welch
May 23, 2011 9:43 am

This is a fascinating post Anthony and I really appreciate all you have done to enlighten people with respect to science in general and the climate debate specifically.
Although I am not a climate scientist I have been following the debate quite closely for about four years and during that time I have read many hundreds of articles regarding climate science. My educational process has moved me from being an AGW “believer” (really only for a month or so) to the point where I am firmly placed in the “skeptic” camp and WUWT has played a key role in that education.
Your surface stations project helped me to understand the problems with measuring temperature and this article just accentuates that point. What I have come to realize is that the more we know about climate science, the more we know that we don’t know.

Scottish Sceptic
May 23, 2011 9:45 am

I was told by an old lighthouse keeper of another keeper who had a heart condition. He struggled just to climb the steps to the light. He could not also make his way down just to take a thermometer reading at the prescribed time. Instead he would simply make up the readings and send them off to the Met Office.
This is just typical for any working environment. I used to work in a factory where readings were taken once an hour using an analogue meter which wobbled about … as if by magic every single one was the same. The operators simply couldn’t understand that if one person had taken the reading then obviously the next one ought to be the same, so they just checked the speed and then recorded the “proper” reading.
I’ve no doubt many many readings were missed and made up later or just simply forged. That’s the real world – one which the climate “scientists” don’t seem to inhabit.

Alan S. Blue
May 23, 2011 9:45 am

One of the complications that is repeatedly left off the list of complications involves making a formal distinction between the instrumental error associated with a standard point-source thermometer and the value we’re actually trying to observe: average gridcell temperature.
IOW: The argument over what, exactly, the precision of the instrument is when confounded by tall observers or differing standard procedures is probably swamped by the accuracy errors associated with the geographic extrapolation.
A precise, accurate, “±0.1C” thermometer with wet/dry bulb correction at a CRN1 site simply doesn’t measure either “The gridcell temperature to ±0.1C” or “The gridcell anomaly to ±0.1C”
It provides a site reading – which can be higher, lower, higher-on-cloudy-days, equal-on-blustery-days, lower-all-winter, or whatever than the “gridcell temperature” they’re actually being used to determine.
But the instrumental error sure looks like it is the error that ends up being propagated in the calculations, and that gives the entire surface record a vastly understated set of error bars.

May 23, 2011 9:47 am

I’m glad you noted the importance of having a regular time of observation. The time of observation correction is one of the most important corrections made to the record.
oeman50: Yes, the error bars are important. For example, CRUTEM reports 1850 as -0.423C +/-0.02C. Some people construe this report in a wrong-headed fashion, thinking that a claim of 3 digits of significance is being made. It might be an interesting experiment to create an artificial temperature series. Assume a thermometer with a large recording error. Sample that series every day for a year. Tell us what you get for the error bars for 365 measures.
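That proposed experiment is simple enough to sketch. The series, error size and bias below are all invented for illustration: with a purely random reading error, the error of the annual mean shrinks roughly as 1/√N, which is where tight CRUTEM-style error bars come from; but add a constant siting or calibration bias of the kind catalogued in Section Two and no amount of averaging removes it.

```python
# A minimal sketch of the experiment proposed above (all figures invented).
import numpy as np

rng = np.random.default_rng(42)
days = 365
true = 10 + 8 * np.sin(2 * np.pi * np.arange(days) / 365)  # synthetic annual cycle

# Case 1: purely random +/-1 C (1-sigma) reading error; the error of the
# mean shrinks roughly as 1/sqrt(N).
readings = true + rng.normal(0.0, 1.0, days)
print(f"random error only: mean offset {(readings - true).mean():+.3f} C "
      f"(expected sigma ~ {1 / np.sqrt(days):.3f} C)")

# Case 2: the same random error plus a constant 0.5 C bias; averaging
# 365 readings does nothing to remove it.
biased = readings + 0.5
print(f"plus 0.5 C bias:   mean offset {(biased - true).mean():+.3f} C")
```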

Vince Causey
May 23, 2011 9:54 am

Measuring the world’s average temperature to tenths of a degree is like trying to measure the ‘height’ of the sea using a tide gauge in rough weather. No matter how many readings you take and how many ‘smoothings’ you make, an impartial observer will quickly conclude that your results are meaningless. The best guide is anecdotal evidence, and from that we can infer that today’s climate is a little warmer than 100 years ago.

Alan S. Blue
May 23, 2011 10:13 am

“Assume a thermometer with a large recording error. Sample that series every day for a year. tell us what you get for the error bars for 365 measures.”
Hm. I have a thermometer with large systemic error bars. It has reasonable precision when calibrated to a standard though.
Though I haven’t actually done the measurements, the error bars I expect would be exceedingly tight – and completely bogus. Because we’re talking about my thermostat’s ‘inside temperature’ reading.

Paul Murphy
May 23, 2011 10:23 am

Nice work –
Now let me add two things:
1 – averaging doesn’t reduce error. My mistake plus his mistake does not an accuracy make…
2 – even if we had accurate (i.e. well-bounded estimates) readings for specific locations spanning some region, we have no theoretical basis on which to base a transform from those readings to a regional measure.
So not only don’t we know much about climate in the last centuries, we don’t know much about it in this century either.

geo
May 23, 2011 10:36 am

I am reminded of a story that was told me by a security guard at a sugar beet processing facility in North Dakota (or was it South Dakota? One of those, I’d have to check my records) that I visited as a surfacestations volunteer.
The Stevenson screen was at one end of this giant asphalt parking lot for the employees, maybe 20′ from the guard shack. It was in a little strip of grass tho, and there were mature trees planted along that strip that was shading the Stevenson screen.
What was the net-net of all that asphalt vs the trees? Heckifino.
Anyway, they had noticed (she said) that the min-max thermometer was getting cloudy and hard to read, so they asked for a new one, and one wasn’t forthcoming. So they bought their own thermometer (I seem to recall it having a TWC logo on it) with a remote read out and put it in the screen instead. From her description this all happened around Jan/Feb of that year that I was there.
Now, the suspicious type might wonder if the added advantage of the new thermometer that they purchased themselves having a remote read-out that allowed them to not exit the guard shack in the middle of deep Dakota winter had something to do with their feeling about how unreliable the old officially-issued thermometer was.
I won’t offer an opinion on the matter, tho of course the thought did cross my mind when she told me when the switch was made.
Anyway, did the switch in thermometers have an impact on the record at that site? Dunno.
But the human factor is always with us, and should never be ignored.

geo
May 23, 2011 10:59 am

Ah, Hillsboro, ND. That was it. . . I was there in late June of 2009, and that would put the thermometer switch around January of 2009, if anyone cares to take a look at their station record for funsies.

Genghis
May 23, 2011 11:02 am

It seems to me that what is important is the trend, in other words the slope.
If that is the case, then it doesn’t really matter what the absolute temperature is, or the errors ( as long as they are somewhat consistent).
Mann tried to show that the slope was flat until after the 1960’s, then we had a slope upwards. If Mann’s flat slope for hundreds of years and the upslope of the 80’s was correct, that would prove CO2 induced warming, if there was a direct correlation between CO2 levels and warming.
Conversely if the warming slope in the 30’s was anywhere close to the warming slope in the 80’s that would falsify the CO2 hypothesis. Also the recent flat slope falsifies the CO2 hypothesis.
Obviously the 80’s upward slope is not anomalous or continuing, that proves that there isn’t a correlation between CO2 and warming. Is there any current justification for the CO2 hypothesis?

Brian Haskell
May 23, 2011 11:08 am

I am not a climate scientist or a metrologist, computer scientist, mathematician, statistician or any other type of scientist. What I am, for good or ill, is an attorney. In this capacity, I have faced off against any number of “experts” who asserted with absolute confidence (and perhaps even believed) their scientific conclusions. But you don’t need to be a great chef to know good cooking. As with all professions, there is a broad range of skill, diligence, knowledge, independence and bias in the scientific community. And scientists are susceptible to a major hobgoblin of the legal industry, namely, an emotional attachment to your position. Some are better than others at minimizing it or at least recognizing it and compensating for it, but I have yet to meet one lawyer or scientist that is immune to it. This emotional attachment creates a blindness to inconvenient or contradictory information. So I believe that in addition to the measurement margin of error, there is a “scientist” margin of error, which can compound (or, I suppose, cancel out) the measurement margin of error. In any event, juries are remarkably good at spotting someone who is blinded by his science.

Mike M
May 23, 2011 11:39 am

Brian Haskell says: This emotional attachment creates a blindness to inconvenient or contradictory information.

But emotional attachments pale in comparison to government funded paychecks and pensions. Those cast a permanent darkness that doesn’t wash off.

Steve R
May 23, 2011 12:47 pm

Life is just way too short to worry about such things.

Dave Wendt
May 23, 2011 12:51 pm

“There are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don’t know. But there are also unknown unknowns; there are things we do not know we don’t know.”
Of course Rummy left out the largest category of all, i.e. “The things that everyone knows for sure which just aren’t true.”

sky
May 23, 2011 1:09 pm

Tony B provides a very useful introduction to the uncertainties in temperature measurements and their implications for time series over climatic time scales. I take qualified exception, however, to the idea that absolute measurement accuracy is necessary for analytic studies of temperature variability. Consistency of measurement is paramount there, notwithstanding the error in determining the true daily average. Also, near-shore stations always measure the AIR temperature, which can be influenced by SSTs but is NOT particularly coherent with them, even when measured at the same buoy. Nevertheless, this series of posts should open laymen’s eyes to the fragility of claims that we know the “global temperature” variations going back to the mid-1800s.

Stephen Wilde
May 23, 2011 1:17 pm

I think that we will soon have to acknowledge that natural regional climate variability is far greater than anything experienced since 1900.
The sparsity of early observations means that we can have no adequate indication of the extremes that occurred between sites.
The relatively flat handle of the so called ‘hockey stick’ is most likely a wholly misleading indication of the extent of real world natural variability.

May 23, 2011 1:27 pm

Stephen Wilde says:
“I think that we will soon have to acknowledge that natural regional climate variability is far greater than anything experienced since 1900.”
If so, it’s going in the wrong direction for the alarmist crowd.

manacker
May 23, 2011 2:15 pm

TonyB
Interesting and entertaining read (like part 1).
Max

May 23, 2011 2:20 pm

Excellent work as always, Tony.
Re your section on smog and the effects of air pollution on the clarity of the atmosphere, I’m reminded of a study mentioned in Brian Fagan’s book The Little Ice Age:
“In a fascinatingly esoteric piece of research, Hans Neuberger studied the clouds shown in 6,500 paintings completed between 1400 and 1967 from forty-one art museums in the United States and Europe. His statistical analysis revealed a slow increase in cloudiness between the beginning of the fifteenth and the mid-sixteenth centuries, followed by a sudden jump in cloud cover. Low clouds (as opposed to fair-weather high clouds) increase sharply after 1550 but fall again after 1850. Eighteenth- and early nineteenth-century summer artists regularly painted 50 to 75 percent cloud cover into their summer skies. The English landscape artist John Constable, born in Suffolk in 1776 and a highly successful painter of English country life, on average depicted almost 75 percent cloud cover. His contemporary Joseph Mallord William Turner, who traveled widely painting cathedrals and English scenes, did roughly the same.
After 1850, cloudiness tapers off slightly in Neuberger’s painting sample. But skies are never as blue as in earlier times, a phenomenon Neuberger attributes to both the “hazy” atmospheric effects caused by short brush strokes favored by impressionists and to increased air pollution resulting from the Industrial Revolution, which diminished the blueness of European skies.”
Reference: Hans Neuberger, “Climate in Art”, Weather 25 (2) (1970): 46-56
Brian Fagan: The Little Ice Age, p.201

May 23, 2011 2:24 pm

Thanks Tony.
Great article.

charles nelson
May 23, 2011 2:36 pm

Great to hear some common sense on the subject. I have always been amazed and slightly appalled by the Warmists and their .1 of a degree measurements….
after all, I can get three different temperature readings INSIDE your fridge!

NikFromNYC
May 23, 2011 3:58 pm

I have little problem with workmanship from centuries past, be they thermometers or violins: http://oi53.tinypic.com/k2kac8.jpg

Robert of Ottawa
May 23, 2011 5:10 pm

Anyone purporting to know the “global temperature” to tenths of a degree over the past 1000 years is a snake oil salesman.

Robert of Ottawa
May 23, 2011 5:13 pm

To follow up from my previous posting, when I hit send too soon:
We can take a gazillion measurements over 100 years with an accuracy of 5 degrees, average and smooth, and get a continuous temperature record to an “accuracy” of 0.01 degrees.
This is the fallacy of false precision. Our smoothed average is a product of computation, not of measurement.

Feet2theFire
May 23, 2011 9:54 pm

Somehow I missed the original post (Part I) back in Nov 2009.
In reading it and looking at the graphics in Part I (good thing they give us pictures, so we can comprehend!…lol), I noticed that Figure 3 seemed to have as much constancy to its periods as, say, the sunspot cycles. In looking at the white band at the bottom, the blue lines seem to demarcate minima – quite like we do for sunspots, even, to mark the beginning of sunspot cycles.
For what it is worth, it looks like there are 21 minima marked from about 1490 to about 1947. 457 years divided by 21 cycles ≈ 21.8 yrs. Whether just coincidental or meaningful, that is quite close to the length of two average sunspot cycles.
The sunspot cycles are just about as erratic as the periods shown in Figure 3. Some sunspot cycles were pushing 17 years (as I read them) and some less than 10 years. Figure 3 shows that one with the dashed line, which seems to represent a minima that never minimized. If that is considered a real minima, then the longest cycle shown seems to be about 30 years (1891-1921), and there are three that are about 12 years long. It is nothing exact, but the magnitude of the variability is fairly close.
I’m just sayin’…

Keith Minto
May 24, 2011 12:02 am

Thanks Tony, very interesting.
I remember reading Mark Cooper’s blog article on T reading errors before, but lost the link, so I am glad you provided it here. At the end he says……

The resolution of an astute and dedicated observer would be around +/-1F.
Therefore the total error margin of all observed weather station temperatures would be a minimum of +/-2.5F, or +/-1.3C…

…….very sobering.
I rather like the idea of the Max/Min T’s pioneered by James Six that you mentioned. Although manually read, the markers, assuming that they do not move, would provide relief from the meniscus error in alcohol and mercury types, and, if read at say 9 am, they would have recorded the previous day’s Tmax and the overnight Tmin. At that time they could be reset. An ‘accurate’ Max and Min, whenever it occurred (hopefully before 9 am), and one reading/reset per day.

mike restin
May 24, 2011 12:06 am

The earth is warming out of an ice age.
CO2 increasing may or may not be aiding, it should.
There are more people on earth.
Ice and snow are melting providing more fresh water.
Improved farming is increasing our food source.
More land is opening up for expansion.
Our species is evolving and living longer.
Technology improves lives everyday.
Why would anyone want to upset this?
Sounds to me like the system is working.

May 24, 2011 2:00 am

Alexjc38, a very interesting comment. Constable’s cloudscapes were widely and admiringly commented on by contemporary country-dwellers, who were much impressed by Constable’s observational accuracy. I have maintained for many years that the painterly effect known somewhat romantically as ‘aerial perspective’, in which the middle ground becomes indistinct and the background dissolves in a lovely haze, and which is said to be a device for imparting an impression of depth of field in a painting, was in fact an honest and quite literal depiction of the air quality: air which carried the smoke and residue from fires burning charcoal, wood, peat or dried animal droppings in the eras prior to extensive mining of coal, remembering that those fuels, along with water and animal and human muscle, were the only sources of energy. Once coal mining and industrialisation were truly underway in the UK, contemporary paintings of industrialised areas had a hellish aspect imparted by the lowering black cloudscapes lit from underneath by the fires of industrial processes.

Brian H
May 24, 2011 2:59 am

“Unknown knowns”, huh? Like, “Jeez, I didn’t know I knew that!”
“My mind is going, Frank. I can feel it.”

Brian H
May 24, 2011 3:05 am

Oops, an unknown unknown slipped in there. It’s “I’m afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. …”

John Marshall
May 24, 2011 4:15 am

So we know exactly the climate within the Stevenson Screen but nothing about global climate. (sarc)
Excuse my sarcasm. Excellent article by one who knows what he is talking about.

Editor
May 24, 2011 4:20 am

Thanks for everyone’s comments.
As regards painting as a proxy for climate, I definitely think there is something in it. Brian Fagan goes into this in some detail in his book, as does Hubert Lamb. Trying to sort out artistic licence, smog and reality does need some work, though. The old illustrations of ice fairs and paintings of hard winters, both referenced in my works, then need to be set beside bucolic paintings of early harvests, which could be evidenced by crop records. We can start to draw some interesting conclusions when these are combined with temperature records, observational and instrumental.
Trouble is that these are usually dismissed as ‘anecdotal’, whilst computer models using dubious data are considered scientific and factual.
tonyb

May 24, 2011 8:33 am

Tony B, your point that paintings are classed as ‘anecdotal evidence’ but the products of inaccurate computer models are classed as ‘scientifically acceptable’ proves that Scientific Man is incredibly gullible, but only within a narrow range, of course! 🙂

Editor
May 24, 2011 9:05 am

AlexanderK
Ah, but that narrow range is robust 🙂
tonyb

Ryan
May 25, 2011 7:48 am

A rather more obvious problem with thermometers at ground level in Stevenson screens is that they are measuring air temperature as it is affected by clouds and wind direction.
On a sunny day in July the temperature might be expected to be 35 Celsius, but if it is cloudy and the wind is blowing from the north the temperature may be much lower – say 10 Celsius. In what way can that measurement be a reflection of CO2 impact unless you can say with certainty that both cloud cover and wind direction are directly influenced by CO2? If climate charts indicate a trend over time of rising temperatures, that would tend to suggest less cloud cover, not greater heat energy trapped in the atmosphere. Suggesting CO2 is the culprit is jumping to a conclusion when so many other variables could be to blame, some rather more obvious than CO2.