A typical day in the Stevenson Screen Paint Test


Experiment configuration on July 13th. The screens were subsequently moved farther apart. Note the aspirated air temperature reference in the foreground.

I’ve been sorting through some of the data and have found that many of the days look quite similar. So I thought I’d present a typical day from this summer: 8/27/07, exactly one month after I started the experiment’s data logging.

I chose to show this day for starters because I can be certain that the whitewash was fully cured and that no “newness effects” related to the conversion of Ca(OH)2 to CaCO3 would remain. Whitewash cures by chemical reaction with air, not by drying.

First, here is a 24-hour plot of the raw data at 15-second sampling intervals. Note that it gets noisy on the way to Tmax due to afternoon winds of about 5–10 mph. Another factor in the noise is that the response time of the NIST-calibrated thermistors is fairly fast, on the order of seconds.

paint-test-082707-raw-520.png

Full size graph: paint-test-082707-raw.png

Since it is harder to visually determine separate Tmax and Tmin values with noisy data, I ran it through a data-smoothing algorithm to produce this plot.

paint-test-082707-smoothed-520.png

Full size graph: paint-test-082707-smoothed.png

Here is the report from my data plotter on peaks for this graph:

Sensor       Minimum (8/27/2007)       Maximum (8/27/2007)

Air Temp     55.38 at 6:52:48 AM       95.04 at 3:40:50 PM
Whitewash    56.22 at 6:54:35 AM       96.94 at 3:43:07 PM
Latex        55.92 at 6:40:25 AM       97.74 at 3:42:06 PM
Bare Wood    56.36 at 6:39:24 AM       98.47 at 3:42:36 PM
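The offsets of each coating relative to the aspirated air reference can be pulled straight from these peak values. A minimal sketch (values transcribed from the data-plotter report above; the dictionary layout is just one convenient way to hold them):

```python
# Peak values transcribed from the data-plotter report above (deg F, 8/27/2007).
peaks = {
    "Air Temp":  {"tmin": 55.38, "tmax": 95.04},
    "Whitewash": {"tmin": 56.22, "tmax": 96.94},
    "Latex":     {"tmin": 55.92, "tmax": 97.74},
    "Bare Wood": {"tmin": 56.36, "tmax": 98.47},
}

air = peaks["Air Temp"]
for name, p in peaks.items():
    if name == "Air Temp":
        continue
    dmax = round(p["tmax"] - air["tmax"], 2)  # offset at Tmax vs. aspirated air
    dmin = round(p["tmin"] - air["tmin"], 2)  # offset at Tmin vs. aspirated air
    print(f"{name}: Tmax {dmax:+.2f} F, Tmin {dmin:+.2f} F")
```

This gives whitewash +1.90 F, latex +2.70 F, and bare wood +3.43 F at Tmax, with smaller offsets (+0.84, +0.54, +0.98 F) at Tmin.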

Since, in regular use, COOP/USHCN stations report their daily max and min to NCDC for inclusion in the climatic database, I have provided graphs zoomed in on the Tmax and Tmin periods.

Here is the zoomed Tmax graph:

paint-test-082707-tmaxzoom-520.png

Full size graph: paint-test-082707-tmaxzoom.png

And here is the zoomed Tmin graph:

paint-test-082707-tminzoom-520.png

Full size graph: paint-test-082707-tminzoom.png

As a secondary reference besides my own aspirated air temperature, I’m fortunate to have a CDF RAWS station within about 200 yards, on similar terrain and ground cover. It has a non-aspirated shield and also carries wind and solar radiation sensors, among others.

Here is the temperature plot from it:

temp-chi-raws-082707-520.png

Full size graph: temp-chi-raws-082707.png

A complete suite of plots, including wind and solar radiation in watts/m², can be had from the CDEC data query web page.

As I wade through this data, I’ll be publishing additional days and more detailed analyses, followed by a final report that covers everything that I’ve learned.

In the spring of 2008, I’ll be repeating this experiment at a slightly different location; there is a late-afternoon bias from tree shading to the west that I want to remove from the experiment.

I’ll remind all readers that this is a single day, and that conclusions should not be drawn from it as it represents a single data point for Tmax/Tmin. Please be patient while I do further analysis.


32 Comments
dscott
March 17, 2008 7:08 am

As I understand it, the observer records the max and min daily temps, plus the temp at the time of observation, on a small notepad form, then transcribes them to a B91 form to mail in to NCDC. The observer rounds the max, min, and observed temperatures to the nearest whole number on the B91 form.
NCDC takes the B91 form and transcribes it into a computer database. From there (as I understand it) a daily average is created, and from those, a yearly average, which is what we most often see in climate studies. – Anthony

So this isn’t even a weighted average to give a statistically significant representation of the area? A weighted average using the hourly readings would give a more realistic representation of temperature. What you described here, using a simple average to determine the climatic conditions, is pretty much worthless data, IMO. Any conclusion based on this information will be as flawed as the data it is based upon. Just because you run it through a computer doesn’t enhance the data; in the business world we call this money laundering. Start with a false premise, apply flawless logic (computer), and you end up with a flawlessly false conclusion. I am just disgusted with how far science has fallen; this wouldn’t have been acceptable even in the high school math classes of my day.

Trevor
March 21, 2008 5:34 am

Just to add another level of bias to the “rounding issue”:
Say the digital thermometers have 0.1-degree precision. And say that the actual temperature is 74.45. If the observer knew what was there in the hundredths place, he would round this down to 74, because it’s clearly closer to 74 than to 75. But, if he’s looking at a digital thermometer that only has 0.1-degree precision, all he would see is 74.5 (74.45 rounded to the nearest 0.1 degree). The observer would round this up to 75 when he recorded it on the NCDC form. So, due to this “double-rounding” issue, not only is anything greater than or equal to x.5 degrees being rounded up – anything greater than or equal to x.45 degrees is being rounded up! This double-rounding issue quite clearly only works one way, because 74.54 would round up to 75 whether you rounded it directly or in two stages.
But still, as long as the same equipment and rules were in place 30 years ago, the double rounding issue couldn’t account for any of the observed rise in temperature. However, 30 years ago, I suspect most thermometers were analog, mercury-bulb thermometers. Assuming they had marks at .5 degrees, even if the mercury level was at 74.49 degrees, a keen-eyed observer could still tell that the temperature was slightly below 74.5, and when he recorded the temperature on the form, he would round it down to 74.
If, on the other hand, the analog thermometers did not have marks at .5 degrees, then there would be some range (its size depending on the eyesight of the observer, but let’s say from x.45 to x.55) where the observer couldn’t tell whether it was closer to x or x+1. Though an individual observer might have a personal bias that made him consistently round such “close calls” one way (thus introducing bias for an individual station), you can make the assumption that, for every such observer, there’s another observer that consistently rounds the other way, and on average, there is no bias.
Based on this analysis, I conclude that though individual analog thermometers probably have more bias (not to mention outright error) than individual digital thermometers, an average of a statistically-large-enough sample of analog thermometers clearly has less bias than that of digital thermometers. And that IS what we are looking at – an average of a huge sample of thermometers – when we say that “global average temperatures have increased by 0.6 degrees Celsius over the last 100 years”.
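Trevor’s two-stage rounding scenario is easy to check directly. A minimal sketch using Python’s decimal module with explicit half-up rounding (the 74.45 value is his hypothetical example, not a real reading):

```python
from decimal import Decimal, ROUND_HALF_UP

t = Decimal("74.45")  # hypothetical true temperature from Trevor's example

# Stage 1: the digital display rounds to the nearest 0.1 degree.
display = t.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)     # -> 74.5
# Stage 2: the observer rounds the displayed value to a whole degree.
logged = display.quantize(Decimal("1"), rounding=ROUND_HALF_UP)  # -> 75
# Rounding the true value directly in one step gives a different answer.
direct = t.quantize(Decimal("1"), rounding=ROUND_HALF_UP)        # -> 74

print(display, logged, direct)  # 74.5 75 74
```

Decimal is used here deliberately: binary floats cannot represent 74.45 exactly, and Python’s built-in round() rounds halves to even, so it would not reproduce an observer’s round-half-up habit.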

Tim Doyle
March 28, 2008 11:51 am

Then there’s the question of what are the equipment, procedures and practices used in other countries that might bias the data used for “global” averages.

April 20, 2008 9:52 am

[…] used to collect temperature data to “study” global warming, and why that data is compromised. A typical day in the Stevenson Screen Paint Test Watts Up With That? Depending how the lil stations are painted shows that their temp could fluctuate by 1 degree. […]

Luke Davis
April 23, 2008 11:12 am

Trevor,
What you say is certainly true about analog thermometers–the short of it is, they’re rather precise, especially with a good reader.
As for digital thermometers, I am not sure you’re right. When they give you a number, e.g. 79.5, it is not necessarily the case that any rounding at all has occurred. The measurement is likely +/- 0.1 degree (marked somewhere on either the thermometer or the manual that accompanied it). There is no reason to believe that the thermometer takes readings of 79.45 and rounds them (like a human would)–why couldn’t it just truncate? Or, perhaps it doesn’t take a measurement of the last decimal place in the first place–but +/- 0.1 from 79.45 means it could report either 79.4 or 79.5 and still be accurate within its defined precision.

Trevor
April 24, 2008 12:00 pm

Luke:
I don’t think I made myself very clear. I’m not claiming that the digital thermometer actually MEASURES hundredths of a degree, then, for some strange reason, reports in tenths of a degree. My point is that the thermometer is a tenth-of-a-degree approximation of an actual temperature, one that could be any of an infinite number of possible temperatures between 0.05 degrees below and 0.04999999… degrees above the reported temperature. I used x.45 as an example of the lowest possible fraction of a degree that could be reported as x.5 (and then rounded, by the observer, up to x+1). But x.454 would also be rounded up, as would x.46, x.49, x.451, x.4501, x.45001, and x.45000000000000001. The point is that, due to this double-rounding effect, actual temperatures between x.45 and x+1.44999999999… will always be rounded to x+1. Across a uniform probability distribution (which is obviously what temperatures, at this range of precision, are), the true average of temperatures in this range is x.95, not x+1, but the reported temperatures will all be x+1, and therefore the average of the reported temperature will be x+1. This means that there is, in addition to everything else, a positive 0.05 degree double-rounding bias that can be blamed on the use of digital thermometers that report temperature to the nearest tenth degree and observers that round those to the nearest whole degree.
Let me try an example. Say you have 1,000 thermometers spread out over an area that varies in temperature by 10 degrees. And say that the ACTUAL temperature at each of these sites is defined as T(i) = 24+.01i, for i = 1 to 1,000, resulting in a uniform probability distribution between 24.01 and 34 degrees. The average temperature across all these stations can easily be shown to be exactly 29.005 degrees.
But what is the average REPORTED temperature? Well, the first 44 sites (between 24.01 and 24.44) would all be reported as 24 degrees. The next 100 sites (between 24.45 and 25.44) would all be reported as 25. There would also be 100 sites reported as 26, 27, 28, 29, 30, 31, 32, and 33. And finally, there would be 56 sites (between 33.45 and 34.00) reported as 34 degrees.
A weighted average of these reported temperatures would be [44(24)+100(25)+100(26)+100(27)+100(28)+100(29)+100(30)+100(31)+100(32)+100(33)+56(34)]/1000, or 29.06, which is 0.055 degrees higher than the ACTUAL average temperature over these stations. (This is actually slightly higher than the 0.05-degree bias I stated earlier, but only because I’m using a DISCRETE uniform probability distribution to approximate a CONTINUOUS uniform probability distribution. With a continuous uniform probability distribution, the resulting bias would be exactly 0.05 degrees.)
Note that, if the thermometer reported temperature to the nearest WHOLE degree, though obviously less precise, it would nullify the double-rounding bias, resulting in a reported-average temperature of 29.01, just 0.005 off from the actual average (with a continuous probability distribution, even this small bias would completely disappear)
If the thermometers have an error above and beyond the 0.05 degrees attributable to reporting only to the nearest tenth of a degree, that’s another issue. But the rounding error itself can never be more than 0.05 degrees for a device with a precision of 0.1.
Unless, as you hypothesize, the device is merely truncating the temperature rather than rounding it. I don’t believe that is the case. But if it is, the measurement error wouldn’t be +/- 0.1 degree. It would be only MINUS 0.1 degree, i.e., the device-reported temperature would almost always be LESS THAN the actual temperature and would NEVER be more than the actual temperature (though on rare occasions, the two would be equal). I’m not sure, but I think that would cancel out the positive bias of the observer rounding. But again, there is no evidence that digital thermometers work that way.
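Trevor’s 1,000-station example can be reproduced numerically. A quick sketch working in integer hundredths of a degree (to avoid floating-point surprises) with explicit half-up rounding at each stage:

```python
def half_up(value, step):
    """Round a non-negative integer value to the nearest multiple of step, halves up."""
    return ((value + step // 2) // step) * step

# Actual temperatures T(i) = 24 + 0.01*i for i = 1..1000, in hundredths of a degree.
temps = [2400 + i for i in range(1, 1001)]

actual = sum(temps) / len(temps) / 100                   # true mean temperature
double = [half_up(half_up(t, 10), 100) for t in temps]   # rounded to tenths, then whole
direct = [half_up(t, 100) for t in temps]                # rounded to whole degrees directly

double_avg = sum(double) / len(double) / 100
direct_avg = sum(direct) / len(direct) / 100

print(actual, double_avg, direct_avg)  # 29.005 29.06 29.01
```

The output matches his figures: two-stage rounding averages 29.06 against a true mean of 29.005, while single-stage rounding lands at 29.01.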

Richard Patton
August 4, 2008 8:48 pm

Evan, having spent 20 years as a Navy ‘weather guesser’, I always thought that the standard method of determining the mean temperature for the day did not give an honest picture of what was happening. Take, for example, the following hourly temperature sequence from when I was at NAS Fallon, NV in the mid-’80s (first observation at 1 AM, subsequent observations on the hour thereafter):
14 13 13 13 13 14 14 15 17 18 20 21 21 20 20 20 19 19 18 18 18 20 32 45
An inversion had been in place for a couple of weeks, with the temperature 50 ft above ground thirty degrees warmer than at the surface. Winds were dead calm throughout the day and kicked in near midnight to scour out the cold air.
Taking the standard (Tmax+Tmin)/2 for the mean temperature of the day gives 29 degrees. Averaging the hourly observations yields 18.96 degrees, a considerable difference!
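Richard Patton’s numbers check out; a quick sketch using his 24 hourly readings:

```python
# Hourly temperatures from NAS Fallon, NV (first observation at 1 AM).
hourly = [14, 13, 13, 13, 13, 14, 14, 15, 17, 18, 20, 21,
          21, 20, 20, 20, 19, 19, 18, 18, 18, 20, 32, 45]

midrange = (max(hourly) + min(hourly)) / 2   # the standard (Tmax+Tmin)/2 "mean"
true_mean = sum(hourly) / len(hourly)        # average of the hourly observations

print(midrange, round(true_mean, 2))  # 29.0 18.96
```

On a day like this, the (Tmax+Tmin)/2 convention overstates the daily mean by roughly 10 degrees, exactly the discrepancy he describes.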
