Mean and reported “Mean” temperatures and the consequences of the difference
Guest essay by Tom Quirk
The convention in meteorology is to report mean temperatures as the average of the minimum and maximum temperatures. This assumption has been tested using temperatures recorded every 30 minutes through the 24-hour day at various Australian locations by the Bureau of Meteorology and made available on its website. The period examined runs from mid-March 2013 to the end of April 2013. The analysis shows that distortions are introduced by the use of thermometers that record only minimum and maximum temperatures and, more importantly, that averaging the minimum and maximum does not represent the true mean for the period examined. Whether this also holds for the entire year should be tested.
The Bureau of Meteorology (BOM) on its website (http://www.bom.gov.au/australia/index.shtml) provides temperatures recorded every 30 minutes through the 24-hour day at various locations in Australia; for example, Canberra's readings are at http://www.bom.gov.au/products/IDN60903/IDN60903.94926.shtml.
The convention in meteorology is to report daily, monthly or yearly mean temperatures as the average of minimum and maximum temperatures. This assumption can be tested using the BOM data.
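The two quantities being compared can be sketched in a few lines of Python (illustrative half-hourly numbers, not the BOM file format):

```python
def daily_means(readings):
    """Given one day's half-hourly temperatures (48 values), return
    (true_mean, minmax_mean): the mean of all readings and the
    conventional average of the extremes."""
    true_mean = sum(readings) / len(readings)
    minmax_mean = (min(readings) + max(readings)) / 2.0
    return true_mean, minmax_mean

# A skewed diurnal cycle: long cool night, brief afternoon peak.
# The min/max average overweights the short-lived maximum.
day = [10.0] * 40 + [14.0, 18.0, 22.0, 24.0, 22.0, 18.0, 14.0, 12.0]
true_mean, minmax_mean = daily_means(day)
```

For a day like this the min/max average sits several degrees above the mean of all 48 readings, which is the kind of discrepancy the essay measures.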
Thirteen locations around Australia were selected for analysis. Figure 1 shows the average of the 30-minute intervals over 43 days in March and April for Cairns and Alice Springs. The error bars show the standard error of the mean, not the standard deviation. The data for the 13 sites, divided into continental and coastal locations, are shown in the Appendix.
The measurements at Alice Springs and Cairns illustrate that the mean is not always the average of the minimum and maximum temperatures. For Alice Springs the average of the minimum and maximum temperatures is 0.12 +/- 0.12 °C above the average of all 48 half-hourly readings, while for Cairns the average of all 48 half-hourly readings is 0.45 +/- 0.07 °C below the average of the minimum and maximum temperatures.
Figure 1: Temperatures measured at 30-minute intervals through a 24-hour day. The sample is for 37 days in March and April; the error bars show the standard error of the mean, not the standard deviation. The difference Mean(30 minute Tmin & Tmax) – Mean(all 30 minute T) is -0.13 +/- 0.10 °C for Alice Springs and 0.46 +/- 0.07 °C for Cairns.
The minimum and maximum temperatures reported by the BOM come from reading the two registering thermometers at 9.00 am each morning. This gives the minimum for the day of the reading, since minima generally occur between midnight and about 7.30 am local time, as can be seen in Figure 1. The maximum, however, is carried over from 9.00 am on the previous day, since in general the maximum occurs before midnight.
As a check on the 30-minute readings, the differences between the 24-hour minimum thermometer and the lowest of the 48 half-hourly values, and likewise between the 24-hour maximum thermometer and the highest half-hourly value, were calculated. The results are shown in Figure 2. There are biases: the 24-hour minimum readings are equal to or below the 30-minute minimum, and the 24-hour maximum readings are equal to or above the 30-minute maximum.
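A sketch of this consistency check (hypothetical values; the expected direction of each bias is noted in the comments):

```python
def extreme_differences(t24_min, t24_max, readings):
    """Compare the 24-hour registering-thermometer extremes with the
    extremes of the 48 half-hourly values.  Because the registering
    thermometer responds continuously, we expect
    t24_min <= min(readings) and t24_max >= max(readings)."""
    d_min = t24_min - min(readings)   # expected <= 0
    d_max = t24_max - max(readings)   # expected >= 0
    return d_min, d_max

# Hypothetical day: half-hourly readings span 9.8 to 23.9 deg C, while the
# registering thermometers caught briefly lower and higher extremes.
readings = [9.8 + 0.3 * k for k in range(48)]
d_min, d_max = extreme_differences(9.6, 24.4, readings)
```

The chosen numbers give differences of about -0.2 and +0.5 deg C, the same order as the biases reported in Table 1.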
Figure 2: Maximum or minimum temperature differences for the 24 hour and 30 minute measurements as a function of the maximum or minimum 30 minute temperature measurement.
However, these thermometer differences do not depend on temperature, as Figure 2 shows no significant trends. Note that there are a number of large differences. Some arise from the 24-hour record's assumption that the minimum occurs between midnight and 9.00 am, when in fact the minimum sometimes comes from the previous day, after 9.00 am but before midnight. Other measurements may not yet have been quality-controlled.
There is evidence of a systematic error in Figure 2 that is more obvious in Figure 3 and detailed in Table 1.
Figure 3: Maximum or minimum temperature differences for 24 hour thermometer temperatures and 30 minute temperature measurements
This bias is not unexpected, as an extreme may occur within the 30-minute interval between regular measurements. A measure of this is the temperature change in the 30-minute intervals immediately before and after the extreme minimum or maximum. These changes are 0.6 °C for the maximum readings and -0.2 °C for the minimum readings, the same magnitude as the differences in Figure 3 and Table 1.
Table 1: 24 hour thermometer reading – 30 minute temperature readings
| Temperature extremes | Coastal Minimum | Coastal Maximum | Continental Minimum | Continental Maximum |
|---|---|---|---|---|
| Number | 400 | 435 | 123 | 125 |
| 24 hour value – 30 minute value (°C) | -0.18 | 0.51 | -0.26 | 0.60 |
| Standard deviation (°C) | 0.19 | 0.43 | 0.20 | 0.31 |
The effect that these systematic errors have on the mean temperature is given in Table 2 and shown in Figure 4. The average systematic error from the 24 hour thermometer readings is an increase in mean temperature of 0.14 +/- 0.04 °C.
Table 2: Difference for mean temperature for 24 hour thermometer reading – 30 minute temperature readings
| Location | Latitude (°S) | Longitude (°E) | 24 hour value – 30 minute value (°C) | +/- Error (°C) |
|---|---|---|---|---|
| Continental | | | | |
| Alice Springs | 24 | 134 | 0.22 | 0.03 |
| Kalgoorlie | 31 | 121 | 0.13 | 0.03 |
| Broken Hill | 32 | 142 | 0.15 | 0.03 |
| Coastal | | | | |
| Darwin | 12 | 131 | 0.12 | 0.02 |
| Cairns | 17 | 146 | 0.10 | 0.02 |
| Port Hedland | 20 | 119 | 0.16 | 0.03 |
| Brisbane | 27 | 153 | 0.18 | 0.03 |
| Perth | 32 | 116 | 0.18 | 0.03 |
| Sydney | 34 | 151 | 0.25 | 0.05 |
| Canberra | 35 | 149 | 0.10 | 0.05 |
| Wangaratta | 36 | 146 | 0.06 | 0.05 |
| Melbourne | 38 | 145 | 0.09 | 0.05 |
| Hobart | 43 | 147 | 0.03 | 0.05 |
Figure 4: Location differences of mean temperature for 24 hour thermometer reading – 30 minute temperature readings. The overall difference is 0.14 +/- 0.01 °C.
The corrections required to the mean temperatures are therefore larger when the BOM 24-hour thermometer measurements are used rather than the 30-minute measurements.
This systematic error is a consequence of the “one-way” temperature recording where, for example, a 10-minute 1 °C fluctuation in temperature gives a 0.5 °C increase in the min/max mean rather than the properly time-weighted change of about 0.01 °C.
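The arithmetic of this “one-way” example can be checked with a synthetic day: a flat temperature trace with a single 10-minute, 1 °C spike. The exact time-weighted effect is 10/1440 of the excursion, close to the 0.01 °C quoted (numbers below are illustrative only):

```python
# A flat 15 deg C day with a single 10-minute excursion to 16 deg C,
# sampled every minute (1440 samples).
base = 15.0
minutes = [base] * 1440
for m in range(720, 730):       # 10-minute, 1 degree spike at midday
    minutes[m] = base + 1.0

time_weighted_mean = sum(minutes) / len(minutes)
minmax_mean = (min(minutes) + max(minutes)) / 2.0

spike_effect_weighted = time_weighted_mean - base   # 10/1440 ~ 0.007 deg C
spike_effect_minmax = minmax_mean - base            # 0.5 deg C
```

The registering thermometer remembers only the spike's peak, so the min/max mean moves by half the excursion regardless of how brief it was.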
Summary of comparison
The results of the analysis of mean temperatures are presented in Table 3 and Figure 5. Comparing the average of the minimum and maximum temperatures with the mean of the 48 measurements through the day shows that averaging the minimum and maximum overestimates the mean temperature. All the differences are equal or larger when the BOM 24-hour thermometer measurements are used. The values highlighted in yellow lie more than 2 standard deviations from zero; if the distribution is normal, the probability that the difference is real is about 98%.
The variations in temperature difference are a function of latitude and longitude. For this analysis the locations have been grouped as coastal and continental. The map of Australia shows the locations selected for temperature analysis.
Table 3: Temperature differences comparing the average of Tmin and Tmax with a 24 hour mean.
Figure 5: Temperature differences comparing the average of Tmin and Tmax with a 24 hour mean.
These temperature variations are complicated, as shown by the correlation coefficients between Tmin and Tmax: Wangaratta and Canberra differ significantly from the other coastal locations, while the continental locations are no different from the remaining coastal locations (Table 4 and Figure 6).
Table 4: Correlation coefficients for Tmin and Tmax
| Location | Latitude (°S) | Longitude (°E) | Number of days | Correlation Min & Max T | Error |
|---|---|---|---|---|---|
| Continental | | | | | |
| Alice Springs | 24 | 134 | 43 | 52% | 11% |
| Kalgoorlie | 31 | 121 | 42 | 55% | 11% |
| Broken Hill | 32 | 142 | 43 | 74% | 7% |
| Coastal | | | | | |
| Darwin | 12 | 131 | 43 | -7% | 15% |
| Cairns | 17 | 146 | 43 | -14% | 15% |
| Port Hedland | 20 | 119 | 43 | 13% | 15% |
| Brisbane | 27 | 153 | 43 | 40% | 13% |
| Perth | 32 | 116 | 43 | 35% | 13% |
| Sydney | 34 | 151 | 51 | 58% | 9% |
| Canberra | 35 | 149 | 40 | 12% | 16% |
| Wangaratta | 36 | 146 | 43 | 19% | 15% |
| Melbourne | 38 | 145 | 49 | 59% | 9% |
| Hobart | 43 | 147 | 43 | 60% | 10% |
Figure 6: Correlation coefficients for Tmin and Tmax by latitude.
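The Error column of Table 4 appears consistent with the usual large-sample standard error of a correlation coefficient, (1 − r²)/√(n − 1); that formula is my inference, not stated in the essay. A sketch:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def r_error(r, n):
    """Large-sample standard error of r: (1 - r^2) / sqrt(n - 1).
    Inferred from Table 4: r = 0.52 with n = 43 days gives about 0.11,
    matching the Error column for Alice Springs."""
    return (1.0 - r * r) / math.sqrt(n - 1)
```

If this is indeed the formula used, the quoted errors follow directly from each site's correlation and day count.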
Discussion
The data and analysis cover up to 45 days of temperature measurements made every 30 minutes. The results indicate a significant systematic distortion of the reported mean temperature. Variations in this difference should be expected, as daylight hours are longer in summer than in winter, with the extremes in January and July. This is also a function of latitude: in Melbourne the extremes are 10 to 15 hours of daylight, in Darwin 11 to 13 hours. However, the period covered, mid-March to the end of April, lies between the extremes and may well represent the average result.
However a full year is needed to establish the extent of the systematic distortions.
Conclusion
There is a systematic error in using minimum and maximum recording thermometers. This is a consequence of the “one-way” temperature recording where, for example, a 10-minute 1 °C fluctuation in temperature gives a 0.5 °C increase in the min/max mean rather than the properly time-weighted change of about 0.01 °C.
This preliminary analysis shows that around the Australian coast the mean temperature has been overestimated by 0.6 °C. If this is the general case throughout the year, then the overall Australian temperature has been overestimated.
There is clearly a need to re-examine the reported Australian temperature record in the light of this analysis, rather than the seemingly endless reworking of minimum and maximum temperatures through adjustments.
If the mean land temperatures are overstated from averaging minimum and maximum temperatures and the air temperatures over the oceans are measured mean values then the blending of the two data sets creates a systematic distortion.
Computer models tuned by back-casting to reported measurements will in turn overstate feedback effects. This could be particularly the case for regional modelling and consequent projections.
Appendix
Selected locations show the average of the 30-minute intervals over 43 days from 18th March to 30th April. The figures show local times; the error bars show the standard error of the mean, not the standard deviation.
| Continental | Latitude (°S) | Longitude (°E) | Number of days |
|---|---|---|---|
| Alice Springs | 24 | 134 | 43 |
| Kalgoorlie | 31 | 121 | 42 |
| Broken Hill | 32 | 142 | 43 |

| Coastal | Latitude (°S) | Longitude (°E) | Number of days |
|---|---|---|---|
| Darwin | 12 | 131 | 43 |
| Cairns | 17 | 146 | 43 |
| Port Hedland | 20 | 119 | 43 |
| Brisbane | 27 | 153 | 43 |
| Perth | 32 | 116 | 43 |
| Sydney | 34 | 151 | 51 |
| Canberra | 35 | 149 | 40 |
| Wangaratta | 36 | 146 | 43 |
| Melbourne | 38 | 145 | 49 |
| Hobart | 43 | 147 | 43 |
Maybe I am misunderstanding the situation, but the average (or middle) value between the minimum and maximum values in a set of data is called the median, not the mean. It sounds like you are taking all the median values during a time period and averaging them together. I would call that an average of median values. This is not a mean, but it is still a measure of central tendency.
Do I have it wrong?
Bob
From Merriam-Webster’s Dictionary: Median; Arithmetic mean.
For the purposes of trying to establish temperature anomalies, does it really matter much as long as the methodology at any one station is the same over time?
BTW, taking the average of the maximum and minimum values is neither a mean nor a median of a full set of samples… which is kind of the point to the essay.
While it is a good idea to compare the max-min average temperatures to the more reasonable average of multiple half-hour measures, the analysis is hindered by the inability to know on which day the maximum or minimum occurred. This is the time-of-observation (TOBS) problem, which has been grappled with by various investigators (Vose et al, 2003):
Russell S. Vose, Claude N. Williams Jr., Thomas C. Peterson, Thomas R. Karl, and David R. Easterling. An evaluation of the time of observation bias adjustment in the U.S. Historical Climatology Network. Geophysical Research Letters, VOL. 30, NO. 20, 2046, doi: 10.1029/2003GL018111, 2003.
I believe the Vose et al adjustment method is an OK attempt, but it is not perfect, and therefore there will be unavoidable errors in comparing the max-min average with the “true” daily average.
This problem makes it hard to achieve what Tom Quirk wishes to do here–detect the errors associated with the min-max average. The best that can be done is probably to use the Vose method or some other TOBS adjustment method to try to pick out the proper day to associate with each minimum or maximum reading and go from there.
The new Climate Reference Network of 125 stations in the US does not have a TOBS problem, because it uses automated continuous measurements of temperature to determine a minimum and maximum 5-minute average associated with each hour, and then can unambiguously determine a minimum and maximum for each day. The CRN has been in operation for about 10 years, although only at full strength for about the last 5.

An analysis of the bias associated with the min-max averages over the last 4 years was made on WUWT last August. An interesting result was that the bias was largely in one direction for the coastal stations, but the other direction for the continental stations. A typical size of the bias was 0.2 C, although it could extend almost to a full degree C. A given station seemed to maintain its bias directionality through all seasons and all years. A relation with relative humidity (RH) and latitude was suggested.

It happened that about as many stations had a negative as a positive bias, so that the national average temperature was about the same using both methods. However, a nation with largely coastal stations or mostly continental stations could possibly have an overall bias associated with the max-min method. It would be interesting to see if Australia, with mostly coastal stations, might have such an overall bias. However, it would first be necessary to remove the TOBS bias by some means.
Considerable discussion occurred in the WUWT entries linked below regarding the possible effect on estimated temperature trends. If the bias for a given station stays relatively constant (as found for a majority of the stations), there would be little or no effect on the trend. For that matter, if the bias varies more or less randomly, there would also be little or no long-term effect on the trend. However, some persons continued to hold a different position.
http://wattsupwiththat.com/2012/08/30/errors-in-estimating-temperatures-using-the-average-of-tmax-and-tmin-analysis-of-the-uscrn-temperature-stations/
http://wattsupwiththat.com/2012/09/12/errors-in-estimating-mean-temperature-part-ii/
Will this be yet another reason to push the older temperature record down even further, thus CREATING an even greater temperature trend for the warmist agenda?
“There is clearly a need to re-examine the reported Australian temperature record ”
“Computer models tuned by back-casting to reported measurements will in turn overstate feedback effects”
Or are BOM preparing for the coming cooler period, and will use this as an excuse for their wild warmist projections?
Time will tell.
Sorry, Tom. I am still having a difficult time understanding exactly what you are doing. I believe that on each site’s chart, each dot represents 43 separate medians, taken from 43 sets of max-min temp readings over 43 days at that exact time.
The 24 hour mean you are talking about is the daily maximums over 43 days, with the mean drawn for that value, and the mean of 43 days of minimum daily readings also represented on the chart.
Your question is why do you see the magnitude of the differences that you see.
As you noted in the text, you should not be surprised by differences in the 24 hour and 30 minute metrics. I don’t know how the magnitudes of the inevitable differences should look, but that may be a function of season, etc.
Interesting article.
Thinking about this a bit more, since you have the 30-minute measurements, you should be able to pick out with near-perfect accuracy the days when either the minimum or maximum occurred on the “wrong” day. So you could use the daily 30-minute measurements to determine the minimum and maximum of the day and not be limited to the “official” minimum and maximum reported by the BOM. Of course, the min-max thermometer might pick up slightly lower instantaneous minima and slightly higher maxima, due to responding to all the values in a day, but this is probably a small effect.
If you could do this for a few years for each station, it should result in a useful analysis.
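This suggestion could be sketched as follows (hypothetical day-stamped readings; the BOM file format is not assumed):

```python
def true_daily_extremes(stamped):
    """stamped: (day_index, temperature) pairs for half-hourly readings in
    time order.  Returns {day: (tmin, tmax)} using the calendar day on which
    each reading actually occurred, sidestepping the 9 am observation-window
    (TOBS) ambiguity of the registering thermometers."""
    extremes = {}
    for day, t in stamped:
        lo, hi = extremes.get(day, (t, t))
        extremes[day] = (min(lo, t), max(hi, t))
    return extremes

# Hypothetical series: day 0's true maximum (25.0) occurs late in the
# evening, so a 9 am reading on day 1 would attribute it to the wrong day;
# the day-stamped half-hourly series does not.
series = [(0, 18.0), (0, 25.0), (1, 16.0), (1, 22.0)]
extremes = true_daily_extremes(series)
```

Each extreme is then tied to its true calendar day, rather than to the 9 am-to-9 am observation window.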
“Note that there are a number of large differences. Some are due to the 24 hour record assumption that minimum temperatures occur between midnight and 9.00 am where in fact the minimum comes from the previous day after 9.00 am but before midnight.”
Does that assumption affect the “TOBS ‘adjustment?'”
I can’t see the point of this. The “mean” is defined as the mean of max and min, not the mean of 24 hr. There is no reason why it should be adjusted to the latter value.
There is a very good reason why it is defined as it is. We have a long record of min/max mean from older technology. We can continue it. We have only a short, recent record of 24 hr temperatures.
@David L. Hagen
David: It was not my intent to invite basic definitions of mean and median. Sorry if I conveyed that impression.
As you know, depending on the data set, the median can be radically different from the mean, both of which are expressions of central tendency of the data. I am trying to figure out what the data is and how it was processed.
1. There are 43 days of 24 hour maximum and minimum readings for each site. Tom averaged the max and min readings over the 43 days and showed these values on the chart.
2. There are 43 days of other temperature readings, with 48 readings taken each day, one every 30 minutes. Tom takes the max and min readings of each day.
3. The max and min of the 30-minute readings for each day are added, then divided by two. This gives a mean which is identical to a median when there are only two data points. This is NOT equivalent to the mean of the 48 readings taken every 30 minutes. He then finds the mean of the 43 medians (not the 30-minute mean for the day).
4. The 43 daily max-min readings (medians) are then averaged. This is the value compared to the 24 hour reading max-min means.
It is obvious that there will be differences in the two values, since one is an average of medians and the other, taken at different times, involves only means.
I am in danger of over-analyzing this thing, and my question is, did I get Tom’s method correct?
@Nick Stokes “The “mean” is defined as the mean of max and min…”
You just defined the median. The mean is the average of all values in the data set.
“In statistics and probability theory, the median is the numerical value separating the higher half of a data sample, a population, or a probability distribution, from the lower half.” In other words, the median is the middle value, not the average value.
I didn’t intend to get into a discussion of elementary statistical parameters. Best intentions…
“Analysis shows that distortions are introduced by the use of thermometers that measure minimum and maximum temperatures and more importantly that the averaging of minimum and maximum temperatures does not represent the mean for the period examined. Whether this is also true for the entire year should be tested.”
Easy to understand. And this gives me déjà vu of John Daly more than 10 years ago.
Another point is that UHI in urban places affects the minimum temperature reading most.
@Bob
May 10, 2013 at 8:42 pm
@Nick Stokes “The “mean” is defined as the mean of max and min…”
I think that’s CRU’s definition, not Nick’s.
I’d like to know how to calculate the mean temperature of my house, including hotplates, oven and refrigerator … and what is the mean supposed to mean ?
Dr Burns: I think the best way to calculate the mean temperature in your house is to get a thermometer, at least one, or several, to get the mean temperature. In my house I mean to get one of those automatic, lockable thermostats for the HVAC to keep my bride’s hands from dictating that mean.
In his comment Lance Wallace linked to a really good article he had written on pretty much the same subject. It is interesting and like this article has a lot of number crunching. I suppose the object is to come up with a number that is the best representative for the mean temperature of a given day, and that’s why they are using the Min/Max method to estimate that mean. There are obvious differences to be expected.
I suppose my basic question boils down to, “What exactly do we need to know about temperatures during any given time period?” As Lance said in his article, there may not be enough data to get the “true” mean.
Bob;
I suppose my basic question boils down to, “What exactly do we need to know about temperatures during any given time period?” As Lance said in his article, there may not be enough data to get the “true” mean.
>>>>>>>>>>>>>>>>
It is my position that the issue is more complex still. CO2 supposedly changes the energy flux at earth surface. We don’t measure energy flux in degrees, we measure it in watts per square meter (w/m2). Since P(w/m2) varies with T raised to the power of 4 (Stefan-Boltzmann Law of physics) even if we had incredibly detailed temperature data, there is no way to average it and get a meaningful number.
As an example, at 0 degrees C it takes 4.6 w/m2 to raise the temperature 1 degree. At 30 degrees C, it takes 6.3 w/m2 to raise the temperature 1 degree. In other words, averaging w/m2 will give you a different trend average over the course of a day than will averaging degrees. It isn’t average temp that tells us if CO2 is changing the energy balance at earth surface, it is average w/m2 that provides this information. Different methods of finding a mean temperature or average temperature (etc) are just different methods of getting the wrong answer.
Tom Quirk’s method is “less wrong” however.
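The flux-per-degree figures quoted above follow from the Stefan-Boltzmann law, P = σT⁴, whose slope is dP/dT = 4σT³; a quick check (for an ideal blackbody, which is what the quoted numbers assume):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_per_degree(t_celsius):
    """Slope of blackbody emission, dP/dT = 4 * sigma * T^3,
    in W/m^2 per degree."""
    t_kelvin = t_celsius + 273.15
    return 4.0 * SIGMA * t_kelvin ** 3

at_0C = flux_per_degree(0.0)    # about 4.6 W/m^2 per degree
at_30C = flux_per_degree(30.0)  # about 6.3 W/m^2 per degree
```

Because the slope grows with T³, averaging temperatures and averaging fluxes over a day do not give equivalent trends, which is the commenter’s point.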
I suppose my basic question boils down to, “What exactly do we need to know about temperatures during any given time period?”
Wind speed, air pressure, and humidity???
The temperature record is a PROXY for the heat retention in the system caused by increasing GHG’s. Not including the above you still are not computing the energy in the system to a reasonable level. Yes it is probably small, but then so is .9 W/M2!! (snicker)
davidmhoffer,
Good point about the difference in energy flux (w/m^2) required to increase temp (degrees Celsius) at different temperatures. I have been following this topic for many years and had never thought about that point.
It just reinforces my opinion that AGW is so much nonsense about so little, with very little actual science to back it up.
Wow, Canberra and Wangaratta are coastal! That’s news to the people who live there!
@Nick Stokes
Nick, we’ve crossed swords (pens? keyboards?) on this before, but for a new small audience, I just want to say that for what you are interested in (trends) I agree that it doesn’t make a difference which measure we use. But if one is interested in the physical processes of the climate, it would be the 24-h mean that counts, not the artificial, slightly biased min-max average, which has the further problem of being sometimes seriously influenced by the time of observation.
Suppose the coastal-continental effect has some validity. Then a mostly coastal country next to a landlocked one may have temperature estimates 0.2 C below the true mean, and the neighboring country 0.2 C above the true mean. Then the Global Climate Model goes crazy trying to fit the wrong data.
I’m reminded of Newton trying to fit the Moon into his new theory of gravitation. It didn’t work, so he sat on his calculations for some years. Then people found out their estimate of the distance of the Moon was wrong. Newton plugged the new distance into his old equation and voila! the new estimate agreed “pretty nearly,” and the rest is history.
Air temperature by itself is a poor indicator of the thermal state of the air. It is only a loosely coupled proxy for the temperature of the surface. The surface is by far; the most significant radiator of heat into space. Using air temperature as a proxy for surface temperature in a radiation “balance” seems to me to be so bad as to not even be wrong.
Before plunging into computations to divine meaning from mountains of data, one should step back and think if it makes physical sense. I’m a lazy Engineer. Taking the time to gather perspective saves doing a lot of meaningless work.
Australian Mean for the weekend.
In Excel, AVERAGE returns the average of its arguments, and MEDIAN returns the median of the given numbers.
I tried this on my weather data for May so far and got:
Average = 8.6 °C
Median = 9.6 °C
Christchurch NZ.
Sorry: Median = 9.4 °C.
Nick Stokes;
There is a very good reason why it is defined as it is. We have a long record of min/max mean from older technology.
>>>>>>>>>>>>>>>>>
Ah yes, we have lots of it, so we must use it, even if it has been demonstrated that the result is meaningless.
The important issue is to what extent the temperature trend over the last 50 or so years results from using min/max temperatures. That is, to what extent is the warming trend an artifact of using min/max temperatures.
I wrote about this using the work of statistician Jonathan Lowe, and found that more than 40% of the warming over the last 60 years is an artifact of using min/max temperatures and isn’t real.
http://www.bishop-hill.net/blog/2011/11/4/australian-temperatures.html
The median is defined as the “middle” number in the set of ordered data,
If you have an odd number of data points, then the median is easily defined.
If you have an even number of data points, the median is the average of the 2 middle numbers.
So, if you have only a maximum and a minimum, then the mean (average) and the median are the same thing.
If you want to compare old readings to new readings you MUST stick to the same calculation.
If you want to start a new system, then you cannot compare it to the old system.
That is why the loss of 4000 odd stations from the so-called “global average” calculations, in the 70’s and 80’s basically started a new measuring system. NOTHING before then can accurately be compared with anything afterwards.
ONLY with a reasonably consistent measuring paradigm can systems be compared.
The only accurate world-wide temperature measuring system we have at the moment started in 1979 with the satellites. This forced GISS and HadCrud into compliance and basically stopped “the adjustments”
Nothing before this date is relevant, as it cannot be accurately compared !
So they should just put it back how it should be and stop playing silly buggers. !
It would be interesting to note what a gridded network of stations across the whole of the earth’s land mass would look like. How many coastal and how many continental?
You would suspect a preponderance of continental in North America, Asia and Africa and a preponderance of coastal in Central America and Western Europe.
Surely Climate Project 2020 should be to establish a global network of automated weather stations measuring temperature in real time, covering the earth rigorously in a systematic gridded way?
Then we could start to ditch all the arguments based on how people are adjusting data sets because the ways they were collected were inconsistent?
People are discussing basic mathematics here, and managing to get very confused, such as the difference between the Median and Mean. These quantities are extremely well defined by very precise definitions. There is no room for any debate. It is as simple as 1+1=2. If you don’t understand the difference between the Median and the Mean then you ought not comment on anything related to data analysis.
The article is extremely clear. It is not confusing. It is showing that using the daily average of the Maximum and Minimum temperatures is not the same as using the average of all of the temperature readings.
(T1+T2+T3+…+Tn)/n != (Max(T1, T2,…,Tn) + Min(T1, T2,…,Tn))/2
They are both useful measures of daily temperature, but they are different measures. Unfortunately in the past only the latter were recorded. Can we find a transformation f such that “on average”:
f((Max(T1, T2,…,Tn) + Min(T1, T2,…,Tn))/2) ~ (T1+T2+T3+…+Tn)/n
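The inequality above can be demonstrated numerically (made-up asymmetric day):

```python
def all_mean(ts):
    """Time-weighted mean of every reading: (T1 + ... + Tn) / n."""
    return sum(ts) / len(ts)

def minmax_mean(ts):
    """Conventional daily 'mean': (Max + Min) / 2."""
    return (max(ts) + min(ts)) / 2.0

# Asymmetric hypothetical day: temperatures cluster near the minimum,
# so the min/max average sits well above the time-weighted mean.
ts = [20.0, 20.5, 21.0, 21.0, 22.0, 30.0]
```

For this sample the min/max average is 25.0 while the mean of all readings is about 22.4; the two measures coincide only for symmetric temperature profiles.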
Lance Wallace says: May 10, 2013 at 11:24 pm
“But if one is interested in the physical processes of the climate, it would be the 24-h mean that counts, not the artificial, slightly biased min-max average…”
But Lance, for that purpose I can’t see how you’d use a global mean at all; certainly not an anomaly mean. It is really all about trend.
Bob says: May 10, 2013 at 8:42 pm
“You just defined the median. The mean is the average of all values in the data set.”
Which data set? Who said? I’m talking about the set of two numbers – max and min. They have a mean.
Or more precisely, usually, the monthly average daily max, and corresponding min, is what people usually work with.
Bernd Felsche says: May 10, 2013 at 11:37 pm
“Using air temperature as a proxy for surface temperature in a radiation “balance” seems to me to be so bad as to not even be wrong.”
Do you know of anyone who actually does this? It’s done in Trenberth’s budget, I guess, but there are no great claims of precision there, just best estimates. I don’t think anyone calculates planetary energy balance that way.
“I’m talking about the set of two numbers – max and min. They have a mean.”
And if you have only 2 numbers, the mean = median.. by definition.
Adam says.
“The article is extremely clear. It is not confusing. It is showing that using the daily average of the Maximum and Minimum temperatures is not the same as using the average of all of the temperature readings.”
I don’t know why anyone would ever expect it would be. !!
“Can we find a transformation f such that “on average”:……..”.
If one assume that there has been some climate change, then one must assume that daily temperature patterns would probably be the first thing to change….
so basically, you could try, but you would have no real idea if you were correct or not.
Unless you have enough REAL data both now and then (which we don’t) then any transformation would be purely supposition.. ie a wild a**e guess !!!
Ref fig 1 The measurements at Alice Springs and Cairns are a perfect illustration that the mean is not always the average of minimum and maximum temperatures. For Alice Springs the average of the minimum and maximum temperatures is 0.12 +/- 0.12 above the average of all 48 30 minute readings while for Cairns the average of all 48 30 minute readings is 0.45 +/- 0.07 below the average of the minimum and maximum temperatures.
Not according to the graph it isn’t.
davidmhoffer said: “Nick Stokes;
There is a very good reason why it is defined as it is. We have a long record of min/max mean from older technology.
>>>>>>>>>>>>>>>>>
Ah yes, we have lots of it, so we must use it, even if it has been demonstrated that the result is meaningless.”
Nice pun, David: MEANingless.
I’m with Nick on this one. Max/min thermometers have been around for a long time, so the mean = (max+min)/2 is a statistic of longstanding value. If it doesn’t reflect the true mean over the 24 hours, who cares? Well, apparently some people in this thread, but I can’t fathom why.
Personally, and I study the CET records regularly, I focus on the max rather than the mean or the min. My theory is that if global warming is occurring then it is those new peaks of temperature that are likely to distress flora and fauna.
Of course, if global cooling should become established, I might have to switch to min instead of max! (I do study the number of air frost nights per winter, which is becoming interesting reading in the last few years.)
Rich.
For the record, the correct technical term for the average of the maximum and the minimum is the mid-range:
http://en.wikipedia.org/wiki/Mid-range
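To make the distinction concrete, here is a minimal sketch (with made-up half-hourly readings, not BOM data) of the mid-range versus the true arithmetic mean of one day:

```python
# Mid-range vs. true mean of a day's half-hourly temperatures.
# The 48 readings below are invented for illustration, not BOM data.
temps = [14.0, 13.5, 13.0, 12.8, 12.5, 12.3, 12.1, 12.0,   # 00:00-03:30
         11.8, 11.7, 11.6, 11.5, 11.6, 12.0, 13.5, 15.5,   # 04:00-07:30
         17.5, 19.5, 21.0, 22.5, 23.5, 24.5, 25.2, 25.8,   # 08:00-11:30
         26.2, 26.5, 26.7, 26.8, 26.6, 26.2, 25.5, 24.5,   # 12:00-15:30
         23.0, 21.5, 20.0, 19.0, 18.2, 17.5, 17.0, 16.5,   # 16:00-19:30
         16.0, 15.6, 15.2, 14.9, 14.6, 14.3, 14.1, 13.9]   # 20:00-23:30

mid_range = (max(temps) + min(temps)) / 2   # the conventional "mean"
true_mean = sum(temps) / len(temps)         # average of all 48 readings

print(f"mid-range:  {mid_range:.2f} C")               # → 19.15 C
print(f"true mean:  {true_mean:.2f} C")               # → 17.94 C
print(f"difference: {mid_range - true_mean:+.2f} C")  # → +1.21 C
```

The mid-range lands above the true mean here because this invented day spends more hours near its minimum than near its maximum; a day with the opposite shape would bias the other way.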
Here is a question for the authors of the original work. In an experiment, would you measure only the minimum and maximum temperatures to arrive at the mean temperature? If not, why not? The assumption could have made sense if the earth had no atmosphere, the distance between the earth and the sun remained constant, the sun’s output remained constant, the earth did not have a tilted axis, the oceans remained static, the earth’s magnetic flux remained constant, etc. If these are not reasonable assumptions, then taking the average of minimum and maximum temperatures is outright erroneous.
I suppose I get the idea of this. As a skeptic with an engineering background, I realize I am not the target audience for the article, but the problem for me is that the article started from an assumed understanding that is not readily apparent. The use of ‘Average’ and ‘Mean’ early on bothered me, somehow, as suggested by others’ comments.
An intelligent layman knows the difference between Average, Mean, and Median, as well. And that there’s a valid term ‘Average Mean’ (among sites/seasons). Which I can also see by graphing datapoint distributions. I think.
To me, it all gets back to site placement. Even ‘Mean’ metrics are distorted by spikes.
OTOH, I won’t be linking this to ‘undecideds’ to make any points, other than that there’s real science discussion going on here.
See – owe to rich;
I’m with Nick on this one. Max/min thermometers have been around for a long time, so the mean = (max+min)/2 is a statistic of longstanding value. If it doesn’t reflect the true mean over the 24 hours, who cares? Well, apparently some people in this thread, but I can’t fathom why.
>>>>>>>>>>>>>>>>>>>
Nothing like stating for the record that you failed to understand the discussion! Let me dumb it down for you. The mean could be yielding an increasing temperature trend when the earth is actually cooling, or vice versa. That’s why.
“Jameel Ahmad Khan says:
May 11, 2013 at 6:21 am
Here is a question for the authors of the original work. In an experiment will you measure minimum and maximum temperature to arrive at the mean temperature? If not, why?”
This is “work” from the Aussie BoM, it’s tainted. So your question is void.
Bernd Felsche says:
May 10, 2013 at 11:37 pm
Air temperature by itself is a poor indicator of the thermal state of the air.
==========
Unless you correct for humidity, average temperature is meaningless. You cannot average two temperatures that have different humidity and arrive at a meaningful answer. The result is nonsense. It has no physical meaning.
For a given amount of power (W/m^2), air heats at different rates depending upon the humidity. So you cannot calculate power from temperature unless you also know the humidity. The exact same amount of sunlight and GHG back-radiation will result in two different surface temperatures, depending upon whether the air is dry or moist.
Yet climate science averages temperatures from different regions and different times, without any regard for humidity, and then claims accuracies of hundredths of a degree. From this it pretends to calculate power down to hundredths of a W/m^2. Garbage in, garbage out.
You cannot compute average temperature from temperature alone and arrive at a physically meaningful number. It is an average of apples and oranges. Full of pits.
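To put rough numbers on the humidity point: the sketch below uses the standard psychrometric approximation for the specific enthalpy of moist air, h ≈ 1.006·T + w·(2501 + 1.86·T) kJ/kg of dry air. The humidity ratios are round numbers assumed for illustration, not measurements:

```python
def moist_air_enthalpy(t_c, w):
    """Specific enthalpy of moist air in kJ per kg of dry air.

    t_c : dry-bulb temperature in degrees Celsius
    w   : humidity ratio (kg water vapour per kg dry air)
    Standard psychrometric approximation: h = 1.006*t + w*(2501 + 1.86*t)
    """
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# Two air masses at the same 30 C but very different humidity
# (humidity ratios are illustrative round numbers):
h_humid = moist_air_enthalpy(30.0, 0.020)  # tropical, quite humid air
h_dry   = moist_air_enthalpy(30.0, 0.005)  # arid interior air

print(f"humid 30 C: {h_humid:.1f} kJ/kg")  # ~81.3 kJ/kg
print(f"dry   30 C: {h_dry:.1f} kJ/kg")    # ~43.0 kJ/kg
```

Two parcels at an identical 30 C differ by nearly a factor of two in enthalpy, which is the commenter’s point: averaging the temperatures alone discards most of the thermodynamic information.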
“ferdberple says:
May 11, 2013 at 7:39 am
Bernd Felsche says:
May 10, 2013 at 11:37 pm
Air temperature by itself is a poor indicator of the thermal state of the air.
==========
Unless you correct for humidity, average temperature is meaningless. You cannot average two temperatures that have different humidity and arrive at a meaningful answer. The result is nonsense. It has no physical meaning.”
Alarmism sells! Facts? Pah!
See – owe to Rich says:
May 11, 2013 at 4:06 am
Max/min thermometers have been around for a long time, so the mean = (max+min)/2 is a statistic of longstanding value. If it doesn’t reflect the true mean over the 24 hours, who cares?
===============
garbage in, garbage out. (apples + oranges) / 2 = pears.
Anyone that has lived in Oz will tell you that 30C in Darwin is not the same as 30C in Ayers, and that 30C in Darwin during January is not the same as 30C in July.
In reality 30C in Darwin in July is pleasant, while 30C in January you want to kill yourself. Yet climate science would have you believe they are identical and that you can average them out.
Does anyone know how the satellite data is computed? I would assume there are not all that many checks for each location in a given day, and they would come at different times for different locations. Just wondering, as it might affect comparisons between land/sea data and satellite data.
dmh @ 7:37:
If that’s to me..
Apparently I didn’t state my case well. I did finally get it, and fully understand {did before, too} that the ‘mean’ can register an increasing trend while the globe is actually cooling… I just didn’t say that. I also understand the customs in the metrics and I agree with you.
Siting, as I said, matters especially on the hi/low metric.. and of course – shifted weather patterns in the short term.
The only reason I posted was to comment on the somewhat murky beginning, from my viewpoint.
If sites like this don’t need folks like me, just say so…
(Just kidding .. I dont really care what you think about me. )
😉
I would like to see a real in-depth discussion of the difference between using real measured data to draw conclusions and using “fictitious data” – data not itself actually measured but calculated from measured data and assigned a meaning, a significance, a new identity.
This article is a good example of the problems that arise when one substitutes fictitious data for real data. The thermometer readings themselves are real data. They are measurements from such-and-so type of temperature-measuring instrument under a certain, hopefully standard, list of conditions, taken at periodic points in time (30-minute intervals) or with a hi/low thermometer.
Once one begins to use anything but those measured data, one creates fictitious data. In this case, there is the attempt to create the fictitious “average daily temperature at [weather station site] for DD-MM-YYYY”. In the actual, real, reach-out-and-touch-it world, such a datum does not exist and, further, if one manufactures such a datum, it may hold a vastly different meaning in the real world than the one imagined by the manufacturer.
There have been many papers written on the effects of air temperature on plant growth and productivity, for instance — and they illustrate the problems of correlating “average daily temperature” with various measures of plant biological activities. Many plants seem to ‘care’ much more about Max and Min, or things like ” # of hours > or < XX°".
Willis discusses Trenberth's reanalysis of ocean heat content — as if the heat content of the Earth's ocean had actually been even approximately measured to any significant level of accuracy. It has not and probably will not be in our lifetimes at least, if ever. It is true, there are some measured data actually used — but they are transformed almost at once to fictitious data which are then used to manufacture further sets of fictitious data. Point in space and time temps, averaged over a whole ocean, smeared over time, to a degree of accuracy greater than the original measurements, then used to create the fictitious "ocean heat content" which is then used in a reanalysis (which does not mean simply re-analyzed).
Willis, like me, has spent a fair portion of his life on and in the sea — and can tell you the huge temperature differences experienced in any ocean on a simple shallow free-dive — differences of ten or more degrees between the surface and thirty feet — and this ignores cold or warm micro-currents (localized to a single bay or anchorage). How one pretends to measure the average temperature of the Earth’s oceans to an accuracy of tenths or hundredths of a degree utterly escapes me.
So there is a topic — the use of fictitious data in Climate Science.
I know, most will scream “But that’s all the data we have to work with — of course we have to put it through mathematical transformations, statistical analysis, etc.” Maybe — but first you would have to PROVE that the newly created fictitious data are “real” and actually describe some feature of the physical world that is pertinent and useful to the instance in which they are being used. This is a huge part of this discussion — when created metrics (derived from other, actually measured data; their selection is subjective and must be scientifically justified whether measured or fictitious) are used, we must first thoroughly investigate what the metric means in the real world, and whether or not our use of it is valid for our purpose.
I think the concern is misplaced; it isn’t temperature that’s in question, it’s cooling. And I think we have enough data to get an idea of whether there’s been a loss of cooling, and there hasn’t been.
Compare today’s max minus today’s min with today’s max minus tomorrow’s min. As a bonus you can determine how this difference changes as the ratio of day length to night length changes with the seasons.
I’m working on a paper to send to Anthony of this work.
You can find early bits of it by searching on my name at http://WWW.science20.com
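The day-over-day comparison described above can be sketched as follows; the min/max series is invented purely to show the arithmetic:

```python
# Daytime warming vs. overnight cooling from a min/max series.
# (today's max - today's min) is the daytime warming from the morning
# minimum; (today's max - tomorrow's min) is the overnight cooling.
# The series below is invented for illustration.
mins = [12.1, 13.0, 11.5, 10.8, 12.4]
maxs = [24.3, 25.1, 22.9, 23.6, 24.8]

for day in range(len(mins) - 1):
    warming = maxs[day] - mins[day]      # morning min -> afternoon max
    cooling = maxs[day] - mins[day + 1]  # afternoon max -> next morning min
    print(f"day {day}: warming {warming:+.1f} C, cooling {cooling:+.1f} C, "
          f"net {warming - cooling:+.1f} C")
```

Note that the net term algebraically reduces to tomorrow’s min minus today’s min, so over a long record it tracks the trend in the minima, which is what a "loss of cooling" would show up in.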
Jim A says:
May 11, 2013 at 8:05 am
dmh @ 7:37:
If that’s to me..
>>>>>>>>>>>>>>>
It wasn’t.
For me this is more spurious precision. The quantity of interest is energy, as heat. The measurement is already a proxy — temperature. The summary of the proxy, T(avg), is an estimate derived from T(max) and T(min). And the results are accumulated and reported in 1/100ths of a degree. I understand that the math works, but the physical reality does not follow.
To students of the history of science, this recalls the study of human health, for which a proxy of interest is “normal” body temperature, taken as samples from T(rectal), T(oral) and T(axillary, under the armpit) and reported in the literature as 37 degrees Celsius. Which becomes 98.6 F, and mothers of America go running to the doctor when a home thermometer measures a kid with a “fever” of 98.9 F. The math, converting C to F, is absolutely correct. The physical interpretation doesn’t follow. The actual data and measurement don’t support the action, or the panic. Tiny variations in the method of measurement or the reporting tool have impacts that overwhelm changes in the subject of actual interest.
The measurements have distortions comparable to the size of the effect. There may be indications of a change, but there may also be differences (Gaia’s rectum versus Gaia’s armpit?) that are less studied.
Panic is not yet warranted, and “business as usual” should not be disrupted on the measurement taken. Send the ‘feverish’ kid to school, as scheduled.
Richard M says:
May 11, 2013 at 8:03 am
Does anyone know how satellite data is computed. I would assume there are not all that many checks for each location in a given day and it would be different times for different locations. Just wondering as it might affect comparisons between land/sea data and satellite data.
>>>>>>>>>>>>>>>>>>>
These things are taken into account. The UAH satellite record is run by Dr Roy Spencer. There’s way too much detail to put into a blog comment, I suggest you start with the articles on his site.
http://www.drroyspencer.com/
I come at this from a different direction. What does the measurement statistic do to our uncertainty about the system?
We CAN take the mathematical average of a day’s min and max and get a number we call the “Daily Mean”
What is the length of the error bar on that value?
Zero.
That’s its definition, the average of the min and max, there is no uncertainty.
But is that (min+max)/2 value an unbiased estimate of the day’s “average temperature” at that location and what is the error of that estimate?
The root paper here tells us that
1) it is NOT an unbiased estimate
2) that the bias is not constant across locations
3) that we don’t know if the bias is constant all months of the year.
A fourth point the paper does not note I make here:
4) We have no control over whether the bias is constant year to year, decade to decade. Min/max thermometers are sensitive to even a minute or two of contamination. Min/max thermometers at airports subject to wafts of jet exhaust are only one of many examples where we should expect the bias to change across decades. UHI changes affect the bias.
To address Nick Stokes comment about long temperature records:
There is a very good reason why it is defined as it is. We have a long record of min/max mean from older technology.
I’m not sure I’d call it a “good” reason. It is convenient.
We can continue it.
But should we? For what purposes? For what decisions?
We can use it, but only if we be honest with the uncertainty that convenience imparts on the analysis. I don’t think we are.
Suppose I want to know the mean and mean std error of March 2013 at one station. Should I:
1. use 31 daily min-max means?
2. use 62 min and max raw readings?
3. use 24 × 31 = 744 temp readings on the hour?
4. use 12 × 24 × 31 = 8,928 temp readings every 5 minutes?
Now the “mean” calculated from #1 and #2 will be the same, but the mean std error will be much larger with #2. I think that matters when we are investigating the significance of changes in the mean. The paper shows that #1 on the one hand and #3 or #4 on the other will give different means. If the bias were constant across decades, no problem. But we don’t know that, and thanks to UHI and micrositing, we have reason to distrust that assumption.
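A numerical sketch of the options above. The month of data is synthetic (a sinusoidal diurnal cycle plus Gaussian weather noise, all parameters assumed for illustration), so the exact values mean nothing, but the relative sizes of the standard errors show the effect:

```python
import math
import random

random.seed(42)

def day_temps(n_per_day):
    """One synthetic day: sinusoidal diurnal cycle plus weather noise."""
    base = random.gauss(18.0, 2.0)  # day-to-day weather variation
    return [base + 6.0 * math.sin(2 * math.pi * (i / n_per_day - 0.25))
            + random.gauss(0.0, 0.3)
            for i in range(n_per_day)]

def mean_and_sem(xs):
    """Sample mean and standard error of the mean."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, math.sqrt(var / len(xs))

days = [day_temps(288) for _ in range(31)]  # 31 days of 5-minute readings

option1 = [(max(d) + min(d)) / 2 for d in days]        # 31 daily mid-ranges
option2 = [t for d in days for t in (min(d), max(d))]  # 62 raw min/max values
option4 = [t for d in days for t in d]                 # all 5-minute readings

for name, xs in [("1: daily mid-ranges", option1),
                 ("2: raw min+max     ", option2),
                 ("4: 5-minute data   ", option4)]:
    m, sem = mean_and_sem(xs)
    print(f"option {name}: mean {m:.2f} C, std error {sem:.3f} C (n={len(xs)})")
```

Options 1 and 2 give the identical mean, but the standard error from the 62 raw readings is far larger because the full min-max spread sits in the sample. The tiny standard error of option 4 is itself optimistic, since consecutive 5-minute readings are strongly autocorrelated.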
RomanM – Thanks for the clarification on terms. Using range as the description is more accurate.
Phillip Bratby – I read your Bishop Hill article, and learned a lot. It seems to me that I am trying to re-invent the wheel with my questions, and your linked article makes a lot of sense. I am looking forward to reading Jonathan Lowell’s work. It sounds very reasonable.
Stephen Rasey – Thanks for your comment. So far, nobody has answered my question of what the question is that we are trying to answer with all this number crunching. Deep Thought, indeed!
Also, davidmhofer and Kip Hansen have made great comments that have helped me understand some of this stuff.
As I mentioned earlier, it is obvious that Tom Quirk will see differences in the 30 min calcs vs the 24 hr max/min calcs, however small they may be. My question becomes, what do we need to know about the heat of the day given the existing data? We all understand the math, but what can the data tell us?
If you go to wunderground and pull up a private station near you that updates every five minutes or so, and calculate an average using hourly, 30-minute, 15-minute, and 5-minute readings, you’ll come up with varying means. Using 9am reading times with the same protocol will obviously give different means again. But you can also calculate a more precise TOB adjustment as well.
TOB adjustments as currently applied are just an average of a range. The TOB adjustment is not robust on a monthly basis, but tends to average out for the annual figure.
I have found in N MN, the TOB adjust for am and pm both leave negative residuals in winter and positive residuals in the summer when compared to midnight stations. Spring and fall were the transition months.
Last year’s monthly CRN readings versus the USHCN 2.5 readings had the same issue. CRN tended warmer in winter and cooler in summer. USHCN, since 2000, applies only TOB adjustments. Since the residuals showed up even in the national USHCN dataset, the TOB adjustment needs to be further investigated and refined. With all the midnight stations that are out there now, a more precise TOB can be calculated on a monthly basis. In fact, when Vose (2003) investigated this, they could have used the data from the 500 stations in their study to apply a more precise TOB to the record and continued with that approach. A program could be constructed to do this.
Weather patterns greatly affect the TOB temps. How have weather pattern changes on decadal scales affected the TOB temps?
Duluth, MN has hourly obs since 1941. Looks like I can pick a month and a couple obs times and investigate the matter. Anyone else game for an investigation of this sort?
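A sketch of the proposed investigation, using a synthetic year of hourly temperatures (seasonal and diurnal sinusoids with assumed amplitudes) rather than the real Duluth record:

```python
import math
import random

random.seed(1)

# A synthetic year of hourly temperatures: seasonal cycle + diurnal
# cycle + weather noise. All amplitudes are assumed for illustration.
hours = 365 * 24
temps = [10.0
         + 12.0 * math.sin(2 * math.pi * (h / hours - 0.25))     # seasons
         + 6.0 * math.sin(2 * math.pi * ((h % 24) / 24 - 0.25))  # diurnal
         + random.gauss(0.0, 1.5)                                # weather
         for h in range(hours)]

def daily_midrange_mean(temps, start_hour):
    """Mean of daily (max+min)/2 when the 'day' starts at start_hour."""
    days = []
    h = start_hour
    while h + 24 <= len(temps):
        window = temps[h:h + 24]
        days.append((max(window) + min(window)) / 2)
        h += 24
    return sum(days) / len(days)

midnight = daily_midrange_mean(temps, 0)  # 0000-0000 observation day
nine_am  = daily_midrange_mean(temps, 9)  # 0900-0900 observation day
print(f"midnight day: {midnight:.2f} C")
print(f"9am day:      {nine_am:.2f} C")
print(f"offset:       {nine_am - midnight:+.2f} C")
```

Run against real hourly archives instead of this synthetic series, the offset between the two observation days is exactly the time-of-observation bias under discussion, and it can be computed month by month.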
Nick Stokes says @ May 10, 2013 at 8:15 pm
“I can’t see the point of this. The “mean” is defined as the mean of max and min, not the mean of 24 hr. There is no reason why it should be adjusted to the latter value.
There is a very good reason why it is defined as it is. We have a long record of min/max mean from older technology. We can continue it. We have only a short, recent record of 24 hr temperatures.”
I understand that.
But a 24h record from 1951-2012 (WMO station #260, De Bilt, KNMI-NL) shows a STDEV of 5.23 K for the min-max method and a STDEV of 2.39 K for the 24h mean.
Using min/max data should therefore give greater uncertainties when used in models.
I think this is a useful analysis but I think we knew the conclusions already. AGW is predicated on the mean/median/mid-range temperatures but these can be affected just as much by increasing min temps as max. And as I understand it, it’s the min temps that are increasing and not the max.
Of course, this is not emphasised (mentioned?) by AGW fanatics because it’s not very exciting. It’s much better to allow people to believe that it’s the max temps that are increasing. If people realised that average temp increases were being driven by min temps then they’d probably be pleased that, for example, nights weren’t quite so cold.
@Peter Ward,
Actually, daily max minus min temps are fairly consistent, around 18 degrees F or so. Follow the link in my name to the updated charts page.
Robert_G
I hesitate to add my two cents (and maybe this has already been discussed and I missed it), but shouldn’t all these different sampling “means” really be compared against the measured temperatures integrated over time, i.e. the area under the graph?
The unit would be a temperature (degree)-day. Short-lived extremes, which end up representing the day’s “average”, would be appropriately muted, even though they can seriously distort the (min+max)/2 technique.
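A minimal sketch of that integration, using the trapezoidal rule on one invented day of half-hourly readings:

```python
import math

# Half-hourly temperatures for one day (invented for illustration):
# a simple sinusoid ranging from about 7 C overnight to 23 C mid-afternoon.
temps = [15.0 + 8.0 * math.sin(2 * math.pi * (i / 48 - 0.25))
         for i in range(48)]

def degree_days(temps, base, interval_h):
    """Degree-days above `base`, by trapezoidal integration of the readings.

    Integrates over the intervals between successive readings (here 47
    half-hour steps, with no wrap-around at midnight), then converts
    degree-hours to degree-days.
    """
    excess = [max(t - base, 0.0) for t in temps]
    area_deg_hours = sum((a + b) / 2 * interval_h
                         for a, b in zip(excess, excess[1:]))
    return area_deg_hours / 24.0

gdd = degree_days(temps, 10.0, 0.5)
print(f"{gdd:.2f} degree-days above 10 C")
```

Because the integral weights every reading by the time it persists, a brief spike moves the result far less than it moves the (min+max)/2 figure.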
NOT A COMMENT: not sure how else to contact you. [Sorry Anthony, I’ve only commented once before. I’d like “Robert_G” if that is OK]
[Well, you’ve commented, and thus, you’ve contacted him. But you have established no reason why he should spend time, money and effort contacting you. 8<) What do you need to say (to communicate) and what will he gain (the rest of us gain) or learn from that communication? Mod]
My original comment stands, although now probably quite derailed.
I thought that since its status was “Awaiting Moderation,” the “Moderator” (apparently not A. Watts) would have the chance to make an editorial correction without notifying the entire world. I didn’t expect a hair-trigger response. Sorry for my misunderstanding. I apologize for any inconvenience to you and the readership.
The temp records are riddled with such errors. For example, in Perth (Metro 9225) on 14 March 2013 the real maximum was 19.7C just after 6am, but the BoM official max for that day of 24.4C actually occurred just before 9am the following day, 15 March. About seven hours later, around 4pm on 15 March, the temperature reached 29.2C, so the same day effectively scored two different maxima. It might seem trivial, but the inaccurate additional 4.7C logged for 14 March means the average max for the month of March in the capital of Perth ends up at 28C instead of 27.8C.
The Perth daily press printed very early in the morning of 15 March couldn’t possibly know that the previous day’s maximum was yet to occur more than six hours after the paper was printed (???) so the real max of 19.7C was “incorrectly” printed … http://www.waclimate.net/imgs/14-mar-2013-perth.gif. Look up BoM max for Perth (9225 – http://www.bom.gov.au/climate/data/) on 14 March and it’s 24.4C. There are similar errors at various surrounding stations including rural affected by the abnormal cold front on 14 March – the fourth coldest early March day in Perth since 1897, but that’s not what the record books will tell you.
Also worth looking at a brief BoM paper which signals thousands of questionable Melbourne temps from 1979 to 2008 – http://www.amos.org.au/documents/item/392
… and Ed Thurstan on ACORN errors … http://www.warwickhughes.com/agri/ThurstanACORN28apr13.pdf
… and section 8 (p66) of the ACORN techniques … http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf … which includes:
The one-minute data indicate that the only historical observation practice which shows substantial systematic differences from the current standard is the measurement of minimum temperatures using a 0000-0000 day (i.e., midnight to midnight). Averaged across the 32 stations, this gives mean minimum temperatures 0.25°C cooler than the current standard, whilst the impact on extremes is stronger, with the mean value of the highest minimum temperature of each month being 0.58°C cooler on average. All 32 stations show cooler minimum temperatures for a 0000-0000 day than a 0900-0900 day, but the differences were smallest (typically near 0.1°C) in the tropics. They were largest (0.4-0.6°C) at some southern coastal stations (Fig. 22). As about 30% of the network was using the 0000-0000 day in some form prior to 1964, these results would suggest a potential inhomogeneity in Australian mean minimum temperatures of approximately +0.08°C in 1964.
The ACORN techniques section also deals with the unresolved issues of Automatic Weather Stations introduced since the early 1990s and the effect of 1972 metrication (http://www.waclimate.net/round/acorn/index.html).
It all seems a bit pointless comparing historic records, no matter how much raw, HQ, ACORN, AWAP “correction” is or isn’t applied by the BoM at every station for every day back to 1910.
davidmhoffer said:
“Nothing like stating for the record that you failed to understand the discussion! Let me dumb it down for you. The mean could be yielding an increasing temperature trend when the earth is actually cooling, or vice versa. That’s why.”
Ouch! Moderator – condescension alarm! You use the word “could”. I think that’s a bit wishy-washy – all sorts of things _could_ be happening. I have no objections to a new 24-point (or whatever) mean being used where it is possible, but to compare to older temperatures only a 2-point mean is available.
And I do have some sympathy with ferdberple’s nihilistic assertions about non-equivalence of temperature with respect to humidity, but how is science to progress except on the basis of analysis of objectively measured observations? Rejection of analysis of a 2-point mean must surely attract use of the dreaded D-word.
Rich.
Looked at a station nearby on wunderground. The station updated every 5 min.
The max/min mean was 44.5. The mean of the entire day’s readings was 43.4 (-1.1). Using just the hourly readings it was 43.3. Then I included the half-hour readings, which gave me 43.4. So yeah, averaging the entire day’s readings does net a cooler mean.
A problem with using a simple average of max and min temps as the mean temp for a day can be illustrated by areas where sudden weather changes are common. On the south coast of Western Australia, for example, you often get the situation on a hot summer day where there is cooling from midnight to say 5 or 6am, down to about 18-20C, then rapid warming to around 40C by late morning or early afternoon. With the passage of a front or trough the temperature could drop back to 18C within an hour, and the rest of the day would remain cool, 18C or below. The maximum would be 40C, the minimum say 16C, so the mean would be 28C, but in fact the true average temperature for the day would be well below 28C.
It’s all to do with the area under the curve.
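Putting the scenario above into numbers (the hourly profile is a rough reconstruction invented to match the description, not real data):

```python
# A hot day ended early by a cold front: mid-range vs. true mean.
# Hour-by-hour profile invented to match the scenario described above.
temps = ([20, 19, 18.5, 18, 17.5, 17]        # 00:00-05:00 overnight cooling
         + [20, 25, 30, 34, 37, 39, 40]      # 06:00-12:00 rapid warming
         + [25, 19]                          # 13:00-14:00 the front passes
         + [18, 18, 17.5, 17.5, 17, 17,
            16.5, 16.5, 16])                 # 15:00-23:00 cool evening

mid_range = (max(temps) + min(temps)) / 2
true_mean = sum(temps) / len(temps)
print(f"mid-range: {mid_range:.1f} C, true mean: {true_mean:.1f} C")
# → mid-range: 28.0 C, true mean: 22.2 C
```

The mid-range reproduces the 28C figure from the comment, while the time-weighted mean comes out nearly 6C lower, since the day spent only a few hours anywhere near the maximum.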
Coming late to this discussion, I am baffled by the characterisation of Canberra as “coastal”. Ours is very much an inland climate, with great extremes between winter and summer (up to 50C). The daily fluctuation can be as much as 30C. It is much more like that of Alice Springs in central Australia than that of the closest coastal area some 200kms away.
This is relevant to the discussion for a couple of reasons. One is that the BOM has just invented a new metric called the National Average Temperature, which is completely bogus thanks to the inclusion of dodgy data and the inevitable unequal distribution pattern of weather stations – even allowing that such a metric has any intrinsic value or meaning, which is debatable.
The post above illustrates that local factors greatly influence the outcome of any generic statistical technique applied across diverse locations. In the tropical north, temperatures do not vary much either within a day or between seasons, compared to those in landlocked areas like Canberra (we are also in the lower parts of a mountain range here). Then, as someone pointed out above about the weather in coastal WA, some places are prone to dramatic temperature changes in a short space of time, which makes any averaging technique problematic.
As someone who has the BOM’s Canberra weather tab permanently open and checks it regularly, I can also attest that it is often just plain wrong. It is updated every 10 minutes, and has been known to show fluctuations of 5C between adjacent readings when no such event occurred.
We should all be grateful for the work of people like Anthony and his volunteers in the US, and Tom Quirk, Geoff Sherrington and others here, to try to put some rigour into the poor quality data and data analysis that our national weather agencies have been dishing out for so long.