A condensed version of a paper entitled: “Violating Nyquist: Another Source of Significant Error in the Instrumental Temperature Record”.

By William Ward, 1/01/2019

The 4,900-word paper can be downloaded here: https://wattsupwiththat.com/wp-content/uploads/2019/01/Violating-Nyquist-Instrumental-Record-20190112-1Full.pdf

The 169-year long instrumental temperature record is built upon 2 measurements taken daily at each monitoring station, specifically the maximum temperature (Tmax) and the minimum temperature (Tmin). These daily readings are then averaged to calculate the daily mean temperature as Tmean = (Tmax+Tmin)/2. Tmax and Tmin measurements are also used to calculate monthly and yearly mean temperatures. These mean temperatures are then used to determine warming or cooling trends. This “historical method” of using daily measured Tmax and Tmin values for mean and trend calculations is still used today. However, air temperature is a signal and measurement of signals must comply with the mathematical laws of signal processing. The Nyquist-Shannon Sampling Theorem tells us that we must sample a signal at a rate that is at least 2x the highest frequency component of the signal. This is called the Nyquist Rate. Sampling at a rate less than this introduces aliasing error into our measurement. The slower our sample rate is compared to Nyquist, the greater the error will be in our mean temperature and trend calculations. The Nyquist Sampling Theorem is essential science to every field of technology in use today. Digital audio, digital video, industrial process control, medical instrumentation, flight control systems, digital communications, etc., all rely on the essential math and physics of Nyquist.
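As a rough illustration of aliasing from undersampling (not taken from the paper), the following Python sketch builds a synthetic daily temperature curve containing a diurnal cycle plus a weaker 2-cycles/day component, then compares the mean computed from 288 evenly spaced samples with the mean computed from only 2 samples per day. The signal shape and all numbers are invented for illustration only.

```python
import numpy as np

# Synthetic 24-hour "temperature signal" (invented numbers): a diurnal
# sine plus a weaker 2-cycles/day component, at 288 samples/day.
t = np.arange(288) / 288.0                          # time as fraction of a day
signal = (-3.0                                      # daily offset, deg C
          + 5.0 * np.sin(2 * np.pi * t)             # 1 cycle/day component
          + 1.5 * np.sin(2 * np.pi * 2 * t + 0.7))  # 2 cycles/day component

reference_mean = signal.mean()                      # mean from all 288 samples

# Undersample: keep only 2 evenly spaced samples per day (every 144th sample).
two_per_day = signal[::144]
aliased_mean = two_per_day.mean()

print(f"288-sample mean: {reference_mean:+.2f} C")
print(f"2-sample mean:   {aliased_mean:+.2f} C")
print(f"aliasing error:  {aliased_mean - reference_mean:+.2f} C")
```

In this toy case the 2-cycles/day component aliases into the 2-sample mean as a fixed offset, while the 288-sample mean averages it away.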

NOAA, in their USCRN (US Climate Reference Network) has determined that it is necessary to sample at 4,320-samples/day to practically implement Nyquist. 4,320-samples/day equates to 1-sample every 20 seconds. This is the practical Nyquist sample rate. NOAA averages these 20-second samples down to 1-sample every 5 minutes, or 288-samples/day. NOAA only publishes the 288-samples/day data (not the 4,320-samples/day data), so to align with NOAA this rate will be referred to as “288-samples/day” (or “5-minute samples”). (Unfortunately, NOAA creates naming confusion with this process of averaging down to a slower rate; it should be understood that the actual sample rate is 4,320-samples/day.) This rate can only be achieved by automated sampling with electronic instruments. Most of the instrumental record is comprised of readings of mercury max/min thermometers, taken long before automation was an option. Today, despite the availability of automation, the instrumental record still uses Tmax and Tmin (effectively 2-samples/day) instead of Nyquist-compliant sampling, in order to maintain compatibility with the older historical record. With only 2-samples/day, however, the instrumental record is highly aliased. It will be shown in this paper that the historical method introduces significant error into mean temperatures and long-term temperature trends.
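A minimal sketch of the block averaging described above (20-second readings averaged into published 5-minute values), assuming nothing about the actual USCRN file format; the raw array below is a random placeholder, not station data.

```python
import numpy as np

# Placeholder for one day of 20-second readings (4,320 samples/day);
# a real analysis would load these from a USCRN file instead.
rng = np.random.default_rng(0)
raw_20s = rng.normal(loc=-3.0, scale=2.0, size=4320)

# Average each block of 15 consecutive 20-second readings into one
# 5-minute value: 4,320 / 15 = 288 published samples/day.
five_min = raw_20s.reshape(288, 15).mean(axis=1)

# Daily Tmax and Tmin as used by the historical method: simply the
# highest and lowest of the 288 five-minute values.
tmax, tmin = five_min.max(), five_min.min()
print(f"{len(five_min)} five-minute values; Tmax={tmax:.1f} C, Tmin={tmin:.1f} C")
```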

NOAA’s USCRN is a small network that was completed in 2008 and it contributes very little to the overall instrumental record. However, the USCRN data provides us a special opportunity to compare a high-quality version of the historical method to a Nyquist compliant method. The Tmax and Tmin values are obtained by finding the highest and lowest values among the 288 samples for the 24-hour period of interest.

 

NOAA USCRN Examples to Illustrate the Effect of Violating Nyquist on Mean Temperature

The following example will be used to illustrate how the amount of error in the mean temperature increases as the sample rate decreases. Figure 1 shows the temperature as measured at Cordova AK on Nov 11, 2017, using the NOAA USCRN 5-minute samples.


Figure 1: NOAA USCRN Data for Cordova, AK Nov 11, 2017

The blue line shows the 288 samples of temperature taken that day, covering 24 hours of temperature data. The green line shows the correct and accurate daily mean temperature, calculated by summing the value of each sample and then dividing the sum by the total number of samples. Temperature is not heat energy, but it is used as an approximation of heat energy; to that extent, the mean (green line) and the daily signal (blue line) deliver the same amount of heat energy over the 24-hour period of the day. The correct mean is -3.3 °C. Tmax is represented by the orange line and Tmin by the grey line; these are obtained by finding the highest and lowest values among the 288 samples for the 24-hour period. The mean calculated from (Tmax+Tmin)/2 is shown by the red line. (Tmax+Tmin)/2 yields a mean of -4.7 °C, which is a 1.4 °C error compared to the correct mean.
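The two mean calculations described above can be expressed in a few lines. This is a hedged sketch: `samples` is a placeholder array standing in for a day's 288 five-minute readings (a real run would parse the USCRN subhourly file linked in the references), so the printed numbers are illustrative only.

```python
import numpy as np

# Placeholder for one day of 5-minute temperatures (288 values) with an
# asymmetric shape; real values would come from the USCRN subhourly file.
t = np.arange(288) / 288.0
samples = (-3.3
           + 4.0 * np.sin(2 * np.pi * (t - 0.3))
           + 2.0 * np.sin(2 * np.pi * 2 * t + 0.2))

correct_mean = samples.mean()                 # integrates all 288 samples
tmax, tmin = samples.max(), samples.min()
historical_mean = (tmax + tmin) / 2.0         # the historical method

print(f"correct mean:        {correct_mean:+.2f} C")
print(f"(Tmax+Tmin)/2 mean:  {historical_mean:+.2f} C")
print(f"error:               {historical_mean - correct_mean:+.2f} C")
```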

Using the same signal and data from Figure 1, Figure 2 shows the calculated temperature means obtained from progressively decreased sample rates. These decreased sample rates are obtained by dividing the 288-samples/day rate down by factors of 4, 8, 12, 24, 48, 72 and 144, which correspond to 72, 36, 24, 12, 6, 4 and 2-samples/day respectively. By properly discarding samples in this way, the net effect is the same as having sampled at the reduced rate originally. The aliasing that results from the lower sample rates reveals itself as shown in the table in Figure 2.


Figure 2: Table Showing Increasing Mean Error with Decreasing Sample Rate

It is clear from the data in Figure 2 that as the sample rate decreases below Nyquist, the error introduced by aliasing increases. It is also clear that 2, 4, 6 or 12-samples/day produces a very inaccurate result. 24-samples/day (1-sample/hr) up to 72-samples/day (3-samples/hr) may or may not yield accurate results; it depends upon the spectral content of the signal being sampled. NOAA has decided upon 288-samples/day (4,320-samples/day before averaging), so that will be considered the current benchmark standard. Sampling below a rate of 288-samples/day will be (and should be) considered a violation of Nyquist.
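Below is a sketch of the dividing-down procedure described above, using the same kind of synthetic 288-sample day as the previous sketch but with some extra higher-frequency content so the reduced rates behave differently. The printed errors are illustrative, not the Figure 2 values.

```python
import numpy as np

# Synthetic 288-sample day: diurnal cycle plus weaker higher harmonics and
# some sample-to-sample variation (placeholder, not station data).
rng = np.random.default_rng(1)
t = np.arange(288) / 288.0
samples = -3.3 + 4.0 * np.sin(2 * np.pi * (t - 0.3))
for k in range(2, 8):
    samples += rng.normal(0, 0.5) * np.sin(2 * np.pi * k * t + rng.uniform(0, 2 * np.pi))
samples += rng.normal(0, 0.6, size=288)

reference_mean = samples.mean()               # 288-samples/day benchmark

# Divide the 288-samples/day rate down by these factors, giving
# 72, 36, 24, 12, 6, 4 and 2 samples/day respectively.
for factor in (4, 8, 12, 24, 48, 72, 144):
    decimated = samples[::factor]             # keep every 'factor'-th sample
    error = decimated.mean() - reference_mean
    print(f"{288 // factor:3d} samples/day: mean error {error:+.2f} C")
```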

It is interesting to point out that what is listed in the table as 2-samples/day yields a 0.7 °C error, while (Tmax+Tmin)/2, which is also technically 2-samples/day, yields an error of 1.4 °C as shown in the table. How can this be possible? It is possible because (Tmax+Tmin)/2 is a special case of 2-samples/day in which the samples are not spaced evenly in time: the maximum and minimum temperatures happen whenever they happen. When we sample properly, we sample according to a “clock”, with the samples taken regularly at exactly the same times of day. The fact that Tmax and Tmin occur at irregular times during the day causes its own kind of sampling error. It is beyond the scope of this paper to fully explain, but this error is related to what is called “clock jitter”, a known problem in the field of signal analysis and data acquisition. 2-samples/day, regularly timed, would likely produce better results than finding the maximum and minimum temperatures from any given day. The instrumental temperature record uses the worst possible method of sampling, resulting in maximum error.
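To make the distinction concrete, here is a small sketch (same style of synthetic day as above, purely illustrative) comparing two regularly timed samples against the midrange of the irregularly timed Tmax and Tmin. It only demonstrates that the two "2-samples/day" schemes are not equivalent; which one does better on a given day depends on the signal.

```python
import numpy as np

# Same style of synthetic day as the earlier sketches (placeholder data).
t = np.arange(288) / 288.0
samples = (-3.3
           + 4.0 * np.sin(2 * np.pi * (t - 0.3))
           + 2.0 * np.sin(2 * np.pi * 2 * t + 0.2))
true_mean = samples.mean()

# Two regularly timed ("clocked") samples: 00:00 and 12:00.
clocked_mean = samples[::144].mean()

# Two irregularly timed samples: whenever the max and min happen to occur.
midrange = (samples.max() + samples.min()) / 2.0

print(f"true mean         {true_mean:+.2f} C")
print(f"2 clocked samples {clocked_mean:+.2f} C (err {clocked_mean - true_mean:+.2f})")
print(f"(Tmax+Tmin)/2     {midrange:+.2f} C (err {midrange - true_mean:+.2f})")
```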

Figure 3 shows the same daily temperature signal as in Figure 1, represented by 288-samples/day (blue line). Also shown is the same daily temperature signal sampled with 12-samples/day (red line) and 4-samples/day (yellow line). From this figure, it is visually obvious that a lot of information from the original signal is lost by using only 12-samples/day, and even more is lost by going to 4-samples/day. This lost information is what causes the resulting mean to be incorrect. This figure graphically illustrates what we see in the corresponding table of Figure 2. Figure 3 explains the sampling error in the time-domain.


Figure 3: NOAA USCRN Data for Cordova, AK Nov 11, 2017: Decreased Detail from 12 and 4-Samples/Day Sample Rate – Time-Domain

Figure 4 shows the daily mean error between the USCRN 288-samples/day method and the historical method, as measured over 365 days at the Boulder CO station in 2017. Each data point is the error for that particular day in the record. We can see from Figure 4 that (Tmax+Tmin)/2 yields daily errors of up to ± 4 °C. Calculating mean temperature with 2-samples/day rarely yields the correct mean.


Figure 4: NOAA USCRN Data for Boulder CO – Daily Mean Error Over 365 Days (2017)
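Below is a hedged sketch of how a daily-error series like Figure 4's could be generated, assuming a (365, 288) array of 5-minute temperatures. The array here is synthetic placeholder data; a real run would parse the station's USCRN subhourly file instead.

```python
import numpy as np

# Placeholder year of 5-minute temperatures, shape (365 days, 288 samples);
# real values would be parsed from the USCRN subhourly file for the station.
rng = np.random.default_rng(2)
t = np.arange(288) / 288.0
year = (10.0 * np.sin(2 * np.pi * np.arange(365) / 365.0)[:, None]   # seasonal cycle
        + 5.0 * np.sin(2 * np.pi * (t - 0.3))[None, :]               # diurnal cycle
        + rng.normal(0, 2.0, size=(365, 288)))                       # day-to-day variation

correct_daily = year.mean(axis=1)                             # 288-sample daily means
historical_daily = (year.max(axis=1) + year.min(axis=1)) / 2  # (Tmax+Tmin)/2 per day
daily_error = historical_daily - correct_daily

print(f"largest daily error: {np.abs(daily_error).max():.2f} C")
print(f"mean of daily error: {daily_error.mean():+.2f} C")
```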

Let’s look at another example, similar to the one presented in Figure 1, but over a longer period of time. Figure 5 shows (in blue) the 288-samples/day signal from Spokane WA, from Jan 13 – Jan 22, 2008. Tmax (avg) and Tmin (avg) are shown in orange and grey respectively. The (Tmax+Tmin)/2 mean is shown in red (-6.9 °C) and the correct mean calculated from the 5-minute sampled data is shown in green (-6.2 °C). The (Tmax+Tmin)/2 mean has an error of 0.7 °C over the 10-day period.


Figure 5: NOAA USCRN Data for Spokane, WA – Jan 13-22, 2008

 

The Effect of Violating Nyquist on Temperature Trends

Finally, we need to look at the impact of violating Nyquist on temperature trends. In Figure 6, a comparison is made between the linear temperature trends obtained from the historical and Nyquist-compliant methods using NOAA USCRN data for Blackville SC, from Jan 2006 – Dec 2017. The trend derived from the historical method (orange line) starts approximately 0.2 °C warmer and has a 0.24 °C/decade warming bias compared to the Nyquist-compliant method (blue line). Figure 7 shows the trend bias or error (°C/decade) for 26 stations in the USCRN over a 7-12 year period. The 5-minute sample data gives us the reference trend; the trend bias is calculated by subtracting the reference from the (Tmaxavg+Tminavg)/2-derived trend. Almost every station exhibits a warming bias, with a few exhibiting a cooling bias. The largest warming bias is 0.24 °C/decade and the largest cooling bias is -0.17 °C/decade, with an average warming bias across all 26 stations of 0.06 °C/decade. According to Wikipedia, the calculated global average warming trend for the period 1880-2012 is 0.064 ± 0.015 °C per decade. If we look at the more recent period that contains the controversial “Global Warming Pause”, then using data from Wikipedia, we get the following warming trends depending upon which year is selected for the starting point of the “pause”:

1996: 0.14°C/decade

1997: 0.07°C/decade

1998: 0.05°C/decade

While no conclusions can be drawn by comparing the trends over 7-12 years from 26 stations in the USCRN to the currently accepted long-term or short-term global average trends, the comparison can still be instructive. It is clear that using the historical method to calculate trends yields a trend error, and this error can be of a similar magnitude to the claimed trends. Therefore, it is reasonable to call into question the validity of the trends. There is no way to know for certain, as the bulk of the instrumental record does not have a properly sampled alternate record to compare it to. But it is a mathematical certainty that every mean temperature and derived trend in the record contains significant error if it was calculated with 2-samples/day.


Figure 6: NOAA USCRN Data for Blackville, SC – Jan 2006-Dec 2017 – Monthly Mean Trendlines


Figure 7: Trend Bias (°C/Decade) for 26 Stations in USCRN
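A sketch of how a per-station trend bias like those in Figure 7 could be computed, given two monthly-mean series for the same station: one built from all 5-minute samples (the reference) and one from monthly-averaged Tmax/Tmin. The monthly arrays below are random placeholders; only the trend arithmetic is the point.

```python
import numpy as np

def decadal_trend(monthly_means):
    """Least-squares linear trend of a monthly series, in deg C per decade."""
    months = np.arange(len(monthly_means))
    slope_per_month = np.polyfit(months, monthly_means, 1)[0]
    return slope_per_month * 12 * 10

# Placeholder monthly series for one station (12 years = 144 months);
# real values would come from the USCRN subhourly and monthly products.
rng = np.random.default_rng(3)
n = 144
reference_monthly = 15.0 + 0.001 * np.arange(n) + rng.normal(0, 0.8, n)  # from 5-min samples
historical_monthly = reference_monthly + rng.normal(0.1, 0.3, n)         # from (Tmax+Tmin)/2

bias = decadal_trend(historical_monthly) - decadal_trend(reference_monthly)
print(f"trend bias: {bias:+.2f} C/decade")
```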

Conclusions

1. Air temperature is a signal and, therefore, it must be measured by sampling according to the mathematical laws governing signal processing. Sampling must be performed according to the Nyquist-Shannon Sampling Theorem.

2. The Nyquist-Shannon Sampling Theorem has been known for over 80 years and is essential science to every field of technology that involves signal processing. Violating Nyquist guarantees samples will be corrupted with aliasing error and the samples will not represent the signal being sampled. Aliasing cannot be corrected post-sampling.

3. The Nyquist-Shannon Sampling Theorem requires the sample rate to be greater than 2x the highest frequency component of the signal. Using automated electronic equipment and computers, NOAA USCRN samples at a rate of 4,320-samples/day (averaged to 288-samples/day) to practically apply Nyquist and avoid aliasing error.

4. The instrumental temperature record relies on the historical method of obtaining daily Tmax and Tmin values, essentially 2-samples/day. Therefore, the instrumental record violates the Nyquist-Shannon Sampling Theorem.

5. NOAA’s USCRN is a high-quality data acquisition network, capable of properly sampling a temperature signal. The USCRN is a small network that was completed in 2008 and it contributes very little to the overall instrumental record; however, the USCRN data provides us a special opportunity to compare analysis methods. A comparison can be made between temperature means and trends generated with Tmax and Tmin versus a properly sampled signal compliant with Nyquist.

6. Using a limited number of examples from the USCRN, it has been shown that using Tmax and Tmin as the source of data can yield the following error compared to a signal sampled according to Nyquist:

a. Mean error that varies station-to-station and day-to-day within a station.

b. Mean error that varies over time with a mathematical sign that may change (positive/negative).

c. Daily mean error that varies up to +/-4°C.

d. Long term trend error with a warming bias up to 0.24°C/decade and a cooling bias of up to 0.17°C/decade.

7. The full instrumental record does not have a properly sampled alternate record to use for comparison. More work is needed to determine if a theoretical upper limit can be calculated for mean and trend error resulting from use of the historical method.

8. The extent of the observed error, with its uncertain magnitude and sign, calls into question the scientific value of the instrumental record and the practice of using Tmax and Tmin to calculate mean values and long-term trends.

Reference section:

This USCRN data can be found at the following site: https://www.ncdc.noaa.gov/crn/qcdatasets.html

NOAA USCRN data for Figure 1 is obtained here:

https://www1.ncdc.noaa.gov/pub/data/uscrn/products/subhourly01/2017/CRNS0101-05-2017-AK_Cordova_14_ESE.txt

NOAA USCRN data for Figure 4 is obtained here:

https://www1.ncdc.noaa.gov/pub/data/uscrn/products/daily01/2017/CRND0103-2017-AK_Cordova_14_ESE.txt

NOAA USCRN data for Figure 5 is obtained here:

https://www1.ncdc.noaa.gov/pub/data/uscrn/products/subhourly01/2008/CRNS0101-05-2008-WA_Spokane_17_SSW.txt

NOAA USCRN data for Figure 6 is obtained here:

https://www1.ncdc.noaa.gov/pub/data/uscrn/products/monthly01/CRNM0102-SC_Blackville_3_W.txt

Comments
vukcevic
January 14, 2019 2:12 pm

“NOAA, in their USCRN (US Climate Reference Network) has determined that it is necessary to sample at 4,320-samples/day to practically implement Nyquist. 4,320-samples/day equates to 1-sample every 20 seconds. This is the practical Nyquist sample rate. ”

How about a far simpler alternative: put a digital thermometer inside a 5-gallon bucket of water, housed in a Stevenson screen, then read the temperature just once a day at 2pm.

Reply to  vukcevic
January 14, 2019 2:47 pm

From where I am standing that bucket of water would have been frozen solid a couple of weeks ago. ;o)

William Ward
Reply to  Greg F
January 14, 2019 3:25 pm

Greg F: LOL! Maybe antifreeze needs to be added.

vukcevic: Interesting idea: using a specified thermal mass to integrate the heat energy. I’m not sure if this would be good or not, but it might be worth exploring. Of course, you would still need to determine what the maximum frequency content was for that situation and sample at the rate that complied with Nyquist. You still have a signal, but likely an even slower one. But what NOAA did with the USCRN is probably the way to go. We would need much better global coverage with stations of this quality, and then we would need to process all of the 288 samples/day for each location. Not mentioned in the paper, but one of the next obvious failings of the way climate science is done, is the concept of “global average temperature”. What is the science – what is the thermodynamics that says averaging temperature signals from multiple locations has any meaning at all? In engineering, we feed signals into characteristic equations. What do we feed the “global average temperature” into? First, we don’t even end up with a signal. We end up with a number. This is a science fail. But what it is fed into is the Climate Alarmism machine, and that’s about all.

Samuel C Cogar
Reply to  William Ward
January 15, 2019 7:33 am

Quoting from article: William Ward, 1/01/2019

The 169-year long instrumental temperature record is built upon 2 measurements taken daily at each monitoring station, specifically the maximum temperature (Tmax) and the minimum temperature (Tmin). These daily readings are then averaged to calculate the daily mean temperature as …… etc., etc.

The author, William Ward, is assuming things that are not based in fact or reality.

First of all, there is no per se “169-year long instrumental temperature record” that defines or portrays global, continental, national or regional near-surface air temperatures. But now there is, or is claimed to be, 1 or 2 or 3 or so, LOCAL +/- 169-year long instrumental temperature records whose contents covering their 1st 100 years have to be adjudged as highly questionable and of little importance other than use as “reference data/info”.

Secondly, the same as above is true for the per se, United States’ 148-year long instrumental temperature record.

And thirdly, in the early years of recording the US’s “temperature record” there were only 19 stations reporting temperatures, ……. all of them east of the Mississippi River …… and those temperatures were recorded twice per day, at specified hours, ……. which had absolutely nothing to do with the daily Tmax or Tmin temperatures.

Now just about everything you wanted to know about the US Temperature Record and/or the NWS, …… and maybe a few things you don’t want to know, …… can be found at/on the following web sites, ….. just as soon as the current “government shutdown” is resolved, 😊 😊 ….. to wit:

The beginning of the National Weather Service
http://www.nws.noaa.gov/pa/history/index.php

National Weather Service Timeline – October 1, 1890:
http://www.nws.noaa.gov/pa/history/timeline.php

History of the NWS – 1870 to present
http://www.nws.noaa.gov/pa/history/evolution.php

Reply to  Samuel C Cogar
January 16, 2019 9:07 am

Thank you, Mr. Cogar, for commenting on the first sentence:
“The 169-year long instrumental temperature record …”

There is no 169-year real-time record
of global temperatures !

The Southern Hemisphere had insufficient
coverage until after World War II — an argument
could be made that pre-World War II
surface temperatures represent only
the Northern Hemisphere !

How can we take this author seriously
when his FIRST sentence is wrong !

In addition, the author seems obsessed
with the sampling rate of surface
thermometers.

There are MANY major problems
with surface temperature “data”
BEFORE we get to the sampling rate !

How about a MAJORITY of the data
being infilled (wild guessed) by government
bureaucrats with science degrees, for grids
with no data, or incomplete data?

How about the CHARACTER of the
people compiling the data — the same
people who had predicted a lot of
global warming, have a huge
conflict of interest because they
also get to compile the “actuals”,
and they certainly want to show
their predictions were right,
so they make “adjustments”
that push the actuals closer
to their computer game predictions ?

How about the fact that weather satellite data,
with far less infilling, has been available
since 1979, but are ignored simply because they
show less warming than the surface numbers?

How about the fact that weather balloon data
confirm that satellite data are right, but the
surface data are “outliers” — yet the weather
balloon data are also ignored ?

The biggest puzzle of all is why one number,
the global average temperature, is the
right number to represent the “climate”
of our planet. After all, not one person
actually lives in the “average temperature” !

My climate science blog:
http://www.elOnionBloggle.Blogspot.com

William Ward
Reply to  Samuel C Cogar
January 16, 2019 1:31 pm

Samuel and Richard,

Thanks for commenting here. I actually agree with some of what you said. However, I think maybe you judged my paper too quickly based upon your critiques about the instrumental record comments. My point here is to show problems with how climate science is conducted with the use of the data in the record. With a limited number of words for this post, I didn’t want to waste them on details that were not relevant to the core message. Please feel free to add details about the record history – that is helpful. And I’m not negating all of the other problems with temperature measurements you mention. In a post to someone else I listed out 12 things wrong with the “record”.

I have cataloged 12 significant “scientific” errors with the instrumental record:
1) Instruments not calibrated
2) Violating Nyquist
3) Reading error (parallax, meniscus, etc) – how many degrees wrong is each reading?
4) Quantization error – what do we call a reading that is between 2 digits?
5) Inflated precision – the addition of significant figures that are not in the original measurements.
6) Data infill – making up data or interpolating readings to get non-reported data.
7) UHI – ever encroaching thermal mass – giving a warming bias to nighttime temps.
8) Thermal corruption – radio power transmitters located in the Stevenson Screen under the thermistor or a station at the end of a runway blasted with jet exhaust.
9) Siting – general siting problems – may be combined with 7 and 8
10) Rural station dropout – loss of well situated stations.
11) Instrument changes – changing instruments that break with units that are not calibrated the same or instruments that are electronic where previous instruments were not. Response times likely increase adding greater likelihood to capture transients.
12) Data manipulation/alteration – special magic algorithms to fix decades old data.

This paper just focuses on 1 of them. Many others have already written about the other 11 here.

Samuel C Cogar
Reply to  Samuel C Cogar
January 17, 2019 4:32 am

William Ward – January 16, 2019 at 1:31 pm

My point here is to show problems with how climate science is conducted with the use of the data in the record. …………… ………………. And I’m not negating all of the other problems with temperature measurements you mention. In a post to someone else I listed out 12 things wrong with the “record”.

It is MLO that, the per se, US Historical Temperature Record (circa 1870-2018), …… that is controlled/maintained by the NWS and/or NOAA, …… is so corrupted that it is utterly useless for conducting any research or investigations of an actual, factual scientific nature.

William Ward, I would like to point out the fact that your …… “list of 12 things wrong with the record” ……. are only a minor part of “the problem”.

In actuality, there is NO per se, actual, factual US Historical Temperature Record prior to 1970’s, or possibly prior to 1980’s. And the reason I say that is, all temperature data recorded at specified locations by government employees and/or volunteers, from the early days in 1870/80’s, up thru the 1960’s/1970’s, was transmitted daily to the NWS where its ONLY use was for generating 3 to 7 day weather forecasts, ….. originally only for areas east of the Mississippi River.

“DUH”, the first 70+ years of the aforesaid “daily temperatures” were only applicable for 7 days max, and then of no value whatsoever, other than garbage reference data.

And that NWS archived near-surface temperature “data base” didn’t become the US Historical Temperature Record until the proponents of AGW/CAGW gave it its current name after adopting/highjacking “it” to prove their “junk science”.

Reply to  Greg F
January 15, 2019 1:29 am

/sarc is in short supply, to be used in moderation.

[Oh yes, we use /sarc all time in our secret moderator forum. Oh wait, you meant something else. Nevermind. -mod]

getitright
Reply to  Greg F
January 15, 2019 11:03 pm

Because the constant lag time of your integrating scheme would produce a variable error, as the temperature derivative is never constant.

Samuel C Cogar
Reply to  vukcevic
January 15, 2019 4:07 am

vukcevic – January 14, 2019 at 2:12 pm

How about a far simpler alternative: put a digital thermometer inside a 5-gallon bucket of water, housed in a Stevenson screen, then read the temperature just once a day at 2pm.

“HA”, apparently no one liked that “idea” when I proposed a similar one 3 or 4 years ago ….. even though my proposal suggested using a one (1) gallon enclosed container (aluminum), ….. a -30F water-antifreeze mixture ……. and two (2) temperature sensors, ……. with said sensors connected to a radio transmitter located underneath or on top of the Stevenson screen ….. and which is programmable to transmit temperature data at whatever frequency desired.

The liquid in the above noted container …… NEGATES any further “averaging” of daily temperatures ….. because it is a 100% accurate real-time “average’er”.

Peta of Newark
Reply to  vukcevic
January 15, 2019 5:14 am

Thank you Vuk – nailed it
Is it really beyond possibility that the observed temp changes are due to a decrease in the water content of the land/ground/Earth’s land surface?
Might explain an ever so gentle sea-level rise in places?

If you wanna measure Earth Surface Temperature – measure it.
Put a little datalogger running at 4 samples per day max, pop it in a plastic bag with some silica gel, put that inside an old jam-jar with a good fitting/sealing lid..
THEN: bury it under 18″ of dirt.

Return once per month to unload it and check its battery.
If you fancy an average of a wide area of ground, locate the underground stop-cock tap of your home’s water supply pipe and put the data-logger in there.
Saves digging a hole in your garden too.

If you wanna be *really* scientific, run a twin data-logger in a classic-style Stevenson Screen directly above or nearby.
Maybe then, get Really Hairy and put a *third* logger inside a nearby forest or densely wooded area.
A city slicking pent-house dwelling friend might be a handy thing to have for even greater excitement.
😀

Then you might realise that temp is maybe not the driver or the cause of Climate – temp is the symptom or the effect. As Vuk intimates by attempting to measure *energy* rather than temperature.
If anyone is being violated by Climate Science, it is in fact Boltzmann, Wien and Planck.
Nyquist is ‘a squirrel’. A distraction.

Editor
Reply to  Peta of Newark
January 15, 2019 5:15 pm

Peta => Nyquist is another way of demonstrating that the long-term historical surface temperature record is not fit for purpose when applied to create a global anomaly in the range of less than 1 or 2 degrees.

You are quite right that climate models particularly violate a lot of “more important” physical laws by using “false” (simplified) versions of non-computable non-linear equations.

If one considers the temperature record as a “signal” (which is one valid way of looking at it) then Nyquist applies if one expects an accurate and precise result. Since the annual GAST anomaly changes by a fraction of a degree C, the errors and uncertainty resulting from the (necessary for historical records) violation of Nyquist are significant by comparison.

January 14, 2019 2:15 pm

Maximum error is what the Warmistas want as it best fits the CAGW theory.

Steve O
January 14, 2019 2:18 pm

What if, instead of taking 2 samples per day, we took 730 samples per year?

Gary Pearse
Reply to  Steve O
January 14, 2019 3:48 pm

Randomly of course!

Ferdberple
Reply to  Gary Pearse
January 14, 2019 10:39 pm

730 random samples would actually give a more reliable annual average.

ghl
Reply to  Steve O
January 14, 2019 4:11 pm

Steve
Make it 732, then you would see some real aliasing.

William Ward
Reply to  Steve O
January 14, 2019 4:11 pm

Steve O – LOL!

Nick Stokes
January 14, 2019 2:22 pm

Nonsense about Nyquist
“The Nyquist-Shannon Sampling Theorem tells us that we must sample a signal at a rate that is at least 2x the highest frequency component of the signal. This is called the Nyquist Rate. Sampling at a rate less than this introduces aliasing error into our measurement.”
The Theorem tells you that you can’t resolve frequencies beyond that limit. But we aren’t trying to resolve high frequencies. We are trying to get a monthly average. It isn’t a communications channel.

As for aliasing, there are no frequencies to alias. The sampling rate is locked to the diurnal frequency.

As for the examples, Fig 1 is just 1 day. 1 period. You can’t demonstrate Nyquist, aliasing or whatever with one period.

Fig 2 shows the effect of sampling at different resolution. And the differences will be due to the phase (unspecified) at which sampling was done. But the min/max is not part of that sequence. It occurs at times of day depending on the signal itself.

There is a well-known issue with min/max; it depends on when you end the 24 hour period. This is the origin of the TOBS adjustment in USHCN. I did a study here comparing different choices with hourly sampling for Boulder Colorado. There are differences, but nothing to do with Nyquist.

Fig 3 again is just one day. Again there is no way you can talk about Nyquist or aliasing for a single period.

MarkW
Reply to  Nick Stokes
January 14, 2019 2:33 pm

If you miss fast moving changes in temperature then you can’t claim to have an accurate estimate of the daily average.
If you don’t have an accurate daily average then there’s no way to get an accurate monthly average.
If your monthly averages aren’t accurate, then there’s no way to claim that you have accurately plotted changes over time.

Steve O
Reply to  MarkW
January 14, 2019 3:37 pm

Let’s say your instruments could take continual measurements such that you capture every moment for a year and you compared that to my measurements, where I record the high and the low, each rounded to the nearest whole number, and divided by two.

Your computation of the average temperature would be a precise measurement, but how close do you suppose my average for the year would be to your average for the year?

William Ward
Reply to  Steve O
January 14, 2019 4:37 pm

Steve O,

You can do this with the USCRN data for a station. You can compare both methods as follows:

For the “historical method” [(Tmax+Tmin)/2]: Take NOAA published monthly averages for the year, add them up, and divide by 12. The data for each month is arrived at (by NOAA) by averaging Tmax for the month and averaging Tmin for the month. Then these 2 numbers are averaged to give you the monthly average/mean.

For Nyquist method: Add up every sample for the station over the year and divide by the number of samples. There are 105,120 samples for the year (288*365).

Try Fallbrook CA station for example. You will find that each year from 2009 through 2017 the yearly mean error is between 1.25C and 1.5C.
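[A hedged sketch of the two yearly-mean calculations described above. The arrays are placeholders (a real run would parse the station's USCRN subhourly and monthly files), and the 12 "months" are approximated as equal blocks of days purely for illustration.]

```python
import numpy as np

# Placeholder station-year of 5-minute temperatures: 365 days x 288 samples,
# with a peaked afternoon diurnal shape so the two methods visibly differ.
rng = np.random.default_rng(4)
t = np.arange(288)
diurnal = 12.0 * np.exp(-((t - 180) / 35.0) ** 2)          # placeholder shape
days = 15.0 + diurnal[None, :] + rng.normal(0, 2.0, size=(365, 288))

# Nyquist method: average every 5-minute sample in the year (105,120 values).
nyquist_mean = days.mean()

# Historical method: per month, average the daily Tmax values and the daily
# Tmin values, take (Tmax_avg + Tmin_avg)/2, then average the 12 monthly means.
monthly_means = [(month.max(axis=1).mean() + month.min(axis=1).mean()) / 2.0
                 for month in np.array_split(days, 12)]
historical_mean = np.mean(monthly_means)

print(f"yearly mean error: {historical_mean - nyquist_mean:+.2f} C")
```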

Some stations have more absolute mean error than others and some stations show more error in trends and less in means. Some show large error in both and some show little error in both.

The reason for the variance is explained in the frequency analysis. The daily temperature signal as seen by plotting the 288-samples/day varies quite a bit from day-to-day and station-to-station. The shape of the signal defines the frequency content. At 2 samples per day, you always have aliasing unless you have a purely sinusoidal signal. We never see this. The amount of mean error and trend error is defined by the amount of the aliasing, where it lands in frequency (daily signal or longer-term trend), and the phase of the aliasing compared to the phase of the signal at those critical frequencies.

I encourage you and all readers to read the full paper if you want to learn more about the theory and mechanics of how aliasing manifests. I give simple graphical examples that explain this.

Steve O
Reply to  William Ward
January 15, 2019 4:18 am

I’ll definitely need to read the full paper. Thanks.

ShanghaiDan
Reply to  Steve O
January 14, 2019 5:53 pm

We see above (in the post, with real data) the error can be 1.4 deg C; when we’re talking about a 1.5 deg C change being a calamity, that much error is, itself, a calamity.

William Ward
Reply to  ShanghaiDan
January 14, 2019 6:47 pm

Amen Shanghai Dan!

Samuel C Cogar
Reply to  ShanghaiDan
January 15, 2019 8:04 am

A published report of last week’s (month’s/year’s) average near-surface temperatures ….. doesn’t have as much “value” as does a copy of last week’s newspaper. 😊 😊

Bob boder
Reply to  Steve O
January 15, 2019 10:38 am

Yep, and my fancy-dancy RTD probe reacts a lot quicker than that good old mercury thermometer did. On the other hand, mercury thermometers don’t drift over time; RTDs do, and all in the same direction to boot.

Reply to  Bob boder
January 15, 2019 2:26 pm

See:
https://www.nist.gov/sites/default/files/documents/calibrations/sp819.pdf

Where NIST describes why a liquid in glass thermometer needs re-calibration….

All measurement sensors require calibration.

Paramenter
Reply to  Steve O
January 16, 2019 8:05 am

Hey Steve,

Your computation of the average temperature would be a precise measurement, but how close do you suppose my average for the year would be to your average for the year?

Do you mean comparing the arithmetic mean, per year, calculated directly from the 5-min sampled data with the ‘standard’ procedure of averaging, per year, the monthly averages that in turn come from averaging of daily midrange values (Tmin+Tmax)/2? I did a quick comparison for Boulder, Colorado 2006-2017: as expected, errors propagate from daily errors higher up; for instance for 2006 the error is 0.3 C; for 2010 the error is 0.35 C; for 2012 the error is 0.25 C. Full list below:

Year Error
2006 0.31
2007 0.26
2008 0.26
2009 0.23
2010 0.35
2011 0.17
2012 0.25
2013 0.14
2014 0.07
2015 0.19
2016 0.15
2017 0.16

Crispin in Waterloo but really in Beijing
Reply to  MarkW
January 14, 2019 3:52 pm

Global warming is about energy, not temperature per se, which is acknowledged above as a proxy for energy. Without considering the humidity and air pressure as well as its temperature at the time the instruments are read, the energy has not been ascertained, merely a proxy for it.

Is this really so difficult to comprehend?

The claim is simple enough: human activities are heating the atmosphere. The heat energy in the atmosphere is not well represented by thousands of temperature measurements. If the humidity were zero or fixed, it could be, but it is not.

The whole world could “heat up” and the temperature could go down if there is a positive water vapour feedback (currently not observed).

“Climate scientists” are wandering in the realm of thermodynamicists as effectively as they wandered about in the realm of statisticians. They must hand out hats with those degrees because they are always talking through them.

Nyquist is fundamental to signal analysis, and temperature is a signal, as are humidity and air pressure. With those three, the enthalpy of an air parcel can be determined accurately. If there are trends in the results at the surface, we might project them, whether up or down.

The rest is, as they say, noise. Turn up the squelch and it disappears. Wouldn’t that be nice?

William Ward
Reply to  Crispin in Waterloo but really in Beijing
January 14, 2019 4:10 pm

Crispin you nailed it!!! Everything you said hits the bullseye!

ghl
Reply to  Crispin in Waterloo but really in Beijing
January 14, 2019 4:20 pm

Crispin
What you say is true except it is the temperature that extinguishes species and magnifies wildfires. Your argument may be inverted to “energy interesting, but temperature kills.”

MarkW
Reply to  ghl
January 14, 2019 5:36 pm

There are no species being “extinguished” and there has been no increase in wildfires.

Gordon
Reply to  ghl
January 14, 2019 6:02 pm

Fires in California have been tragic because Moonbeam and his cronies do not know how to manage forests.

Reply to  ghl
January 15, 2019 6:13 am

You make a statement of fact that may or may not be true. Temperature for cold-blooded animals and insects may be important, but I suspect that it is different for different species. Same for warm-blooded animals. For them I suspect humidity plays a role also; otherwise why do weather folks always dwell on “feels like” temperatures?

I think you are relying on too many studies that quote “climate change” as the reason for species population changes without doing the hard work to actually determine how temperature changes affect a species. What studies have you read that have data showing what higher temps and lower temps do to a species, so that one can determine the “best range” of temperatures? Basically the ASSUMPTION is that higher temps are bad. You never see where lower temps are either good or bad.

Phoenix44
Reply to  ghl
January 15, 2019 8:25 am

How does a higher average annual global temperature magnify a local wildfire?

Reply to  ghl
January 22, 2019 9:03 am

Gordon,

Also, apparently, because PG&E and its regulators do not understand how to manage and maintain electrical infrastructure.

Louis Hooffstetter
Reply to  Crispin in Waterloo but really in Beijing
January 14, 2019 5:51 pm

“The whole world could “heat up” and the temperature could go down if there is a positive water vapour feedback…”

Crispin, as usual you are absolutely correct. But I really wish you hadn’t given the climate alarmists another angle from which to argue. Now I expect to see one or more papers published in Nature Climate Change using this excuse to explain the pause or some other nonsense.

Reply to  Crispin in Waterloo but really in Beijing
January 14, 2019 5:54 pm

The claim is NOT that human activity is heating anything other than in urban environments, which is quite a different thing from heating the surface fluids generally. The claim is that CO2 is causing the heating by retaining solar energy, not human activity energy. Possibly the CO2 comes from human activity.

Alan Tomlin
Reply to  Crispin in Waterloo but really in Beijing
January 15, 2019 7:48 am

Love it Crispin….especially the “wandering and hats” paragraph…..

LdB
Reply to  MarkW
January 14, 2019 5:17 pm

It sort of works like Nick wants if the heat-up and cool-down behaviour is identical, so it averages out (the first comment by vukcevic with the bucket is the same sort of idea).

Whether the behaviour up and down is symmetrical I have no idea; has anyone measured it? What is amusing is they do put the sites in a white box, which inverts the argument … ask Nick why a white box 🙂

The point made, however, is correct: they are trying to turn a temperature reading into a proxy for energy in the earth system balance, which Nick does not address in his answer.

So I guess it comes down to: are you trying to measure temperature for human use, or trying to proxy energy? Make your choice.

Reply to  MarkW
January 14, 2019 8:54 pm

As for Blackville SC, where the discrepancy between daily average of 5-minute sampling and (Tmax+Tmin)/2 is greatest: The .24 degree/decade discrepancy is between 1.11 and 1.35 degrees C per decade. This is almost 22% overstatement of warming at the station with warming rate most overstated (in degree/decade terms) by using (Tmax+Tmin)/2.

Notably, all other stations mentioned in the Figure 7 list have lesser error upward in degree/decade terms – and some of these have negative errors in degree/decade trends from using (Tmax+Tmin)/2 instead of average of more than 2 samples per day. The average of the ones listed in Figure 7 is .066 degree/decade, and I suspect these are a minority of the stations in the US Climate Reference Network.

And, isn’t USCRN supposed to be a set of pristine weather stations to be used in opposition to USHCN and US stations in GHCN?

William Ward
Reply to  Donald L. Klipstein
January 14, 2019 9:34 pm

Hi Donald,

If I understand your comment correctly, let me clarify. My apology in advance if I’m misunderstanding you.

What I show in Fig 7 is not critical of USCRN. What it shows, using USCRN data, is how 2 competing methods work to deliver accurate trends. The “Nyquist-compliant” method uses all of the 5-minute samples over the period, which may be 12 years in some cases. The linear trend is calculated from this data. I’m considering it the reference standard. If the “historical method” of using max and min values worked correctly, without error, then when the trend generated from this method is subtracted from the reference there should be no error – the result should be zero. It is not zero. I refer to error as “bias” in that figure. Each station shows a bias. A positive sign means the method gives a trend warmer than the reference. A negative sign means the method gives a trend that is cooler than the reference. But to be clear, it is the historical method that is being exposed and criticized, not USCRN or its data. Maybe I should say I’m criticizing how that data is used. We have the potential to get good results from USCRN but the method used gives results worse than the reference.

Reply to  Donald L. Klipstein
January 15, 2019 7:31 am

“I suspect these are a minority of the stations in the US Climate Reference Network.”
I calculated below all stations in ConUS with 10 years of data. The Min/max showed less warming than the integrated. But I think it was just chance. The variability of these 10 year trends is large.

William Ward
Reply to  Nick Stokes
January 16, 2019 1:48 pm

Interesting Nick.

MarkW
Reply to  Nick Stokes
January 14, 2019 2:34 pm

Anyone who believes you can get an accurate estimate of the daily average from just the high and the low, has never been outside.

Reply to  MarkW
January 14, 2019 2:48 pm

“has never been outside”
You’d have to stay outside 24 hrs to get an accurate average. But the thing about min/max is that it is a measure. It might not be the measure one would ideally choose, but we have centuries of data in those terms. We have about 30 years of AWS data. So the question is not whether it is exactly the same as hourly integration, but whether it is an adequate measure for our needs.

Again, I looked at that over a period of three years for Boulder Colorado.

Reg Nelson
Reply to  Nick Stokes
January 14, 2019 3:46 pm

True but meaningless. We may have centuries of Tmin\Tmax temperature data, but we don’t have centuries of Global Tmin\Tmax temperature data. To suggest so is simply absurd. And even if we did, the Earth is 4.5 billion years old — a few centuries is nothing, and to make inferences from such a minuscule data record is unscientific and deliberately misleading.

MarkW
Reply to  Nick Stokes
January 14, 2019 3:53 pm

Once again, Nick admits that the data they have isn’t fit for the purpose they are using it for, but they are going to go ahead and use it because it’s all they’ve got. As the article demonstrated, the daily high and low are not adequate for the purpose of trying to tease a trend of only a few hundredths of a degree out of the record.

What a sad post to hang an entire ideology on.

Greg Cavanagh
Reply to  Nick Stokes
January 14, 2019 4:19 pm

The maximum and minimum temperatures of the day are of interest, and are real values; but averaging those two values gives you a nonsense value that tells you nothing.

Clyde Spencer
Reply to  Greg Cavanagh
January 14, 2019 4:34 pm

Greg
I agree that the high and low are of interest and probably of more value than the mid-range value calculated from them. The information content is actually reduced by averaging. We don’t know whether an increasing mid-range value is the result of increasing highs, lows, or both. If both, we don’t know what proportion each contributes, which may have a bearing on the mechanism or process causing the change.

But, if the public were made aware that the primary result of “Global Warming” was milder Winters, not unbearable Summer heat waves, they might not get very excited.

LdB
Reply to  Nick Stokes
January 14, 2019 5:25 pm

It would be easy enough to cover, Nick: just make a couple of high-precision sites at existing temp sites, integrate the energy into and out of the sites at millisecond resolution, and check the temperature-to-energy proxy. You could also do the same with sites near oceans etc. to get proper values for the land/ocean interface when you try and blend stuff.

William Ward
Reply to  Nick Stokes
January 14, 2019 6:01 pm

Nick,

You allude to the problem. We have centuries of data and “we” feel compelled to use it, knowing it is bad. The question is often asked, “what are we supposed to do, nothing?”. As an engineer I can answer that question. Yes – do nothing with it. It is not suitable for any honest scientific work. No engineering company would ever commission a design, especially if public safety or corporate profit were at risk, with data as flawed as the instrumental record. If a bridge were to be designed with data this bad would you put your family under it while it is load tested? If an airplane was designed and built with data this flawed would you put your family on the first flight? We have known for a long time about a long list of problems with the record. Violating Nyquist is a pretty “hard” issue – not really something anyone can honestly dismiss. We have the theory. USCRN allows us to demonstrate the extent of the problem. The full range of error cannot be fully known but we are seeing magnitudes of error that stomp all over the claimed level of changes that are supposed to cause alarm.

Are there any engineers reading this who want to say they disagree? In other words, you think engineering projects are routinely undertaken with data as bad as the instrumental record.

Duane
Reply to  William Ward
January 14, 2019 6:55 pm

As an engineer I can say with confidence that we use the data we have, not the data we’d wish for in a perfect world. We are constantly faced with the constraints of inadequate data for purposes of design.

We deal with imperfect data through several methods:

1) Statistical analysis that allows us to determine variability, and confidence limits of limited data sets

2) We use analysis of systems and components and materials to determine modeled performance under various conditions, such as “finite element analysis”, that are subject to digital modeling, including extensive analysis of identified failure modes

3) We use “safety factors”, or what some may call “fudge factors” to account for residual uncertainty that otherwise is not subject to direct analysis

4) We attempt to identify, quantify, and account for measurement errors

5) We use extensive peer review design administration processes to avoid design errors or blunders in assessing all of the above.

The bottom line is that engineering is primarily a matter of risk analysis that is built upon the notion of imperfect and insufficient data and how to deal with that while developing economical, efficient designs.

In the real world you never have sufficient or good enough data. Engineers are trained to design stuff anyway.

Curious George
Reply to  William Ward
January 14, 2019 7:06 pm

William, it is Tmin/Tmax data. It is not “bad data”. And it is NOT “2 samples a day”; it is a true Tmin and Tmax for each 24-hour period, recorded maybe an hour from each other; it has nothing to do with Nyquist (regular sampling). And if you demand nonexistent data, maybe you are not a good engineer. An engineer works with the best available data and methods. For example, a bridge designer may be highly interested in the minimum and maximum temperatures.

William Ward
Reply to  William Ward
January 14, 2019 7:22 pm

Duane,

Although you seem to counter my statement, I think you actually agree with it. I asked if any engineering project uses data as bad as the instrumental record. I didn’t say or suggest the data is perfect.

I didn’t hear you say you use uncalibrated instruments and get mail clerks to run them without training. You didn’t say you ignore quantization error and inflate precision.

You said you do examine the confidence limits of your data and add in guard bands and safety factors. You conduct design reviews and assess your risks. You model, emulate, simulate. Not stated, but I assume you build prototypes and do functional testing and reliability testing, etc.

I’m not sure why you took a counter position as I’m not saying anything different. Are you saying you think the instrumental record and the way climate science uses the record matches your processes?

If climate science used the “bad” data they have, bounded the error, and stopped making up data and changing it continually decade after decade, then they could approach what engineers do. But the data is still aliased. Would you design with data aliased as badly as the instrumental record?

William Ward
Reply to  William Ward
January 14, 2019 7:27 pm

Curious George,

A very accurately measured Tmax and Tmin are bad for calculating mean temperature. I’m sorry if you object to the word “bad” but you need to show how it is “good” and what it is “good” for. Since mean temperatures and temperatures trends are what drives climate science you need to show they are “good” for this purpose.

Reply to  William Ward
January 14, 2019 7:53 pm

“If a bridge were to be designed with data this bad would you put your family under it while it is load tested? ”

Apparently some people would, with the exception that they would test it with random strangers under it.

https://www.nbcmiami.com/news/local/Six-Updates-on-Bridge-Collapse-Investigation-492009331.html

Reply to  William Ward
January 14, 2019 8:34 pm

William
“No engineering company would ever commission a design, especially if public safety or corporate profit were at risk, with data as flawed as the instrumental record.”
The engineering project, in this case, is to put a whole lot of CO2 in the air, and see what happens. We work that out with the data we have. By all means cancel the project if you think the data is inadequate.

William Ward
Reply to  William Ward
January 14, 2019 9:38 pm

Menicholas – Did I hear them say that the bridges collapsed due to too much anthropogenic CO2 reacting with their concrete mix?

Phil
Reply to  William Ward
January 14, 2019 9:47 pm

@ Nick Stokes January 14, 2019 at 8:34 pm

… put a whole lot of CO2 in the air…

Actually only about 2 parts per million per year.

William Ward
Reply to  William Ward
January 14, 2019 9:52 pm

Nick said: “The engineering project, in this case, is to put a whole lot of CO2 in the air, and see what happens. We work that out with the data we have. By all means cancel the project if you think the data is inadequate.”

I see no evidence to fear CO2. But pumping CO2 is not the project. The “project” is pumping countless billions of taxpayer dollars into the research world in a Faustian Bargain that clubs the public into a political and social agenda based upon bad science – all the while claiming to be the champions, protectors and sole wielders of science. I’d cancel that one.

Nick, I don’t believe for a nano-second that politicians actually believe in the nonsense they peddle. Or if they do, they are completely incompetent. How much would it cost the developed governments of the world to blanket the world with USCRN-type stations? This could have been started in the 1980s or 1990s for sure. With good data we can analyze both the Nyquist-compliant way and the historical way. How much is the US spending on the new USS Gerald R Ford – next-gen carrier with full task force and complement of F35s, submarines and Aegis destroyers? $40B? How about annually to maintain it? How much are we wasting on redundant social programs? How much are we spending on a network that allows us to collect better data globally? Why aren’t you and others in the field demanding better instruments and data? Why? Because the tall peg would get beaten down. No one wants to do anything except keep the money flowing and keep the nonsensical research pumping out day after day.

Ferdberple
Reply to  William Ward
January 14, 2019 11:02 pm

For example, a bridge designer may be highly interested in the minimum and maximum temperatures.
========
Correct. But the engineer would have almost no interest in (min+max)/2. Bridges don’t fail because of averages.

MarkW
Reply to  William Ward
January 15, 2019 10:21 am

As usual Nick’s best effort amounts to a pathetic attempt to change the subject.

We do have data regarding what happens to the globe when CO2 levels increase.
In the past, CO2 levels have been 15 to 20 times what we have now and nothing bad happened. Not only did nothing bad happen, life flourished.

Duane
Reply to  William Ward
January 15, 2019 10:57 am

William,

You asked if engineers use data as bad or as imperfect as the environmental temperature record. Civil engineers use exactly that environmental data as the basis for loads or effects on a wide variety of engineered systems. I’m a civil engineer. We use the existing environmental record for air and water temperatures, humidity, wind speeds and directions, rainfall (depths, durations, and frequency of storms), water flow rates and flood stages for rivers and channels and surface runoff, seismic data, aquifer depth data, water quality data, just to name a few of the very many types of environmental data routinely used just in civil engineering design.

Most of these environmental records are relatively short (very frequently just a few decades’ worth), with frequently missing or questionable data.

Fortunately, a combination of government agencies like NOAA or USGS and private standards setting organizations like ASTM have invested many decades in data collection and analysis made readily available with pre-calculated statistical analyses.

So then we use all the various tools such as the ones I listed above to provide the safest yet still economical design we can produce.

If we refused to use or settle for “bad data” or incomplete data or questionable data, in many instances we’d have no data at all. The better the data we have, the better, as in safer and more economical, designs we can produce.

William Ward
Reply to  William Ward
January 15, 2019 11:11 pm

Duane,

You said: “You asked if engineers use data as bad or as imperfect as the environmental temperature record. Civil engineers use exactly that environmental data as the basis for loads or effects on a wide variety of engineered systems…”

Again, I really don’t see any disagreement with you… for some reason it feels like you have taken that position. Civil engineers have a lot of standards they adhere to and I doubt a 1C increase in global average temperature makes you rewrite your standards or run out and reinforce a dam. Right? Sure you use rainfall data to make decisions, but I’m sure there is a lot of careful margin built in. Right? But you would not specify a concrete slump or strength or size a critical beam with methods similar to those in the instrumental record would you? I won’t labor over this, but I think/hope we agree more than disagree. I suspect you took issue with the way I made my point and that is ok, but I’m not seeing anything of substance here that is in contention.

Frank
Reply to  William Ward
January 16, 2019 9:16 pm

William Ward wrote: “I see no evidence to fear CO2.”

Obviously William knows nothing about radiative transfer of heat – the process that removes heat from our planet. An average of 390 W/m2 of thermal IR is leaving the surface of the planet, but only 240 W/m2 is reaching space. GHGs are responsible for this 160 W/m2 reduction and the current relative warmth of this planet. However, there is no doubt that rising GHGs will someday be slowing down radiative cooling by another 5 W/m2.

We don’t need a Nyquist-compliant temperature record – or any temperature record at all – to know that some danger exists and that it is potentially non-trivial. Energy that doesn’t escape to space as fast as it arrives from the sun – a radiative imbalance – must cause warming somewhere. That is simply the law of conservation of energy.

Reply to  Nick Stokes
January 15, 2019 1:09 am

“But the thing about min/max is that it is a measure” Looking into goat entrails to ‘guess’ the future is also a measure. Really, Nick, you are funny. Really funny.

Reply to  MarkW
January 14, 2019 3:19 pm

MarkW, you raise an important point most people here seem to have missed, since reliance on just two data points a day barely covers anything of the day itself.

A recorded high of 102 degrees F at 4:40 pm doesn't tell us how long it was over 100 degrees F, how long it was over 95 degrees F, or how long it was over 90 degrees F.

A 100 degree F day can actually be hotter than a 102 degree F day if it reached 90 degrees F two hours earlier, and 95 degrees an hour and a half earlier, than the 102 degree F day did.

It is the total amount of heat of the whole sunny day into the late evening that counts the most, not the daily high that might have lasted just 5 minutes when it was recorded.

It was a LOT hotter in the early 1990s in my area than now, because the summer heat persisted long into the night: it used to stay over 90 degrees as late as 10:00 pm, but now it never does, often dipping below 90 by around 8:30 or so. The average high is still about the same; we even reached a RECORD high of 112 degrees F, with additional days of 110, 111 and a couple of 109s, just 4 years ago.

The nights are just no longer as hot as they used to be.

gregole
Reply to  Sunsettommy
January 14, 2019 5:38 pm

True. I live in a hot place, Phoenix, Arizona. Some years back I started recording and plotting temperature with a sample rate of 3 minutes. It is clear to see that the hottest day in terms of max T may not be the hottest day if one considers, for example, a day that is slightly cooler by, say, 2 deg F, but where the high is reached earlier in the day and persists for a longer time. It's easy to see graphically. Many of the hot days peak and then cool off.

Reply to  Sunsettommy
January 14, 2019 6:00 pm

That is regional weather. Here, last summer, it stayed hot late very often. It was still near 100 at midnight more than a few times. Of course, things could have been different on the next block.

William Ward
Reply to  Sunsettommy
January 14, 2019 6:11 pm

Sunsettommy:

Yep: it really is about the temperature-time product, if we are going to use temperature.

There is a difference between your pizza being in the 400F oven for 1 minute and 20 minutes.
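
To put numbers on the temperature-time idea, here is a small sketch (entirely made-up curves, in degrees F) computing degree-hours above 90 F for a brief 102 F spike day versus a longer 100 F day, echoing the example up-thread:

```python
import numpy as np

# degree-hours above a threshold for two made-up days: a brief 102 F spike
# versus a lower 100 F peak that is held for longer
hours = np.arange(0, 24, 1/12)                       # 288 five-minute points

day_a = 75 + 27*np.exp(-((hours - 16) / 2.0)**2)     # short 102 F spike near 4 pm
day_b = 78 + 22*np.exp(-((hours - 14) / 5.0)**2)     # broad 100 F peak

def degree_hours(temps, threshold=90.0, dt=1/12):
    """Sum of (T - threshold) over the time spent above the threshold."""
    return np.clip(temps - threshold, 0, None).sum() * dt

print(round(degree_hours(day_a), 1), round(degree_hours(day_b), 1))
# the "cooler" day_b accumulates more heat above 90 F than the 102 F day_a
```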

Clyde: But I do agree with you. A better use of the record (although still not good) would be to work with the max and min temperatures independently (do not average them) and add error bars to each. When things like reading error, quantization error, UHI, etc are added in for an honest range, the problem is we get a range so wide that I doubt we can say we know what has or is happening. We just know things are happening in a large range.

When violating Nyquist is factored in, I don’t think we can even really say it has warmed since the end of the Little Ice Age. I’m NOT making the claim that it didn’t warm. I’m making no claims except that the data is so bad that we need to admit we really don’t know. We have anecdotes and they may be correct, but we don’t really have the science or data to back it up honestly.

A C Osborn
Reply to  William Ward
January 15, 2019 3:52 am

This.
The other point is that, as Curious George said, the design of the Max/Min thermometer was to derive the max and min readings per day, not to measure the heat content or the daily variations, just the max and min.
So in that respect it is NOT a 2-sample frequency taken from hundreds of samples.
It has worked well and did its job.
The introduction of continuously reading electronics has introduced its own errors, some of which are even worse.
As has been shown in the Australian BOM records, if the data is mis-handled, i.e. not averaged over a long enough period, the new electronic devices pick up transient peaks.
These peaks can have many causes but do not actually impact the overall temperature.
The other classic example of course is using electronics at airports, where they don't measure the climate temperature but how many jet engines pass by.

Tom Abbott
Reply to  William Ward
January 15, 2019 10:36 am

“Clyde: But I do agree with you. A better use of the record (although still not good) would be to work with the max and min temperatures independently (do not average them) and add error bars to each.”

I think so, too. If we use TMax charts, the 1930’s show to be as warm or warmer than the 21st century. And this applies to all areas of the world.

TMax charts show the true global temperature profile: the 1930's were as warm or warmer than subsequent years. Therefore, the Earth in the 21st century is not experiencing unprecedented warming. It was as warm or warmer than today back in the 1930's, worldwide.

The CAGW narrative is dead. There is no “hotter and hotter” if we go by Tmax charts.

Or, CAGW is also dead if we go by unmodified surface temperature charts, which also show the 1930’s as being as warm or warmer than subsequent years.

Reply to  Nick Stokes
January 14, 2019 2:42 pm

The Theorem tells you that you can’t resolve frequencies beyond that limit.

It also tells you that frequencies above half the sampling rate will fold back to a frequency below half the sampling rate. A frequency at .6 the sample rate will appear as a frequency at .4 the sample rate.

It isn’t a communications channel.

Irrelevant. It is sampled time series data.
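
A minimal sketch of that fold-back (a synthetic tone in arbitrary units, nothing to do with temperature data): a component at 0.6 of the sample rate shows up at 0.4 of the sample rate after sampling.

```python
import numpy as np

# a tone at 0.6*fs is indistinguishable, after sampling, from one at 0.4*fs
fs = 100.0                          # sample rate, arbitrary units
f_in = 0.6 * fs                     # input frequency above the Nyquist limit (fs/2)
n = np.arange(1024)
x = np.sin(2 * np.pi * f_in * n / fs)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n.size, d=1/fs)
print(freqs[np.argmax(spec)])       # ~40, i.e. 0.4*fs: the folded (aliased) frequency
```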

Reply to  Greg F
January 14, 2019 2:52 pm

“Irrelevant.”
Of course it is relevant. You have said that “A frequency at .6 the sample rate will appear as a frequency at .4 the sample rate.”. But the only result that matters is the monthly average. All these regular frequencies, aliased or not, make virtually zero contribution to that.

Reply to  Nick Stokes
January 14, 2019 3:15 pm

But the only result that matters is the monthly average.

Average is a low pass filter in the frequency domain. If the signal is aliased the monthly average will be wrong.
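
A minimal numeric sketch of that point, with made-up amplitudes: a 2-cycle/day component sampled twice a day aliases to a constant offset, so the monthly average built from those samples is biased even though the full-rate monthly average is not.

```python
import numpy as np

# synthetic month of 5-minute data: daily cycle plus a 2-cycle/day harmonic
fs, days = 288, 30
t = np.arange(days * fs) / fs                        # time in days
temp = 15 + 10*np.sin(2*np.pi*t) + 3*np.sin(2*np.pi*2*t + 0.8)

full_rate_mean = temp.mean()                         # ~15.0
two_per_day_mean = temp[::fs // 2].mean()            # samples at 00:00 and 12:00 only

print(round(full_rate_mean, 2), round(two_per_day_mean, 2))
# the 2-cycle/day term aliases to DC at this sample rate and biases the mean by ~2 C
```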

Reply to  Greg F
January 14, 2019 3:23 pm

All the frequencies in Nyquist talk are diurnal or higher. Yes, the monthly average is a low pass filter, and they will all be virtually obliterated, aliased or not.

Plus there is the effect of the sampling being locked to the diurnal cycle. What component do you think could be aliased to low frequency?

Greg Cavanagh
Reply to  Greg F
January 14, 2019 4:24 pm

Play some music through that monthly low pass filter and the song Hey Jude would be a constant note of C. Is that useful?

Reply to  Greg F
January 14, 2019 5:22 pm

“would be a constant note of C”
No, it would be silent, or a very soft rumble. There aren't any low frequency processes there that we want to know about. But with temperature there are.

Reply to  Greg F
January 14, 2019 6:18 pm

Sampling theory says the data is exactly correct for the bandwidth chosen, not approximately correct, if the sample rate is at least twice the highest frequency. Aliasing occurs only when the input signal isn't bandwidth limited to fit the chosen sample rate.

I think one has to determine the instantaneous rate of temperature change to arrive at the optimum sample rate if one wants to come up with the most correct number. I suspect that rate varies considerably in different environments. I’m not sure that the calculated average will still be any more meaningful but the actual trend of that number over time should be more accurate, which seems to be the main point of this article.

The trend is the most important claim in today’s version of climate and the most important number in projecting a forecast of likely future results. However, it still seems pretty meaningless in view of what the real world actually seems to do.

William Ward
Reply to  Greg F
January 14, 2019 6:54 pm

Nick said: “But the only result that matters is the monthly average.”

Reply: The data is aliased upon sampling at too low of a rate. You can do all of the work you want after the sampling but the aliasing is in there. You can’t get it out later. I replied earlier but maybe you didn’t get it yet.

You can design your system and filter before sampling according to your design. You can do this if your system is electronic. You can’t filter the reading of a thermometer read “by-eye”.

Reply to  Greg F
January 14, 2019 10:15 pm

” You can do all of the work you want after the sampling but the aliasing is in there. “
The process is linear and superposable. I set out the math here. Aliasing may shift one high frequency to another. But monthly averaging obliterates them all.

Johann Wundersamer
Reply to  Nick Stokes
January 14, 2019 2:59 pm

Thought the same, Nick. After all, it's not about "real" temperatures but about determining the difference between yesterday's measurement and today's.

William Ward
Reply to  Johann Wundersamer
January 14, 2019 6:58 pm

Johann,

We do get quoted annual record temperatures don’t we? I hope you know now that not one of those record temperature years is correct.

Your comment to Nick is about trends. I address trends. They do not escape aliasing.

Reply to  William Ward
January 14, 2019 10:18 pm

“I address trends.”
To no effect. You show that you get different results, sometimes higher, sometimes lower. But it’s all well within the expected variation. There is no evidence of systematic bias.

Scott W Bennett
Reply to  Nick Stokes
January 15, 2019 6:33 am

“There is no evidence of systematic bias. – Nick Stokes”

On the contrary, it is discussed in the literature that the bias between the true monthly mean temperature (Td0) – defined as the integral of the continuous temperature measurements in a month – and the monthly average of Tmean (Td1) is very large in some places and cannot be ignored. (Brooks, 1921; Conner and Foster, 2008; Jones et al., 1999)

The WMO(2018) say that the best statistical approximation of average daily temperature is based on the integration of continuous observations over a 24-hour period; the higher the frequency of observations, the more accurate the average; as the head post suggested!

“Td1 may exaggerate the spatial heterogeneities compared with Td0, because the impact of a variety of geographic (e.g. elevation) and transient (e.g. cloud cover) factors is greater on Tmax and Tmin (and hence in Td1) than that on the hourly averaged mean temperature, Td0 (Zeng and Wang, 2012).

Wang (2014) compared the multiyear averages of bias between Td1 and Td0 during cold seasons and warm seasons and found that the multi‐year mean bias during cold seasons in arid or semi‐arid regions could be as large as 1 °C.

WMO (1983) suggested that it is advisable to use a true mean or a corrected value to correspond to a mean of 24 observations a day. “Zeng and Wang (2012) argued from scientific, technological and historical perspectives that it is time to use the true monthly mean temperature in observations and model outputs.”

The WMO(2018) have suggested that Td1 is the least useful calculation available if attempting to improve the understanding of the climate of a particular country!

WMO Guide to Climatological Practices 1983, 2018 editions
Brooks C. 1921. True mean temperature. Mon. Weather Rev. 49: 226–229, doi: 10.1175/1520-0493(1921)492.0.CO;2.
Conner G, Foster S. 2008. Searching for the daily mean temperature. In 17th Conference on Applied Climatology, New Orleans, Louisiana.
Jones PD, New M, Parker DE, Martin S, Rigor IG. 1999. Surface air temperature and its changes over the past 150 years. Rev. Geophys. 37: 173–199, doi: 10.1029/1999RG900002.
Li, Z., K. Wang, C. Zhou, and L. Wang, 2016: Modelling the true monthly mean temperature from continuous measurements over global land. Int. J. Climatol., 36, 2103–2110, https://doi.org/10.1002/joc.4445.
Wang K. 2014. Sampling biases in datasets of historical mean air temperature over land. Sci. Rep. 4: 4637, doi: 10.1038/srep04637.
Zeng X, Wang A. 2012. What is monthly mean land surface air temperature? Eos Trans. AGU 93: 156, doi: 10.1029/2012EO150006.

Reply to  Nick Stokes
January 15, 2019 7:24 am

” It is discussed in the literature that the bias between true monthly mean temperature (Td0)”
As is clear from context, I am saying that there is no systematic bias between trends. If you think there is, please say which way it goes.

Scott W Bennett
Reply to  Nick Stokes
January 15, 2019 7:02 pm

“As is clear from context, I am saying that there is no systematic bias between trends. If you think there is, please say which way it goes. – Nick Stokes”

Thorne et al. (2016) found a consistent overestimation of temperature by the traditional method [(Tmax + Tmin)/2], for the CONUS:

Moreover, the traditional method overestimates the daily average temperature at 134 stations (62.3%), underestimates it at 76 stations (35.4%), and shows no difference at only 5 stations (2.3%)… On average, the traditional method overestimates the daily average temperature compared to hourly averaging by approximately 0.16°F, though there is strong spatial variability*.

The explanation for the long-term difference between the two methods is the underlying assumption for the twice-daily method that the diurnal curve of temperature is symmetrical. In particular, the Yule–Kendall index is positive for all 215 CONUS stations, indicating that the daily temperature curve is right skewed; that is, more hourly observations occur near the bottom of the distribution of hourly temperatures (i.e., around Tmin) than near the top of the distribution (around Tmax). – Thorne et al. 2016*
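
For reference, the Yule–Kendall index mentioned in that passage is a quartile skewness measure; a minimal sketch of the calculation as I understand the definition, with an invented day of hourly readings:

```python
import numpy as np

# quartile (Yule-Kendall) skewness: positive when more hours sit near Tmin
def yule_kendall(hourly_temps):
    q1, q2, q3 = np.percentile(hourly_temps, [25, 50, 75])
    return (q1 + q3 - 2*q2) / (q3 - q1)

# an invented day that warms briefly in the afternoon and is cool otherwise
hourly = [4, 4, 3, 3, 3, 4, 5, 7, 10, 13, 15, 16,
          17, 17, 16, 14, 12, 10, 8, 7, 6, 5, 5, 4]
print(round(yule_kendall(hourly), 2))     # > 0: a right-skewed daily curve
```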

It is interesting to note from that study that spatial patterns in the annually averaged differences between the temperature-averaging methods are readily apparent. The regions of greatest difference between the two methods resemble previously defined climatic zones in the CONUS.

What concerns me most, is that this study found that the shape of the daily temperature curve was changing, such that more hours per day were spent closer to Tmin than Tmax during the (2001-15) period versus the base period (1981-2010), which doesn’t bode well for a changeover to the hourly method because it will have the effect of masking the old errors and sexing up any warming! ;-(

*Thorne, P. W., and Coauthors, 2016: Reassessing changes in diurnal temperature range: A new data set and characterization of data biases. J. Geophys. Res. Atmos., 121, 5115–5137, https://doi.org/10.1002/2015JD024583.

Scott W Bennett
Reply to  Nick Stokes
January 15, 2019 7:22 pm

“As is clear from context, I am saying that there is no systematic bias between trends. If you think there is, please say which way it goes. – Nick Stokes”

Wang (2014)* analyzed approximately 5600 weather stations globally from the NCDC and found an average difference between the two temperature-averaging methods of 0.2°C, with the traditional method overestimating the hourly average temperature. They found that asymmetry in the daily temperature curve resulted in a systematic bias. And also that Tmean resulted in a more random sampling bias by under sampling weather events such as frontal passages.

Wang, K., 2014: Sampling biases in datasets of historical mean air temperature over land. Sci. Rep., 4, 4637, https://doi.org/10.1038/srep04637

Reply to  Nick Stokes
January 15, 2019 8:44 pm

Scott W Bennett January 15, 2019 at 7:02 pm

“As is clear from context, I am saying that there is no systematic bias between trends. If you think there is, please say which way it goes. – Nick Stokes”

Thorne et al. (2016) found a consistent overestimation of temperature by the traditional method [(Tmax + Tmin)/2], for the CONUS

Scott, it seems you’re not discussing what Nick is discussing. He’s discussing trends, you’re discussing values.

The traditional method ((max+min)/2) definitely overestimates the true average value. However, as near as I can tell, it doesn't make any difference in the trends. I ran the real and traditional trends on 30 US stations and found no significant difference. I also investigated the effect of adding random errors to the real data and found no difference in the trends.

I’m still looking, but haven’t found any evidence that there is a systematic effect on the trends.

w.

Phil
Reply to  Nick Stokes
January 15, 2019 9:26 pm

@ Scott W Bennett on January 15, 2019 at 7:02 pm:

Your quotes are actually from Bernhardt, et al, 2018.

Bernhardt, J., A.M. Carleton, and C. LaMagna, 2018: A Comparison of Daily Temperature-Averaging Methods: Spatial Variability and Recent Change for the CONUS. J. Climate, 31, 979–996, https://doi.org/10.1175/JCLI-D-17-0089.1

https://journals.ametsoc.org/doi/pdf/10.1175/JCLI-D-17-0089.1

S W Bennett
Reply to  Nick Stokes
January 15, 2019 10:01 pm

Willis, I’m not sure if you read my post immediately below about Wang(2014):

“They found that asymmetry in the daily temperature curve resulted in a systematic bias.”

Now I’m confused because I’m not sure what you mean by “trends” here.

I thought it was clear that several studies found long term global and regional systematic bias.

Perhaps I might have obscured the point that trends will be biased because the shape of the daily temperature curve was found not just to be skewed but also changing.

My understanding is that this will create a spurious trend in either method, unless explicitly teased out (By comparison with changes in humidity):

“Stations in the southeast CONUS experience more time in the lowest quarter of their daily temperature distribution due to higher amounts of atmospheric moisture (e.g., given by the specific humidity) and the fact that moister air warms more slowly than drier air.*”

For example, Thorne et al. (2016), using half ASOS and half manual observations, found a considerable difference from Wang's ASOS-only results in how the two temperature-averaging methods compare in the cold season, but only a negligible difference for the warm season.

That’s me for now, I’ve exhausted my contribution to this subject! 😉

*Thorne, P. W., and Coauthors, 2016: Reassessing changes in diurnal temperature range: A new data set and characterization of data biases. J. Geophys. Res. Atmos., 121, 5115–5137, https://doi.org/10.1002/2015JD024583.

Scott W Bennett
Reply to  Nick Stokes
January 15, 2019 10:20 pm

Phil,

You are right! My mistake, referencing is not my best skill, despite appearances. In my defence there is a cross-reference that I’ve obviously confused. I have all these notes and ideas in my head and it is bloody hard to go back and remember where I got them all from. I know, I should be more rigorous but I was trying to get the ideas across as informally as possible. I figure in this day and age it is almost superfluous as any diligent reader will quickly google it.

cheers,

Scott

William Ward
Reply to  Nick Stokes
January 16, 2019 1:53 pm

The trend difference is simply the accumulation of the error. As discussed in other posts, the error has a quality approaching random. I don't think it is truly random – but I'll let someone more knowledgeable try to measure that and report. It still appears to me that we see the accumulated error in the trends.

Frederick Michael
Reply to  Nick Stokes
January 14, 2019 3:06 pm

Nick is right. This paper has fundamental conceptual errors.

It is obviously wrong to claim that averaging the daily high and the low is WORSE than averaging two samples 12 hours apart. The high and the low are based on a huge number of “samples.” To treat each one as if it’s a single sample is not valid.

Let me say this a different way. The sampling theorem does not apply to statistics not based on samples. A mercury max/min thermometer uses an effectively infinite number of samples. In fact, it’s analogue; there are no samples at all.

MarkW
Reply to  Frederick Michael
January 14, 2019 3:58 pm

About as far from correct as one can be and still be speaking English.
No, a mercury max/min thermometer is not taking an infinite number of samples, it takes two samples, the high whenever that occurred and the low, whenever that occurred.
If it were taking an infinite number of samples you could reconstruct the temperature profile of the previous day, second by second.
The best you can say is that while it’s constantly taking samples, it only stores and reports two of those samples.

Frederick Michael
Reply to  MarkW
January 14, 2019 6:15 pm

Let me repeat, “It is obviously wrong to claim that averaging the daily high and low is WORSE than averaging two samples 12 hours apart.”

Is this, or is it not, obvious to you?

Ever used the sampling theorem at work?

I have.

Phil
Reply to  Frederick Michael
January 14, 2019 10:00 pm

@Frederick Michael

You are ignoring the shape of the curve. The shape of the daily temperature curve varies from one day to the next and is what is causing the error in the daily “average.” As you know, the “average” of a time series is not an average: it is a smooth or filter. Information is being discarded by the max-min thermometer. Without knowing the shape of the curve (which implies knowing exactly when the max and the min happened), the real daily smooth cannot be known. It is known that a sine wave is a bad model to assume for the daily temperature curve.
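
A small sketch of that shape dependence (synthetic curves with the same Tmax and Tmin, arbitrary degrees): the midrange equals the true mean only when the curve is symmetric.

```python
import numpy as np

t = np.arange(288) / 288.0                            # one day of 5-minute samples

symmetric = 20 + 8*np.sin(2*np.pi*(t - 0.25))         # sinusoid-like day
skewed    = 12 + 16*np.exp(-((t - 0.6)/0.15)**2)      # brief warm spell, long cool tail

for name, day in (("symmetric", symmetric), ("skewed", skewed)):
    midrange = (day.max() + day.min()) / 2
    print(name, round(day.mean(), 2), round(midrange, 2))
# both days run from ~12 to ~28, so the midrange is ~20 for both, but the true
# mean of the skewed day is ~16.3: (Tmax+Tmin)/2 misses it by nearly 4 degrees
```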

Frederick Michael
Reply to  Frederick Michael
January 15, 2019 8:30 am

Phil – The error you cite is real. That’s the problem with using (min+max)/2. It is not a perfect method.

But that has absolutely nothing to do with the sampling theorem or aliasing.

MarkW
Reply to  Frederick Michael
January 15, 2019 10:25 am

Nice dodge there.
From the rest of my comment it is obvious I was addressing your ridiculous claim that a high and a low reading are the result of an infinite number of samples.

Hugs
Reply to  Frederick Michael
January 15, 2019 11:48 am

Mercury minmax can well be considered as having a very large sampling frequency.

But in reality, very short pulses of changing temperature don't have time to fully express themselves in the reading. So an electronic device may react faster and show much more volatility in practice. How to mimic the mercury minmax with an electronic thermometer sampling the temperature is a hard question that has not had enough effort put into it.

Nick says the anomaly trend is maybe not affected. Well yeah, science is not about believing stuff, it is about proving it. For me it is clear that mercury minmax can't be compared to different electronic devices, for a number of reasons. The best reason is that the most commonly used thermometers were simply not designed to detect changes of 0.001 degrees over a year.

The record is contaminated every time something happens near the measuring point. It is contaminated by sheltering changes, both abrupt and slow. It is contaminated by some electronic jitter and smoothing protocols.

So it is far from certain, but then, you can argue we should play it safe. What exactly counts as playing it safe is disputed, with notable opinions coming from Curry, Pielke Jr, and Lomborg.

Gary Pearse
Reply to  Frederick Michael
January 14, 2019 4:03 pm

Nick would be right if the daily profile of continuous sampling were the same each day. One day is not a sufficient exploration. The first commenter, vukevich, had the best idea, which is to measure the heat in a bucket of water (rather, a fluid that doesn't freeze) each day at, say, 2 pm. This might be automated by encapsulating the fluid with an air cupola on top fitted with a pressure gauge "thermometer".

LdB
Reply to  Frederick Michael
January 14, 2019 5:39 pm

You have an average temperature but it bears no relationship to earth energy balance which you are trying to work out in climate science 🙂

To understand why, consider a cold rainy day: your temperature all day could have been 20 deg C, the cloud clears for a brief spell and the temperature rises to 24 deg C, so that is your max. Your average will be (MAX+MIN)/2, so it inflates your average… you think there is a lot more energy that day than there is. Remember, at the end of the day what they use this temperature average for is to create a radiative forcing, so they are proxying the earth's energy budget.

Matt Schilling
Reply to  LdB
January 15, 2019 7:45 am

I see this regularly where I live in upstate NY, especially on predominantly cloudy days: The weather report calls for a certain high temp and we get nowhere near it all day long. Then, late afternoon, the clouds break, and the temp moves up quickly to the predicted high, only to stay there for a short time before dropping back down as the sun sets.
We, in fact, hit the high temp called for, but we spent the majority of the day in noticeably cooler weather. The high temp for the day was almost meaningless, as the day was dominated by cooler temps.

William Ward
Reply to  Frederick Michael
January 14, 2019 6:41 pm

Frederick Michael,

Please show the math or conceptual errors. Be advised you are fighting proven mathematics of signal analysis. Your challenge is to show 1) that there is no energy at 1-cycle/day, 2-cycles/day or 3-cycles/day (good luck showing there is no energy in the signal at 1-cycle/day). Or you need to show how 1 and 3-cycles/day do not alias to 1-cycle/day. Or you need to prove why Nyquist was wrong. Or show that every electronic device ever made that converts between analog and digital data and has followed Nyquist has done so needlessly.

A mercury max/min thermometer only delivers 2-samples/day. If you think the number is infinite then you wouldn’t mind giving me an example where you get say … 3-samples/day. Give me the 3 values you get please. You will quickly see that you are mistaken. If you have a high quality, calibrated max/min thermometer next to a USCRN site and compare your 2 values (samples) after a full day (properly sampled with no TOBS), and compare it to the 288-samples that day, you will find the max and min extracted from the 288-samples match what your max/min device gives. Max/min thermometers give you 2 samples.

You can play with USCRN data for yourself and show by example that the 2 samples taken at midnight and noon (or any 2 times separated by 12 hours) will usually give a lower error mean than the average of max and min. With clock jitter you get into some strange effects. It depends upon when the samples land in time relative to the spectral content for the day.

Frederick Michael
Reply to  William Ward
January 14, 2019 7:59 pm

Thanks for your response.

The conceptual error is describing the min and the max as samples. While the (min+max)/2 method is less than perfect, it’s the right approach, given older technology. Of course, it is worse than a true average.

I do not believe that there’s no energy at frequencies higher than 1 cycle per day. Of course there is. But we are interested in a low pass filtered result, so the “energy” at higher frequencies doesn’t somehow contribute to the average temperature we seek. The min & max act as low pass filters and are appropriate.

Still, your empirical point overrules any theoretical argument I can make. I will play with the USCRN data you link to above. If it shows greater error from averaging min and max than averaging two individual samples 12 hours apart, I will gladly concede,

after changing my underwear.

Frederick Michael
Reply to  Frederick Michael
January 14, 2019 8:04 pm

Dang. Looks like I have to wait for the shutdown to end before checking the data.

Kneel
Reply to  Frederick Michael
January 14, 2019 8:23 pm

“The min & max act as low pass filters and are appropriate.”

Not even wrong.
This does NOT act as a low pass filter AT ALL. The maths of calculating the average from the full 288 samples is a low-pass filter, min and max are not.

“…the “energy” at higher frequencies doesn’t somehow contribute to the average temperature we seek.”

Yes it does – by aliasing, and that is the point.

Dave Fair
Reply to  Frederick Michael
January 14, 2019 9:14 pm

Comparing USCRN stations’ trends with the others’ over the same time period would yield useful information.

William Ward
Reply to  Frederick Michael
January 14, 2019 10:10 pm

Frederick Michael,

Your underwear?!! Now that is TMI (too much information)! LOL! Thanks for the laugh.

Here is what you are missing: once you sample you have to deal with what has aliased. So if you are interested in a low pass filtered result then you need to low-pass filter before you sample. Not after – unless you sample without aliasing. If you sample without aliasing then you have all the signal has to offer and you can process it in the digital domain just as you would process it in the analog domain before you sampled it. You can also (and this is going to drive Nick crazy!) convert the sample from digital back to analog without losing any of the original data!
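
As a side note, the "back to analog without losing data" claim corresponds to Whittaker–Shannon interpolation; a minimal sketch with a synthetic band-limited signal (the finite window leaves a small truncation error):

```python
import numpy as np

# reconstruct a value between samples from Nyquist-compliant samples alone
fs = 8.0                                   # samples/day, above 2x the highest content
n = np.arange(-64, 64)                     # a finite window of sample indices
x_n = np.cos(2*np.pi*1.0*n/fs) + 0.5*np.cos(2*np.pi*3.0*n/fs)   # 1 & 3 cycles/day

def reconstruct(t_days):
    # Whittaker-Shannon: sum of samples weighted by sinc((t - n/fs)*fs)
    return np.sum(x_n * np.sinc((t_days - n/fs) * fs))

t = 0.31                                   # an arbitrary instant between samples
truth = np.cos(2*np.pi*1.0*t) + 0.5*np.cos(2*np.pi*3.0*t)
print(round(truth, 4), round(reconstruct(t), 4))
# close; the small gap is the finite-window truncation error, not aliasing
```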

When we look at a thermometer with our eyes we cannot filter anything, even with a max/min thermometer. There is really no practical way to filter. With an electronic system, filtering can be done before sampling, and then you can use 2 samples/day if you filter appropriately.

I show in my paper what happens to trends with aliasing. How do you explain the trend bias/error otherwise? If Nick were right that you can just “extract the monthly” data despite the aliasing then you would not have the trend error.

Some of you are really holding on tight to this, but do so at your own detriment. You need to show how Nyquist doesn’t apply. You need to show how the content at 1, 2 and 3 cycles/day doesn’t alias your signal trend and daily signal. I give you the graph (full paper). Just show me how it is wrong. Otherwise use it to make your work better. I didn’t invent it, I’m just pointing it out.

Reply to  Frederick Michael
January 14, 2019 10:27 pm

“Or you need to show how 1 and 3-cycles/day do not alias to 1-cycle/day. “
Well, they can’t. You get sum and difference frequencies.

You have never dealt with the issue of locked frequencies. Sampling at regular sub-day frequencies cannot alias diurnal harmonics to low frequency values. It can go to zero, but that just reproduces the known resolution variation.

Paramenter
Reply to  Frederick Michael
January 15, 2019 3:21 am

The conceptual error is describing the min and the max as samples.

What's wrong with that? Granted, those samples are usually not evenly spaced. Furthermore, there may be one daily max with that value, or two, or three – we don't know. But irregular sampling just adds an additional burden to the original problem; it does not alleviate it.

A C Osborn
Reply to  Frederick Michael
January 15, 2019 4:07 am

The ERROR is combining them to obtain an AVERAGE of them per day.
Average them individually per week, per month, per year to obtain useful comparisons.
It is patently obvious that taking the average of the 2 readings is only a very poor approximation of the real average each day.
But even using the area under the curve of all the readings of an electronic device still does not tell you the heat content of the day unless you also have the humidity readings as well.

Reply to  Frederick Michael
January 15, 2019 10:56 am

A C Osborn –> Bingo! And you know what? They do have daily min, max, and avg humidity data that could be used. Go to your local NWS and you can find the daily history of humidity; it is there. I'm not an expert on weather data, so I don't know how long humidity has been collected and stored, but it is there now.

I’m working on an essay about uncertainty of the averages and something I notice here is that this paper doesn’t address the errors in measurements, nor should it necessarily do so. However, those also contribute to the errors involved, to the point where most, if not all, of the trends are within the error range and should be treated as noise, not real temperature changes.

In other words, as the article says, the monthly averages are not suitable for purpose.

Clyde Spencer
Reply to  Nick Stokes
January 14, 2019 4:02 pm

Stokes,
You said, "It isn't a communications channel." You are stating the obvious, and it is a non sequitur.

The important thing, which I suspect you are capable of understanding, but choose to ignore, is that Ward has demonstrated that the true, daily mean-temperature can and does vary from the mid-range daily temperature, and the difference is determined by the shape of the temperature time-series. In other words, the phase components of the various frequencies shape the time-series envelope and strongly affect the accuracy of the mid-range compared to the true mean. What can be concluded from this is that a trend in the change in the shapes of the envelopes can create the appearance of a temperature change in the mean that may not be valid.

Yes, I know that historically all we have to work with is two daily temperatures. But, one shouldn’t be quick to assign unwarranted accuracy and precision to data that have been demonstrated to be unfit for purpose. The mid-range value is a measure of central tendency essentially equivalent to a degenerate median. As such, it may give us some insight on the tendency of long-term temperature changes. However, an honest broker would acknowledge the shortcomings and not claim accuracy and precision of mid-range averages that have little value for estimates of accumulating energy.

Gary Pearse
Reply to  Clyde Spencer
January 14, 2019 4:45 pm

Clyde, I take your point on the invalidity of such reported accuracy of the measurements. But, since we are trying to get an early warning system for disastrous warming, it isn't necessary to report in hundredths of a degree. I would take advantage of polar enhancement of temperature (what, 3x the average global temp change?). Put a dozen recording thermometers in each polar region and average the readings.

I've argued before that measuring sea level in millimeters a year, and even adding corrections, is ridiculous if what we are worried about is sea level rise of a meter or three a century. Tide gauges are sufficient and even yardsticks would suffice – hey, ax handles are good enough measuring instruments if we are talking about the Westside Hwy in Manhattan going under 10 ft of water by 2008. All this baloney about accuracy to tenths and hundredths is part of the propaganda to make people feel the scientists must be right if they can measure things to a gnat's hair.

Another bit of agitprop is TOBS and the station moves. Yeah, there has been a bona fide reason for doing this – habitat encroachment, different sampling times, etc. – but it also presents another 'degree of freedom' that might be employed by charlatans to move, remove, and adjust stations that have been running too cool, say, and to leave ones running too hot alone, except for here and there to look unbiased. They did deep-six a preponderance of rural stations in the "Great Thermometer Extinction Event(TM)" in the USA over the past two decades. I hate myself for coming to think this way, but after the ham-handed political adjustments to the record by Hansen and Karl on the eve of their retirements, all the goodwill enjoyed by scientists in earlier times has been used up and a buyer-beware mood has set in regarding these rent seekers.

kevink
Reply to  Nick Stokes
January 14, 2019 4:18 pm

Nick Stokes wrote

“The Theorem tells you that you can’t resolve frequencies beyond that limit. But we aren’t trying to resolve high frequencies. We are trying to get a monthly average. It isn’t a communications channel.”

No, the Theorem tells you that signal frequencies above 1/2 the sample rate WILL BE CONFUSED WITH (aliased into) frequencies below 1/2 the sample rate. This adds errors to the data.

Mr. Ward is quite correct.

Cheers, Kevin

Reply to  kevink
January 14, 2019 4:48 pm

“This adds errors to the data.”
How does aliasing sub-diurnal frequencies affect the monthly average?

William Ward
Reply to  Nick Stokes
January 14, 2019 7:09 pm

Nick – I recommend you read the Full Paper. The first 11-12 pages I go over how the aliasing happens, with graphics, to illustrate how the frequency content at/near 2-cycles/day aliases to the trends.

William Ward
Reply to  kevink
January 16, 2019 2:13 pm

I appreciate your comments KevinK.

William Ward
Reply to  Nick Stokes
January 14, 2019 5:46 pm

Well hello Nick! Happy New Year. I see you have 1) not taken the time to learn signal analysis since we last spoke and 2) didn’t read either my short or full paper as you replied seconds/minutes after the link was live.

Nick said: “The Theorem tells you that you can’t resolve frequencies beyond that limit. But we aren’t trying to resolve high frequencies. We are trying to get a monthly average. It isn’t a communications channel.”

My reply: No, you don’t understand sampling it seems. First, you CAN NOT (sorry to shout) separate frequencies when sampling unless you first filter out the frequencies you don’t want prior to sampling. If you ignore higher frequencies because you are not interested in them and you sample anyway, disregarding Nyquist, then those frequencies you don’t care about come crashing down on the information you do care about! There is no UNDO button for aliasing. There is no magic-super-special-climate-sciencey algorithm you can run to undo the damage if you alias. We are talking about signal analysis 101, first week of class stuff here. I have no idea why you bring up a communication channel. We are talking about a signal. The theorem doesn’t care what the signal is.

Nick said: “As for aliasing, there are no frequencies to alias. The sampling rate is locked to the diurnal frequency.”

My reply: Get the USCRN data for Cordova AK and take a look at the data for Nov 11, 2017. I use this in my paper, so I recommend we try that. Run an FFT on a day’s worth of data. Or you can run it for many days or many months or years. As I explain in my FULL paper, the frequency content down around 0-cycles/day is the very long term trend signal. [Side note: Electrical engineers usually use Hertz (Hz) to discuss frequency. For atmospheric air temperature the signals are relatively slow compared to most things in electronics. So, we would actually use micro-Hertz (uHz). This is awkward and unintuitive. So, I’m using cycles/day or samples/day. As long as we stay consistent it works.] The 10-year and 1-year signals are down very close to 0-cycles/day. So, if you want to know what can alias your long term signals or trends then you look at the spectral content of the signal at near 2-cycles/day if we are using 2-samples/day. Any energy in our signal at this frequency will come crashing down on your trends! If the amplitude of the content at 2-cycles/day is low, then the impact to trends is likely low – but phase of the signal also matters. The phase relationship between the 0-cycle and 2-cycle content will determine just how additive or subtractive the aliasing is. Figure 8 of my FULL paper shows how this content aliases in a graphical format. [Note: Don’t get figures between the full and short versions of the paper confused. The Full paper is a superset of the short paper published here.] Furthermore, at 2-samples/day, the content at 1-cycle/day and 3-cycles/day will alias to the 1-cycle/day (daily) signal – this affects daily mean values.

See image here: https://imgur.com/xaqieor

This image was not intended for public consumption, as it has some issues, but I can use it to illustrate the point. Feel free to do your own FFT. It shows an FFT for Cordova AK for Nov 11, 2017. This is the same signal I use in Figs 10 and 12 in the Full paper and Figs 1 and 3 in the WUWT paper. Note that the temperature signal looks more like a square wave than a sinusoid. Notice the many sharp transitions. These all equate to higher frequency content. The FFT image shows the signal being sampled and the positive spectral image at 12-samples/day. You can see the overlap of content. The overlap is aliasing! If the sampling were shown at 2-samples/day, then the green image would be almost on top of the blue – the aliasing would be even greater, and more aliasing would occur to the daily mean and long term trend. The frequency content is there, and the aliasing is there. You should study the frequency content of more signals to see the variation that exists. It is not easy to visually ascertain the extent that the aliasing shows up in the time domain – in the mean and trend values. The means and trends need to be calculated. The frequency charts just prove what is going on, in full compliance with the theorem.
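
For anyone wanting to reproduce this kind of spectrum, here is a minimal sketch (with a made-up day standing in for 288 USCRN samples) of an FFT whose frequency axis comes out directly in cycles/day:

```python
import numpy as np

fs = 288                                     # samples/day (5-minute data)
t = np.arange(fs) / fs
# made-up day with content at 1, 2 and 3 cycles/day (amplitudes 6, 2 and 1)
temps = -2 + 6*np.sin(2*np.pi*1*t) + 2*np.sin(2*np.pi*2*t) + 1*np.sin(2*np.pi*3*t)

spec = np.fft.rfft(temps - temps.mean()) / fs
cycles_per_day = np.fft.rfftfreq(fs, d=1/fs)       # bin k sits at k cycles/day

for k in (1, 2, 3):
    print(int(cycles_per_day[k]), round(2*abs(spec[k]), 2))   # recovers 6.0, 2.0, 1.0
```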

The sampling is not locked to the diurnal frequency. Higher frequency events, like cloud cover changes, precipitation changes, moving fronts, etc., all contribute to when the max and min happen. Using max/min does absolutely give you 2-samples/day, just with what is equivalent to "clock jitter". The fact that there is error in the periodic nature of the "clock" when max and min are the samples does not invalidate Nyquist. But the conditions violate Nyquist, and this produces error. (Summary: Nyquist not invalidated. Violated.)

Nick said: “As for the examples, Fig 1 is just 1 day. 1 period. You can’t demonstrate Nyquist, aliasing or whatever with one period.”

My reply: Of course, you can. What uses Nyquist that everyone can relate with? Digital music. Is there a limit to the length of a digitally sampled song? No. Can a short clip of a digitally sampled song contain aliasing? Of course it can. A signal is a signal.

Nick said: “Fig 2 shows the effect of sampling at different resolution. And the differences will be due to the phase (unspecified) at which sampling was done. But the min/max is not part of that sequence. It occurs at times of day depending on the signal itself.”

My reply: I’d like to see your dissertation on that (differences due to phase at which sampling was done). My Figure 3 (WUWT/Short paper) shows graphically what happens when the sample rate is decreased. It is visually obvious that much content is being missed as the sample rate declines. This gives an intuitive feel for what is happening, but Nyquist explains the theory. The Min and Max ARE a part of the samples. The Min and Max are selected from the 288-samples. If there were just 2 clock pulses corresponding to those 2 samples then you would get just those 2 values. Clock jitter. You can see that in this case the jitter is very destructive. A much better result would have been achieved just by sampling at midnight and noon!

Regarding your study on TOBS, I read it multiple times long ago. I think it is a very good study! It explains TOBS well. But TOBS has NOTHING to do with this issue. When you sample according to Nyquist you are free to redefine your day as much as you like, and it will never bite you. No TOBS ever! You will just get the correct mean for the day as defined.

Nick, signal analysis is not a controversial subject. It is used for literally every kind of technology we all enjoy and depend upon today. There are no special carve-outs or waivers for Nyquist. It rules. Violate it and the effects are well defined. We have known about it for 80+ years. What should be controversial is climate “science” not knowing about it or outright dismissing it. I’m not sure what the reasons are, and I don’t care. I’m tired of Bullsnot data being shoved on the public as evidence of some imaginary crisis. The use of the instrumental record needs to be shot in the head (figuratively speaking). It is a dumpster fire and an embarrassment that climate science propagates upon its foundation.

William Ward
Reply to  William Ward
January 14, 2019 7:04 pm

Mod – I “fat fingered” a cut-and-paste in my reply above to Nick where I start: “Well hello Nick!”

If I can get one free mercy edit of my post I would appreciate it. It should be obvious that a paragraph got copied over top of another and thus duplicated.

Sorry to Nick and others who may be trying to read it! I’m not sure how this works, but if a moderator can do a quick repair and then remove this request (or not). Thanks and my apology!

[edited ~ctm]

Reply to  William Ward
January 14, 2019 9:06 pm

William,
“If you ignore higher frequencies because you are not interested in them and you sample anyway, disregarding Nyquist, then those frequencies you don’t care about come crashing down on the information you do care about!”

No, they don’t. They can’t, to the monthly average, because of attenuation. Here is the math:

Suppose we have a frequency component exp(2πift), where f is the frequency in 1/hour and t is time. Sampled values are at ft = a*n, where a is a sampling ratio (a = 0.5 at the Nyquist rate) and n is an integer. The average over 1 month (720 hours, so the sum runs over n = 0…N with N = 720*f/a) is

(1/N) * Σ exp(2πi*a*n) = (1/N) * (1 - exp(2πi*a*(N+1))) / (1 - exp(2πi*a))

It's linear, so that is the contribution of that frequency to the whole average. For close adherence to Nyquist, a is small, the last factor is approximately i/(2πa), and the whole is an attenuated but accurate version of the integral. As a rises to 0.5 and beyond, it is attenuated but inaccurate. But it doesn't matter, because the attenuation factor is about 1/(2πaN) = 1/(2π*720*f). Here f is the frequency of the component that you think might be harmed by aliasing – less than 1, it seems. So the attenuation, relative to the roughly monthly frequencies of interest, is 3 or (much) more orders of magnitude. It just doesn't matter if these terms are incorrectly averaged.
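
A rough numeric cross-check of this attenuation argument (a sketch with made-up numbers; it applies to components that are not phase-locked to the sample times, the locked case being argued separately in this thread):

```python
import numpy as np

# contribution of one frequency component to a 720-hour mean,
# for sampling intervals of 5 minutes, 1 hour and 12 hours
def monthly_contribution(f_per_hour, dt_hours, hours=720):
    n = np.arange(int(hours / dt_hours))
    return abs(np.mean(np.exp(2j * np.pi * f_per_hour * n * dt_hours)))

f = 0.93 / 24.0                       # a 0.93 cycle/day component, in 1/hour units
for dt in (1/12, 1.0, 12.0):          # 288/day, 24/day, 2/day sampling
    print(dt, round(monthly_contribution(f, dt), 5))
print("1/(2*pi*720*f) estimate:", round(1 / (2 * np.pi * 720 * f), 5))
# all three come out a fraction of a percent, the same order as the estimate;
# a component locked exactly to the sample times (e.g. 2 cycles/day sampled
# twice daily) is the separate case discussed elsewhere in the thread
```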

William Ward
Reply to  Nick Stokes
January 14, 2019 10:25 pm

Nick – on this you are hopelessly stubborn. See Fig 4 from the short/WUWT version. It shows the daily mean error for every day in the year. Aren’t months made up from days? How can the mean be wrong by +/- 1, 2, 3, or 4C every day of the year and somehow magically the months are all just fine? Did you run an FFT? Do you see content at and around 2-cycles per day? Did you see Fig 8 of the full paper? Do you see how the aliasing works to affect the trends (content near 0-cycles/day)? Have you looked at any USCRN station data and compared the 2 competing methods for monthly mean (or median)? Did you try to calculate any long-term trends using the 2 methods?

Reply to  William Ward
January 14, 2019 10:57 pm

“Have you looked at any USCRN station data and compared the 2 competing methods for monthly mean (or median)? Did you try to calculate any long-term trends using the 2 methods?”
Yes, right here. And the point is, the results differ, but well within the expected variation for the trend of such a short period. There is no statistical difference from which you can make deductions.

“Do you see content at and around 2-cycles per day?”
I’m sure there is. But what is left after 6 db/octave attenuation? about 1/1440π.

Reply to  William Ward
January 14, 2019 10:58 pm

“1/1440π”
Oops, 1/60π. Still tiny.

Reply to  William Ward
January 15, 2019 3:07 am

“Did you try to calculate any long-term trends using the 2 methods?”
More results here.

A C Osborn
Reply to  William Ward
January 15, 2019 4:18 am

As I said upthread, the problem is taking an average at all; it is only a poor approximation.
There is nothing wrong with the Max/Min readings in themselves; it is what you try to do with them that is wrong.
First of all we have the problem of the nomenclature used here: the AVERAGE of the 2 readings is 100% accurate.
But the average of the 2 readings is NOT THE MEAN of the day's temperatures.

Phil
Reply to  Nick Stokes
January 14, 2019 9:36 pm

Time of Observation (TOBS) should add more jitter error.

Ferdberple
Reply to  Nick Stokes
January 14, 2019 10:50 pm

There are differences, but nothing to do with Nyquist.
=========
By that logic one can drive from LA to Las Vegas, take the max speed + min speed, divide the sum by 2 and get the average speed.

So stuck in LA my min speed is 0, and somewhere out on the highway I might get up to 100, so my average for the trip will be 50. And based on this result I will decide whether or not to buy a new car.

Frederick Michael
Reply to  Ferdberple
January 15, 2019 7:10 pm

How does that have anything to do with the Nyquist? Your example doesn’t even require sampling.

Jim G
Reply to  Nick Stokes
January 14, 2019 11:51 pm

Nick.
I am wondering if you could explain your reasoning a little better.

As was demonstrated in the example above, Tmin and Tmax are not the peaks of a sinusoidal wave.
Air temperatures over land just don’t behave that way.
With the exception of deserts, most cities will have clouds on any given day. As a cloud passes the recording station, the air will cool a bit. When it passes, the air warms back up. In this case, the signal will look like a noisy square wave and not a sine.

Min/Max are meaningless on a square wave signal.

Reply to  Jim G
January 15, 2019 3:16 am

None of this has anything to do with Nyquist. But the diurnal behaviour of locations is reasonably repetitive. The mean is one measure, min/max is another. Both will respond to climate changes.

Again I’ll point to my Boulder analysis. It compares min/max with various reading times, and the integrated (in black). The min/max show a shift with reading time, so of course they don’t agree with the integrated. But they are just variably offset, and some reading times come very close. The post is here.

Reply to  Nick Stokes
January 15, 2019 2:09 am

Nick,

Signal theory applies to all signals. All time-variant sequences of numbers are signals, whether temperatures or shot records from a seismic survey.

Every time Mann splices the instrumental record onto a proxy reconstruction, he violates just about every principle of signal theory… as did Marcott’s “up-tick,” as does everyone who claims the recent warming is unprecedented based on comparisons of the instrumental record to proxy reconstructions.

Paramenter
Reply to  Nick Stokes
January 15, 2019 2:17 am

Hey Nick,

The Theorem tells you that you can’t resolve frequencies beyond that limit. But we aren’t trying to resolve high frequencies.

Where does the difference between the daily (Tmax+Tmin)/2 and the daily true arithmetic mean (the area under the temperature curve) come from? For particular shapes of the daily temperature signal, having just two samples per day yields significant error. That's because you cannot 'resolve' the signal. I would say it has something to do with Nyquist.

We are trying to get a monthly average.

Below is the error magnitude for Boulder, CO, Jan 2006-Dec 2017, between direct integration (per month) of the subhourly signal (sampled every 5 min) and the NOAA monthly record based on averaging daily midrange values (Tmax+Tmin)/2. The error visible in the daily records is also visible in the monthly ones. I don't quite understand why such daily drift should not propagate into the monthly and yearly values.

Boulder. Monthly vs subhourly
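
For what it's worth, a sketch of that kind of comparison on synthetic 5-minute data (the real exercise would read the Boulder USCRN file in place of the generated 'temps'; all values here are made up):

```python
import numpy as np
import pandas as pd

# synthetic year of 5-minute data: seasonal cycle plus an asymmetric diurnal cycle
idx = pd.date_range("2006-01-01", "2006-12-31 23:55", freq="5min")
doy = idx.dayofyear.values
tod = (idx.hour.values + idx.minute.values/60) / 24
temps = 10*np.sin(2*np.pi*(doy - 100)/365) \
        + 8*np.sin(2*np.pi*(tod - 0.3)) + 2*np.sin(2*np.pi*2*tod)
s = pd.Series(temps, index=idx)

daily_mid = (s.resample("D").max() + s.resample("D").min()) / 2   # (Tmax+Tmin)/2
monthly_mid = daily_mid.groupby(daily_mid.index.month).mean()
monthly_true = s.groupby(s.index.month).mean()                    # all-sample mean

print((monthly_mid - monthly_true).round(2))   # the midrange-vs-integrated bias
```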

Paramenter
Reply to  Paramenter
January 15, 2019 2:43 am

Valid link to the chart:
Boulder. Monthly vs subhourly

January 14, 2019 2:24 pm

NOAA averages these 20-second samples to 1-sample every 5 minutes or 288-samples/day.

An average is a pretty crude low pass filter. You would think they could do better.
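
For what it's worth, a quick sketch of how crude: a plain 15-point average (fifteen 20-second samples per 5-minute value) has its first stopband sidelobe only about 13 dB down.

```python
import numpy as np

M = 15                                    # e.g. fifteen 20-second samples per 5 minutes
h = np.ones(M) / M                        # the plain (boxcar) averaging kernel
H = np.abs(np.fft.rfft(h, 4096))          # zero-padded to get a smooth response curve
freqs = np.fft.rfftfreq(4096)             # in cycles per input sample

first_sidelobe = H[freqs > 1/M].max()     # the largest response past the first null
print(round(20*np.log10(first_sidelobe), 1))   # about -13.3 dB: a crude low-pass
```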

MarkW
Reply to  Greg F
January 14, 2019 2:35 pm

What makes you think they want to do better?

MarkW
January 14, 2019 2:30 pm

Not only is the sampling rate grossly inadequate temporally, it is grossly inadequate spatially as well.

LdB
Reply to  MarkW
January 14, 2019 5:42 pm

Yes, I agree with that, Mark. They need some high-precision sites to fully integrate the energy for the proxy, and then I would have a lot more confidence.

Frederick Michael
Reply to  MarkW
January 14, 2019 6:29 pm

Bingo; that’s the key point. The spatial problem is severe.

While averaging min and max is far from perfect, it’s a reasonable approach given the technology that existed just a few decades ago.

Anthony has found huge location flaws in the temperature data base. That was substantive and well documented. This Nyquist nonsense is an embarrassment, and has no place here.

MarkW
Reply to  Frederick Michael
January 15, 2019 10:35 am

They are both real flaws.
The fact remains that you cannot calculate an accurate daily average temperature from just a high and a low reading.
Nyquist is one way to calculate how many readings you need in order to calculate a quality average.

DHR
January 14, 2019 2:32 pm

“The USCRN is a small network that was completed in 2008 and it contributes very little to the overall instrumental record…”

You make the USCRN sound to be of little value. But the USCRN sites were chosen to be evenly distributed over the States and to be distant from any known human-caused heat sources such as cities, airports, highways and so forth. It seems to me that it ought to present a better average view of CONUS temperature trends for its period of operation than the Historic Climate Network, many of whose stations are very poorly sited and subjected to various corrections. Perhaps you could do an analysis comparing the two.

William Ward
Reply to  DHR
January 14, 2019 3:55 pm

DHR,

Thanks for your comment. I didn't intend that my comment would make USCRN out to be of little value. Thanks for the opportunity to clarify and expand my thoughts on this. I'm a big fan of USCRN. Based upon what I know, it is a high quality network capable of doing what is needed to accurately sample a temperature signal – and I believe the siting is also good. USCRN should eliminate the long list of problems with measuring temperature. My comment was made to show that, unfortunately, this high quality data is not used to calculate the global averages and trends that get reported in the datasets (HADCRUT, GISS, Berkeley). Even if the max and min data from these stations were used we would still have aliasing. We need to use the 5-minute samples. I wanted to be clear that USCRN provides us an ideal opportunity to compare the 2 methods but doesn't by itself improve the reported means and trends.

Does this clarify my position?

I have done some preliminary work comparing means and trends from USCRN stations and corresponding nearby stations from the historical network (which is badly sited and suffers from UHI, thermal corruption, quantization error, reading error, infill, etc). I found wildly different results during my limited work.

Reg Nelson
Reply to  William Ward
January 14, 2019 5:07 pm

The USCRN, like the satellite temperature data and the ARGO buoy data, was supposed to be a Gold Standard of climate data. But when the results didn't match the confirmation bias, actions were needed. So the propagandists employed the IAA method of PR. Step one: I = Ignore. Step two: A = Attack. Step three: A = Adjust.

Dave Fair
Reply to  William Ward
January 14, 2019 5:37 pm

William, in your estimation does Anthony Watts’ work on the differences between well- and poorly-sited measuring stations hold water? I understand the trends are significantly different.

William Ward
Reply to  Dave Fair
January 14, 2019 7:50 pm

Hi Dave,

I’m a big fan of Anthony’s Surface Stations Project (if I remember the name…)! I really don’t know how to apply what I have done here with Nyquist to what Anthony did.

Anthony’s work on that seems to benefit from the saying “a picture paints a thousand words”. Just seeing the stations situated next to a wall of air conditioners blowing hot exhaust is very persuasive. I can’t find any logic to fault that work and I only find logic to support it. I use that in my mind as the benchmark reference for UHI/thermal corruption. I’m sorry I don’t have anything more substantial to say, except it is a very valuable work that should be effective to blunt Alarmism. But logic does not seem to work.

I have cataloged 12 significant “scientific” errors with the instrumental record:
1) Instruments not calibrated
2) Violating Nyquist
3) Reading error (parallax, meniscus, etc) – how many degrees wrong is each reading?
4) Quantization error – what do we call a reading that is between 2 digits?
5) Inflated precision – the addition of significant figures that are not in the original measurements.
6) Data infill – making up data or interpolating readings to get non-reported data.
7) UHI – ever encroaching thermal mass – giving a warming bias to nighttime temps.
8) Thermal corruption – radio power transmitters located in the Stevenson Screen under the thermistor or a station at the end of a runway blasted with jet exhaust.
9) Siting – general siting problems – may be combined with 7 and 8
10) Rural station dropout – loss of well situated stations.
11) Instrument changes – replacing broken instruments with units that are not calibrated the same, or with electronic instruments where the previous ones were not. Responsiveness likely increases, adding greater likelihood of capturing transients.
12) Data manipulation/alteration – special magic algorithms to fix decades old data.

A C Osborn
Reply to  William Ward
January 15, 2019 4:28 am

The problem with what you are saying about using more samples is that you can no longer compare those results to the historic data averages.
In fact, even the electronic devices have changed in accuracy and in the frequency of recorded samples over the last 20 years or so.
Only the max & min should be compared under those circumstances.

Reply to  William Ward
January 15, 2019 11:37 am

Thank you. 3, 4, and 5 are my pet peeves and no one calling themselves a scientist should be able to ignore the measurement errors that these introduce.

Imagine if you were buying target rifles for a team and the salesman told you that all their rifles shoot within 1 minute of angle. When you get them, sure as shootin, 1/2 shoot right on, but the other half shoot at 2 minutes of angle. Did you get your money’s worth?

If you average a max temp with a min temp when the readings are only +- 0.5 degrees, do you get a mean temperature accurate to +- 0.25 degrees?
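To put rough numbers on that question, here is a minimal Python sketch (mine, not from the comment); it assumes each reading carries an independent error uniformly distributed within ±0.5 degrees, which is itself a charitable assumption:

```python
# Monte Carlo sketch: how uncertain is (Tmax+Tmin)/2 if each reading is +/-0.5 deg?
# Assumes independent errors uniformly distributed in [-0.5, 0.5] (an assumption,
# and an optimistic one; correlated or systematic errors would not shrink at all).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
err_max = rng.uniform(-0.5, 0.5, n)      # reading error on Tmax
err_min = rng.uniform(-0.5, 0.5, n)      # reading error on Tmin
err_mid = (err_max + err_min) / 2        # resulting error on (Tmax+Tmin)/2

print(f"std of one reading's error  : {err_max.std():.3f}")          # ~0.29
print(f"std of the midrange's error : {err_mid.std():.3f}")          # ~0.20
print(f"worst-case midrange error   : {np.abs(err_mid).max():.3f}")  # ~0.5
```

Under those assumptions the answer implied above is right: averaging two ±0.5-degree readings does not give ±0.25; the random part shrinks by roughly 1/sqrt(2) at best, and the worst case stays at ±0.5.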

Alastair Gray
January 14, 2019 2:37 pm

As a former practising seismic geophysicist in the evil oil industry, I find your arguments flawless and fully endorse them, but who gives a cuss about an old doodlebugger? It does seem to make the practice of time-of-day correction a bit prissy. Any comments?
Maybe our Aussie pals should use this to beat their climistas over the head with.

Clyde Spencer
Reply to  Alastair Gray
January 14, 2019 4:40 pm

Alastair
Perhaps this is a question best addressed to Stokes or Mosher, but how do time-of-day corrections matter when there are only two temperatures and they can occur at any time?

Reply to  Clyde Spencer
January 14, 2019 9:12 pm

Clyde, people used to examine the old-style min-max thermometers in the evening. At that point both the min and the max are from the same day.

But then they decided to read them in the morning … which means that the minimum is from today, but the maximum is from yesterday afternoon …

And of course, this plays havoc with using (max+min)/2 as mean temperature …

w.

Reply to  Willis Eschenbach
January 14, 2019 10:09 pm

“And of course, this plays havoc with using (max+min)/2 as mean temperature”
It doesn’t really. The monthly average is of 31 max’s and 31 min’s. It matters only a little which day they belong to. Just at the end, where one max or min might shift to the next month.

But the time of reading does matter statistically, because of the possibility of double counting. That is where TOBS comes in.

Jim G.
Reply to  Willis Eschenbach
January 14, 2019 11:58 pm

That’s something that I don’t really understand.

If you are reading today’s high tomorrow, wouldn’t you just leave today’s high blank and fill it in the next morning?

As an auditor or data user, what do you do if some folks backfilled data and others wrote it in on the same day?

It makes the dataset interesting, but limited in its usefulness.

Reply to  Jim G.
January 15, 2019 3:02 am

As an observer, you just record what you see. At the agreed time, it shows a certain max and min. It isn’t your job to say when they occurred.

Anthony Banton
Reply to  Clyde Spencer
January 15, 2019 1:47 am

” but how do time-of-day corrections matter when there are only two temperatures and they can occur at any time?”

Because the thermometers are reset to the temperature at THAT time: the max thermometer has a constriction the mercury has to pass, and the min thermometer an internal indicator that is left behind at the lowest point. Resetting them zeroes the thermometers at the exact temperature at the time of reading (TOBS).
On a hot day in summer that temperature is often not far below the maximum of the day.
So the next day, should it be cooler, the maximum thermometer is stuck back up at the reset temperature of the previous evening, meaning the ‘heat’ from the previous day is recorded TWICE.

A C Osborn
Reply to  Anthony Banton
January 15, 2019 4:32 am

Yes, that COULD happen, but then you have to KNOW it happened to make a correct ADJUSTMENT.
Not adjust everything just in case it happened.

Gary Pearse
Reply to  Alastair Gray
January 14, 2019 5:00 pm

Actually, Ward’s analysis makes the algorithm/model adjustments performed continuously to “unbias” temperatures (BEST, and others), and the infilling over as much as 1,200 km in some cases, look like a sick joke. Mark Steyn remarked cogently in a Senate hearing on climate data that we have a situation where we have a higher degree of confidence in what the weather will be like in 2100 than in what it was like in 1950!

n.n
January 14, 2019 2:38 pm

The low and sporadic sampling rate is one problem for “science” practiced outside of the near-domain.

Gums
January 14, 2019 2:39 pm

Salute!

It’s not only Nyquist, but also the time spent at certain temperatures. In other words, 16 hours at 80 degrees and 8 hours at 40 degrees has a greater effect upon crops than the reverse. Ask any gardener about growing a tomato.

So my interest, from the climate/temperature/weather folks, is in how we treat the latent heat when the high temperature is not present and the low temperature is only present for a short time. A simple average of max and min is therefore not a good representation of the “climate” at the measurement station. It would seem to me that a long high temperature would bias the overall energy equation to a high number, and vice versa. From trying to grow some “tropical” veggies, I can tellya that the latent heat in the soil at night is a big player for the growing season. The exception seems to be in the desert, or up at my mountain cabin, when radiation from the warm earth takes place and where a true “greenhouse” is invaluable.

Jez asking….
Gums…

Richard Patton
January 14, 2019 2:40 pm

Thanks for the confirmation of my suspicions about the uselessness of TMEAN. I remember one day when I was a forecaster at NAS Fallon NV we had a very cold shallow inversion. For 23+ hours of the day the temperature was about 5 deg F. Several times during the day a slight breeze would mix down warmer air from above and push the temperature up to the mid 30’s. So the “official” mean temperature for the day was 20 even though everyone knew that the true mean was more like 8 or 9 degrees F.
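Richard’s day can be turned into a toy calculation. A minimal Python sketch follows; the hour-by-hour values are illustrative, not his actual observations:

```python
# Toy version of the inversion day described above: ~23 hours near 5 F, with a
# brief afternoon mix-down into the mid 30s.  Hourly values are illustrative.
import numpy as np

temps_f = np.full(24, 5.0)            # cold, shallow inversion most of the day
temps_f[13:15] = [25.0, 35.0]         # short warm mixing event in the afternoon

midrange = (temps_f.max() + temps_f.min()) / 2   # the historical (Tmax+Tmin)/2
true_mean = temps_f.mean()                       # time-weighted mean of the day

print(f"(Tmax+Tmin)/2 = {midrange:.1f} F")   # 20.0 F
print(f"true mean     = {true_mean:.1f} F")  # about 7 F
```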

William Ward
Reply to  Richard Patton
January 14, 2019 8:01 pm

Hi Richard – perfect example and a real world experience!

Guy Leech
January 14, 2019 2:46 pm

This seems to be an important point put across in a somewhat over-complicated way. If we want to measure the heat content of the atmosphere at the surface, which is what might be a result of AGW, a correctly constructed average temperature can be a proxy for that. It is only a useful proxy if it is averaged over very short periods of time, as explained in the post. Since the thermometer record is averaged over only two data points per day, it is not a proxy for atmospheric heat content, and so is not a useful data series from which to evaluate changes in the heat content of the atmosphere over time. Wind speed & direction and cloud cover are probably significant determinants of daily minimum & maximum temperatures, and there are probably many other determinants.

William Ward
Reply to  Guy Leech
January 14, 2019 8:11 pm

Guy,

You are right it is a bit complicated, especially if you have not worked with signal analysis for a long time. It would be nice if climate scientists just did this right so we didn’t have to come along and correct them with a complex treatise. But science has been hijacked by the Alarmists and they tell us THEY OWN THE SCIENCE. And if we don’t agree with them we get little pet names like “science denier”. So unfortunately, we have to use actual math and science and shove it in their faces to show them just how wrong they are. I take no pleasure in it. I will go back to “nice mode” just as quickly as they give up the game. I find it difficult to ignore when we start to get elected officials (think Alexandrix Ocasix-Cortex) who are barely out of diapers and have no understanding of science or civics and they prescribe an insane “Green New Deal”. I think Climate Alarmism Rejectors are a bit too passive and not armed with information well enough to fight back against the insanity.

Krishna Gans
January 14, 2019 2:47 pm

Tmin and Tmax seem most useful for DTR evolution over time, no more, no less.

Krishna Gans
Reply to  Krishna Gans
January 14, 2019 2:49 pm

Forgot a link
Only as example…

Tractor Gent
January 14, 2019 2:47 pm

So what is an appropriate sample rate of temperature? Once every 20 seconds sounds a bit arbitrary & probably ties more to equipment capability than to theoretical considerations. The sample from Alaska looks quite noisy, even with the 5 minute averaging. It would probably look a lot noisier in the full 20 sec/sample record. So, what’s the source of the noise? Is this instrumental (noise in the sensor or the amplifier & A/D converter) or is it genuine noise in the measured temperature, due to wind turbulence, say? The noise is important: if the actual noise bandwidth is greater than half the sampling frequency then there will be noise aliasing, effectively increasing the apparent noise level. I wonder if NOAA have any publicly available docs on the rationale for their choice of sampling frequency?

Tractor Gent
Reply to  Tractor Gent
January 14, 2019 2:55 pm

Just had a quick Google for info, but it looks like the furlough has got(ten) in the way 🙁

Greg Cavanagh
Reply to  Tractor Gent
January 14, 2019 4:33 pm

A very good question. I know from experience working as a surveyor in the field that random hot gusts of wind happen; I don’t know from where.

Clyde Spencer
Reply to  Greg Cavanagh
January 14, 2019 4:51 pm

Greg
And I remember a particularly hot day in California (~120 deg F, July 4th 1968) when my wife and younger brother and I were swimming in the North Fork of the American River (out of necessity to keep cool!). Every so often a really hot blast of air would come up the canyon and flash evaporate the surface of the river water. It was momentarily like being in a steam sauna and all three of us would cough after breathing the very hot and humid air. It would last for much less than a 5 minute AWS sampling interval.

William Ward
Reply to  Clyde Spencer
January 14, 2019 8:25 pm

Tractor,

Yes, the government shutdown has the NOAA site down. Bummer. Also, yes, good questions you ask. NOAA complicates things with the 5-minute averaging. I don’t know why they do this, except perhaps to reduce the total amount of data. I could not find a paper on it.

My first pass at this was to take the USCRN operating parameters and consider them as satisfying Nyquist. Going through a sample-reduction process seems to indicate that 288 samples per day converges to under 0.1 C error compared to the next rate I tried, 72 samples per day. Someone would need to decide the absolute accuracy needed. I agree that the transitions observed on some days at some stations suggest that 5-minute sampling is not capturing all of the transients. Is that noise or valuable signal? Clyde’s example of the hot blast of air in the canyon would suggest we should not be averaging down to 5 minutes. Data converters are cheap and amazingly accurate. Memory is cheap. Processing power is immense. And hey, the fate of the world and survival of humanity is in the balance – so we can afford to splurge and oversample! So what if we have excess data? Decimation after the fact is easy! I just left a company (an industry, actually) where we were sampling microwave and millimeter-wave signals in CMOS processes with performance good enough for high-speed communications. Sampling temperature is glacial by comparison.

288-samples/day seems to be really good, however. I just can’t say if it is a bit of overkill or we should use all 4,320-samples/day. Either way, it doesn’t change the essence of what I present here.
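For readers who want to try the sample-reduction exercise themselves, here is a Python sketch. It is not Ward’s actual procedure and uses a synthetic, deliberately asymmetric day rather than USCRN data:

```python
# Naive sample-rate reduction on a synthetic day: mostly cool, with a short
# warm afternoon bump.  Values and shape are illustrative only.
import numpy as np

t = np.arange(288) / 288.0                               # one day, 5-minute steps
temp = 5.0 + 15.0 * np.exp(-((t - 0.58) / 0.08) ** 2)    # deg C, asymmetric day

reference = temp.mean()                                  # 288-samples/day mean
for n in (72, 24, 12, 4, 2):
    sub = temp[:: 288 // n]                              # subsample, no anti-alias filter
    print(f"{n:3d} samples/day: error vs 288 = {sub.mean() - reference:+.2f} C")

midrange = (temp.max() + temp.min()) / 2                 # the historical method
print(f"(Tmax+Tmin)/2 : error vs 288 = {midrange - reference:+.2f} C")
```

On a shape like this the error is largest at the lowest rates, and (Tmax+Tmin)/2 is the worst of all; the exact numbers depend entirely on the assumed daily shape.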

thingadonta
January 14, 2019 2:50 pm

I am currently working in the desert in Australia. The other day it got to 46.1 degrees; however, this was a spike reading between two 30-minute readings. The data is here:

http://www.bom.gov.au/products/IDW60801/IDW60801.94429.shtml

The highest 30-minute reading for the day was at 3:30pm at Mount Magnet airport (near a cleared hard runway, which wasn’t there 50 years ago; another issue, as the record is over 100 years old) and it was 45.8, but the highest recorded reading for the day was 46.1, sometime between 2:30 and 3:30pm. It spiked for a very short time, but the record doesn’t say exactly when.

My point being, this was likely a wind gust coming off the hard, hot concrete; if they didn’t use the same ‘spike’ method 50 years ago (which I’m told they didn’t), the max temperatures for the day would be different. So by adding these spikes to the longer-term record you are splicing two different sample sets, apples and oranges, and getting an enhanced trend that isn’t there.

By the same argument, this method could also exaggerate cold wind-gust spikes in the recorded minimum temperatures.

http://www.bom.gov.au/climate/dwo/IDCJDW6090.latest.shtml

William Ward
Reply to  thingadonta
January 14, 2019 8:30 pm

Thingadonta,

Another good real-world example. The temperature spikes you mention are higher-frequency content. The frequency components they contain determine the impact of the aliasing.

Sample according to Nyquist and these spikes will not inaccurately affect your mean calculations.

I’ll mention here, it would be nice if we could get away from mean calculations and just feed the sampled signals into some equations the climate scientists discover that explain how climate works! Now that would be real science. Not this sciencey-looking stuff.

A C Osborn
Reply to  William Ward
January 15, 2019 4:42 am

Not for the mean, no, but they will if you are trying to analyse the max or min rather than the mean, which is even more important data for understanding climate.

Reply to  A C Osborn
January 15, 2019 12:25 pm

+100

Ian Macdonald
January 14, 2019 2:58 pm

Since the values taken are the max and min, rather than values at random times of day, this is not quite the same thing as sampling a signal at discrete points along a cycle. In electronic terms it would be more like summing the outputs of two precision rectifiers, one responding to positive voltage and the other to negative.

The problem here is that any asymmetry in the waveform, for example the warmest period of the day lasting only 5 minutes whilst the coolest period covers several hours, is going to leave you with a totally wrong average value. It makes the short warm period seem to have equal significance to the much longer cool period, when in fact the brief warm period is an outlier and not representative of anything.

A single thermometer embedded in a large lump of iron might be a better idea. (Water is not a good choice because of its heat of evaporation)

William Ward
Reply to  Ian Macdonald
January 14, 2019 8:39 pm

Hello Ian,

The iron idea is interesting, except it might rust. LOL! Maybe stainless steel? Either would be better than water.

I think we agree on fundamentals, but let me niggle about the 2-samples issue. They really are 2 samples. The max and min actually happen at particular times during the day. Look into clock jitter, which explains what happens when you add timing error to your sample clock.

Alastair Gray
January 14, 2019 3:00 pm

Reply to Nick Stokes: if the actual daily record were a boxcar sort of square sinusoid, then according to Fourier (a godfather of global warming) you would need the high frequencies to capture the abrupt rise and fall of temperature. Back to school for you, laddie!

Reply to  Alastair Gray
January 14, 2019 3:26 pm

“you would need the high frequencies “
Wearily – they are computing monthly averages.

MarkW
Reply to  Nick Stokes
January 14, 2019 4:00 pm

Not relevant.
The point is that they are computing the monthly average from data that has discarded meaningful data and as a result the average isn’t as accurate as many on the alarmist side have been claiming.

The idea that you can calculate an average to a few hundredths of a degree from this mess is gross incompetence.

A C Osborn
Reply to  MarkW
January 15, 2019 4:46 am

This is the whole point: the “Average” of 2 readings is 100% correct for those 2 readings.
But it is meaningless with regard to the mean temperature of the whole day, which is what they are trying to use it for.
Then again, historically it is all we have, so just stop using the Average and present the Max & Min.

Tom Abbott
Reply to  A C Osborn
January 15, 2019 11:59 am

“Then again, historically it is all we have, so just stop using the Average and present the Max & Min.”

I agree. I think they are making things way too complicated. I, personally, note the high temperature on my home thermometer and the low temperature and that’s all I need. I have no need to average the two numbers. The two numbers tell me everything I need to know.

Look at all the Tmax charts in this link below, all of which looked like they “Tmaxed out” around the 1930’s. There is no unprecedented warming in the 21st century according to Tmax.

https://wattsupwiththat.com/2018/12/15/it-is-the-change-in-temperature-compared-to-what-weve-been-used-to-that-matters-part-2/

MarkW
Reply to  MarkW
January 15, 2019 10:48 am

Another point is that historically the high and low were only recorded to the nearest whole degree.

juan slayton
January 14, 2019 3:05 pm

Temperature is not heat energy, but it is used as an approximation of heat energy.

Non-physicist here, and I’ve been puzzling about this for some time. I think a commenter some time ago compared Miami with high humidity to Phoenix with a much higher temperature and low humidity. Miami actually had significantly more heat. I suppose temperature might be a rough approximation if you assume that average humidity in a given location is somewhat stable and all you are interested in is trends, rather than absolute values, but that’s not a warranted assumption. Seems to me that energy is what we really want to know about, and measuring that would require simultaneous sampling of both temperature and humidity.

tty
Reply to  juan slayton
January 15, 2019 6:58 am

In short, what you need to measure energy in the climate system is the enthalpy.

Michael S. Kelly LS, BSA Ret.
Reply to  juan slayton
January 15, 2019 3:59 pm

Bingo! I’ve been saying that for over a year in these comments. The greenhouse effect is about retention of energy in the lower atmosphere that would normally have been radiated away to space. On Mars, the atmosphere is close to 100% carbon dioxide. Though it is at extremely low pressure (and density), there is 54 times as much CO2 per unit surface area on Mars as there is on Earth. But there is no water vapor in the atmosphere, and no other greenhouse gases. On Mars, temperature is an exact indicator of atmospheric energy content.

On Earth, the presence of vast amounts of liquid water changes the game completely. The enthalpy of dry air is directly proportional to temperature. At 21 C, the enthalpy of dry air is 21 kJ/kg. At 21 C, 30% relative humidity, it is 33 kJ/kg. At 21 C, 83% relative humidity, it is 54 kJ/kg.

So at constant temperature, the difference between 83% RH and 30% RH enthalpies is equal to the total enthalpy of the dry air by itself.

Temperature, by itself, is useless in determining any shift in the energy balance of the Earth’s atmosphere.
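The enthalpy figures quoted above can be checked with a few lines of Python using the standard psychrometric approximation h ≈ 1.006·T + w·(2501 + 1.86·T) kJ/kg of dry air, with the Tetens formula for saturation vapour pressure; sea-level pressure is assumed:

```python
# Check of the moist-air enthalpy figures quoted above (my sketch, standard
# psychrometric approximation; sea-level pressure assumed).
import math

def enthalpy_kj_per_kg(temp_c, rel_humidity, pressure_kpa=101.325):
    """Specific enthalpy of moist air, per kg of dry air."""
    p_sat = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))  # kPa, Tetens
    p_vap = rel_humidity * p_sat                                  # vapour pressure
    w = 0.622 * p_vap / (pressure_kpa - p_vap)                    # humidity ratio
    return 1.006 * temp_c + w * (2501.0 + 1.86 * temp_c)          # kJ/kg dry air

for rh in (0.0, 0.30, 0.83):
    print(f"21 C at {rh:3.0%} RH: h = {enthalpy_kj_per_kg(21.0, rh):4.1f} kJ/kg")
# prints roughly 21, 33 and 54 kJ/kg, matching the figures above
```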

henkie
January 14, 2019 3:05 pm

What about time constants? The time it takes to heat a blob of mercury is not comparable to the time it takes to heat up a tiny Pt sensor. You cannot use Nyquist without this information.

D. J. Hawkins
Reply to  henkie
January 14, 2019 3:46 pm

The tau for Pt thermometers is about 20 seconds, and for LIG up to 60 seconds, depending on the study; hence the 4,320 samples. Readings for Pt sensors are taken every 2 seconds, however. This is a problem the Aussies have: reporting the Pt spikes as Tmax for the day without smoothing to approximate the mercury thermometer.

Alan Tomalty
Reply to  D. J. Hawkins
January 14, 2019 8:15 pm

I was wondering how they came up with 2160 as the Nyquist frequency. There is a problem here. 2160 really isn’t the frequency of the original signal. It is the minimum frequency of the measuring tool. Does the UAH temperature data operate on the same temperature measuring Tmax and Tmin of a GPS point in the atmosphere?

William Ward
Reply to  Alan Tomalty
January 14, 2019 8:51 pm

Henkie, Alan and D.J,

Good points and questions, but we may be getting in deeper than we should. You would think a standard would be defined – based upon research – specifying sensor type, sensitivity, thermal mass, sensor response time, amplifier response time, front-end filtering, power supply accuracy and ripple, drift, etc. Agreed that a different front end will respond differently if co-located and experiencing the same stimulus. This should be a global standard, but of course every country or region will need its own standards body, each body will push for its favorite architecture, years will go by with no agreements, and eventually every region will adopt its own inconsistent standards. This will leave the door open for more magic climate algorithms to be applied. Meanwhile humanity is in the crosshairs of climate disaster while the experts fuss over minor details, all selected to give themselves some kind of personal benefit.

Man, that sounds cynical…

A C Osborn
Reply to  William Ward
January 15, 2019 4:51 am

That is why we have the WMO, who do set the standard, which is ignored by the Australian BOM.
Not only that, but they have also been curtailing the lower readings as well.
They are very bad boys down under.

tty
Reply to  Alan Tomalty
January 15, 2019 7:06 am

The satellites measure the radiation temperature which is in practice instantaneous. However they usually only measure any specific area once a day. With station-keeping satellites this is done at the same time every day, but non-station-keeping will drift, so the time-of-day will change. This is one of the reasons for the difference between RSS and UAH. RSS uses a theoretical model for daily temperature changes to correct for drift while UAH uses measured (but rather noisy) daily temperature cycle data for correction.

William Ward
Reply to  tty
January 16, 2019 2:32 pm

Alan,

You asked: “Does the UAH temperature data operate on the same temperature measuring Tmax and Tmin of a GPS point in the atmosphere?”

As I understand it, 2 satellites are involved. Each satellite passes over each location once each day, so 2 passes total for every location. Ideally the passes are spaced 12 hours apart, so 2 measurements are made each day. What is measured is microwave radiation at some altitude in the troposphere. This is compared to the background microwave radiation of deep space. Then several thousand calculations are done to relate this to temperature. I’m sure someone more knowledgeable on this might correct a minor detail here, but if there are 2 samples taken in a day we have a similar problem to the surface stations. As I explained in the paper, 2 samples taken at regular intervals will likely yield better results than finding the max and min for the day. But (as has been reported in other sources) satellite measurements suffer from their own specific issues not related to undersampling.

henk
January 14, 2019 3:07 pm

What about time constants? The time it takes to heat a blob of mercury is not comparable to the time it takes to heat up a tiny Pt sensor. You cannot use Nyquist without this information.

tty
Reply to  henk
January 15, 2019 7:12 am

WMO has a standard for this, so as to make measurements comparable (but not necessarily correct).

What it all boils down to is that before modern electronic sensors there really wasn’t any way to measure the average temperature of a medium with rapidly changing temperatures, like the atmosphere.

Krishna Gans
January 14, 2019 3:12 pm

The problem with not applying the Nyquist rate is that you can’t find periodic events; say, a 25-year time series can’t show you periodic events > 25 years.

William Ward
January 14, 2019 3:15 pm

I’ll try to reply to all comments, but I wanted to start with a few words expressing my gratitude to a few people who helped me with this project. I’d like to thank Kip Hansen for reviewing the paper and guiding me in a number of ways to optimize the paper for the WUWT forum. Kip’s kindness and generosity really stand out. I would also like to thank Piotr Kublicki for encouraging me to go forward with formalizing this information into a paper and for his collaboration on the numbers. In addition to doing multiple reviews, Piotr did most of the work to run the analysis on the long-term trends. For the Nyquist-compliant method this involved finding the linear trend by using over 1.2 million samples for each location. I’m grateful to a few of my friends in the industry who have reviewed the paper for conceptual accuracy. These people are experts in the field of data acquisition and signal analysis. And of course, I would like to thank Anthony and Charles for taking the time to review this and to do the work to publish it on WUWT for further discussion.

Reply to  William Ward
January 15, 2019 12:45 pm

Kudos to all who participated. This is the kind of stuff that should make climate scientists say WHOA, are we going to look stupid in a few years!

January 14, 2019 3:16 pm

Siting issues. This issue. (The day’s average temp is just the midpoint of the day’s high and low, with no regard to how long during the 24 hours the temp was near either?!)

Bottom line (again): We don’t really know what the past “Global Temperature” was. We don’t really know what it is now.

Who’s willing to bet Trillions of dollars on that?
Those who are out to achieve something other than “Saving the Whales … er … Planet”.

Reply to  Gunga Din
January 14, 2019 3:27 pm

PS Another issue: the actual number of sites and how widespread they were (and are) for any kind of “Global” temperature record.

commieBob
January 14, 2019 3:21 pm

What is the highest frequency you have to worry about?

There’s a rule of thumb for a signal that looks something like a square wave.

BW = 0.35/RT

where:

BW = bandwidth = the highest frequency you have to worry about.
RT = rise time = roughly how fast the temperature goes from 10% of the distance from Tmin to Tmax to 90%

Suppose Tmin is 0C and Tmax is 20C. 10% of the difference is 2C.
We’re interested in how fast the temperature goes from 2C to 18C.
Suppose that it takes 3 hours or 1/8 of a day.
The highest frequency we have to worry about is 0.35/(1/8) = 2.8 cycles per day.

In Figure 3 above, notice the big jump between approx. sample 125 and approx. sample 135. That’s about ten samples out of 288 or around 1/30 of a day. If we applied the rule of thumb to that we’d get a maximum frequency of 10 cycles per day. Do we have to worry about that? Maybe. If we guess that the jump was about half the distance between Tmin and Tmax, that would reduce the amplitude of that frequency component.

There’s a big gotcha. When we talk about Nyquist rates and about Fourier analysis, we’re talking about a repeating waveform. Daily temperatures are only approximately a repeating waveform. That means Nyquist is over optimistic. Based on my guess of 10 cycles per day for the signal in Figure 3, you could get away with 20 samples per day. You clearly can’t. In other words, Nyquist isn’t really what you’re aiming for. You have to do a lot better.
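The rule-of-thumb arithmetic above, spelled out as a tiny Python sketch (same numbers as the worked example, nothing new):

```python
# BW ~= 0.35 / RT, a common rule of thumb for edge-dominated signals.
def bandwidth_cycles_per_day(rise_time_days):
    return 0.35 / rise_time_days

cases = [(1 / 8,  "3-hour rise (1/8 day)"),
         (1 / 30, "jump over ~10 of 288 samples (~1/30 day)")]
for rise_time, label in cases:
    bw = bandwidth_cycles_per_day(rise_time)
    print(f"{label}: BW ~ {bw:.1f} cycles/day -> Nyquist rate ~ {2 * bw:.0f} samples/day")
```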

Reply to  commieBob
January 14, 2019 3:29 pm

“You have to do a lot better.”
To calculate a monthly average?

MarkW
Reply to  Nick Stokes
January 14, 2019 4:04 pm

If you want to calculate an accurate monthly average, you have to start with accurate daily averages.
Lacking the first, the second is meaningless.

Reply to  MarkW
January 15, 2019 12:54 pm

No wonder discussions on measurement errors fall on deaf ears. Folks, these aren’t made up facts and procedures to make people look bad like the SJW’s do. Metrology, Nyquist, systemic error analysis, etc. have been around for years. There are college courses that teach these in detail.

Climate scientists need to buck up and admit they don’t know what they don’t know! Involve other fields and let them be co-authors. You may have to share grant money, but your output will be up to accepted standards.

Clyde Spencer
Reply to  Nick Stokes
January 14, 2019 4:58 pm

Stokes
If you input daily garbage to calculate a monthly average, you will get monthly garbage!

commieBob
Reply to  Nick Stokes
January 14, 2019 5:16 pm

If you’re calculating an energy balance, you’re talking about a couple of watts per square meter of forcing. Let’s be generous and call it 3 W/m^2.

The solar constant is around 1,361 W/m^2. So, 3 W/m^2 is around 0.2%.

Depending on what you’re doing, the extra precision is actually important.

IMHO, the more I think about it, the more I think invoking Nyquist is barking up the wrong tree. A straight numerical analysis demonstrates the problem more than adequately.

Your question about a monthly average makes sense if you’re thinking about Nyquist. In other words, I think invoking Nyquist actually leads us astray.

Reply to  commieBob
January 14, 2019 7:36 pm

“If you’re calculating an energy balance”
But we aren’t. And we aren’t reconstructing a sub-diurnal signal. We are calculating a monthly average temperature. That is the frequency of interest, and should be the focus of any Nyquist analysis.

commieBob
Reply to  Nick Stokes
January 15, 2019 4:36 am

The numerical analysis above clearly demonstrates that you are wrong.

A C Osborn
Reply to  Nick Stokes
January 15, 2019 4:55 am

Come on guys, we all know that these “Errors” all cancel each other out.
The Climate Scientists tell us it is so.
/Sarc off

Editor
Reply to  Nick Stokes
January 15, 2019 10:12 am

Nick ==> We have to start out with the right question: If we simply want to know a generalized “monthly temperature average”, and are willing to have large error/uncertainty bars, then we can forget all this Nyquist business — of course we can. Get your GAST (land and sea), tack on your 1 to 2 degrees C of uncertainty, and you are home free.

But the reason we are interested in GAST is because there is an hypothesis that rising atmospheric CO2 concentrations are causing the Earth system to retain an increasing amount of the Sun’s energy — so we need to ask the question: How much energy is the Earth system retaining? Is that rising or falling? We know that incoming energy is transformed into/by all kinds of physical processes – some (and only some) of it into sensible heat of the air and seas. So if we are looking at air and water temperatures for this purpose, then we need to accurately determine the energy the temperatures represent — therefore we need a more accurate view of the changes, the temperature profile, throughout the day — thus Nyquist.

MarkW
Reply to  Nick Stokes
January 15, 2019 10:53 am

Nick, you are still ignoring the fact that the daily averages that you are using to calculate your monthly average aren’t accurate.
Using bad daily averages to create a monthly average means that your monthly average is also bad.

commieBob
Reply to  Nick Stokes
January 15, 2019 1:20 pm

Probably nobody’s going to see this but …

There is a wonderful guide to Digital Signal Processing written for people who might actually want to do some Digital Signal Processing (DSP). It is The Scientist and Engineer’s Guide to Digital Signal Processing. You can read it for free.

The author does not skimp on the fundamentals. Time after time, after time, I see scientists and commenters here on WUWT and elsewhere getting the fundamentals wrong. That means that everything that follows is wrong.

When you invoke Nyquist, explicitly or implicitly, whether you know it or not, you are talking about reconstructing the waveform you sampled (see Chapter 3).

So, what can you reconstruct with two samples per day? You can reconstruct a sinewave whose frequency, phase, and amplitude do not change. If the daily temperature follows a sinewave and Tmax and Tmin do not change from day to day, you’re good. If that isn’t the case, two samples per day is not enough. As William Ward points out in the article above, that leads to quite significant errors.
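A five-line illustration of the folding commieBob is describing: sampled only twice a day, a 3-cycle/day component is indistinguishable from a 1-cycle/day one, so its energy lands on the wrong frequency instead of averaging out (my sketch, synthetic numbers):

```python
# At 2 samples/day, cos(2*pi*3t) and cos(2*pi*1t) produce identical samples:
# the 3-cycle/day component aliases onto 1 cycle/day.
import numpy as np

t = np.arange(10) / 2.0                  # ten consecutive samples, 2 per day
fast = np.cos(2 * np.pi * 3 * t)         # 3 cycles/day
slow = np.cos(2 * np.pi * 1 * t)         # 1 cycle/day

print(np.allclose(fast, slow))           # True
```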

FabioC.
Reply to  commieBob
January 14, 2019 7:52 pm

“Suppose Tmin is 0C and Tmax is 20C. 10% of the difference is 2C.
We’re interested in how fast the temperature goes from 2C to 18C.
Suppose that it takes 3 hours or 1/8 of a day.
The highest frequency we have to worry about is 0.35/(1/8) = 2.8 cycles per day.

In Figure 3 above, notice the big jump between approx. sample 125 and approx. sample 135. That’s about ten samples out of 288 or around 1/30 of a day. If we applied the rule of thumb to that we’d get a maximum frequency of 10 cycles per day. Do we have to worry about that? Maybe. If we guess that the jump was about half the distance between Tmin and Tmax, that would reduce the amplitude of that frequency component.”

I have been doing some research in my spare time on temperature time-series and a physical integrator for temperature measurements and those above are exactly some of the issues I had to face.

If you can let me have a direct contact, I’d like to talk about those problems.

Clyde Spencer
January 14, 2019 3:35 pm

William,
You said, ” These daily readings are then averaged to calculate the daily mean temperature as Tmean = (Tmax+Tmin)/2.”

We have gone over this before. Despite NOAA calling the calculation a “mean,” it is at best a median, and more properly called the “mid-range” statistic. [ https://en.wikipedia.org/wiki/Mid-range ]

The essential point being that a mid-range does not have an associated standard deviation, standard error of the mean, or a probability distribution function. It is a simplistic measure of central tendency similar to a median, but calling it a “mean” (which is normally calculated from a large number of samples) implies that it has the associated statistics of a mean, and suggests that it is more robust and descriptive than it is. Your Figure 1 demonstrates this deficiency clearly.

William Ward
Reply to  Clyde Spencer
January 14, 2019 9:24 pm

Hi Clyde,

You are correct, sir! I struggled with whether to comply with NOAA language or circle the wagons to clarify. Kip did a good job of setting up a word counter in my head. Being verbose by nature, I was extra attentive to this. It was a challenge to be efficient with fewer words. I opted to leave out the detail you point out, with the hope that someone like you would come along and add the point. So thank you! I also admit I get a little lax with this detail …

A C Osborn
Reply to  William Ward
January 15, 2019 4:58 am

Actually it is the average of 2 values and has nothing to do with actual means or even medians of the temperature.

Clyde Spencer
Reply to  A C Osborn
January 15, 2019 12:10 pm

ACO

Did you bother to read the Wiki’ link I provided?

While the arithmetic procedure for calculating the misnomer is the same as for a mean (i.e., dividing the sum of the values by the number of values), it is not a mean in the usual sense. I have called it elsewhere a “degenerate median” because, unlike a typical median, which is the mid-point of a sorted list, it is (as you point out) the average of just two values, which is what even a typical median comes down to if there are an even number of entries in the list.

That is why it is best to refer to it as what is a recognized as a legitimate mathematical and statistical term, the “mid-range value,” and not give it an unwarranted status by calling it a “mean.”

January 14, 2019 3:35 pm

“Beware of averages. The average person has one breast and one testicle.” Dixie Lee Ray

The average of 49 & 51 is 50 and the average of 1 and 99 is also 50.

First they take the daily min and max and compute a daily average. Then they take the daily averages and average them up for the year, and then those annual values are averaged up from the equator to the poles, and they pretend it actually means something.

Based on all that they are going to legislate policy to do what? Tell people to eat tofu, ride the bus and show up at the euthanization center on their 65th birthday?

JimG1
Reply to  steve case
January 14, 2019 4:00 pm

Then there is the case of the statistician who drowned crossing a river with an average depth of one foot.

Reply to  JimG1
January 14, 2019 5:06 pm

Ha! Good one.

A C Osborn
Reply to  steve case
January 15, 2019 5:00 am

+1000 to both of you.

Editor
January 14, 2019 3:49 pm

I would disagree, for a couple of reasons.

First, the definition of the Nyquist limit:

The Nyquist-Shannon Sampling Theorem tells us that we must sample a signal at a rate that is at least 2x the highest frequency component of the signal. This is called the Nyquist Rate.

The first problem with this is that climate contains signals at just about every frequency, with periods from milliseconds to megayears. What is the “Nyquist Rate” for such a signal?

Actually, the Nyquist theorem states that we must sample a signal at a rate that is at least 2x the highest frequency component OF INTEREST in the signal. For example, if we are only interested in yearly temperature data, there is no need to sample every millisecond. Monthly data will be more than adequate.

And if we are interested in daily temperature, as his Figure 2 clearly shows, hourly data is adequate.

This brings up an interesting question—regarding temperature data, what is the highest frequency (shortest period) of interest?

To investigate this, I took a year’s worth of 5-minute data and averaged it minute by minute to give me an average day. Then I repeated this average day a number of times and ran a periodogram of that dataset. Here’s the result:

As you can see, there are significant cycles at 24, 12, and 8 hours … but very little with a shorter period (higher frequency) than that. I repeated the experiment with a number of datasets, and it is the same in all of them. Cycles down to eight hours, nothing of interest shorter than that.

As a result, since we’re interested in daily values, hourly observations seem quite adequate.
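A sketch of the kind of periodogram check Willis describes, but run on a synthetic “average day” rather than his station data (which is not reproduced here); with real data he reports peaks at 24, 12 and 8 hours:

```python
# Periodogram of a repeated "average day".  The day below is synthetic, built
# from 24 h, 12 h and 8 h components, so those are the peaks that come out.
import numpy as np

hours = np.arange(0, 24, 5 / 60.0)                        # 5-minute steps
day = (10 + 8 * np.cos(2 * np.pi * hours / 24)
          + 2 * np.cos(2 * np.pi * hours / 12)
          + 1 * np.cos(2 * np.pi * hours / 8))

tiled = np.tile(day, 30)                                  # repeat the day 30 times
spectrum = np.abs(np.fft.rfft(tiled - tiled.mean()))
freqs = np.fft.rfftfreq(tiled.size, d=5 / 60.0)           # cycles per hour

for i in sorted(np.argsort(spectrum)[-3:]):               # three strongest peaks
    print(f"period ~ {1 / freqs[i]:4.1f} h, relative power {spectrum[i]:.0f}")
```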

The problem he points to is actually located elsewhere in the data. The problem is that for a chaotic signal, (max + min)/2 is going to be a poor estimator of the actual mean of the period. Which means we need to sample a number of times during any period to get close to the actual mean.

And as his table shows, an hourly recording of the temperature gives only a 0.1°C error with respect to a 5-minute recording … which is clear evidence that an hourly signal is NOT a violation of the Nyquist limit.

So I’d say that he’s correct, that the average of the max and min values is not a very robust indicator of the actual mean … but for examining that question, hourly data is more than adequate.

Best to all,

w.

Phil
Reply to  Willis Eschenbach
January 14, 2019 11:54 pm

Another way to look at it is to try to estimate the area under the curve of daily temperature. The original model of a sine wave is clearly invalid. There is a distortion of a sine wave every day. Hourly data can approximate this distortion a lot better than the min-max model. 5 minute data can improve upon that, but at the expense of diminishing returns. Nyquist or no Nyquist, the min-max practice assumes a perfect sine wave as a model of the daily temperature curve and this model is invalid. In short, I agree that hourly data is much better than min-max thermometers.

Gums
Reply to  Phil
January 15, 2019 9:54 am

Salute Phil!

A great point and I would like to see that sort of record used versus simple hi-lo average.
The daily temperature is not a pure sine wave, and at my mountain cabin a simple hi-lo average is extremely misleading, especially when trying to grow certain veggies or sprouting flower seeds.

Seems to this old engineer and weatherwise gardener that we should use some sort of “area under the curve” method. Perhaps assign a simple hi-lo average to defined intervals, maybe 20 minutes, and then use the number of increments above the daily “low” in some manner to reflect the actual daily average.

I can tellya that up at my altitude, during the summer it warms up very fast from very low temperatures in the morning and then stays comfortable until after sunset. I hear that the same thing happens in the desert. So a weighted-interval method would show a higher daily average during the summer when days are long and a lower average during the shorter winter days. This past summer’s data from the 4th of July was a good example I found from a nearby airfield. The high was 88 and the low was 58, and the average was published as 73. But the actual day was toastie! 13 hours were above the “average”, and 10 of those hours were above 80 degrees. The exact reverse happens in the winter, and at the same site earlier this month we saw 23 for the low and 62 for the high: a 42 average. It was unseasonably warm, but we still had 13 hours below the average and only 11 hours above.

Gums sends…

Editor
Reply to  Willis Eschenbach
January 15, 2019 6:58 am

w. ==> Quite right …. In a pragmatic sense, we have the historic Min/Max records and the newer 5-minute (average) records. So our choice is really between the two methods — and the 5-min record is obviously superior in light of Nyquist. Where 5-min records exist, they should be used, and where only Min/Max records exist, they can be useful if they are properly acknowledged to be error-prone and given proper uncertainty bars….

Min-Max records do not produce an accurate and precise result suitable for discerning the small changes in National or Global temperatures.

Paramenter
Reply to  Willis Eschenbach
January 15, 2019 8:10 am

Hey Willis,

As a result, since we’re interested in daily values, hourly observations seem quite adequate.

Agreed. I’ve run some comparisons between 5-min and 1-hr sampled daily signal – they yield very similar averages. But here we’re talking about 2 samples per day, or effectively just one sample (daily midrange value).

Also, Nyquist only applies to regularly spaced periodic sampling … but min/max sampling happens at irregular times.

I see it a bit differently: irregular sampling makes the problem worse. If irregular sampling could alleviate Nyquist limitations, that would be a very cheap win: just sample any signal sparsely and irregularly and there would be no need to worry about Shannon! Well, regardless of the sampling method, it must obey Shannon in order to replicate a signal.

As you can see, there are significant cycles at 24, 12, and 8 hours … but very little with a shorter period (higher frequency) than that.

I’ve also got smaller frequency peaks around 6 and 3 hours. Still, most of the signal energy resides in the yearly and daily cycles (and around them).

Lots of one and two degree errors in there … the only good news is that the error distribution is normal Gaussian.

Thanks for sharing – have you run any ‘normality tests’ to determine that, or is eyeballing perfectly sufficient in this case?

William Ward
Reply to  Paramenter
January 16, 2019 2:38 pm

Paramenter said: “If irregular sampling could alleviate Nyquist limitations, that would be a very cheap win: just sample any signal sparsely and irregularly and there would be no need to worry about Shannon! Well, regardless of the sampling method, it must obey Shannon in order to replicate a signal.”

+1E+10

January 14, 2019 4:01 pm

I noticed in Fig 6 that the trends for Blackville, SC were very large. The regularly sampled trend is 9.6°C/Cen, and the min/max is higher. It would be interesting to see the other USCRN trends, not just the differences. The period is only 11 years.

Reply to  Nick Stokes
January 14, 2019 4:39 pm

I did my own check on the Blackville trends. I got slightly higher trend results (my record is missing Nov and Dec 2017).
Average 11.8±17.8 °C/Cen
Min/Max 16.5±18.4 °C/Cen
These are 1 sd errors, which obviously dwarf the difference. It is statistically insignificant, and doesn’t show that the different method made a difference.

Reg Nelson
Reply to  Nick Stokes
January 14, 2019 5:40 pm

Anecdotal evidence is somehow scientific? And be honest, Nick, if the station showed significant cooling it would be thrown out or “corrected” by the likes of you.

Reply to  Reg Nelson
January 14, 2019 6:30 pm

It is WW’s choice, not mine.

Dr. S. Jeevananda Reddy
January 14, 2019 4:01 pm

The whole exercise is itself a biased argument. The error factor [Table] is relative to 288 points of 5-minute averages. Here the writer has not taken into account the drag factor in the continuous record. The drag is season specific and instrument specific. On these aspects several studies were made before accepting the average as maximum plus minimum divided by two.

Dr. S. Jeevananda Reddy

Ian Cooper
January 14, 2019 4:06 pm

I would have thought that scientists were always looking for consistency in results when comparing the present with even the near past, let alone back to the earliest times of recording temperature. I suppose it never occurred to anyone at the time, when the new AWS measuring systems came on line, that it might be opportune to conduct a lengthy (10-20 year) experiment by continuing the old process alongside the new. In this way, perhaps, we could have come up with a way to convert the old to match the new. Then again, if you didn’t know that there was this kind of issue, that thought most likely wouldn’t have crossed your mind!

I was thinking of the process used by Leif Svalgaard and colleagues when trying to tie all of our historical sunspot observations together in a cohesive and more meaningful way. Could this be done for the temperature method, to everyone’s satisfaction?

Changing our methodology due to improvements in technology doesn’t seem to always alert us to the possibility that the ‘new’ results are actually different to the old ones. Yet, from what I have read here that most certainly is the case.

Here is another example of how technology has dramatically changed the way we record natural phenomena. I have been an active auroral observer in New Zealand for just over 40 years. Up until this latest Solar Max for SSC 24 our observations were written representations of our visual impressions. This tied easily into the historical record for everywhere that the aurorae can be seen. Photographs were a bonus, and sometimes considered a poor second to the ‘actual’ visual impression of the display!

At the peak of SSC 23 the internet first played its part, both in alerting people that something was happening, and therefore increasing the number and spread of observers worldwide, and also in teaching people about what they were seeing. As SSC 23 faded, the sensitivity of digital SLR cameras seemed to rise exponentially. This led to the strange situation, as SSC 24 matured, that people who had never seen aurorae were recording displays that couldn’t be seen with the naked eye! This unfortunately coincided with the least active SSC for at least 100 years. This manifested itself among the ‘newbies’ as a belief that aurorae are hard to see, and even when you can see them, only the camera can record the colour! The ease with which people can record the aurorae has led to a massive increase in ‘observers,’ and I use that term loosely, who very rarely record their visual impressions, mostly because they don’t appreciate how important those impressions are as a way to keep the connection with the past going.

If there is a fundamental disconnect in the way we now record temperature compared to the past, then this should be addressed sooner rather than later!

kevink
January 14, 2019 4:15 pm

Mr. Ward is quite correct.

The proper way to prevent aliasing is to apply a low pass filter that discards all frequencies above 1/2 your sample rate. This is standard practice in electronics work. Sometimes the sensor (ie a microphone) performs this function for you.

The problem with the (max+min)/2 method is that there is no proper low-pass filter that rejects “quick” spikes of extreme temps. The response time of the sensor is set by the thermal diffusivity of the mercury. This could allow a 5-minute hot blast of air from a tarmac to skew the reading.

An electronic temperature sensor (say a platinum RTD) can respond in milliseconds, and filtering it so it cannot respond faster than about 1 minute would normally be necessary to sample it at a 30-second rate.

An even larger problem with the official “temperature” record is the “RMS” versus “True RMS” problem. That would be a whole discussion on its own. The average of a max and a min temperature is NOT representative of the heat content in the atmosphere at a location.

So here we are, prepared to tear down and rebuild the world’s energy supply based on a terrible temperature record that is so corrupt it is not fit for purpose at all.

Cheers, Kevin
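A minimal sketch of the practice Kevin describes, low-pass filtering before downsampling, on a synthetic signal (scipy.signal.decimate applies the anti-aliasing filter internally; the plain slice skips it for comparison):

```python
# Filter-then-decimate versus naive subsampling.  The ~6-minute component
# (250 cycles/day) is above the 288/day Nyquist limit of 144 cycles/day, so
# naive subsampling folds it back into the record; the filtered path removes it.
import numpy as np
from scipy import signal

t = np.arange(4320) / 4320.0                        # one day at 20-second samples
slow = 10 + 8 * np.sin(2 * np.pi * (t - 0.25))      # diurnal cycle
fast = 1.5 * np.sin(2 * np.pi * 250 * t)            # ~6-minute fluctuations
temp = slow + fast

naive = temp[::15]                                  # 288/day, no filtering
proper = signal.decimate(temp, 15, ftype="fir")     # 288/day, anti-alias filtered

target = slow[::15]                                 # the slow part we want to keep
print(f"RMS error, naive subsampling : {np.sqrt(np.mean((naive - target) ** 2)):.3f} C")
print(f"RMS error, filtered decimate : {np.sqrt(np.mean((proper - target) ** 2)):.3f} C")
```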

Loren Wilson
Reply to  kevink
January 14, 2019 6:38 pm

A platinum resistance thermometer has a time constant in the range of a few seconds, based on its diameter and the fluid. The fluid in this case was either flowing water at 3 feet per second or flowing air at 20 feet per second. The situation in a Stevenson screen is much slower air flow, so the time constant is much longer. I’ll do a measurement or two of the time constant in relatively still air tomorrow with a 0.125″ diameter PRT when I get to the lab. A thermistor would have a faster time constant due to the lower mass of the sensor and its sheath. Thermistors are also notorious for drifting, so unless you calibrate them often, the temperatures aren’t ready for this application.

kevink
Reply to  Loren Wilson
January 14, 2019 7:47 pm

Loren, if you investigate the “newer” surface mount chip Platinum RTD’s you see they have response times down into the 100 millisecond range. The small volume (ie thermal capacity) of the sensor does allow those response times. I may have “stretched” things a bit, but any value from 1 millisecond to 999 milliseconds is “a millisecond response time”.

Cheers, Kevin

Loren Wilson
Reply to  Loren Wilson
January 16, 2019 7:32 am

1/8″ diameter SS sheathed PRT from Omega – still air time constant on the order of 80 seconds. I did not find the specs on the ones used in the USCRN weather stations. Are they thin film, IR, or a more traditional sheathed PRT?
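A sketch of how much a first-order sensor lag matters for a short gust, using the 20 s and ~80 s time constants discussed in this sub-thread (the gust itself is invented for illustration):

```python
# First-order sensor response dT_sensor/dt = (T_air - T_sensor)/tau, integrated
# with simple Euler steps.  A 1-minute, 10 C gust is largely captured by a 20 s
# sensor and substantially smoothed by an 80 s one.
import numpy as np

dt = 1.0                                        # 1-second steps
time = np.arange(0.0, 1200.0, dt)               # 20 minutes
air = np.where((time >= 300) & (time < 360), 30.0, 20.0)   # 1-minute gust to 30 C

for tau in (20.0, 80.0):
    sensor = np.empty_like(time)
    sensor[0] = air[0]
    for i in range(1, time.size):
        sensor[i] = sensor[i - 1] + dt * (air[i - 1] - sensor[i - 1]) / tau
    print(f"tau = {tau:4.0f} s: peak reading = {sensor.max():.1f} C (true peak 30.0 C)")
```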

1sky1
January 14, 2019 4:15 pm

This is an inept rehash of an earlier guest blog by someone who fails to understand that:

1) Shannon’s sampling theorem applies to strictly periodic (fixed delta t) discrete sampling of a continuous signal
2) It doesn’t apply to the daily determination of Tmax and Tmin from the continuous, not the discretely-sampled record
3) While (Tmax + Tmin)/2 certainly differs from the true daily mean, it does so not because of any aliasing, but because of the typical asymmetry of the diurnal cycle, which incorporates phase-locked, higher-order harmonics.

William Ward
Reply to  1sky1
January 16, 2019 2:49 pm

1sky1 says: “1) Shannon’s sampling theorem applies to strictly periodic (fixed delta t) discrete sampling of a continuous signal”

Reply: max and min are samples, with a period of 1/2 day and much jitter. Jitter exists on every clock. Nyquist doesn’t carve out jitter exceptions nor limit their magnitude. Any temperature signal is continuous. You need to address and invalidate these points or your comments are not correct.

1sky1 says: “3) While (Tmax + Tmin)/2 certainly differs from the true daily mean, it does so not because of any aliasing, but because of the typical asymmetry of the diurnal cycle, which incorporates phase-locked, higher-order harmonics.”

Reply: “…because of the typical asymmetry of the diurnal cycle…” – this is frequency content. Content that aliases at 2 samples/day. “…which incorporates phase-locked, higher-order harmonics.” So “phase-locked, higher-order harmonics” gets you a waiver from Mr. Nyquist?! In 35 years – many industries – many texts – I have never heard of such a thing. Orchestras are “phase locked” to the conductor; does that mean I can sample music sans Nyquist? Of course not.

1sky1
Reply to  William Ward
January 16, 2019 4:01 pm

Your failure to recognize that the daily measurement of extrema doesn’t remotely involve any clock-driven sampling (with or without “jitter”) speaks volumes. Not only are those extrema NOT equally spaced in time (the fundamental requirement for proper discrete sampling), but what triggers their irregular occurrence and recording has everything to do with the asymmetric waveform of the diurnal cycle, not any clock per se.

Nyquist thus is irrelevant to the data at hand, which are NOT samples in ANY sense, but direct and exhaustive measurements of daily extrema of the thermometric signal. Pray tell, where in “35 years – many industries – many texts” did you pick up the mistaken notion that Nyquist applies to the extreme values of CONTINUOUS signals?

William Ward
Reply to  1sky1
January 16, 2019 4:41 pm

The movement of the Earth is the clock. The fact that we can get the same max min values from a Max/Min thermometer and a USCRN 288-sample/day system proves they are samples. Once you have the values it doesn’t matter what their source is. The result is indistinguishable regardless of the process. Nyquist doesn’t somehow stop applying because the samples happen to be the “extreme values”. Once you bring the sampled data into your DSP system for processing, how does your algorithm know if the data came from a Max/Min thermometer or an ADC with a clock with jitter? If you were to convert back to analog, would the DAC know where the values came from? Would the DAC care that you don’t care about converting the signal back to analog and only intend to average it? No. You get the same result. They are samples, they are periodic, and Nyquist applies. You are free to take the samples and use them as if they are not samples. Fortunately there are no signal-analysis police to give you a citation.

1sky, if you decide to reply then the last word will be yours. Best wishes.

1sky1
Reply to  William Ward
January 16, 2019 5:30 pm

The fact that we can get the same max min values from a Max/Min thermometer and a USCRN 288-sample/day system proves they are samples.

Bass-ackwards logic! All this proves is that sufficiently frequently sampled USCRN digital systems will reveal nearly identical daily extrema as true max/min thermometers sensing the continuous temperature signal. There’s no logical way that the latter can be affected by aliasing, which is solely an artifact of discrete, equi-spaced, clock-driven sampling.

All the purely pragmatic arguments about DSP algorithms being blind to the difference between the obtained sequence of daily extrema and properly sampled time-series do not alter that intrinsic difference. In fact, the imperviousness of the extrema to aliasing and clock-jitter effects is what makes the daily mid-range value (Tmin + Tmax)/2 preferable to the four-synoptic-time average used in much of the non-Anglophone world for reporting the daily “mean” temperature to the WMO.

R Percifield
January 14, 2019 4:22 pm

My primary concern from a Nyquist standpoint is the rate of change of the reading in relation to the sampling rate. If the sampling rate is at least 2X the highest rate of change, then you meet the criterion. However, that is for reconstruction of the original signal, and not for a true averaging of the system at hand. In many control systems the sampling rate is much higher than the rate of change, many times 10X or more. This is because the average over a period needs more data than the Nyquist minimum. In refrigeration we oversample at well over 20 times to get sufficient data to rule out noise, dropped signals, etc. We know the tau of the sensor and how it reacts to temperature change, so this allows us to accurately detect and respond to changes in temperature.

I would be interested in knowing what the response curve is for the sensor being used. That would give you sufficient data to determine how the system reacts to change. I have always thought that using the midpoint between the max and min temps is a very poor measurement of average temperature. Maybe someday I will do that study.

William Ward
Reply to  R Percifield
January 14, 2019 10:49 pm

R Percifield,

Sampling far over Nyquist is done for many reasons. If you have noise, as you say, then in fact your bandwidth is higher and technically your Nyquist rate is also higher. Most often sample rates are much higher because 1) for control applications the ADCs are capable of running so much faster than the application needs, and memory and processing are cheap, so why not do it; and 2) sampling faster relaxes your anti-aliasing filter requirements. You can use a lower-cost filter and set your breakpoint farther up in frequency such that your filter doesn’t give any phase issues with your sampling. Audio is the perfect example. CD audio is sampled at 44.1ksps. Audio is considered to be 20Hz to 20kHz. Sampling at 44.1ksps means that anything above 22.05kHz will alias. So your filter has to work hard between 20kHz and 22.05kHz. If you are an audiophile you don’t want filters to play with the phase of your audio, so this is another big issue. Enter sampling at 96ksps or 192ksps. Filters can be much farther up in frequency and your audio can extend up to 30kHz, etc. There is a titanic argument in the audio world about whether or not we “experience” sound above 20kHz. I won’t digress further…

There is the Theory of Nyquist and then the application. In the real world there are no band-limited signals. Anti-aliasing filters are always needed. Some aliasing always happens. It is just small and does not affect performance if the system is designed properly.

In a well designed system – audio is a good example – you should be able to convert from analog to digital and back to analog for many generations (iterations) without any audible degradation. Of course this requires a studio with many tens – perhaps hundreds of thousands of dollars of equipment – but it is possible.

kevink
January 14, 2019 4:26 pm

“1) Shannon’s sampling theorem applies to strictly periodic (fixed delta t) discrete sampling of a continuous signal”

Seems to me measuring the temperature once a day is the definition of a strictly periodic discrete sampling of a continuous signal. Sample period is 24 hours (frequency is about 1.16e-5 Hz).

“2) It doesn’t apply to the daily determination of Tmax and Tmin from the continuous, not the discretely-sampled record”

See my first comment above

“3) While (Tmax + Tmin)/2 certainly differs from the true daily mean, it does so not because of any aliasing, but because of the typical asymmetry of the diurnal cycle, which incorporates phase-locked, higher-order harmonics.”

That would be the “RMS” versus “True-RMS” error source in the official temperature guesstimates…..

Cheers, Kevin

1sky1
Reply to  kevink
January 14, 2019 4:39 pm

Seems to me measuring the temperature one a day is the definition of a strictly periodic discrete sampling of a continuous signal.

If the temperature measurement took place at exactly the same time each day, then you would have strictly periodic sampling and great aliasing. But that’s not what is done by Max/Min thermometers that track the continuous temperature all day and register the extrema no matter at what time they occur. Those times are far from being strictly periodic in situ.

kevink
Reply to  1sky1
January 14, 2019 6:03 pm

1SKY1 wrote;

“If the temperature measurement took place at exactly the same time each day, then you would have strictly periodic sampling and great aliasing.”

You are referring to the phase of the periodic sampling, which is even more complicated. The periodic sampling takes place once every 24 hours. The Min/Max selection adds phase to the data, just another error source.

Cheers, Kevin

1sky1
Reply to  1sky1
January 15, 2019 4:15 pm

When people who lack even a Wiki-level grasp of the concept start parsing technical words, you get the above nonsense about what constitutes strictly periodic discrete sampling in signal analysis. The plain mathematical requirement is that delta t needs to be constant in such sampling. Daily extrema simply don’t occur on any FIXED hour! The attempt to characterize that empirical fact as “Min/Max selection adds phase to the data, just another error source” is akin to claiming error for the Pythagorean Theorem when applied to obtuse triangles.

William Ward
Reply to  1sky1
January 16, 2019 2:56 pm

1sky1 said: “The plain mathematical requirement is that delta t needs to be constant in such sampling. ”

Reply: Agreed, low jitter clocks are what we want for accurate sampling. Show me the text and rules of Nyquist that limit jitter. What are the limits – just how much jitter before Nyquist gets to have the day off? How can any ADC work in the real world and meet Nyquist? Do you have the datasheet for the jitterless ADC? What is its cost in industrial temp, 1M/yr quantities?

Dr Deanster
January 14, 2019 4:29 pm

This is all BS. The temperature at any place on earth is more associated with the wind direction than anything else. Southerly winds warm you up here in the northern hemisphere, and northerly winds cool you down. What it most certainly has NOTHING to do with is the local concentration of CO2.

The entire radiation approach is a scam. The imbalances that may occur are absorbed into the many energy sinks that exist here on earth. For sure, they will completely mask any 0.1C change in temperature globally.

Alan Tomalty
Reply to  Dr Deanster
January 14, 2019 9:17 pm

http://applet-magic.com/cloudblanket.htm

Clouds overwhelm the Downward Infrared Radiation (DWIR) produced by CO2. At night, the temperature difference between cloudy and clear skies can be as much as 11C. The amount of warming provided by DWIR from CO2 is negligible but is a real quantity. We give this as the average amount of DWIR due to CO2 and H2O or some other cause of the DWIR. Now we can convert it to a temperature increase and call this Tcdiox. The pyrgeometers assume an emission coefficient of 1 for CO2. CO2 is NOT a blackbody. Clouds contribute 85% of the DWIR; GHGs contribute 15%. See the analysis in the link. The IR that hits clouds does not get absorbed. Instead it gets reflected. When IR gets absorbed by GHGs it gets reemitted either on its own or via collisions with N2 and O2. In both cases, the emitted IR is weaker than the absorbed IR. Don’t forget that the IR reradiated by CO2 is emitted in all directions, so a little less than 50% of the IR absorbed by the CO2 gets reemitted downward to the earth surface. Since CO2 is not transitory like clouds or water vapour, it remains well mixed at all times. Therefore, since the earth is always giving off IR (probably at a maximum at 5 pm every day), the so-called greenhouse effect (not really, but the term is always used) is always present and there will always be some downward IR from the atmosphere.

When there aren’t clouds, there is still DWIR, which causes a slight warming. We have an indication of what this is because of the measured temperature increase of 0.65C from 1950 to 2018. This slight warming is for reasons other than just clouds, therefore it is happening all the time. Therefore on a particular night that has the maximum effect, you have 11 C + Tcdiox. We can put a number to Tcdiox. It may change over the years as CO2 increases in the atmosphere. At the present time, with 409 ppm CO2, the global temperature is now 0.65 C higher than it was in 1950, the year when mankind started to put significant amounts of CO2 into the air. So at a maximum Tcdiox = 0.65C. We don’t know the exact cause of Tcdiox, whether it is all H2O caused, or both H2O and CO2, or the sun, or something else, but we do know the rate of warming. This analysis will assume that CO2 and H2O are the only possible causes. That assumption will pacify the alarmists because they say there is no other cause worth mentioning. They like to forget about water vapour, but in any average local temperature calculation you can’t forget about water vapour unless it is a desert.
A proper calculation of the mean physical temperature of a spherical body requires an explicit integration of the Stefan-Boltzmann equation over the entire planet surface. This means first taking the 4th root of the absorbed solar flux at every point on the planet, doing the same thing for the outgoing flux at the top of the atmosphere at each of those points, subtracting the two point fluxes, turning each point result into a temperature, and then averaging the resulting temperature field across the entire globe. This gets around the Hölder inequality problem when calculating temperatures from fluxes on a global spherical body. However, in this analysis we are simply taking averages applied to one local situation, because we are not after the exact effect of CO2 but only its maximum effect.
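For what it is worth, the “fourth root at every point, then average” procedure described above can be sketched in a few lines (the flux field below is a made-up placeholder, not real data); it also shows the Hölder/Jensen point that averaging the fluxes first and then taking the fourth root gives a different, higher temperature.

```python
import numpy as np

SIGMA = 5.670374419e-8                         # Stefan-Boltzmann constant, W m^-2 K^-4

# Placeholder absorbed-flux field on a 1-degree lat/lon grid (not real data).
lat = np.radians(np.linspace(-89.5, 89.5, 180))
flux = 240.0 + 100.0 * np.cos(lat)[:, None] * np.ones(360)

# 4th root point by point, then an area-weighted (cos latitude) average.
T_field = (flux / SIGMA) ** 0.25
w = np.cos(lat)[:, None] * np.ones(360)
T_mean_of_points = np.sum(T_field * w) / np.sum(w)

# Contrast: average the flux first, then take the 4th root.
T_of_mean_flux = ((np.sum(flux * w) / np.sum(w)) / SIGMA) ** 0.25

print(T_mean_of_points, T_of_mean_flux)        # the first is always <= the second
```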
In any case Tcdiox represents the real temperature increase over the last 68 years. You have to add Tcdiox to the overall temperature difference of 11 C to get the maximum temperature difference of clouds, H2O and CO2. So the maximum effect of any temperature changes caused by clouds, water vapour, or CO2 on a cloudy night is 11.65C. We will ignore methane and any other GHG except water vapour.

So from the above URL link, clouds represent 85% of the total temperature effect, so clouds have a maximum temperature effect of 0.85 * 11.65 C = 9.90 C. That leaves 1.75 C for the water vapour and CO2. CO2 will have relatively more of an effect in deserts than it will in wet areas but still can never go beyond this 1.75 C. Since the desert areas are 33% of 30% (land vs oceans) = 10% of earth’s surface, the CO2 has a maximum effect of 10% of 1.75 + 90% of Twet. We define Twet as the CO2 temperature effect over all the world’s oceans and the non-desert areas of land. There is an argument for less IR being radiated from the world’s oceans than from land, but we will ignore that for the purpose of maximizing the effect of CO2 to keep the alarmists happy for now. So CO2 has a maximum effect of 0.175 C + (0.9 * Twet).

So all we have to do is calculate Twet.

Reflected IR from clouds is not weaker. Water vapour is in the air and in clouds. Even without clouds, water vapour is in the air. No one knows the ratio of the amount of water vapour that has now condensed to water/ice in the clouds compared to the total amount of water vapour/H2O in the atmosphere, but the ratio can’t be very large. Even though clouds cover on average 60% of the lower layers of the troposphere, since the troposphere is approximately 8.14 x 10^18 m^3 in volume, the total cloud volume in relation must be small. Certainly not more than 5%. H2O is a GHG. Water vapour outnumbers CO2 by a factor of 25 to 1, assuming 1% water vapour. So of the original 15% contribution by GHGs to the DWIR, we have 0.15 x 0.04 = 0.006 or 0.6% to account for CO2. Now we have to apply an adjustment factor to account for the fact that some water vapour at any one time is condensed into the clouds. So add 5% onto the 0.006 and we get 0.0063 or 0.63%. CO2 therefore contributes 0.63% of the DWIR in non-deserts. We will neglect the fact that the IR emitted downward from the CO2 is a little weaker than the IR that is reflected by the clouds. Since, as in the above, a cloudy night can make the temperature 11C warmer than a clear-sky night, CO2 or Twet contributes a maximum of 0.0063 * 1.75 C = 0.011 C.

Therefore, since Twet = 0.011 C, we have in the above equation CO2 max effect = 0.175 C + (0.9 * 0.011 C) = ~0.185 C. As I said before, this will increase as the level of CO2 increases, but we have had 68 years of heavy fossil fuel burning and this is the absolute maximum of the effect of CO2 on global temperature.
So how would any average global temperature increase by 7C or even 2C, if the maximum temperature warming effect of CO2 today from DWIR is only 0.185 C? This means that the effect of clouds = 85%, the effect of water vapour = 13.5 % and the effect of CO2 = 1.5%.

Sure, if we quadruple the CO2 in the air which at the present rate of increase would take 278 years, we would increase the effect of CO2 (if it is a linear effect) to 4 X 0.185C = 0.74 C Whoopedy doo!!!!!!!!!!!!!!!!!!!!!!!!!!
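For readers who want to check the arithmetic chain above, here is a short sketch that simply reproduces it using the comment’s own assumed inputs (11 C cloudy-night difference, 0.65 C since 1950, an 85% cloud share, a 25:1 H2O:CO2 ratio, 10% desert area):

```python
# Reproduces the arithmetic in the comment above, using its assumed inputs.
T_cdiox    = 0.65                        # assumed max warming since 1950, C
total      = 11.0 + T_cdiox              # max cloudy-night effect, C (11.65)
clouds     = 0.85 * total                # share attributed to clouds (~9.90 C)
vapour_co2 = total - clouds              # remainder for H2O + CO2 (~1.75 C)

co2_share  = 0.15 * (1 / 25)             # 15 % GHG share, 1:25 CO2:H2O ratio
co2_share *= 1.05                        # +5 % for vapour condensed in clouds
T_wet      = co2_share * vapour_co2      # ~0.011 C over non-desert areas

co2_max = 0.10 * vapour_co2 + 0.90 * T_wet
print(round(co2_max, 3), round(4 * co2_max, 2))   # ~0.185 C, and ~0.74 C for 4x CO2
```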

John Robertson
January 14, 2019 4:35 pm

As ever, if asked for measured data as to whether our weather is warming or cooling, the answer appears to be: do not know, could not say for sure.
What time frame?
What data?
What margin of error?
Is “Climatology” as practised a science or an art?

Alan Tomalty
Reply to  John Robertson
January 14, 2019 9:15 pm

A religion – Al Gore’s Church of Climatology.

Geoff Sherrington
January 14, 2019 4:37 pm

William Ward,
Thank you for this timely essay.
Can you please correct me if these notions are wrong?
1. Nyquist Shannon sampling was derived from signals with cyclicity, which might be likened to a sine wave, typical of audio/radio signal carrier waves. Is the temperature data in this cyclicity category? Is it sufficiently like sine waves to allow application of Nyquist?
2. Can your analysis be extended from the use of actual thermometer values (confusingly named ‘absolutes’) to partially detrended signal (aka ‘anomaly values’)? There is a notion afoot that processing the former to give the latter somehow improves the variance of the data.
3. How can error estimates from Nyquist work be combined with other error sources, such as change from reading in F to C, UHI, the cooling effect of rain, the readability of divisions on a LIG thermometer, rounding errors, etc. Is it correct to use a square root of sum of squares of errors method?
4. In Australia, the BOM (Bureau of Meteorology) seems to now be using 1-second readings. Here is part of an email by the BOM:
“Firstly, we receive AWS data every minute. There are 3 temperature values:
1. Most recent one second measurement
2. Highest one second measurement (for the previous 60 secs)
3. Lowest one second measurement (for the previous 60 secs)”
It is more fully discussed at https://kenskingdom.wordpress.com/2017/03/01/how-temperature-is-measured-in-australia-part-1/
At first blush, would you consider this methodology to alter your error conclusions in the head post?
Geoff

1sky1
Reply to  Geoff Sherrington
January 14, 2019 4:53 pm

Nyquist Shannon sampling was derived from signals with cyclicity, which might be likened to a sine wave, typical of audio/radio signal carrier waves. Is the temperature data in this cyclicity category? Is it sufficiently like sine waves to allow application of Nyquist?

The Sampling Theorem is a mathematical requirement that applies to strictly periodic, discrete sampling of all continuous signals (periodic, transient, or random). Aliasing involves the misidentification, in spectrum analysis, of undersampled high frequencies beyond Nyquist as lower frequencies in the baseband. That should not be confused with any sampling “error” in the customary statistical sense.

Clyde Spencer
Reply to  Geoff Sherrington
January 14, 2019 5:25 pm

Geoff,
You asked, “Is the temperature data in this cyclicity category?”
The utility of the Fourier Transform is that ANY varying signal can be decomposed into a series of sinusoids of differing phases and amplitudes. The more rapidly a signal changes (steep slope) the more the signal contains high frequency components. This is where the Nyquist limit comes in. To properly capture the transient (high frequency) components, a high sampling rate is necessary. With a low sampling rate, a vertical shoulder on a pulse might instead be rendered as a 45 degree slope, or missed entirely. So, the answer to your question is, “Yes, temperature data for Earth is represented by a time-series that is composed of a wide range of ‘cyclicity.'”
https://en.wikipedia.org/wiki/Fourier_transform
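A small illustration of Clyde’s point, with made-up numbers rather than station data: a sharp transient spreads energy into high-frequency components, and two samples a day never see it at all.

```python
import numpy as np

# One synthetic day: a smooth diurnal cycle plus a sharp half-hour spike.
t = np.arange(0, 24, 1 / 60)                       # hours, 1-minute grid
signal = 10.0 * np.sin(2 * np.pi * t / 24)
signal[(t > 14.0) & (t < 14.5)] += 5.0             # transient at ~2 pm

# The spike puts energy well above 1 cycle/day (high harmonics in the FFT).
amps = np.abs(np.fft.rfft(signal)) / signal.size
print(np.round(amps[:8], 3))

# Sampled twice a day, the transient is missed entirely.
print(signal[::720])                               # samples at 0:00 and 12:00 only
```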

Rud Istvan
January 14, 2019 4:44 pm

A substance neutral but IMO important process comment about this guest post.

ctm earlier sent this to me and one other VERY well known sciency guest WUWT poster for posting advice. Sort of (not really) like peer review. He got back a split opinion: I said let her rip; the other reviewer said nope, based on some preliminary sciency reasons.

ctm chose to let her rip. IMO that is important, signalling a WUWT direction out of skeptical echo chambers and toward a newly improved utility. Following is a summary of my ctm recommendation thinking, for all to critique as AW evolves his justly famous blog, as he promised at the beginning of this year.
1. Interesting serious effort, with references and clear methods. A big PLUS (unlike Mann’s hockey stick), because it enables ‘replication’ or ‘disproof’. Essence of science right there.
2. Whether right or wrong, the paper introduces a fresh perspective not previously encountered elsewhere (maybe because just silly wrong, maybe because ‘climate scientists’ are so generally unskilled… like Mann). The Nyquist Sampling theorem is a fundamental of information theory, governing (early) wireline telephone voice quality (Nyquist developed his information theory at Bell Labs), the later music CD digital spec (humans cannot hear much over 18000 Hz, although dogs can, hence literal ultrasonic dog whistles with pitch from ~24000 up to ~53000Hz, so the CD sampling rate spec was set at 44,100, meaning about 22,050Hz, also proving that all analog audiophiles DO NOT KNOW/believe the Nyquist sampling theorem), and now much more. So in the worst case WUWT readers might learn about basic information theory they may not have previously known.
3. The wide and deep WUWT readership will sort out this potential guest post’s validity in short order, no different than McIntyre eventually did to Mann’s hockey stick nonsense. Collective intelligence based on reality decides in the end, not peer reviewers, not AGW or skeptical consensus (granted, there is too little of the latter), and surely not ctm, me, or the other frequent WUWT poster. We all have to own up to too many basic goofs.

Just my WUWT perspective offered to ctm.

Willis Eschenbach
Reply to  Rud Istvan
January 14, 2019 9:44 pm

Thanks, Rud. I was the other person who reviewed it. I thought that the Nyquist argument was flawed for a couple of reasons that I listed above.

However, I thought his discussion of the errors in the max/min average versus the true average was very good and worth publishing.

I stand by that. As I said above, Nyquist does NOT mean that you have to sample at 2X the highest frequency in your data, just at 2X the highest frequency of interest. Which is important because climate contains signals at all frequencies. And since our interest is in days, and the only large cycles up there are 24, 12, and 8 hours in length, sampling hourly is well above the highest frequency (aka the shortest period) of interest.

Also, Nyquist only applies to regularly spaced periodic sampling … but min/max sampling happens at irregular times. Generally the day is warmest around 3 PM, but in fact maxes can occur at any time of day. This makes the sampling totally irregular. Here are the max and min sampling times for San Diego:

You can see how widely both the min and max temperature times can vary.

All of that is why I said, leave the Nyquist question alone, and talk about the very real errors. Here are the errors for San Diego for the period shown above. This is the difference on a daily basis of the true average and the max/min average …

Lots of one and two degree errors in there … the only good news is that the error distribution is Gaussian. But that will still affect the trends, because even symmetrically distributed errors will generally mean that the measured trends are too large.
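A minimal sketch of the comparison Willis describes, using a synthetic asymmetric day rather than the San Diego record, so the size of the error is only illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 24, 5 / 60)                       # 288 five-minute samples

# Synthetic asymmetric diurnal curve plus weather noise (placeholder shape).
temps = (15 + 8 * np.sin(2 * np.pi * (t - 9) / 24)
            + 2 * np.sin(4 * np.pi * (t - 6) / 24)
            + rng.normal(0, 0.3, t.size))

true_mean = temps.mean()                           # average of all 288 samples
midrange  = (temps.max() + temps.min()) / 2        # historical (Tmax+Tmin)/2
print(round(midrange - true_mean, 2))              # about -2 C for this particular shape
```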

Best to all,

w.

William Ward
Reply to  Willis Eschenbach
January 14, 2019 11:40 pm

Willis,

I have been at this for 8 hours tonight. Enjoying it. But I’m out of gas, and I have to get up early for a long day. I’ll try to come back to you tomorrow as you deserve it. But let me park this comment: You don’t know what you are talking about here.

Willis said: “As I said above, Nyquist does NOT mean that you have to sample at 2X the highest frequency in your data, just at 2X the highest frequency of interest. ”

Willis – this is fundamental signal analysis 101, first day of class mistake you are making here. I suggest you go read up on this before misinforming people. I know you have a strong voice here and are respected – as you should be and as I respect you for your many informative posts. For this reason I’m being very direct, and maybe not very diplomatic. I mean no rudeness.

Nyquist does not say what you say. I’ll quote myself, but please consult an engineering text book:

According to the Nyquist-Shannon Sampling Theorem, we must sample the signal at a rate that is at least 2 times the highest frequency component of the signal.

fs > 2B

Where fs is the sample rate or Nyquist frequency and B is the bandwidth or highest frequency component of the signal being sampled. If we sample at the Nyquist Rate, we have captured all of the information available in the signal and it can be fully reconstructed. If we sample below the Nyquist Rate, the consequence is that our samples will contain error. As our sample rate continues to decrease below the Nyquist Rate, the error in our measurement increases.

If you have “interest” in a subset of the available frequencies in the signal then you MUST FIRST filter out frequencies above that. It’s called an anti-aliasing filter. Then you can sample at a rate that corresponds to the bandwidth of your filtered signal.
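As a sketch of that “filter first, then sample slower” rule (synthetic numbers, with scipy’s decimate standing in for a purpose-built anti-aliasing filter):

```python
import numpy as np
from scipy.signal import decimate

# One synthetic day at 288 samples/day, with a 24-cycle/day ripple added
# (placeholder values, not station data).
t = np.arange(288) / 12.0                          # hours
x = 15 + 8 * np.sin(2 * np.pi * (t - 9) / 24) + np.sin(2 * np.pi * t + 1.0)

# Naive: keep every 12th sample (24/day) with no filter. The ripple sits
# right at the new sample rate and aliases down to DC, shifting the daily
# mean by about sin(1) ~ 0.84.
x_naive = x[::12]

# Filter first, then keep every 12th sample: the ripple is removed before it
# can alias, and the daily mean is preserved (filter ripple and edge effects aside).
x_filtered = decimate(x, 12, zero_phase=True)

print(x.mean(), x_naive.mean(), x_filtered.mean())
```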

Did Charles send you my reply to your comments last night? Your analysis is way off the mark. I’m sure you are busy so I’m not surprised if you just gave a cursory read of the WUWT short paper. If you read the full paper you will be better off. I didn’t do a deep dive into clock jitter, but I mention it, and it explains what happens when we deal with the timing of daily Tmax and Tmin. I also briefly discuss how real-world application of Nyquist addresses non-bandlimited signals – which is all real-world signals! This is in my full paper – not the abbreviated one. My paper for WUWT states briefly what I cover in the first 12 pages of my full paper. Those 12 pages are but a snippet of what you get in 4 semesters of engineering signal analysis and 30 years of applying it to bring hundreds of millions of devices to market that are based upon Nyquist.

If you are interested I suggest you take a look at and challenge the particular examples I present or statements I make. I welcome the challenge, but I warn you not to mystify this. I’m presenting very basic signal analysis. It applies to every signal. What I am presenting here is not novel or revolutionary – nor should it be controversial. This paper has been reviewed by a few of my colleagues – experts in the industry in signal analysis and data acquisition. So bring your challenge, but please digest the full paper and absorb it first. What is astonishing is that this is not already known and standard fare for dealing with atmospheric temperature signals.

More tomorrow night – probably after 8 EST.

Willis Eschenbach
Reply to  William Ward
January 15, 2019 2:13 am

Thanks kindly for your answer, William.

To start with, as I said, in a chaotic system like the daily temperature record there are signals all the way down to seconds in length. The sun goes behind a cloud for a couple of seconds. Almost instantly the temperature drops and comes up again.

So is it your claim that Nyquist means we need to sample at milliseconds?

Next, suppose that our interest is in the yearly temperature signal. Are you saying that we need to take 288 samples per day to get an accurate number?

Next, the Nyquist limit only applies to band-limited signals which have no frequencies above a certain limit … is it your claim that temperature data fits this, and if so, what is the upper limit?

I ask in part because my periodogram shows that there is very little in the way of stable frequencies with periods shorter than eight hours … which would indicate that we could reconstruct the daily signal by sampling six times per day.

Next, the main problem with the max/min method has nothing to do with Nyquist—it is that we do not know the time of the max or the min. Knowing that would allow us to do a much better job reconstructing the daily signal … and again, that problem has nothing to do with Nyquist.

So let me ask a couple of questions

First, at what rate do we have to sample if our interest is the annual average temperature?

Second, you indicate that 288 samples per day satisfies the Nyquist limit … but I haven’t seen any mathematical discussion in your paper as to why you think this is the case. What am I missing here?

All the best, and thanks for an interesting discussion,

w.

William Ward
Reply to  Willis Eschenbach
January 15, 2019 9:38 pm

Hi Willis,

I’m hitting these out of order… and I think it would be best to focus on the long post I just made to you where I focus on the basics that may be hanging us up. If we can dialog about that it would be really good because so much is based upon the fundamentals. But on this (you comment about chaotic system behavior) a few comments of reply.

First, I think there are those more informed than me who may correct me or add to this but the word “chaotic” has a mathematical definition and I’m not sure if we really mean that here or not… Can I treat that word to mean highly variable?

The “shuttering” effect of cloud cover is a good one. One I mention in my full paper briefly. These effects are not synchronized strictly to anything else. Their effects can last a few seconds to hours or days. But the shuttering effect of cloud cover is a signal and could be properly sampled and studied. Cloud cover, with its variability and shuttering or modulation effect on the air temperature signal, will move energy around in the frequency domain. Fast-passing clouds with lots of thin spots or openings over the station should result in much higher frequency components. What is the frequency and amplitude exactly? We don’t know unless we study it. It is also possible that a heavy cloud cover moving slowly and lasting days can have content below 1-cycle/day. I think this is intuitive. Let me know if you disagree. But from a fundamental perspective, sampling faster is always better. I was tempted to write that in caps. There is never a downside, except that you waste memory, waste some space on a HDD and perhaps waste some processing power to handle the data. I say who cares about that. Remember, the fate of humanity depends upon understanding the climate, so we can afford a bit of waste to get all of the data. Sampling faster gives us a better chance of getting all of the information in a Nyquist-compliant way. With all of the money flowing into climate science I’m sure we can hire a small army of people to study this and determine what the system really needs. I said on another post that we really need standardized systems. We need to specify the temp sensor sensitivity, linearity, drift, thermal mass, response time, mechanical enclosure, wiring material, electrical front-end with its associated input impedance, linearity, offset, drift, sample rate, converter specs, anti-aliasing filter design, power supply voltage, accuracy, ripple, etc., etc. USCRN seems to be really good. I just don’t know how much of that is specified and, if it was specified, was it based upon research or a guess.

Willis Eschenbach
Reply to  William Ward
January 15, 2019 2:51 am

After writing the above, and being a practical kind of guy, I realized that discussing theoretical limits might not be the best way to settle the question. So I looked again at the Chatham Wisconsin 5-minute temperature data. I calculated the change in error in the daily mean temperature at a wide range of sampling rates. The following figure shows those results.

If I understand you, you claim that sampling every hour violates the Nyquist limit. But there is no visible break in the data until we get up to about 8 hours.

More to the point, sampling once an hour only results in an error in the mean daily temperature of four-hundredths of a degree (0.04°C). And while that might make a difference in the lab, in climate science that error is meaningless.

So whether or not hourly samples violate the Nyquist limit is what I call “a difference that doesn’t make a difference”.
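The kind of check Willis describes is easy to sketch on synthetic data; the noise level below is invented, so the exact figures will not match his 0.04°C, only the general shape of the error-versus-rate curve:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(288) * 5 / 60                        # hours, 5-minute grid

def one_day():
    # placeholder diurnal shape plus weather noise, not the Chatham record
    return (10 + 7 * np.sin(2 * np.pi * (t - 9) / 24)
               + 1.5 * np.sin(4 * np.pi * t / 24)
               + rng.normal(0, 0.4, t.size))

days = np.array([one_day() for _ in range(200)])
true_means = days.mean(axis=1)                     # 288-sample daily means

for step in (2, 6, 12, 48, 96):                    # 10 min ... 8 h between samples
    sub_means = days[:, ::step].mean(axis=1)
    rms = np.sqrt(np.mean((sub_means - true_means) ** 2))
    print(f"{step * 5:4d}-minute sampling: RMS error {rms:.3f} C")
```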

Again, thanks for a most thought-provoking discussion,

w.

William Ward
Reply to  Willis Eschenbach
January 15, 2019 6:20 pm

Good Evening Willis!

This is just a quick ping back to you at 9:15 PM EST on 1/15. I’m just catching up with all of the great posts from the day and now I will come back to you to engage on the points you brought up. I have spent the day pretending to be half my age, installing a retaining wall in Georgia red clay. It is amazing how many joules of energy 1 cubic yard of clay can consume! Give me a few minutes (not sure how long…) and I’ll provide some thoughtful things so we can go back and forth to see if we can agree. There are so many good comments and questions from many people, but I fear I cannot keep up with them all… So, I’ll focus on your comments first as I think your comments capture much of what others said – and many people will want to hear your thoughts I’m sure.

I re-read my post to you from late last night and I’m glad that it doesn’t sound as grumpy as I feared it might when I thought about it this morning while swinging the mattock.

Back in a bit with more…

WW

William Ward
Reply to  Willis Eschenbach
January 15, 2019 8:45 pm

Willis,

I propose we cover some basics to see where we deviate. Hope this is ok with you. I think/hope good for others reading as well. I need to say that those who are good at using statistical analysis will find themselves frustrated and out of their element. There is absolutely no need for statistical analysis before, during or after proper sampling. However, once a signal is properly sampled you can do anything you want with it mathematically – including statistical analysis.

Signals: After looking at and working with signals for tens of thousands of hours over 35 years, I assume it is completely intuitive to everyone – but this is a very, very bad assumption I see! A signal is (simply put) any parameter that you can measure that varies with time. Anything you can see on the screen of an oscilloscope is a signal. Anything that you can convert to an electrical signal through a transducer is a signal. Example: Take a microphone and connect its output to an amplifier input and then the amplifier output to an oscilloscope. The microphone will convert the air pressure (sound) it experiences at its capsule to an electrical voltage. You see what that voltage looks like on the screen of the scope. You could take a thermistor and with an electronic circuit convert the temperature to a voltage. That voltage can be viewed on the oscilloscope screen. So, there should be no question in anyone’s mind. Atmospheric air temperature is a signal.

Sampling: Sampling is measuring over and over at a periodic rate. Signals must be sampled because by definition signals are always changing. So how often must we sample? Harry Nyquist answers this: At a rate that is > 2x the highest frequency component of the signal being sampled. The frequency content of a signal can be measured with a spectrum analyzer.

Why Sample: Analog signals can be processed mathematically while in the analog domain, via Analog Signal Processing (ASP). We can add, subtract, divide, multiply, integrate and differentiate analog signals. This used to be the way control systems worked. (Some still do…) But when microprocessors and memory chips came along there was an opportunity to do things digitally: to store signals and manipulate them mathematically all you want, and easily. Analog signals can be stored on magnetic tape, but getting to the information you want has to be done by going through the tape to get to the point needed. It was not “random access”. To get analog signals to the digital domain we need to sample.

Why sample according to Nyquist? If we sample according to Nyquist, then we have captured *all* of the information available in the signal. Even if every peak of the analog signal is not visible in the samples it is there and can be extracted digitally. No need to sample faster than Nyquist to capture the absolute peaks (some wrote to claim this was needed – it’s not). This is part of the “magic” of sampling according to Nyquist. If we sample even much faster, it does not (repeat does not) get us any additional information. There *are* other reasons to sample much faster but I won’t digress into that here. It has to do with relaxing the requirements of the anti-aliasing filter that is used in front of all real-world Analog-to-Digital Converters (ADCs). If we sample according to Nyquist, then we can convert from digital back to analog and have the exact same signal back again. With high quality ADCs and Digital-to-Analog Converters (DACs) we can go back and forth with multiple generations of conversion before we start to lose information because of converter-introduced error. Not that going back and forth is necessarily the goal. The main point here is that with proper sampling you have captured what was going on in the analog domain and now your Digital Signal Processing (DSP) algorithms can be run on the data and you know it is as good as processing the analog signal itself.

How sampling works: When we sample, we create spectral images of our signal and these images are located at the sample rate in the frequency domain. The goal is to sample such that the image does not overlap the signal content. Overlap is aliasing. It is the addition of energy to our samples that doesn’t belong there.

Properly sampled: https://imgur.com/L7Wc393

Violating Nyquist: https://imgur.com/hPgub33

With an aliased sample you are working with data that does not represent the signal you sampled – at least to the extent that it is aliased.

Air temperature signals: Engineers usually work in units of Hz (Hertz), but it is more intuitive to use cycles/day here. Are air temperature signals 1-cycle/day sinusoids shifted up with “DC” content from lower frequency portions of the signal (annual, multi-annual components)? No, they are not sinusoids. Some days look like sinusoids, but distorted sinusoids. This “distortion” is higher frequency components. Some days look almost like square waves, and square waves have much higher frequency components. Some days have strange “messy” patterns. These come from higher frequency components. The sun rises and sets each day, and this is 1-cycle/day, but depending upon your latitude and time of year you have a different duty cycle between night and day – this adds higher frequency components. Clouds passing overhead act as shutters, modulating the daily signal. This adds higher frequency components. Nyquist requires the sample rate to be greater than twice the highest frequency component of the signal. If we had a temp signal of only 1-cycle/day we would still need to sample at more than 2x/day. [I’m ignoring the case where the sampling frequency can *equal* 2x the highest frequency component for a reason…]. So, can we agree that at 2-samples/day, done periodically, Nyquist is violated?
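A quick way to see those higher components is to take the FFT of an asymmetric day; the triangle below is only a stand-in shape, but the energy it shows at 2 cycles/day and above is exactly what 2-samples/day cannot carry:

```python
import numpy as np

# A synthetic asymmetric day (stand-in shape): fast warm-up, slow cool-down.
t = np.arange(288) / 288                           # fraction of a day
day = np.where(t < 0.3, t / 0.3, (1 - t) / 0.7)    # asymmetric triangle, peak at t = 0.3

# Amplitudes at 0, 1, 2, 3, ... cycles/day. The asymmetry alone puts real
# energy at 2 cycles/day and above, i.e. above what 2-samples/day can carry.
amps = np.abs(np.fft.rfft(day)) / day.size
print(np.round(amps[:6], 3))
```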

What about this “periodic” requirement? To my frustration, many here have “gone off” with an amazing amount of conviction about this and obviously need to be educated. I’d rather light a candle than curse their darkness! Nyquist requires the sampling to be periodic, but what does that mean and imply? Does it imply perfection of the clock? No, it does not!!!! No real world ADC clock is perfect!!!!! (sorry, I don’t mean to be rude – but I want people to hear this.) Every real world ADC clock has what is called jitter. Jitter is variation in the length of each period of the clock. In modern electronics the jitter is usually quite low, but it is there. Jitter adds error – a different kind of error. Mercifully, I won’t do a dissertation on it here. So, does Nyquist limit just how much jitter can be on the clock? No! You just have to account for the jitter in your design and calculate the potential error if need be. Does jitter invalidate Nyquist? No! So 2 measurements of temperature that correspond to max and min ARE PERIODIC! They happen 2-times/day. This is the period. There is just a lot of jitter on the clock. So max and min are 2 periodic samples and the sample rate is 2-samples/day. Furthermore, if we look at the 288-samples taken each day in USCRN, we see that 2 of the 288 become our max and min. Are the 288 samples actually samples? Yes. Are the 2 that become our max and min samples? Yes. If we throw away all of the samples but the max and min are they still 2 samples? Yes! Throwing them away is no different than if the clock just happened to only fire the ADC at the times the max and min happened. So, let’s put this to rest please! Max and min are samples and they are periodic, and they fit into Nyquist.

Can we just sample at the frequency of interest? No! Any content above half of our sample rate frequency will alias. See figures showing overlap. This is by definition of Nyquist not legal per the theorem. Does Nick Stokes’ magic monthly averaging diurnal blah blah blah allow you to recover the correct monthly average? NO!!! Once you alias you can’t get it out! Go look at USCRN Monthly data for any station any month. I can get a file to someone to load for all to share if need be. We don’t have to do any more than look at what NOAA provides us. They give us the monthly “averages”, “means” or whatever the hell we are supposed to call it, for both methods every month! Just let your eyes do the walking. Fallbrook CA is one of my favorites because each and every month is off by 1.0-1.5C. But the trends for this station do well. Other stations don’t have quite as large of a difference monthly, but the trends do worse. In the real world application of Nyquist there is always an anti-alias filter in front of the ADC and the sample rate is set much higher than needed to make sure the spectral images are spaced far apart. This ensures much less aliasing and it relaxes the requirements on the filter. I don’t know if USCRN employs a filter. I hope so.

CRITICAL POINT ALERT: More on Aliasing of daily and trend signals: You need to consult my Full paper on this for more info. Any content at 1 and 3-cycles/day will come down and alias the daily signal. Any content at/near 2-cycles/day will alias the long term trends.

Can we just look at the spectrum of the signal and determine “by eye” what the aliasing impact will be? I don’t think so. We need to know the phase relationship between the signal and aliasing components. It is best to see how the effect manifests in the time domain by looking at mean variation over time and trends. There is no other explanation for the mean and trend errors except for Nyquist violation. Give me your science if you want to disagree. Note: I’m not factoring in my list of the other 11 problems with the record (calibration, siting, etc.). Anecdote from mixing audio signals and setting levels: You can have levels set properly to not clip the converters and then if you do an EQ change to REDUCE a bass signal you can get clipping! How can a signal level reduction give you clipping??? Well, if you have a large higher-frequency transient (like a cymbal crash) that happens at the trough of a low bass note and then you reduce the bass note, then the positive-going high frequency transient will be higher and clip! We should resolve the impact of aliasing in the time domain.

What is “THE” Nyquist frequency of an atmospheric air temperature signal? Answer: I don’t have that exact answer, but going from what NOAA does, 288-samples/day appears to be close. Why they do the average of 20-sec samples I don’t know. Would 20-sec samples be better than averaging to 5-min? I don’t know. The calculation of mean seems to converge above 72-samples/day. Is that because they implement an anti-aliasing filter in agreement with 20-sec samples? I don’t know. Do we seem to get much better results from 288-samples than 2-samples? Hell yes. Who cares about a few hundredths or tenths of a degree C trend over a decade? Apparently climate scientists do, hence my use of Nyquist to call them on the carpet.

Darn Willis – I went and wrote a book. Jeez I don’t know how to do this with any more brevity. Many are so confused on this I hope this brings people along or at least helps to frame the debate better. I hate to “credential drop” because I know it is bad form, but I didn’t just read a wiki article and decide to write this essay. I have 35 years of practical application in the real world with real signals, working for the premier converter companies making converter solutions for the premier consumer electronics companies worldwide. Hundreds of millions of boxes and billions of channels of conversion. You most likely have some of them in your car or home. Feynman says science is the belief in the ignorance of the experts. Yes – I think we all subscribe to this. But how do I ask people to take a step back and for a moment think about what they can learn vs reaffirming what they already know? Nyquist compliant sampling is boring, daily everyday stuff for all of the technology world except climate science. The only novel thing here is the application to climate.

William Ward
Reply to  Willis Eschenbach
January 15, 2019 9:12 pm

Hey Willis,

I just sent a long reply with the hope of seeing if we can align on some basics. But to comment here briefly on your comments about the Chatham WI USCRN data: The important thing to know is that there is a broad range in spectral patterns from each station. I think the system needs to be designed for the full range of what we see in the world. I can find stations that behave very badly when aliased and other stations that seem to behave well despite aliasing. To be more clear, they seem to behave well for trends over 10 years. All stations I have studied seem to have significant error in means and absolute starting and ending points. The wide variation of impact comes from the wide variation of spectral content at a location – this can be seen visually by looking at a plot of the time domain signal for different days at different stations. Of course also over multiple days, months etc. It just makes itself more apparent over 1 day. Also, frustrating but true, what we see doesn’t always translate to the means and trends. The phase relationship of the aliased components plays a big role, as well as the magnitude.

An example to make the point. Most audio amplifiers can sound different to trained ears if reference recordings are used and played back on mastering-grade systems in an acoustically treated mastering studio. I mean sound REALLY different. I have taken outputs from various amplifiers and sampled them and studied them in the time domain and frequency domain. Sometimes I see slight differences – sometimes not. But I hear them, and this is done in a blind study setting and is repeatable. The point here is that resolving the complete impact of some effect might take multiple ways of looking at it. Yes, on some spectral plots you might not think there is significant energy at some frequency, but combined with its magnitude and interaction with other components it yields an impact that is larger than expected or obvious.

It would be something to consider studying for anyone interested. Examine data from various USCRN stations, compare visually what we see daily, compare what spectrum exists, compare the impact to mean and trend and then try to connect it to latitude, season, cloud cover, precipitation, etc. I already have some ideas from what I have examined but it is not thorough enough to disperse.

Sampling at a higher rate, like USCRN 288-samples/day seems to allow us to take a major step forward. With proper sampling then we can hopefully, someday, get to the point where we feed the digital signals (samples) into characteristic equations that explain weather and climate. I’m amazed that all this effort takes place to distill the work down to a number. Utilizing full properly sampled signals and running them into mathematical functions to yield output signals that mean something – now we are starting to look like science in the 21st century.

Ps – I’m not hung up on 288-samples/day. Just utilizing what USCRN offers and it appears that we get good results from a variety of stations. If the number turns out to be 72 then fine – but my overall point is not changed – just the number.

What do you think?

Willis Eschenbach
Reply to  Willis Eschenbach
January 15, 2019 9:30 pm

William, you did “write a book” … but nowhere in it did you answer my questions. So I’ll ask them again:

So let me ask a couple of questions

First, at what rate do we have to sample if our interest is the annual average temperature?

Second, you indicate that 288 samples per day satisfies the Nyquist limit … but I haven’t seen any mathematical discussion in your paper as to why you think this is the case. What am I missing here?

All the best, and thanks for an interesting discussion,

I also posted an analysis showing that hourly sampling only has an RMS error of 0.04°C with respect to a 5-minute sampling … and it was my understanding that you think hourly sampling violates Nyquist.

Next, I pointed out that a chaotic analog signal like temperature has frequencies with periods from seconds to millennia … since your claim is that we have to sample at twice the “highest frequency” to avoid aliasing, what are you claiming that “highest frequency” is, and how have you determined it?

Next, you seem to be overlooking the fact that Nyquist gives the limit for COMPLETELY RECONSTRUCTING a signal … but that’s not what we’re doing. We are just trying to determine a reasonably precise average. It doesn’t even have to be all that accurate, because we’re interested in trends more than the exact value of the average.

Finally, you still haven’t grasped the nettle—the problem with (max+min)/2 has NOTHING to do with Nyquist. Pick any chaotic signal. If you want to, filter out the high frequencies as you’d do for a true analysis. Sample it once every millisecond. Then pick any interval, take the highest and lowest values of the interval, and average them.

Will you get the mean of the interval? NO, and I don’t care if you sample it every nanosecond. The problem is NOT with the sampling rate. It’s with the procedure—(max+min)/2 is a lousy and biased estimator of the true mean, REGARDLESS of the sampling rate, above Nyquist or not.
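Willis’s point is easy to check numerically. In the sketch below (a made-up asymmetric curve, not station data), the midrange error stays essentially the same no matter how fast you sample:

```python
import numpy as np

def day(n_samples):
    # made-up asymmetric diurnal curve (fundamental plus a phase-shifted harmonic)
    t = np.linspace(0, 24, n_samples, endpoint=False)
    return (15 + 8 * np.sin(2 * np.pi * (t - 9) / 24)
               + 2 * np.sin(4 * np.pi * (t - 6) / 24))

for n in (288, 2_880, 86_400):                     # 5 min, 30 s, 1 s sampling
    x = day(n)
    err = (x.max() + x.min()) / 2 - x.mean()
    print(f"{n:6d} samples/day: midrange - mean = {err:+.3f} C")

# The error is essentially identical at every rate: the bias sits in the
# (max+min)/2 estimator itself, not in the sampling rate.
```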

Best regards,

w.

William Ward
Reply to  Willis Eschenbach
January 15, 2019 10:09 pm

Willis,

At some point posts no longer offer the option to reply directly below, so I’m not sure where this is going to land in-line. This is in reply to your post where you say

“William, you did “write a book” … but nowhere in it did you answer my questions. So I’ll ask them again:”

I sent you a few more posts that do address your questions. I’ll give you time to read those and respond. Did you read my “book”? Sorry, it seems like we need to find out where we disconnect on the fundamentals. Some of your questions are answered in those posts. For your new questions…

Willis said: “Next, you seem to be overlooking the fact that Nyquist gives the limit for COMPLETELY RECONSTRUCTING a signal … but that’s not what we’re doing. We are just trying to determine a reasonably precise average. It doesn’t even have to be all that accurate, because we’re interested in trends more than the exact value of the average.”

Reply: Define “reasonably precise”. And do you mean accurate??? I say trend errors of 0.24C/decade are not accurate. Daily means of +4C error are not accurate. Monthly mean errors of 1.5C are not accurate. You can’t determine the actual average with 2-samples/day. You are welcome to add the max and min and divide by 2, but this won’t give you an accurate result in all or most cases. Did you look at my paper?? What about the table in Fig 7?? It’s not about COMPLETELY RECONSTRUCTING the signal. Knowing you CAN reconstruct it tells you your samples actually mean something relative to your original signal.

“Chaotic” issue: addressed separately. Why the demand to know the exact number needed before you let in the concept? Research can be done to determine where the means converge to a specified limit. For example, once a higher sample rate doesn’t give you more than 0.1C or 0.05C or 0.01C difference you can stop. The concept is what is important here. 2-samples are not enough to support the results that generate the alarm we see. If the magnitude of error I show from comparing 288-samples to 2-samples is not important then fine. Just stop bothering me with daily headlines that the sky is falling. (Not you, Willis, those who do this.)

You said: “Finally, you still haven’t grasped the nettle—the problem with (max+min)/2 has NOTHING to do with Nyquist. Pick any chaotic signal. If you want to, filter out the high frequencies as you’d do for a true analysis. Sample it once every millisecond. Then pick any interval, take the highest and lowest values of the interval, and average them.

Will you get the mean of the interval? NO, and I don’t care if you sample it every nanosecond. The problem is NOT with the sampling rate. It’s with the procedure—(max+min)/2 is a lousy and biased estimator of the true mean, REGARDLESS of the sampling rate, above Nyquist or not.”

My reply: I’m confused by what you write… Are we even arguing here or agreeing… I can’t tell… I think we agree that the historical method is not good for accurate true mean calculation. But how do you prove that without sampling theory? I’m not suggesting we continue with (Tmax+Tmin)/2 so I’m not sure what you are saying. I’m saying do it like USCRN does and use all of the samples.

Reply to  Willis Eschenbach
January 16, 2019 1:15 am

“Next, you seem to be overlooking the fact that Nyquist gives the limit for COMPLETELY RECONSTRUCTING a signal … but that’s not what we’re doing.”
Exactly so. William says I need to learn more about signal processing, but I think he just doesn’t understand how aliasing works. Here is how it goes.

Suppose you have an actual sinusoid sin(2πt). It has period 1. Then suppose you sample at points t=a*n, where a is the sampling interval and n are integers. Your sample values are sin(2πa*n). Now from any sin argument, you can subtract 2πn without changing the values. So the values are the same as
sin(2πa*n – 2πn) = sin(2π(a-1)*n) = –sin(2π(1/a-1)*(a*n))
So, apart from a sign flip, you can’t tell whether your sampled points came from sin(2πt) or from sin(2π(1/a-1)t). If a<0.5, the Nyquist limit, the alternative frequency (alias) is higher than 0.5, and if your processing of the digital signal is low-pass, the alternative will contribute less than the real. But if a>0.5, the alias is less, and may cause trouble, more so if a is near 1.
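The identity above is easy to verify numerically; in this sketch (with an arbitrary a > 0.5) the samples of the original sinusoid and of the alias frequency agree exactly, apart from the sign flip noted above:

```python
import numpy as np

a = 0.7                          # sampling interval as a fraction of the sine's period
n = np.arange(50)                # sample indices

x = np.sin(2 * np.pi * a * n)                  # samples of sin(2*pi*t) at t = a*n
y = np.sin(2 * np.pi * (1 / a - 1) * (a * n))  # samples of the alias frequency 1/a - 1

# Identical apart from sign: x = -y to machine precision, so the sampled data
# cannot distinguish frequency 1 from frequency 1/a - 1.
print(np.max(np.abs(x + y)))                   # ~1e-13, floating point only
```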

But then there is the effect of locking. You sample a discrete number N of times per timestep, so a=1/N. If N>2, a<0.5, so OK. In the important case of N=2, a=0.5, so the alias frequency is also 0.5. So if you process the sampled signal with a low pass filter (relative to f=1), the alias won’t trouble you.

The only way you can get really low frequencies is if a is close to 1. But with locking, that can’t happen, except for the case where a=N=1. And then the alias frequency is zero. But this is just the familiar case of extreme undersampling (once per cycle).

So if you sample N>1 times per day, you don’t pick up an error from the diurnal variation itself. You get an error from the much smaller Nth harmonic. Now mean diurnal variation is reasonably smooth, and so the (discrete) spectrum rapidly fades.

As 1sky1 has been trying to tell William, min/max has 2 samples per day, but is not strictly periodic. It is as if an AM signal were modulating a 2 day⁻¹ carrier. And so by a sort of heterodyning process, it moves the second harmonic (of diurnal) down to zero, but with sidebands large relative to the month⁻¹ lowpass. Very little power gets through.

Kip Hansen (Editor)
Reply to  William Ward
January 15, 2019 10:32 am

William and w. ==> Willis’s point that daily temperature at a single station (we will start with that level) is “a chaotic system” is extremely important. Thus, trying to capture the “signal frequencies” is not trivial (or maybe not really possible).

Though I did a great deal of work with William on this essay, I started from the point of “are we sure that daily temperature can be considered a “signal” in the sense of Nyquist?” William was determined that it was and could, so I limited my input to helping him make sure this complicated subject was clearly communicated — which I think William managed to accomplish.

The many, many comments here show that if nothing else, there is a lot of interest and a lot of opinions on the topic, which makes it a valuable post — even for those who disagree strongly with his conclusions.

And pragmatically, the conclusions that Min/Max temperatures are not optimum from a climate point of view and that 5-min values are superior for climate purposes are perfectly valid.

Dr. S. Jeevananda Reddy
Reply to  Kip Hansen
January 15, 2019 4:05 pm

Sorry, I don’t agree with that. In the 70s several studies were carried out comparing thermograph data with maximum and minimum observations. If somebody wants, they can check whether the thermographs show the maximum and minimum recorded by the thermometers. They rarely do, as the thermograph has a drag [as I said in my earlier comment].

Dr. S. Jeevananda Reddy

William Ward
Reply to  Kip Hansen
January 15, 2019 9:16 pm

Kip, thanks for your comments here and for your help to package this giant ball of yarn. It’s a big subject. My gratitude for your support is immense.

William Ward
Reply to  Kip Hansen
January 15, 2019 11:31 pm

Kip,

Don’t let Willis misinform you with the issue of “chaotic-ness” of the signal. There is absolutely no issue with that. I provided detail to Willis in a separate reply. All of the “chaotic” effects have a frequency impact that can be studied and its limits bound. Basic electrical engineering lab bench instruments are needed along with a front end that delivers the signal. Intermittent qualities of signal components are not of any concern to sampling. If they were a problem there would be no digital samples of anything. Digital video and audio would not be a possibility for example.

Willis Eschenbach
Reply to  Kip Hansen
January 16, 2019 12:31 am

William Ward January 15, 2019 at 11:31 pm

Kip,

Don’t let Willis misinform you with the issue of “chaotic-ness” of the signal

William, could you lay off the personal accusations that I am “misinforming people”? If you think I’m wrong, QUOTE WHAT I SAID and show us exactly where it is wrong.

There is absolutely no issue with that. I provided detail to Willis in a separate reply. All of the “chaotic” effects have a frequency impact that can be studied and its limits bound. Basic electrical engineering lab bench instruments are needed along with a front end that delivers the signal. Intermittent qualities of signal components are not of any concern to sampling. If they were a problem there would be no digital samples of anything. Digital video and audio would not be a possibility for example.

It appears you think I claimed that a chaotic signal is not a signal or is not a signal in the Nyquist sense. I said nothing of the sort.

Instead, I pointed out that temperature was chaotic in the context of you repeatedly claiming that we had to sample at 2X the “highest frequency” in the signal.

I replied that temperature is chaotic and it contains frequencies with periods down to seconds … so what is this mysterious “highest frequency” you are talking about that we need to exceed?

I’ve asked that several times and gotten no answer. Instead of answering, you accuse me of “misinforming” people when I again mention chaotic systems when I bring up the question you haven’t answered.

Of course, the real answer is, there is no “highest frequency” in temperature despite your claims.

Consider music as an example. It’s a chaotic signal with lots of very high frequencies.

However, for CDs, music is sampled at 44.1 kHz. Is that because that’s the “highest signal” as you claim?

NO. It’s an arbitrary decision based on balancing what humans are capable of hearing and the physical issues involved in doing the sampling. So they filter out the highest frequencies, and then there is an actual “highest frequency” as you state … but that’s a result of filtering, not a characteristic of the signal.

So please, climb down off of your paternalistic high horse, stop thinking you are talking to an ignoramus, and answer the questions asked. It’s wearing thin.

w.

William Ward
Reply to  Kip Hansen
January 16, 2019 1:50 am

Willis said: “William, could you lay off the personal accusations that I am “misinforming people”? If you think I’m wrong, QUOTE WHAT I SAID and show us exactly where it is wrong.”

My reply: Willis, I’m sorry for offending you. I would like to reset the tone.

As I said in my Full paper:

“Real-world signals are not limited in frequency. Their frequency content can go on to infinity. This presents a challenge to proper sampling, but one that can be addressed with good system engineering. When air temperature is measured electronically, electrical filters are used to reduce the frequency components that are beyond the specified bandwidth B, thus reducing potential aliasing. Another method of dealing with real-world signals is to sample at a much faster rate. The faster we sample the farther in frequency we space the spectral images, significantly reducing aliasing from undesired frequencies above bandwidth B. This is how Nyquist is applied practically. In the real-world, a small amount of aliasing always exists when sampling, but careful engineering of the system will allow sampling to yield near perfect results toward our goals.”

In my Full paper I state:

“It is clear from the data in Figure 11, that as the sample rate decreases below Nyquist, the corresponding error introduced from aliasing increases. It is also clear that 2, 4, 6 or 12-samples/day produces a very inaccurate result. 24-samples/day (1-sample/hr) up to 72-samples/day (3-samples/hr) may or may not yield accurate results. It depends upon the spectral content of the signal being sampled. NOAA has decided upon 288-samples/day (4,320-samples/day before averaging) so that will be considered the current benchmark standard. Sampling below a rate of 288-samples/day will be (and should be) considered a violation of Nyquist.”

From my examination of the data, the error became very small – converging toward zero – as the sample rate approached 288-samples/day. A practical way to determine the Nyquist rate (for real-world applications) is to increase the sample rate until the apparent gains diminish beyond any benefit. 288 was chosen by NOAA so I went with it. But the core points of my paper are not invalidated if it turns out to be 144 or 576!

I can confidently say Nyquist is being violated even if I don’t have a more certain number because we know there is frequency content above 2-cycles per day so 2-samples/day is a violation.

Nyquist doesn’t care about the “chaos” in a signal. The chaos or lack thereof is meaningless. Good luck finding a signal that doesn’t fit your standard of “chaotic”. Determining the total practical signal bandwidth that an ADC would experience is a very basic process. Chaos has nothing to do with sampling and doesn’t present any challenge to it. It is no different than a stage full of musical instruments. They can all play according to a conductor’s direction or they can be a massive cacophony. The frequency range of each instrument is fixed. When it shows up on the scene is not a factor in sampling. The same is true of clouds – but someone other than me would need to research what frequencies clouds can produce.

I think I have answered your questions.

Reply to  William Ward
January 16, 2019 3:12 pm

William Ward January 16, 2019 at 1:50 am

Willis said:

“William, could you lay off the personal accusations that I am “misinforming people”? If you think I’m wrong, QUOTE WHAT I SAID and show us exactly where it is wrong.”

My reply: Willis, I’m sorry for offending you. I would like to reset the tone.

Your most gracious apology accepted. Indeed, we are reset.

As I said in my Full paper:

“Real-world signals are not limited in frequency. Their frequency content can go on to infinity. This presents a challenge to proper sampling, but one that can be addressed with good system engineering. When air temperature is measured electronically, electrical filters are used to reduce the frequency components that are beyond the specified bandwidth B, thus reducing potential aliasing. Another method of dealing with real-world signals is to sample at a much faster rate. The faster we sample the farther in frequency we space the spectral images, significantly reducing aliasing from undesired frequencies above bandwidth B. This is how Nyquist is applied practically. In the real-world, a small amount of aliasing always exists when sampling, but careful engineering of the system will allow sampling to yield near perfect results toward our goals.”

In my Full paper I state:

“It is clear from the data in Figure 11, that as the sample rate decreases below Nyquist, the corresponding error introduced from aliasing increases. It is also clear that 2, 4, 6 or 12-samples/day produces a very inaccurate result. 24-samples/day (1-sample/hr) up to 72-samples/day (3-samples/hr) may or may not yield accurate results. It depends upon the spectral content of the signal being sampled. NOAA has decided upon 288-samples/day (4,320-samples/day before averaging) so that will be considered the current benchmark standard. Sampling below a rate of 288-samples/day will be (and should be) considered a violation of Nyquist.”

Thanks, William. You began the discussion by saying the following:

The Nyquist-Shannon Sampling Theorem tells us that we must sample a signal at a rate that is at least 2x the highest frequency component of the signal. This is called the Nyquist Rate. Sampling at a rate less than this introduces aliasing error into our measurement.

I then asked what I thought was a reasonable question—what is the value of this mysterious “highest frequency component” of the temperature signal that you are referring to?

From then until now, I have not gotten an answer. And for good reason. Because the temperature is NOT some simple superposition of a few sine waves, we can find signals with periods down to a few seconds—a puff of warm wind. And if we have to sample at 2X that rate, say every two seconds, that would be 43,200 samples per day.

In other words, your initial claim about “2X the highest frequency” was simply wrong. And to date, you have not admitted that.

From my examination of the data, the error got very small – converges toward zero as the sample-rate approached 288-samples/day.

Actually, the convergence starts well below 288 samples per day. Let me re-post two of my graphs. First, here’s the periodogram of averages of the Chatham WI USCRN data:

By averaging the data first, we are only left with any long-term persistent cycles. You can see that there are strong cycles at 24, 12, and 8 hours, and very little that is shorter than that. Accordingly, at that time and without knowing anything further I suggested that a practical Nyquist limit should be sampling every 4 hours, which is 2X the highest frequency of any persistent cycles.
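A minimal sketch in Python of this kind of periodogram (not necessarily the exact averaging procedure used above; a synthetic 5-minute series with 24-, 12- and 8-hour components stands in for the Chatham data, and with the real file you would use its 5-minute temperature column instead):

import numpy as np

fs = 288                              # samples per day (one every 5 minutes)
days = 30
t = np.arange(days * fs) / fs         # time in days

temp = (10.0
        + 5.0 * np.sin(2 * np.pi * 1 * t)    # 24-hour cycle
        + 2.0 * np.sin(2 * np.pi * 2 * t)    # 12-hour cycle
        + 1.0 * np.sin(2 * np.pi * 3 * t)    # 8-hour cycle
        + 0.5 * np.random.randn(t.size))     # short-period "weather" noise

spec = np.abs(np.fft.rfft(temp - temp.mean())) ** 2
freq = np.fft.rfftfreq(temp.size, d=1.0 / fs)        # in cycles per day

for f_cpd in (1, 2, 3, 6):                           # 24 h, 12 h, 8 h, 4 h periods
    k = np.argmin(np.abs(freq - f_cpd))
    print(f"{24 // f_cpd:3d}-hour cycle, relative power: {spec[k] / spec.max():.4f}")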

This was well supported by my analysis of the same data at different sampling times. Here is that graph again.

And here it is, but this time by frequency rather than by period:

As you can see, when we get up to sampling every eight hours, the errors take a big jump. And when we look at shorter times, at about four hours (or 6 samples per day) the rate of error decay becomes both linear and regular. So I’d say my initial view of the situation was accurate.

A practical way to determine the Nyquist frequency (for real world applications) is to increase sample rate until the apparent gains diminish beyond any benefit.

While I agree with this, I was surprised when you said it, because it is in total contradiction of your claim that “we must sample a signal at a rate that is at least 2x the highest frequency component of the signal.” In any case, the graph above shows that sampling intervals shorter than one hour don’t help much.

288 was chosen by NOAA so I went with it. But the core points of my paper are not invalidated if it turns out to be 144 or 576!

Since there is only a difference of four-hundredths of a degree between sampling at 288/day and sampling hourly, I’d say that hourly fits your definition that “the apparent gains diminish beyond any benefit”.

I can confidently say Nyquist is being violated even if I don’t have a more certain number because we know there is frequency content above 2-cycles per day so 2-samples/day is a violation.

I agree. But that is NOT the main problem. Even if you were sampling well above the Nyquist limit, (min+max)/2 is a poor estimator of the mean.

Nyquist doesn’t care about the “chaos” in a signal. The chaos or lack thereof is meaningless. Good luck finding a signal that doesn’t fit your standard of “chaotic”. Determining the total practical signal bandwidth that an ADC would experience is a very basic process. Chaos has nothing to do with sampling and doesn’t present any challenge to it. It is no different than a stage full of music instruments. They can all play according to a conductors direction or they can be a massive cacophony. The frequency range of each instrument is fixed. When it shows up on the scene is not a factor in sampling. The same is true of clouds – but someone other than me would need to research what frequencies clouds can produce.

I have no clue why you keep harping on chaos. My only point in saying it was chaotic was to point out that temperature contains signals at all frequencies, so your claim that Nyquist says “we must sample a signal at a rate that is at least 2x the highest frequency component of the signal” was MEANINGLESS with respect to temperature. It’s not that a chaotic signal can’t be sampled or doesn’t obey Nyquist. It can and does. The issue is that a chaotic signal does NOT obey your imaginary rule that we have to sample at twice the highest frequency present in the signal.

I tried to point this out with respect to music. Music, like temperature, contains lots of very high-frequency signals. So, do the music engineers follow your foolish rule that “The Nyquist-Shannon Sampling Theorem tells us that we must sample a signal at a rate that is at least 2x the highest frequency component of the signal.”?

Heck, no. They sample at 44.1 kHz and simply filter out higher frequencies. Which is what I started out by saying and took a lot of heat for saying, viz:

Nyquist does NOT mean that you have to sample at 2X the highest frequency in your data, just at 2X the highest frequency of interest. Which is important because climate contains signals at all frequencies. And since our interest is in days, and the only large cycles up there are 24, 12, and 8 hours in length, sampling hourly is well above the highest frequency (aka the shortest period) of interest.

I stand by that, and the data shows that sampling hourly only introduces an error in the daily average of four-hundredths of a degree with respect to sampling every five minutes.

In closing, let me cycle back to where I started. Yes, sampling twice a day is a violation of the Nyquist limit. My graphs above show that. But so is sampling 288 times per day, because the signal contains very high-frequency components. However, on a practical level, the highest persistent frequency is 3X per day, or every eight hours. As a result, sampling at eight times that frequency, or hourly, is perfectly adequate for our purposes.

And whether you are above the Nyquist frequency or not, for real-world complex signals, (max+min)/2 is a poor estimator of the mean of ANY period. That is the biggest problem with the historical record, and it has nothing to do with Nyquist.

Finally, as to whether this is a difference that makes a difference, here are the daily mean and (max+min)/2 graphs for 30 US cities, along with the true trend and the (max+min)/2 trend. In most cases the difference in trends is not visible at this scale, one trend line drops in right on top of the other … bear in mind that these are only five years long, and that longer datasets will perforce have smaller trend errors …

Again, my thanks for a most interesting post, for your genteel tone, and for keeping the conversation focused on the issues …

w.

PS—My investigations show that we can improve the (max+min)/2 error slightly by some more subtle mathemagic than a simple mean of the two signals … but the improvement is small, only about a 5% reduction in error. Hardly worth doing. Go figure … I’m still messing with the data; if I find anything interesting I’ll report back.

Jaap Titulaer
Reply to  Willis Eschenbach
January 15, 2019 1:57 am

Willis,

As you said, 24 per day would be enough. I would say that even 2x min & max would be enough to get a more representative picture of the daily temperature:
simply the lowest and highest at night, plus the lowest and highest during the day.
So
T[night] = (T[max,night]+T[min,night])/2
and
T[day] = (T[max,day]+T[min,day])/2
would give a much better representation. See also the 5-minute charts: most of the information is in those two ranges; the switch at dawn and dusk is to be expected, but it is a big range with little information.

And we think we know that nightly temperature trends differ from those during daylight.
For a sufficiently large percentage of locations this may even lead to a skewed distribution when using only simple daily averages.

1sky1
Reply to  Willis Eschenbach
January 15, 2019 5:43 pm

Nyquist does NOT mean that you have to sample at 2X the highest frequency in your data, just at 2X the highest frequency of interest.

Nonsense! Discrete sampling of any continuous signal with a fixed delta t (i.e., Dirac comb) produces a time series in which ALL frequencies higher than Nyquist (i.e., >1/(2 delta t)) are “aliased” into some baseband frequency f, according to 2fN +- f, where fN is the Nyquist frequency. Thus it’s imperative that the frequencies of interest not be significantly affected by aliasing. Simply placing those frequencies in the baseband range is NOT enough. The spectral content of all the aliases needs to be negligible.
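A quick numerical check of that folding relation (a sketch; the sample rate and frequencies here are arbitrary illustrations):

import numpy as np

fs = 24.0                  # samples per day
fN = fs / 2.0              # Nyquist frequency: 12 cycles/day
f_true = 19.0              # a component above Nyquist, in cycles/day

t = np.arange(int(50 * fs)) / fs                  # 50 days of samples
x = np.cos(2 * np.pi * f_true * t)

spec = np.abs(np.fft.rfft(x))
freq = np.fft.rfftfreq(x.size, d=1.0 / fs)        # in cycles/day

print("component really at   :", f_true, "cycles/day")
print("appears (aliased) at  :", freq[np.argmax(spec)], "cycles/day")
print("2*fN - f_true predicts:", 2 * fN - f_true)  # the K = 1 case of the relation above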

Lots of one and two degree errors in there … the only good news is that the error distribution is normal Gaussian. But that will still affect the trends, because the presence of even symmetrical errors generally will mean that the measured trends will be too large.

While the peakiness of the diurnal cycle almost invariably produces a daily mid-range value considerably different from the true daily mean, the monthly average tends to cluster much more closely around some (unknown) mean value above that of the true monthly mean. The result thus is purely an offset that doesn’t affect the climatic trend.

1sky1
Reply to  1sky1
January 15, 2019 6:25 pm

The expression for aliased frequencies should read: 2KfN +-f (for all integer K)

Chris4692
Reply to  Rud Istvan
January 14, 2019 10:00 pm

#3 is why WUWT is truly valuable to science. Far more valuable than peer review. Would that the WUWT approach were repeated in other fields.

William Ward
Reply to  Rud Istvan
January 14, 2019 11:49 pm

Rud,

I’m out of gas for the night but I didn’t want to sign off before acknowledging what you wrote here. I like what you wrote. I’m encouraged by this information. It all sounds very good – something the world of science needs more of. Less gatekeeping and fewer demands to adhere to orthodoxy. I understand we don’t want a free-for-all with people peddling perpetual motion machines, etc., but the pendulum seems to have swung too far towards maintaining what we know rather than uncovering what we don’t. My comments are not about WUWT so much as about the world in general. I’m not a “regular” here (though maybe I will be yet…) so I’m not aware of the dynamics – but what you explain sounds like a positive development.

Also, Willis is a very knowledgeable guy. But on this one I’m going to help him learn a few more things to fit into his already overflowing bag of tricks.

Reply to  William Ward
January 15, 2019 4:57 pm

William –

Hope this finds you early in the morning wherever you are as you are going to need to be up early if you intend to teach Willis any new tricks!

Speaking as someone who has done DSP for nearly 50 years, I didn’t see anything new. It’s all pretty much standard multi-rate DSP stuff.

I can’t see that anyone else was suggesting downsampling to just two samples/cycle. The idea of successfully achieving a correct mean as the average of “time-jittered” Tmax and Tmin is foolishness as we all suspect, but is a separate issue unrelated to aliasing errors.

If you WERE talking about PROPER downsampling (with equally-spaced samples), you would NOT have aliasing, because you would have implemented the necessary “pre-decimation” filters and agreed to accept the loss of information that was contained in the discarded bandwidth. You did not do this, as evidenced by the increasing errors with decreasing sampling rate in Fig. 11 of your PDF – they would have been -3.3 all the way through a downsampling by 2.

In fact, you could downsample to just 1 and preserve the mean (the k=0 term of the DFT, which IS the DC average). The bandwidth would have been reduced to include only zero, so you have to sample at greater than twice zero, and one sample will do. That is, if you have a signal that is a constant for the day (the mean), then one sample, taken anytime, will do.
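A tiny numerical check of that k=0 point (a sketch; the handful of readings is arbitrary, borrowed from the Cordova example further down):

import numpy as np

x = np.array([-8.4, -9.7, -9.6, -8.7, -9.3, -8.9])   # any set of samples will do
X = np.fft.fft(x)

print("k=0 DFT term / N :", X[0].real / x.size)      # the DC bin, scaled by the length
print("plain mean       :", x.mean())                # identical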

As I said, the bogus use of (Tmax+Tmin)/2 is a separate issue but, is not to be objected to on the basis of your demo.

Reply to  Bernie Hutchins
January 15, 2019 9:07 pm

Bernie Hutchins January 15, 2019 at 4:57 pm

As I said, the bogus use of (Tmax+Tmin)/2 is a separate issue but, is not to be objected to on the basis of your demo.

Bernie, that’s exactly what I said. The traditional method has problems … but they have nothing to do with Nyquist. No matter what rate you are sampling a chaotic signal, high or low, above or below the Nyquist limit, (max+min)/2 is NOT an unbiased estimator of the central tendency.

And as you point out, he’s dealing with a phenomenon (air temperature) that is a) chaotic and b) has frequency components at all time scales from seconds to millennia. In such a situation, his insistence on sampling faster than the “highest frequency” is meaningless. You’ve indicated the normal way to deal with that—filter the signal to remove the high-frequency components. At that point Nyquist comes into play.

But the reality is that we only need to sample often enough for our purposes. I asked William above, if we’re interested in a yearly average how often Nyquist would say we have to sample … no answer yet, because AFAIK Nyquist says nothing about that question.

Inter alia, this is because Nyquist is about the sample rate to RECONSTRUCT a signal completely … but we’re not trying to do that. We just want an accurate annual average.

I await William’s answers to my questions …

My thanks to you,

w.

William Ward
Reply to  Willis Eschenbach
January 15, 2019 10:48 pm

Willis,

Is music “chaotic”? Can a piece of music be performed whereby a violin or cymbal crash comes in unpredictably, off tempo, … “chaotically”? Of course it can. Does this stop us from recording music digitally? Of course it doesn’t. The possible frequency components of the cymbal can be studied. The sample rate is then set to capture them if they occur. If they don’t occur then no problem. If they do occur then no problem. The system is designed to handle the frequency bandwidth of interest and filters are used to eliminate that which is not of interest. Sample rate is set to capture the frequencies of interest and also relax filter requirements.

You say: “… chaotic seconds to millennia…” Well, millennia is down near 0 Hz, lower than your daily signal. So why even mention it? As I said above, the upper frequency limit can be determined for your “chaotic” components. Your “chaotic” issue just isn’t an issue. It is dealt with all day, every day, everywhere sampling is done.

You are misinforming people by telling them that Nyquist is only for reconstruction purposes. Most sampled data is never reconstructed. The digital version of the signal is processed for the purpose of making control decisions, for example. It doesn’t have to be reconstructed. Why do you keep pushing this? Do you have real-world experience with using Nyquist?

I have answered your questions in posts below. I now await your response to my explanations: 1) Max/min are samples and periodic, 2) You can’t undersample unless you filter first or you alias, 3) Nyquist applies to all signals.

Reply to  Willis Eschenbach
January 16, 2019 12:16 am

William Ward January 15, 2019 at 10:48 pm

You are misinforming people by telling them that Nyquist is only for reconstruction purposes. Most sampled data is never reconstructed.

No, you are misinforming people by claiming I said that. Here’s what I said.

Next, you seem to be overlooking the fact that Nyquist gives the limit for COMPLETELY RECONSTRUCTING a signal … but that’s not what we’re doing.

I said nothing about how Nyquist limit is ONLY for reconstruction purposes as you falsely claim. I said it gives the limit for completely reconstructing a signal. Are you disputing that? Are you claiming you can violate the limit and completely reconstruct a signal?

Didn’t think so …

William, I choose my words carefully for a good reason. I have to be able to defend them. But I cannot defend your fantasies about what I said.

w.

PS—My guess is that globally, by far the largest use of A/D sampling is for music and videos … and yes, that is absolutely for reconstruction purposes …

William Ward
Reply to  Willis Eschenbach
January 16, 2019 2:03 am

Willis,

You said: “Next, you seem to be overlooking the fact that Nyquist gives the limit for COMPLETELY RECONSTRUCTING a signal … but that’s not what we’re doing.”

The “that’s not what we are doing” says to me that your focus is on limiting Nyquist to reconstruction. Why would you say “you seem to be overlooking the fact…” when that was stated in my paper and underpins my entire case?

No, the majority of ADCs are not for reconstruction purposes. ADCs are everywhere and mostly for control purposes. Signals and inputs are sampled to gather data for the processor to make decisions based upon its algorithms. I think you are trying to prop up a point that does not apply to my paper. I think you are being very stubborn by not embracing it. Nyquist allows you to reconstruct the signal in the analog domain and the importance of this is twofold: 1) you can reconstruct if needed, but more so 2) you know that whether you reconstruct or not you have obtained all of the information you possibly can about the signal. I’m pretty sure if not you then several others have tried to put Nyquist off-limits to temperature sampling because the “intent is not to reconstruct”. That is just plain wrong.

William Ward
Reply to  Bernie Hutchins
January 15, 2019 10:57 pm

Bernie,

From what you write I can’t exactly tell what you are saying. I think you are taking issue with me simply discarding samples, because you would never actually do that in practice. It would cause aliasing. Instead you would use a sample rate converter and implement the filters and decimation to reduce rate.

If this is your point then I don’t think we disagree, but I think you misunderstand what I did in the paper. I’m trying to show the effects of what would happen if the samples were taken at a lower rate, and then, by comparing the resulting mean from the reduced sample rate, I show how much aliasing error appears. There is nothing wrong with this approach. Discarding samples yields the exact same results as if the discarded samples never happened – or as if the clock were set to a lower sample rate. You can’t see the work I did, but I discarded samples properly, so that the results look exactly like what you would get if the clock were originally set at the lower rate. This is a very different process from sample-rate conversion, where sampling is at or near the Nyquist rate and then it is desired to reduce the rate without aliasing. I’m very familiar with SRC design and its audible effects on audio.

Did I clear that up??

Reply to  William Ward
January 16, 2019 10:32 am

William – I understand exactly what you did and what it is that you are claiming to be meaningful.

I am reminded of the old joke about the dairy farmer who consults a physicist about a milk-production problem. “Assume a spherical cow,” says the physicist. That is, we don’t know how to do the real problem (the non-uniform problem and related complications) but here please accept a trivial oversimplified stand-in (uniform sampling while violating the standard engineering practices for avoiding aliasing) and pretend it is meaningful. It may be – or it may be a pure strawman.

For the record for everyone, here is exactly what I mean by doing it right:
* * * * * * * * * * * * * * *
Suppose you have 288 5-minute samples. You have them all, so you can easily compute the mean as the sum divided by 288. You do this and write down the answer.

Your “weapon of choice” is the DFT (FFT). This means the 288 time samples are assumed (by the mathematics) to be periodic, n=0,1,2,…287 and to have a frequency domain (spectrum) indexed by k=0,1,2,…287, all generally non-zero.

You want to reduce the sampling rate from 288 down to just 1. You can’t just throw out (decimate) 287 of the 288 time samples without horrendous aliasing! You need a proper “pre-decimation” filter. [This is the same basic function as an “anti-aliasing” filter but applies to a digital filter operating on samples already taken, not to a continuous-time signal to be sampled.]
The simplest low-pass to use here is a 288 tap moving average (FIR). This has a “frequency response” that is 1 at zero frequency and zero at the remaining k=1,2,3,…287. It lets through only the k=0 term of the DFT of the time sequence. This value is (see summation equation for DFT terms) 1/288 times the sum of the time samples. All other information is lost, but was not wanted.
* * * * * * * * * *
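A minimal numerical sketch of the contrast described above (a synthetic day of 5-minute samples stands in for real station data; the 48-cycle/day wiggle is included purely to make the aliasing visible, since it folds straight onto DC at 24 samples/day):

import numpy as np

rng = np.random.default_rng(1)
fs, keep_every = 288, 12                      # decimate 288 samples/day down to 24/day
t = np.arange(fs) / fs
day = (12 + 8 * np.sin(2 * np.pi * t)                 # diurnal cycle
          + 2 * np.sin(2 * np.pi * 48 * t + 0.7)      # fast wiggle at 48 cycles/day
          + 0.5 * rng.standard_normal(fs))            # "weather" noise

def moving_average(x, m):
    # Circular m-tap moving average: the "pre-decimation" (anti-alias) filter.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(np.ones(m) / m, len(x))))

raw      = day[::keep_every]                              # just throw samples away
filtered = moving_average(day, keep_every)[::keep_every]  # filter first, then decimate

true_mean = day.mean()
print(f"error, decimate only        : {raw.mean() - true_mean:+.4f}")  # aliased
print(f"error, filter then decimate : {filtered.mean() - true_mean:+.4f}")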
I understand what you did exactly. But, you have not bridged the gap between the real case and the toy case. Your graphs showing the difference between (Tmax+Tmin)/2 and the computed mean say a whole lot more than the aliasing demo.

-Bernie

William Ward
Reply to  William Ward
January 16, 2019 1:19 pm

Bernie,

Thank you! You just helped to prove my point!

You said: “You can’t just throw out (decimate) 287 of the 288 time samples without horrendous aliasing!”

Exactly! I completely agree! (Where were you when I needed you earlier?) You can’t sample rate convert (SRC) a properly sampled (Nyquist compliant) signal by throwing away samples without causing aliasing! Agreed! No one who knows what they are doing would do that. Just like sampling in the analog domain, violating Nyquist in the digital domain creates aliasing. I have used SRCs while mastering audio on thousands of recordings, many of them for commercial release on major record labels. I have also been a part of designing integrated circuit SRCs for the most prominent data conversion companies. So you would never reduce sample rate by throwing samples away, unless …. you wanted to demonstrate the kind of aliasing you would get by doing so!

For those reading this who may not understand what Bernie and I are discussing, let me explain what I did. Sorry in advance, if you want to really understand the details this will take a few words to explain… but it will be easy to follow along.

I took a day for a station from USCRN (Cordova AK, Nov 11, 2017 – see Fig 1, 2, 3). Look at the list of samples for that day. There are 288 of them. The first 6 samples are: -8.4, -9.7, -9.6, -8.7, -9.3, -8.9. I thought, it sure would have been nice to have been there on location that day with perfectly matched and calibrated duplicate systems with co-located sensors. I could have set up data-acquisition systems with sample rates that were divided-down versions of 288/day. I would have synchronized the sample clocks of each system so that they all fired at exactly the same time. This way I could compare how 288-samples/day compared to 144-samples/day and 72-samples/day, and so on as in Fig 2. That would have been cool. If I did that, here is what would have happened. I’ll just focus on 288 and 144 for simplicity. Let’s call the systems A and B, respectively.

At time t=0, at midnight, when the first clock pulse arrived, both systems A and B would measure a reading of -8.4, because they are perfectly matched, calibrated, and co-located. Five minutes later the next clock pulse would arrive for System A, but System B would only be halfway through its clock period. System A would take its 2nd measurement of -9.7 and B would take no measurement at all. Five minutes later the next clock pulse would arrive, but this time both systems A and B would fire a sample and both would read -9.6. Five minutes later System A reads -8.7 and System B takes no sample. Next, systems A and B both read -9.3. On the 6th clock pulse A reads -8.9 and B takes no reading. And so on…

Summary:

   A       B
  -8.4    -8.4
  -9.7    xxxx
  -9.6    -9.6
  -8.7    xxxx
  -9.3    -9.3
  -8.9    xxxx

It turns out I didn’t need to find a time machine and travel back in time to set up my perfectly matched duplicate systems. I can just delete every other sample to divide the rate by 2, and the results are perfectly, exactly, indistinguishably identical! Assuming 288-samples/day is the practical Nyquist rate, then of course all of those divided-down results will produce aliasing – exactly what I was trying to show! Now we can illustrate just how much error is produced by undersampling.
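A minimal sketch of that divide-down comparison (a synthetic day stands in for the Cordova 5-minute record; the residual errors come from the fast “weather” content aliasing into the subsampled mean and generally grow as fewer samples are kept):

import numpy as np

rng = np.random.default_rng(0)
fs = 288                                       # 5-minute samples per day
t = np.arange(fs) / fs
day = -6 + 5 * np.sin(2 * np.pi * (t - 0.3)) + 1.0 * rng.standard_normal(fs)

true_mean = day.mean()
for keep_every in (1, 2, 4, 12, 24, 144):      # 288, 144, 72, 24, 12, 2 samples/day
    sub = day[::keep_every]                    # exactly what a slower, synchronized clock records
    print(f"{fs // keep_every:3d} samples/day -> mean error {sub.mean() - true_mean:+.3f} C")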

Thank you Bernie for making it easy for me to illustrate this. I appreciate you!

Reply to  William Ward
January 16, 2019 5:29 pm

William – you are still missing the real point! And it’s really simple.

Using a bogus proxy for the true mean – historically, (Tmax + Tmin)/2 – is a Fundamental Flaw (FF) that does not (cannot) have anything to do with aliasing. This is for the simple reason that the FF is present (it is just a bad way of estimating) even BEFORE any sampling occurs, or even if sampling never occurs. For example, if you observe that a continuous-time signal hovers most of the day around 80 degrees but drops to 40 degrees for 5 minutes, the mean is close to 80 but (Tmax+Tmin)/2 is 60. FF error, but no sampling, so it can’t be caused by aliasing.
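The 80/40 example reduced to a few lines of arithmetic (a sketch; the day is discretized into 5-minute steps purely for convenience, which doesn’t change the continuous-time point):

import numpy as np

day = np.full(288, 80.0)     # sits at 80 degrees nearly all day
day[100] = 40.0              # one 5-minute dip to 40 degrees

true_mean = day.mean()                      # about 79.9
midrange  = (day.max() + day.min()) / 2.0   # exactly 60.0

print("true mean :", round(float(true_mean), 2))
print("midrange  :", midrange)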

You should look at the FF (as Willis did in his plot of 30 cities) and not celebrate the trivial results of doing DSP wrong.

-Bernie

Bright Red
Reply to  Bernie Hutchins
January 16, 2019 6:23 pm

Hi Bernie, Sorry I was not able to reply directly as no reply was available below your post.
“Using a bogus proxy for the true mean – historically, (Tmax + Tmin)/2 – is a Fundamental Flaw (FF) that does not (cannot) have anything to do with aliasing. This is for the simple reason that the FF is present (it is just a bad way of estimating) even BEFORE any sampling occurs, or even if sampling never occurs. For example, if you observe that a continuous-time signal hovers most of the day around 80 degrees but drops to 40 degrees for 5 minutes, the mean is close to 80 but (Tmax+Tmin)/2 is 60. FF error, but no sampling, so it can’t be caused by aliasing.”

What you have described is a classic case of aliasing that could be used as an example in 101.
Let’s just say that we took the samples at regular intervals and just got unlucky: one sample happened to land at the moment the 40-deg minimum occurred, with the other sample at 80 deg. The result from the two samples would be 60, a long way from correct for your given example, as you noted. Or we could be lucky and both samples are taken while the temperature is 80 deg; the result is 80, which is much closer to correct.
By using min/max recording you are ensuring that you capture any high-frequency excursion of the signal, virtually guaranteeing the largest possible error. Violating Nyquist can and usually does cause errors.

Reply to  William Ward
January 16, 2019 6:45 pm

Bright Red said: “What you have described is a classic case of aliasing that could be used as an example in 101.”

What aliasing? The FF that I mentioned applies to the CONTINUOUS-TIME case! No sampling involved. The flaw is in believing that (Tmax+Tmin)/2 can possibly be a stand-in for the true mean. That’s the real error – not any aliasing. Do you seriously not understand?

Please do explain yourself.

-Bernie

Reply to  William Ward
January 16, 2019 7:08 pm

To be even more clear, you would take two analog “peak detectors” (a diode, a capacitor, and an op-amp or two for convenience), one for (+) polarity relative to start and the other for (-) polarity. Reset both at midnight, and come back at 11:59:59 PM and read the outputs. There is no sampling.

William Ward
Reply to  William Ward
January 16, 2019 7:15 pm

Bernie,

It seems we agree that (Tmax+Tmin)/2 is not a good method, but that’s where our mutual understanding ends.

I have completely addressed and dismantled each of your points with details. You just come back with adamant assertions. As Bright Red said, and I have said, you are fighting signal analysis 101, first week of class. It would be equivalent to going on a rant about how 1+1 is not 2. Sampling is just so basic you can’t fight it (and win).

Bernie said: “You should look at the FF (as Willis did in his plot of 30 cities) and not celebrate the trivial results of doing DSP wrong.”

My reply: No one seems to have addressed the 26 trends I presented, or if they did I didn’t see it or didn’t recognize it as such. I didn’t see the point in reviewing others’ counter-arguments – I couldn’t always understand the process that was used, and by then both fatigue and discussion disarray were taking their toll. This comment is not meant as a slight to Willis, but I didn’t see the value in Willis’ 30-city plot as the view was zoomed out. I present the numerical trends in C/decade in a table. I can’t compare what Willis did to what I did. I know that what I presented is of high quality. Over 1.2M samples were used (thanks to Piotr Kublicki) to determine the linear trends for the Nyquist-compliant method. I’m frustrated to hear people just hand-wave away those biases as being too small, not systematic, just noise, etc. Yet those same people (probably some) give full credibility to similar-magnitude trends reported from the max/min method. As for DSP, I’m not doing any DSP so I can’t be doing it wrong. I’m not doing a strict sample-rate conversion. If you can’t see that then I can’t help you, because I gave you the step-by-step explanation. You can’t show any violation with what I did – but I’d love to see you try. You said you were a DSP guy – so I assume you are on the coding and algorithm side. Well, it’s guys like me that get you the data so you can do your work. Leave the sampling to those who understand it.

I think we have said all that can be said, so we can conclude with a cordial disagreement. If you would like to have the last word I will read it, but likely not reply. If you have something that you honestly want to engage on that is constructive, then I’ll be there, otherwise be well until next time. And thanks for participating Bernie.

Reply to  William Ward
January 16, 2019 8:57 pm

William –

(1) We both seem to agree that the historical use of (Tmax+Tmin)/2 as a measure of mean is nonsense that results in significant error (what I call the Fundamental Flaw – FF – in either the continuous or sampled cases).
(2) Can we also agree that if there is NO sampling done on a signal it is meaningless to suggest aliasing as a cause of any errors that are present already?

(3) Does it not follow that since the error is already in the unsampled case (FF), it is not caused by aliasing due to sampling that has not yet, and may never occur?

IF YOU CAN – please explain your position.

I will appreciate a response.

-Bernie

Rob_Dawg
January 14, 2019 5:19 pm

TMax and TMin are not samples.

LdB
Reply to  Rob_Dawg
January 14, 2019 5:44 pm

Yes they are … two samples exactly; you just aren’t sure of the timing. Even a stopped clock tells the correct time twice a day, and it does still tell the time 🙂

1sky1
Reply to  Rob_Dawg
January 15, 2019 6:04 pm

Bingo! Within measurement (and time-of-reset) error they are true daily extremal values of the CONTINUOUS temperature record, EXHAUSTIVELY recorded, rather than sampled, during any month. It’s the same difference as between a complete census and a mere poll.

Gary Pearse
January 14, 2019 5:25 pm

William, it would be interesting to see what you would get for a mean if you selected random samples from the signal, maybe from a month’s signals for a monthly mean. A computerized instrument could do this. You would get samples spread across the geometry of the segments of the record, which would be representative. This would be no good for communication signals, but for a mean temperature “signal” it should work well. Maybe a new type of thermometer here!

Steven Fraser
Reply to  Gary Pearse
January 14, 2019 6:03 pm

Run the randomization multiple times, too, save all the values.

William Ward
Reply to  Gary Pearse
January 16, 2019 3:10 pm

Hi Gary,

That is an interesting idea. If you ever try it with the USCRN data let us know what you come up with!

January 14, 2019 5:49 pm

Some years ago I recognized there was error in the Tmax-Tmin method, but didn’t think of any alternative. It’s nice to see that NOAA has made at least a minimal effort in recognizing that fact. Thanks for the interesting article.

January 14, 2019 5:54 pm

On the other hand, what is the relation between sea surface temperature and land-based temperature readings?

In calculating a global average, are the sea-surface temps averaged with the land-based temps, and are the sea-surface readings adjusted?

angech
January 14, 2019 5:57 pm

Interesting comments
-Ian Macdonald Since the values taken are max and min rather than values at random times of day, this is not quite the same thing as sampling a signal at discrete points along a cycle. The problem here is that any asymmetry in the waveform, for example the warmest period of the day being only 5min whilst the coolest period covers several hours, is going to leave you with a totally wrong average value. It seeming that the short warm period has equal significance to the much longer cool period, when in fact the brief warm period is an outlier and not representative of anything.
-Clyde Spencer Stokes If you input daily garbage to calculate a monthly average, you will get monthly garbage!

In a world with a steady, repeated rise and fall of temperature, one should be able to get the mean temperature from any 2 readings roughly 12 hours apart, with no TOBS rubbish.
In the real world, as Macdonald points out (as does the author, indirectly), the mean represents an ideal average which is subject to error depending on where the day’s variability occurred. This error can be reduced by more frequent sampling to give a more accurate figure, but to what practical effect?
Long term, with the use of more sites, one would still get a practical long-term record for use, with the caveat that there is an error range. And the less the instrumentation complexity and the longer the life of the station, the better the reliability?

Clyde, despite your assertion, proper temperature records spaced roughly 12 hours apart are not garbage. The results do give a value that we can use and the outcome is usable as long as we understand that there is always a basically unavoidable error range present.

Clyde Spencer
Reply to  angech
January 14, 2019 9:27 pm

angech
You said, “The results do give a value that we can use and the outcome is usable as long as we understand that there is always a basically unavoidable error range present.”

That cuts to the core of the problem. Consensus climatologists use uncertainty very sparingly. When they do use it, it often is not justified. For example, the mean of many samples is not only a good estimate for the central tendency of a sample population, but there are many associated statistical tools developed for parametric analysis. But, as Ward has demonstrated, the mid-range value is often significantly different from the true mean of a daily series, and can be greater or less than the mean. That means the uncertainty is much greater than if a true mean were used to calculate a monthly average. Yet, the mid-range value, obtained from just two readings, is treated as though it has the accuracy and precision of a true multi-sample mean. NOAA and others call the mid-range calculation a mean, and cavalierly use it as though it were a mean. That is, they assume it represents a true daily mean with more accuracy than is justified. The bottom line, the accuracy and precision of the monthly means is not as high as is typically reported and nobody wants to do a rigorous error analysis. Calling it “garbage” was a bit of hyperbole, because it isn’t useless. However, its value is being misrepresented!

Loren Wilson
January 14, 2019 6:24 pm

What worries me is Figure 1, where a five-minute average changes by over a degree C from the previous one during what looks to me like the nighttime. Does the air actually warm up and cool down that rapidly without significant energy-input disruptions like clouds blocking the sunlight? Or is this a reflection of the poor precision of the thermometer? I don’t know enough about the instrumentation, but I don’t trust those temperatures until I can explain variations that large during what should be a fairly stable period.

Curious George
Reply to  Loren Wilson
January 14, 2019 7:22 pm

Last summer I sat outside in the shade on a hot windy day with a lowly Oregon Scientific digital thermometer, taking approximately 1 reading a minute, with temperature displayed to 0.1 deg F precision. Temperatures varied widely within a 5-degree range: a hot wind gust at 107 F followed by a cool wind gust at 102 F.

Jim Butts
January 14, 2019 7:10 pm

Whatever it is, it is. It is a statistic because it is a function of the entire data set over some time period. It might be close to the average, but we will never know about the past data. But if you are going to compare to the past you must use the same statistic. I am not worrying about this so much.

Duane
January 14, 2019 7:15 pm

Seems to me to be one of those proverbial philosophical arguments over how many angels can dance on the head of a pin … and about as practically useful.

Who really cares about “preserving Nyquist” when there is nothing to be preserved in an actual dataset more than a few years old in more than a few locations? Unless the point of the research here is to so finely – I would say ridiculously – granularize the impact on the earth’s systems to the point that measurements in an infinite number of possible locations at 20 sec intervals really makes any damn difference to a macro, planetary climate system that continually changes on a geologic timescale – it doesn’t – then we are just chasing our own tails.

What really matters is what the sample population is doing. Not the measurements, and not the statistical tests performed on the measurements. First define the sample population.

The sample population is the earth’s macro climate, not a daily temperature “signal” which in any event is not an ideal sine wave. How is the macro climate changing, and how are those changes affecting humans and the rest of our biosphere and our physical environment?

January 14, 2019 7:51 pm

Here is a good example of the reason the Nyquist process should be applied.
Set up a device to accurately measure the exact voltage, at exactly 120 times per second, on a 60 Hz AC power line that has no load and is well regulated. For the EU or Australia, measure at 100 times per second, as the frequency is 50 Hz. When you average the obtained numbers you should get a number extremely close to ZERO. The difference from zero will be due to inaccuracy of the measuring equipment and any fluctuations in the voltage. You will get the same ZERO value whether your timing is at the max and min, at the zero crossings, or anywhere along the waveform, as long as the samples are taken at 120 per second (or 100 for those using 50 Hz).
Now introduce a variable load. You will now get a different number. The data presented in the article above will give you an indication of the inaccuracy for the obtained data. Think of that as the change in the temperature that occurs on a daily basis and the seasonal basis. The use of graphing the temperature anomaly from the average temperature rather than the actual temperature effectively converts the real temperature variation into a waveform that will vary around ZERO, just like the AC power wave form.
Pull out your math books and read about the Nyquist theorem. Basically by using the AGW Temperature Anomaly data they have filtered out the true temperature and left only the anomaly.
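The 60 Hz thought experiment above, as a quick sketch (an ideal, unloaded sine is assumed; 170 V is just an illustrative peak value):

import numpy as np

f_line = 60.0                        # Hz
fs = 120.0                           # samples per second
n = np.arange(int(fs))               # one second of samples

for phase_deg in (0, 45, 90):        # wherever the sampling clock happens to land
    v = 170.0 * np.sin(2 * np.pi * f_line * n / fs + np.radians(phase_deg))
    print(f"clock phase {phase_deg:3d} deg -> mean = {v.mean():+.2e} V")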

Ian Macdonald
Reply to  Usurbrain
January 14, 2019 10:37 pm

-Usurbrain that is only true if you sample instantaneous voltage, and what you are describing is basically THE problem with digital oscilloscopes. However the analogous situation would be sampling max and min voltages over an AC cycle. In that case the max and min would be remarkably similar except when the motor was switched off, at which point the inductive kick might make one several kV whilst the other remained normal.

In the case of temperature it cannot be assumed that the day and night variations follow a similar waveshape, though. The effects of intermittent sunshine and convection during the day have no equivalent at night.

Chris4692
January 14, 2019 10:17 pm

You do science with the data you’ve got, not with the data you want.

While this article is very applicable to collecting data going forward, and states analytically what we all knew – that the max/min procedure is inadequate – the work should be expanded to examine how to deal with the data we’ve got, imperfect as it is. Perhaps results from a number of sites with thermometers recording at very short intervals could be compared with the max/min procedure to develop a range of deviation between the best historic methods and the best current methods. Or another analytical procedure could improve on max/min.

Then we could use the couple hundred years data we have, while we collect a better data set for our great great grandchildren to play with.

January 14, 2019 11:05 pm

The Nyquist comment would not be correct. To unambiguously capture everything in a signal, you must take at least two samples per wavelength. But with some aliasing, you can still detect a signal with sample spacing down to about one wavelength, consistently. That is, you won’t hit all occurrences, but you will hit enough to see the signal. It is not a matter of 2x or you see nothing. Rather, it is harder to see the signal the more you undersample the wavelength of the phenomenon being studied. At 2x, the sample error should be about zero, and with increasing undersampling, the error increases. For example, in mineralogical infrared work I was studying a particular spectral band, but seeing shifts only on occasion. I was using 8 cm spacing, went to 4, saw the band more often, and have settled on 2 cm spacing, which picks up everything.

Werner Kohl
January 14, 2019 11:10 pm

The often-discussed paper by Marcott et al. (2013; http://science.sciencemag.org/content/339/6124/1198) is another example of violating the Nyquist-Shannon Sampling Theorem.

I wrote about this (in German) last year in “Kalte Sonne”:
http://diekaltesonne.de/signalanalyse-randbedingungen-fur-die-klimaforschung/

Geoff Sherrington
January 15, 2019 12:07 am

Clyde Spencer January 14, 2019 at 5:25 pm noted “The utility of the Fourier Transform is that ANY varying signal can be decomposed … ”

There are various ways to decompose signals. Just for fun, what do you make of these semivariograms from a geostatistical approach, using time as a proxy for distance (since weather systems move with time) with temperatures taken half-hourly, daily, monthly and yearly; and aggregated in the standard BOM way for recent years, with 1 second glimpses etc.
http://www.geoffstuff.com/semiv_time_bases.xlsx
Fourier gets boring after a while. Not every wave is a sine wave, though it can be decomposed.

Geoff.

Editor
January 15, 2019 2:19 am

Great discussion. The entire subject of signal theory and its application to climate data is often overlooked, particularly in the integration of signals with wide frequency differences.

All time-variant sequences of numbers are signals and subject to the principles of signal theory and processing.

William Ward
Reply to  David Middleton
January 16, 2019 3:13 pm

Thanks David for the validation. Some readers seem to think that the ideas around sampling theory are novel. They are not. They are standard fare everywhere except climate science.

January 15, 2019 2:35 am

It seems to me that the only result here that bears at all on what is actually done with the data – calculating monthly averages – is the comparison of trends. But all it says is that if you calculate the trends two different ways, you get two different results. Of course. But are they significantly different, so that you could deduce something from that? No information is supplied.

So I calculated some. My dataset finishes in Oct 2017 – NOAA is off the air at the moment. I first restricted to USCRN stations in CONUS with at least ten years of data. There were 109. As with Blackville above, the trends were high. The mean for min/max was 13.69 °C/century; the mean for integrated was 14.19 °C/century. That is a difference of -0.5, opposite in sign to that of the 26 stations in this article.

But the sd of the trend differences was 8.62, so the sd of the mean would be expected to be about 0.8 °C/century. If I restrict to 11 years of data, 83 stations qualify, and the difference of means was -0.86 °C/century.
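A quick check of that standard-error arithmetic (a sketch):

import math

sd, n = 8.62, 109
print("expected sd of the mean difference:", round(sd / math.sqrt(n), 2), "degC/century")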

This seems to be just normal random variation. The mean difference is less than one standard error from zero. There is nothing significant there.

Scott W Bennett
Reply to  Nick Stokes
January 15, 2019 6:50 am

It is discussed in the literature that the bias between the true monthly mean temperature (Td0) – defined as the integral of the continuous temperature measurements in a month – and the monthly average of Tmean (Td1) is very large in some places and cannot be ignored. (Brooks, 1921; Conner and Foster, 2008; Jones et al., 1999*)

Wang (2014) compared the multiyear averages of bias between Td1 and Td0 during cold seasons and warm seasons and found that the multi‐year mean bias during cold seasons in arid or semi‐arid regions could be as large as 1 °C.

See my comment above for more detail.

* Jones et al. recognised that there is a difference between the two.

Richard Linsley Hood
January 15, 2019 3:49 am

Strictly, Nyquist applies in both time and space. Thus there are not only the inaccuracies that come from (min + max) / 2 but also the spatial sampling errors from the non-uniform placement of the measuring stations.

As any engineer knows, a suitable anti-alias filter ahead of the sampling engine will reduce the aliasing errors and the simplest way, as has been suggested above, is to add a suitable mass around the thermometer to integrate the signal before sampling.
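The thermal-mass idea is just a first-order low-pass filter; a minimal sketch (the 30-minute time constant is an arbitrary illustration, not a recommended value):

import numpy as np

fs = 288                                  # 5-minute samples per day
dt = 1.0 / fs                             # sample interval, in days
tau = 30.0 / (24 * 60)                    # 30-minute thermal time constant, in days
alpha = dt / (tau + dt)                   # discrete first-order filter coefficient

def thermal_mass(x, alpha):
    # Exponential smoothing: the reading lags the air, like a sensor with added mass.
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

t = np.arange(2 * fs) * dt
fast = np.sin(2 * np.pi * 40 * t)         # a ~36-minute wiggle that would otherwise alias
print("fast-wiggle amplitude before filtering:", round(float(np.ptp(fast)) / 2, 2))
print("fast-wiggle amplitude after filtering :", round(float(np.ptp(thermal_mass(fast, alpha))) / 2, 2))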

In fact this anti-alias signal IS available in lots of stations, just use the below ground thermometer values that some stations provide. Depending on how far below the surface the signal is taken a larger and larger anti-alias filter can then be provided. A daily sampling of that signal will then provide a much more accurate value for tAvg at that station.

Of course this will not help compensate for the well-known spatial errors that come from station placement, horizontally and vertically.

William Ward
Reply to  Richard Linsley Hood
January 16, 2019 3:18 pm

Richard, thanks for the validation and additional comments. Spatial aliasing is extremely important as well. Aliasing of video is a good analogy for that two-dimensional effect. Antarctica is as large as the US plus two Mexicos. There is 1,000 times more thermal energy locked up in the ice below 0 C than the atmosphere holds above 0 C, yet we have only 26 stations there. How many from the US are in the datasets? Percentage-wise, the weighting of the US is massive and that of Antarctica is microscopic.

Paramenter
January 15, 2019 3:53 am

Hey George,

William, it is Tmin/Tmax data. It is not “bad data”.

Of course it’s not. It’s simply that the resolution of those records is not sufficient to draw firm conclusions about, for example, minor variations in the trends. Also, the error due to aliasing adds to the other errors due to imperfect measurements, rounding, uncertainty, and so on. It’s not the sole error associated with historical records, but it must be taken into account as well.

January 15, 2019 3:57 am

Well, someone will tell me the temperature at the start of the next day, Nov. 12, is -1°, instead of the -9° at the start of Nov. 11. That would be a temperature difference of 8° from one day to the next. This is really a nice condition to show … em, whatever.

I have seldom seen such nonsense here at WUWT.

Tom Johnson
January 15, 2019 4:31 am

The authors seem to be missing several important nuances of digital sampling theory. For starters, the Nyquist sample-rate criterion of “more than two times the highest frequency” involves assumptions about the statistical nature of the data, the frequency content of the data, and the mathematical techniques used for processing the data. It precludes determining a precise value for the high and low “peaks”. If you wish to determine a precise value for a peak, a sample rate of 20 times the highest frequency involved will only give you a value within about 1% of the actual peak. Conversely, if the data are random, stationary, and ergodic, using the 2x criterion will indeed give you an accurate value for the mean, PROVIDING SUFFICIENT DATA ARE ANALYZED TO REMOVE RANDOM ERRORS. It will never give you a valid answer for the mean within a single day. In addition, aliasing errors are not lost; they are just shifted in frequency, which can obviously create other errors.
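The 20x/1% figure can be checked directly for a pure sinusoid with worst-case sample phasing, where the two nearest samples straddle the peak by half a sample interval (a sketch):

import numpy as np

for n_per_cycle in (2, 5, 10, 20, 50):
    err = 1.0 - np.cos(np.pi / n_per_cycle)     # worst-case fractional peak error
    print(f"{n_per_cycle:3d} samples/cycle -> worst-case peak error {100 * err:6.2f} %")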

Unfortunately, daily temperature data are neither random, nor stationary, nor ergodic. And, even more important, averaging the data from any single day amongst dozens of others defeats the whole concept of highs and lows within a single day. Thus the whole idea of using Nyquist sampling for statistics on data within a single day is mostly irrelevant. The daily high often occurs mid-afternoon, and the daily low in the early morning. However, that is not always true. Sometimes the daily high occurs at midnight, and it may also be the daily low for the next day. Or vice versa. In addition, the thermal mass of a thermometer bulb is much higher than the thermal mass of the influencing air surrounding it, making the thermometer a somewhat effective first-order anti-aliasing filter. This may be either good or bad, as the thermal-mass values of the temperature transducer can vary widely and are mostly unknown in the historical record.

The historical record is what it is. It is clearly important to make the current data more accurate. Including digital sampling theory in the data recording will certainly improve that. However, the proper method of appending new data to the historical record using different sampling techniques is just another part of climate science that is not yet “settled”.

Paramenter
Reply to  Tom Johnson
January 15, 2019 5:58 am

Hey Tom,

Thus the whole idea of using Nyquist sampling for statistics on data within a single day is mostly irrelevant.

My understanding is that the problems with Nyquist start even before you can do any statistics – at the data-acquisition step. The daily midrange does not allow you to replicate the daily signal, nowhere near. The usual response to this is that we don’t really need that: crude daily averaging is all we need for monthly and yearly averages, so we can happily accept errors due to undersampling of the daily signal. Fine, that may be perfectly OK for all practical applications. But, by the way, using those crude averages we also want to do fine analysis with precision down to thousandths of a degree C. Well, in that case you need a high-accuracy signal. You cannot have it both ways. For me, that’s the crux of the problem.

Problems due to spatial and, to a lesser extent, temporal aliasing of the temperature records are discussed in the mainstream literature, so it looks like aliasing of such records is relevant.

Tom Johnson
Reply to  Paramenter
January 15, 2019 6:55 am

I agree with all you said. The point I was trying to make about the Nyquist criterion being irrelevant is that if you need a sample rate 20 times the highest frequency of interest to accurately capture a peak, Nyquist at twice the highest frequency is clearly not adequate.

It’s always best to apply the most up-to-date techniques when acquiring data, including sampling and anti-aliasing. It’s not stated whether USCRN does or does not do that with their sampling and down-sampling techniques. For example, it’s best to use constant-delay anti-aliasing filters (such as Bessel) if you wish to capture peaks (like daily highs and lows), but higher roll-off filters (such as Butterworth) for statistics such as means. It’s not stated which, if any, USCRN recommends. It’s never good to assume that your transducer is a good anti-aliasing filter, since other noise might be present in the data beyond the actual signal being measured. I always go by the old adage: “It’s easy to read a thermometer; it’s quite difficult to measure temperature.”

Bright Red
Reply to  Tom Johnson
January 15, 2019 1:59 pm

“The point I was trying to make about the Nyquist criterion being irrelevant is that if you need a sample rate 20 times the highest frequency of interest to accurately capture a peak, Nyquist at twice the highest frequency is clearly not adequate.”

Sorry Tom but that is not correct. Meeting Nyquist allows the waveform to be accurately reconstructed from which the min and max values can be determined. You do not need to physically sample them.
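A sketch of that reconstruction point (a made-up band-limited signal with nothing above 3 cycles/day, sampled at 8 samples/day; the finite record makes the sinc interpolation approximate, so the peak is read off mid-record):

import numpy as np

fs = 8.0                                   # samples per day, comfortably above 2 x 3 cycles/day
T = 1.0 / fs
n = np.arange(-400, 400)                   # 100 days of samples, centered on the day of interest

def signal(t):                             # components at 1, 2 and 3 cycles/day only
    return (10 + 6 * np.sin(2 * np.pi * 1 * t + 0.4)
               + 2 * np.sin(2 * np.pi * 2 * t + 1.1)
               + 1 * np.sin(2 * np.pi * 3 * t + 2.0))

samples = signal(n * T)

t_fine = np.linspace(-0.5, 0.5, 2001)      # reconstruct the central day on a fine grid
recon = np.array([np.sum(samples * np.sinc((t - n * T) / T)) for t in t_fine])

in_day = (n * T >= -0.5) & (n * T <= 0.5)
print(f"max of the raw samples      : {samples[in_day].max():.3f}")   # misses the true peak
print(f"max via sinc reconstruction : {recon.max():.3f}")             # recovers it closely
print(f"true max (dense evaluation) : {signal(t_fine).max():.3f}")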

William Ward
Reply to  Bright Red
January 15, 2019 10:25 pm

That’s right Bright Red – thanks for confirming it.

Reply to  Paramenter
January 15, 2019 7:57 pm

You’re banging close to the problem.

Measurement error in historical readings is also important in determining whether you can really do arithmetic on the data and come up with accuracies out to thousandths of a degree.

Here is a simple thought experiment.

1. I measure temperature each day for 100 days and each reading is 50 degrees.
2. Each reading is accurate to +- 0.5 degrees.
3. Each reading is rounded to the nearest degree.

A. What is the mean?
B. What is the uncertainty of the mean?
C. Do you know the probability of each 0.1 degree between 49.5 and 50.5 for each day?
D. If you don’t know the probability distribution, then each tenth or even one hundredth of a degree is just as likely as another.

What does this do to the accuracy of the mean? The mean is 50 +- 0.5 degrees. You can do all the averaging and statistics you want and try using the uncertainty of the mean to convince folks that your arithmetic lets you get better and better accuracy, but you’re only kidding yourself. Draw it out on a graph. Each and every day, you are going to have three points: the recorded temp, the recorded temp plus 0.5, and the recorded temp minus 0.5. Since you have no way to know what the real temp was on each day, you are going to end up with three lines, one at 50.5, one at 50.0 (the mean), and one at 49.5.

The only thing you’ll know for sure is that the average temp is somewhere between those lines.
You can average for months, years, decades, or centuries and you just won’t be able to tell anyone that you know the temperature any more accurately than +- 0.5 degrees.

You can carry out arithmetic to the ten thousandths place but all you’ll be doing is calculating noise. Know why? Because you are measuring different things each and every time you make a measurement. You are not measuring the same thing multiple times.
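
A minimal Python sketch of this worst-case reasoning, using the hypothetical numbers from the thought experiment above (100 readings of 50, each known only to +/- 0.5): if no error distribution is assumed, propagating the per-reading bounds through the average leaves the same +/- 0.5 interval on the mean.

import numpy as np

readings = np.full(100, 50.0)   # 100 days, each recorded as 50 (nearest degree)
half_width = 0.5                # each reading only known to +/- 0.5

mean_recorded = readings.mean()
# Without a known error distribution, only the bounds carry through the average:
mean_low  = (readings - half_width).mean()
mean_high = (readings + half_width).mean()

print(f"mean = {mean_recorded:.1f}, bounds = [{mean_low:.1f}, {mean_high:.1f}]")
# -> mean = 50.0, bounds = [49.5, 50.5]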

Frank
Reply to  Tom Johnson
January 16, 2019 10:01 pm

Tom: Given that daily temperature is driven by the sun, the only significant frequencies constantly present in continuous temperature data are going to have periods of 24 hours and longer (out to 365 days).

The diurnal cycle in SSTs is so small that we don’t even pay attention to the minimum or maximum temperature.

Reply to  Frank
January 16, 2019 11:02 pm

Frank, that’s simply not true. I posted a periodogram twice showing that there are significant frequency components at both 12-hour and 8-hour periods. Here it is again.

w.
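
For readers who want to check for such components themselves, here is a sketch of how a periodogram can be computed from an hourly series in Python with scipy; the file name is a placeholder, not a real data set.

import numpy as np
from scipy.signal import periodogram

# Placeholder: a 1-D array of hourly temperatures (e.g. several years of hourly station data)
hourly_temps = np.loadtxt("hourly_temps.csv")

f, pxx = periodogram(hourly_temps - hourly_temps.mean(), fs=24.0)   # fs in cycles/day
for period_h in (24, 12, 8):
    idx = np.argmin(np.abs(f - 24.0 / period_h))                    # nearest frequency bin
    print(f"{period_h}-hour component: power ~ {pxx[idx]:.3g}")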

Steve O
January 15, 2019 4:45 am

The average of two readings gives a metric that is different from the average of 288 readings. There is a pattern to how a day warms up and cools down, and two readings (rounded to the nearest whole number) mean you don’t know much about the actual temperature on any particular day. However, nobody really cares about the temperature of any particular day. What if we are measuring the average temperature in a year? If 288 readings in a day can tell you something useful about the average temperature in a day, why wouldn’t 365 readings tell you something about the average temperature in a year?

Also, I can accept that the “actual mean temperature” as measured with 288 measurements is different from the employed metric of taking two readings, but as long as that difference does not change over time, I don’t see why you can’t use the employed metric to measure changes in the average temperature from one decade to the next.

Steve O
Reply to  Steve O
January 15, 2019 8:14 am

Put another way:

Suppose that instead of taking a temperature measurement 288 times a day, I divided the day into 365 intervals. At each interval I took the high and the low for the interval and divided by two. From this data, could I not know the average temperature for the day?

Clyde Spencer
Reply to  Steve O
January 15, 2019 12:49 pm

Steve,
You asked, “.. but as long as that difference does not change over time, I don’t see why you can’t use the employed metric to measure changes in the average temperature from one decade to the next.”

I think that there are two answers to your question. First, if you treat the averages of the mid-range values as an index, then you can say something about trends. But, the unanswered question is, “Just what is it that you can say?”

Second, as Ward demonstrated, the mid-range values can be and often are very different (+ & -) from the robust metric, the mean. That means that the error bars for the monthly and annual means are much larger than if a true daily mean were used. That larger error should be acknowledged, and that implies reducing the claimed precision of the values making up long-term trends.

We only have the historical data that we have, but we need to be honest about the accuracy and precision of the analysis of that data. A clue to that is given by a plot of the annual frequency distribution of global temperatures: what is shown is a skewed distribution where the range implies a standard deviation of tens of degrees. That does not support claims of an annual mean temperature known to hundredths (NOAA) or thousandths (NASA) of a degree Celsius.

Steve O
Reply to  Clyde Spencer
January 15, 2019 2:49 pm

Clyde, thank you. It’s an important point that the min plus max and divide by two can only provide an index. Back when they started recording the data I imagine they never realized its future use. I’m still trying to get a mental grasp on how important the differences are, since large sample sizes have a way of cutting sampling errors down to size.

But you’re right that we should not get too excited about hundredths of a degree. At that level, there are other errors to worry about.

Clyde Spencer
Reply to  Steve O
January 16, 2019 1:42 pm

Steve O
You said, “At that level, there are other errors to worry about.”

Indeed! When NOAA reports an annual global anomaly to be so many hundredths above the preceding year, or NASA reports it to thousandths, the reader assumes that it is highly accurate and known precisely. Whereas, neither may be the case! But, when the changes are at that order of magnitude, the only way it can be made to appear to be a crisis is to report the very small numbers, without error bars, and hope that the readers believe that the numbers are reliable.

William Ward
Reply to  Clyde Spencer
January 16, 2019 3:28 pm

Clyde, my ears thank you for the music!

Reply to  Clyde Spencer
January 17, 2019 8:37 am

Without error bars is the key!

Clyde Spencer
Reply to  Clyde Spencer
January 18, 2019 9:32 am

William
Happy to be your muse! 🙂 Would it were that alarmists had the same taste in music.

Solomon Green
January 15, 2019 6:12 am

When I suggested on WUWT, some years ago, that for a continuous function such as temperature, (Tmax + Tmin)/2 did not equal Tmean, and that with modern instrumentation we could get a more accurate figure for Tmean, Steven Mosher put me in my place by writing that everyone knew that, but since these estimates had always been used in the past, climate scientists must continue to use them for comparison purposes.
I am looking forward to his contribution on this thread.

Steve O
Reply to  Solomon Green
January 15, 2019 8:20 am

I suspect his point would be the same. The two methods will result in different values, but as long as the difference is constant we can use either method to compare temperature trends over time.

Yes, it’s an assumption that the difference has remained the same over time and there is a risk to it. The best we can do now is to compare the difference as it is today with the difference as when continual measurements first started.

Editor
January 15, 2019 6:59 am

William ==> Well done!

Scott W Bennett
January 15, 2019 7:02 am

I guess I may have missed something as Tmean seems to me to be an absurd way to measure daily average temperature!

Someone help me here, are we really saying that adding the max and min and dividing by two is meaningful in any way other than to provide a daily range? I do see that it might also serve the “comparative” purpose of setting normals though.

Surely this is a sampling problem alone and it is unnecessary to invoke Nyquist.

Tmean = (Tmax+Tmin)/2 is problematic to say the least.

It is not simply a matter of not knowing the time at which each happened (the “clock”); the duration is also of vast importance!

It’s not hard to imagine a simple list of samples that make Tmean meaningless!

We’ve had days here where the min was low, say -8 C, but it remained at 13 C all day before reaching 14 C briefly, then within an hour dropped back to 8 C. The “true” mean for 12 observations is 10.9 C. I’m too nervous to calculate the Tmean because it seems so foolish; I would have sworn the actual mean was 13, because the low and high only occurred over an hour each (a total of 2 samples out of 12) while it was 13 C for 9 hours!

These are my raw thoughts, I’ve given a more considered comment from the literature above! 😉

Hugs
Reply to  Scott W Bennett
January 15, 2019 11:54 am

(Tmin+Tmax)/2 is so banal that it comes as a big surprise to people who didn’t know about it.

It is not meaningless though, when you look at a large set of unbiased data to find trends. But how do we know the set is unbiased?

We don’t. New biases keep being found; there is an endless number of biases in the data, all very difficult to remove.

unka
January 15, 2019 7:57 am

” Nyquist-Shannon Sampling Theorem tells us that we must sample a signal at a rate that is at least 2x the highest frequency component of the signal”. – this is true, but the author forgot to mention that it applies to signal reconstruction. To estimate the average of the signal one can undersample the signal and still get good estimates of the average. This obviously can be quantified. The author could have done a little bit of simulation that would show him that he is raising a red herring here.
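
A sketch of the kind of simulation suggested here (Python; the asymmetric daily cycle is synthetic, so the numbers illustrate the idea rather than any real station): on this example a handful of evenly spaced sub-Nyquist samples average out to the full-resolution daily mean, while the midrange (Tmax+Tmin)/2 carries a shape-dependent offset.

import numpy as np

t = np.arange(288) / 288.0                       # one day at 5-minute resolution
# Synthetic, deliberately asymmetric daily cycle (harmonics at 1, 2 and 3 cycles/day)
temp = 10 + 8*np.sin(2*np.pi*t) + 3*np.sin(4*np.pi*t + 1.0) + 1*np.sin(6*np.pi*t + 0.5)

true_mean   = temp.mean()                        # "288-sample" daily mean
coarse_mean = temp[::72].mean()                  # 4 evenly spaced samples/day (sub-Nyquist)
midrange    = (temp.max() + temp.min()) / 2      # historical (Tmax+Tmin)/2

print(f"288-sample mean {true_mean:.2f}, 4-sample mean {coarse_mean:.2f}, midrange {midrange:.2f}")

Whether real, noisier temperature signals behave this nicely is exactly what is in dispute in this thread.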

January 15, 2019 8:11 am

With a degree in Math and Engineering, I think a little differently. I was always taught “Work is the integral of the area under the curve.” Heat added or removed is WORK when you consider that it is actually joules. The average of Tmax and Tmin is NOT the “area under the curve.” It is not even a close approximation of the “area under the curve.” Thus, IMHO, if you want a true representation of the heat added (work – joules), the Nyquist Theorem applies. Otherwise aliasing of the data could produce trends that are not there. I also believe that putting this into a digital computer [think computer model] amplifies the problem. I have seen this [aliasing] happen with computer models I developed for accident analysis of nuclear power plants.

Reply to  Usurbrain
January 15, 2019 9:04 am

PS, How much of the lost heat is lost in the use of (Tmax + Tmin)/2 rather than “hiding in the ocean?”

Gums
Reply to  Usurbrain
January 15, 2019 12:58 pm

Thank you Brain!
My limited math skills did not allow me to express the underlying aspect of “average temperature” and the Gorebull warming numbers that scare many each month they are published. And there are other things to consider, huh?
– hysteresis for the hot adobe to cool at night and stay cool for early the next day
– radiative heat loss as I see up at 8,000 feet on a cool night, and I guess the desert folks see it more often

Gums sends…

Joe Campbell
January 15, 2019 11:40 am

To William Ward – Great article. Thanks…Joe

William Ward
Reply to  Joe Campbell
January 15, 2019 10:19 pm

Thank you very much Joe.

January 15, 2019 11:52 am

Ward
Sampling at a rate less than [the Nyquist limit] introduces aliasing error into our measurement.

I believe this is incorrect. My understanding of Fourier analysis is that performing a DFT on any set of measurements (i.e. thermometer readings) has no effect on the sample points (“measurements”) themselves, nor do I see any way it could have any effect on any subsequent measurements or standard statistical operations performed on those sets of measurements, up to _spectral analysis_.

Yes, “aliasing” can occur if you perform a DFT on samples of a time series of a “signal” containing frequencies above the Nyquist limit. These higher frequencies will be folded back, overlapping the band-limited domain, starting at the lowest frequency, distorting the resulting spectrum (in the sense that the higher, out-of-limit frequencies will be displayed at much lower frequencies). And if you perform Shannon interpolation (aka sinc interpolation) on an undersampled series, the resulting interpolation will display the overlapped spectra.

However, Parseval’s theorem will still hold, the total ‘energy’ (sum of squared absolute values) in the frequency domain will still be the same as the time domain.

This is because Parseval’s theorem is a _mathematical identity_ on trig functions, requiring no physical interpretation of what the signal represents. Also, the DFT is invertible, so IDFT(DFT(x)) = x. That means you get the same points back that you started with, even if undersampled or completely random.

If you don’t believe that, try this little experiment, which verifies that the squared energy of a random time series is preserved, without any regard for Nyquist limits or the physical meaning of the values.

N=4096
x = rand(N,1) + 1j * rand(N,1);
sum(abs(x).^2)
sum(abs(fft(x)).^2)/N

You should see output similar to this:
N = 4096
ans = 2783.0
ans = 2783.0

I say “similar”, because the last two numbers will not be exactly the same as above. This is because it is a randomly generated complex signal. But both values will be the same, showing that Parseval’s theorem holds even for “undersampled” data.

Furthermore, since the measurements themselves are not distorted by undersampling, any standard statistical operations performed on the measured values (mean, variance etc) will not change.

The only “trouble” you could get into is “believing” the frequency of the aliased signals. These signals are “real” in the sense of energy content, but their frequencies are just wrapped around the band. In fact, if there are no other frequencies in the overlap region, the aliasing can be fixed merely by unwrapping the aliased frequencies.

So, what am I missing here? Show me an example of actual measured temperatures (or computed means of same) which are somehow undersampled.

Reply to  Johanus
January 15, 2019 12:00 pm

… oops, I meant to say “Show me an example of actual measured temperatures (or computed means of same) which are erroneous because they are undersampled.”

Paramenter
January 15, 2019 11:58 am

Hey Jim:

I’m working on an essay about uncertainty of the averages and something I notice here is that this paper doesn’t address the errors in measurements, nor should it necessarily do so.

I reckon this paper concentrates on the errors in the temperature records due to aliasing. Those errors obviously add to the other known errors you mention, such as errors in measurement or rounding. Looking forward to your results!

HaM
January 15, 2019 2:12 pm

Lots of straw today. Nyquist is best understood as a statement about perfect reconstruction of the original signal. Sampling below the Nyquist frequency guarantees that you cannot. Careful sampling at or above the Nyquist frequency in theory allows perfect reconstruction, but in practice cannot – consider, for instance, the very high bandwidth noise in your electronic measurement device; you must exceed the Nyquist rate for that noise if you hope to measure and suppress the noise effects.
Nyquist is nearly silent on the impact on your results – consider sampling the signal that actually is the mean daily temperature once per day. It’s easy to argue that this value varies from day to day (global warming/cooling both require this to be true). Despite this, if you measure it once per day, you will have a perfect representation of the mean daily temperature. Of course, to do this you need to first agree that there is such a thing as the mean daily temperature and that it takes a single, discrete value each day. Suddenly, Nyquist doesn’t even apply (it covers only continuous signals, not discrete ones – non-continuous signals have infinite bandwidth).
More, consider a large number of samples of a sum of zero-mean stationary random processes, with the sampling well below the Nyquist cutoff for some of the processes. Average the results. I think the result will be a stationary, zero-mean random process. In other words, you will get the correct result even though you ignored Nyquist. That doesn’t mean Nyquist was wrong, it just means you can’t predict the values you didn’t sample – in other words, no perfect reconstruction. Ignoring Nyquist doesn’t mean the average will be wrong, just that it could be and that time-dependent (as opposed to measured) estimates likely will be.

As several people point out mean != (max + min)/2. A fundamental problem is that mean daily temperature is not captured by the historical data. Worse, it is not a well defined quantity, and the vagueness of the definition has always seemed to be exploited to advance the climate change cause.

Dr. S. Jeevananda Reddy
January 15, 2019 4:15 pm

The differences also can be seen in rainfall. Take the manual raingauge data [the total for a 24-hour period] and the hydrograph cumulative sum [second by second, minute by minute, hour by hour, two-hourly, etc.]. They present differences. Why? It depends upon several factors; wind is the main cause of the differences.

Areal sums are more complicated. In the late 70s, as a scientist at ICRISAT, I installed 54 raingauges around a 3,000 ha farm. On one day, I noticed the rainfall varied between 5 and 60 mm. The average was different from the met station rainfall.

Dr. S. Jeevananda Reddy

Dr. S. Jeevananda Reddy
January 15, 2019 5:46 pm

Heatwaves and coldwaves are expressed by maximum and minimum temperature, not by averages.

Human comfort is expressed by averages of temperature along with the relative humidity and wind speed at a place; hourly temperatures, hourly wind speed and hourly relative humidity are used.

0830 and 1730 IST observations [standard met observations at 3 & 12 GMT]: dry- and wet-bulb temperatures are used to estimate relative humidity.

00 and 12 GMT: upper-air balloon observations are made for computation of water vapour in the atmosphere.

All these were put forth by international meteorologists after discussions based on their local experiences, not by lay people — you scratch my back and I scratch your back.

Lowest to highest: the probability curve defines the values at different probability levels. When the mean of a data set coincides with the median [the 50% probability value], the data is said to follow a normal distribution (a bell-shaped pattern). If not, the distribution is skewed and the probability estimates are biased; to correct this, the incomplete gamma distribution is used to get unbiased estimates. For the mean of a second-by-second (or sub-second) data set, check whether it follows a normal or a skewed distribution.

Dr. S. Jeevananda Reddy

Tom Abbott
Reply to  Dr. S. Jeevananda Reddy
January 16, 2019 11:05 am

“Heatwaves and coldwaves are expressed by maximum and minimum temperature and not by average”

Yes, and isn’t that what the CAGW claims are all about? Alarmists say the 21st century’s temperatures are warmer than any time in the past, but that’s not what you see if you look at the record of maximum temperatures only (Tmax), where you see that the 1930’s were as warm as or warmer than current temperatures.

Here’s a Tmax chart of the U.S.:

[image: Tmax chart of the U.S.]

Let’s compare actual high temperatures to actual high temperatures if we are talking about “hotter and hotter”.

Editor
January 15, 2019 9:40 pm

In the research for this discussion I discovered an interesting fact. This is that the average hourly temperature changes for a wide variety of temperature stations have a very similar shape. I looked at five years of hourly records for the following temperature stations:

Vancouver
Portland
San Francisco
Seattle
Los Angeles
San Diego
Las Vegas
Phoenix
Albuquerque
Denver
San Antonio
Dallas
Houston
Kansas City
Minneapolis
Saint Louis
Chicago
Nashville
Indianapolis
Atlanta
Detroit
Jacksonville
Charlotte
Miami
Pittsburgh
Toronto
Philadelphia
New York
Montreal
Boston

Here is the average day at each of the locations:

As you can see … not much difference. It does make it obvious why (max+min)/2 is NOT an unbiased estimator of the true mean …

w.

Dr. Strangelove
Reply to  Willis Eschenbach
January 16, 2019 5:46 am

Since the error is consistent at around +0.1 sigma, you can just adjust the MIN-MAX mean to get the true mean. It does not seem to be a random error.

Reply to  Willis Eschenbach
January 16, 2019 6:29 am

Willis,
Just a WAG, but it looks to me like the small difference between (max+min)/2 and the true mean could be explained by the wave shape. There appears to be a longer slope at the bottom (colder), and a smoother curve at the top (warmer).

Clyde Spencer
Reply to  Usurbrain
January 18, 2019 9:52 am

Usurbrain and Strangelove
I think that what we are looking at is an artifact of skewness in the data. All Willis’ examples are for mid-latitude cities in the Northern Hemisphere. I’m speculating that the error will be different for different parts of the world and different seasons. While the error may dance around the mean, I’m not sure that can be shown rigorously to be the case. At the very least, we know that the mid-range value is a biased estimator of the mean and should be regarded with suspicion.

Frank
Reply to  Willis Eschenbach
January 16, 2019 10:06 pm

Willis: Any sign of the hypothetical higher frequency signals present in this data?

Clyde Spencer
Reply to  Willis Eschenbach
January 18, 2019 9:44 am

Willis
Normally, I would say that the difference is negligible and can be ignored, at least for first order estimates. However, the practice of alarmists quoting average annual global anomaly differences to hundredths or thousandths of a degree, in a world where the temperature range may be as great as 300 deg F, demonstrates a need to be precise and to look at the unstated assumptions and the names assigned to the measurements.

Bright Red
January 15, 2019 11:26 pm

As an electronic engineer (Mostly lurking on the sidelines at WUWT) I have to say I am disappointed at the general level of understanding of Nyquist and signal sampling and processing in general and I thank William for his efforts to bring this important topic to the attention of WUWT readers.

Slightly off topic, but I would like to add that another source of error is Electro-Magnetic Interference (EMI). In today’s modern world there are many sources of RF interference, man-made and natural. I would question the immunity of the measuring equipment and its suitability for the task, as in my experience laboratory instruments in particular rate very poorly in this area. Also, does anybody know if site surveys are done to determine the background RF levels at the measurement sites?

William Ward
Reply to  Bright Red
January 16, 2019 3:46 pm

Hello Bright Red – It is good to have a fellow electronics engineer here! Thanks for the concept validation. I was hoping to bring something new to this field that has benefitted the rest of the technology world for decades. Some seem to appreciate it. But I’m disappointed at some who are so fast to reaffirm what they already know rather than learn something new. There are many, many things that could be studied that currently are not. You mention RFI/EMI. A good one. I think also about the entire data acquisition circuit design: power supply accuracy, common-mode rejection, ripple, DC offset, drift, linearity, thermal linearity, etc., etc. When we are talking about a few hundredths of a degree C/decade then there are many tens of things that could be the cause of that. How much of the trends and records come from the instruments themselves? But climate science has found the miracle cure: if you get enough of the wrong stuff, it magically becomes right.

Editor
January 16, 2019 12:06 am

William Ward January 15, 2019 at 10:09 pm

Willis,

At some point posts no longer offer the option to reply directly below, so I’m not sure where this is going to land in-line. This is in reply to your post where you say

“William, you did “write a book” … but nowhere in it did you answer my questions. So I’ll ask them again:”

I sent you a few more posts that do address your questions. I’ll give you time to read those and respond. Did you read my “book”? Sorry, it seems like we need to find out where we disconnect on the fundamentals. Some of your questions are answered in those posts. For your new questions…

William, thanks for all of your answers. I’m starting a new thread with this down at the bottom of the page. I believe I’ve read all your posts, and yet questions remain unanswered. For example, you seem to be saying that we need to sample every 5 minutes to get a reasonably accurate annual average … which I think makes no sense.

Willis said:

“Next, you seem to be overlooking the fact that Nyquist gives the limit for COMPLETELY RECONSTRUCTING a signal … but that’s not what we’re doing. We are just trying to determine a reasonably precise average. It doesn’t even have to be all that accurate, because we’re interested in trends more than the exact value of the average.”

Reply: Define “reasonably precise”. And do you mean accurate???

No, I mean precise. As to “reasonably precise”, we want to determine the decadal trend in temperature. I took a look at the Boston hourly temperature for five years. Short period, so wide range in trends from errors. Running a Monte Carlo analysis by randomly adding the monthly RMS error for Boston (0.53°C for hourly vs max/min), I get an average of 1000 trends that have an average that is the same as the true trend … with a standard deviation of the trend of 0.3°/year. Since this is a 5 year sample, a sample of e.g 50 years would have much smaller errors.

For example. The trend on 75 years of Anchorage temperatures is 0.218°C/decade. If we add three times the Boston error, a random error with an SD of 1.5°C, a larger random error than any of the 30 US cities I’ve studied, to every month of that 75 years of Anchorage data, a thousand random instances give an average trend of 0.218°C ± 0.01°C/decade … a meaninglessly small trend error, in other words.
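
A sketch of this kind of Monte Carlo trend test (Python; the 0.218 C/decade trend is the Anchorage figure quoted above, but the series and the noise model are stand-ins, so the resulting spread will not match Willis’s numbers exactly):

import numpy as np

rng = np.random.default_rng(42)
years = 75
months = years * 12
t = np.arange(months) / 12.0                     # time in years

base = 0.0218 * t                                # stand-in monthly series with a 0.218 C/decade trend

def decadal_trend(y):
    return np.polyfit(t, y, 1)[0] * 10           # fitted slope, converted to C/decade

# Add random monthly noise (sd = 1.5 C, an assumption) and refit the trend 1000 times
trends = [decadal_trend(base + rng.normal(0, 1.5, months)) for _ in range(1000)]
print(f"mean trend {np.mean(trends):.3f} C/decade, sd {np.std(trends):.3f}")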

Now, I can’t see the USCRN sites because of the shutdown, but I don’t find that the errors from the two-sample method are a deal-breaker by any means.

I say trend errors of 0.24C/decade are not accurate. Daily means with +4C error are not accurate. Monthly mean errors of 1.5C are not accurate. You can’t determine the actual average with 2-samples/day. You are welcome to add the max and min and divide by 2 but this won’t give you an accurate result in all or most cases. Did you look at my paper?? What about the table in Fig 7?? It’s not about COMPLETELY RECONSTRUCTING the signal. Knowing you CAN reconstruct it tells you your samples actually mean something relative to your original signal.

We agree that (min+max)/2 gives poor answers … but it has nothing to do with Nyquist.

“Chaotic” issue: addressed separately. Why the demand to know the exact number needed before you let in the concept?

I fear I don’t understand this. YOU said we had to sample the temperature at a frequency 2X the “highest frequency”. I merely asked what the “highest frequency” is for chaotic temperature data. And if, as it appears, you don’t know the answer … then how can you claim that Nyquist is “violated”?

As to “letting in the concept”, I’ve known about the Nyquist limit for decades. What am I supposed to be “letting in”?

Research can be done to determine where the means converge to a specified limit. For example, once a higher sample rate doesn’t give you more than 0.1C or 0.05C or 0.01C difference you can stop.

Huh? Did you just say that? We can throw out Nyquist and just look at shorter periods until our results are good enough for the purpose? That’s what I’ve been saying. Did you look at my graph? The difference in daily mean between hourly samples and every five minutes is 0.04°C … so I’d say hourly data is totally adequate, and our work here is done …

The concept is what is important here. 2-samples are not enough to support the results that generate the alarm we see. If the magnitude of error I show from comparing 288-samples to 2-samples is not important then fine. Just stop bothering me with daily headlines that the sky is falling. (Not you, Willis, those who do this.)

I fear you haven’t shown that the error makes a significant difference. Quoting extreme errors doesn’t help. On any typical dataset like the Anchorage dataset, random errors don’t significantly alter the trend. Yes, the errors are there … but they appear to be random. They are assuredly Gaussian normal. I took a look at 1882 days worth of daily errors for each of 30 cities. In all thirty cases, the Shapiro-Wilk normality test says the 1882 errors are Gaussian normal. Here are the daily errors for 30 US cities …

As you can see, the errors in general are not large, and the standard deviation of the errors is not large. So we’ve determined that the two-sample method sucks … but we don’t know that it is a difference that actually makes a difference.

You said:

“Finally, you still haven’t grasped the nettle—the problem with (max+min)/2 has NOTHING to do with Nyquist. Pick any chaotic signal. If you want to, filter out the high frequencies as you’d do for a true analysis. Sample it once every millisecond. Then pick any interval, take the highest and lowest values of the interval, and average them.

Will you get the mean of the interval? NO, and I don’t care if you sample it every nanosecond. The problem is NOT with the sampling rate. It’s with the procedure—(max+min)/2 is a lousy and biased estimator of the true mean, REGARDLESS of the sampling rate, above Nyquist or not.”

My reply: I’m confused by what you write… Are we even arguing here or agreeing… I can’t tell… I think we agree that the historical method is not good for accurate true mean calculation. But how do you prove that without sampling theory? I’m not suggesting we continue with (Tmax+Tmin)/2 so I’m not sure what you are saying. I’m saying do it like USCRN does and use all of the samples.

We agree that the historical method is a poor estimator of the true mean. However, we do NOT need either Nyquist or sampling theory to prove that. We just need to look at the difference between 5-minute, hourly, and two-sample results to know that two-sample results are the weak one.

And Nyquist doesn’t help, which is where I started this discussion. I read your paper and said you were right for the wrong reasons—right that (max+min)/2 is a poor estimator, wrong that it has anything at all to do with Nyquist. You can’t even tell us what the Nyquist limit is for temperature data … so what use is Nyquist?

Finally, my thanks to you for putting up a most fascinating post … always more to learn.

w.

Bright Red
Reply to  Willis Eschenbach
January 16, 2019 2:06 am

Hi Willis, I will have a go at the upper-frequency-limit-for-Nyquist question when measuring temperature, from a technical point of view. In the real world the temperature transducer itself acts as a low-pass filter, although not a particularly good one. So if you want all the information that the transducer can provide, then its specification will determine the Nyquist sampling rate, which will be a bit higher than 2x the transducer’s specified bandwidth unless additional analog filtering is provided before sampling. From this position of having met Nyquist you can of course choose to throw away some of the information that the temperature transducer is capable of providing by further digital processing; how much will be determined by what you want from the data, which of course could be different from someone else’s requirements. Or you can implement additional low-pass analog filtering of the transducer signal, at a frequency that again you have determined will give you what you want from the data, prior to sampling it, and in doing so throw away some information. Note that none of the above violates Nyquist, as doing so would introduce the issues noted by William.
It seems that many years ago it was decided that two samples a day was adequate. While that was a reasonable and practical decision at the time, given the technology available, it was a decision that deliberately discarded information that could be useful now or in the future. So unless you or someone else can come up with the frequency/sample rate that will cover all future needs, I suggest we do the best we can now, which means getting everything out of the transducers we have available while properly acknowledging the limitations of the min/max recordings, because one thing is for sure: if you have aliasing, you have non-recoverable issues. In the end we may not need the higher sample rate data now, but better to have it just in case future generations do find it useful, given that the cost to do so is minimal. /sarc Now where is that piece of wet string I had lying around.

William Ward
Reply to  Willis Eschenbach
January 16, 2019 4:24 pm

Hi Willis,

Thanks for your words of appreciation and for your many contributions here on this. I enjoyed discussing this with you. We can probably leave the remaining open items in the “agree-to-disagree” category. If there is something that you really want to pursue I’ll join you, but I think we are good for now. Agree? I think well have more opportunity to interact on this and other good things in the near future. Thanks again!

WW

Geoff Sherrington
January 16, 2019 1:06 am

Two years ago I wrote –
“People who use the historic land records of temperature, with a century or more based almost entirely on Tmax and Tmin measured by LIG thermometers in shelters, seem not to appreciate that they are not presented with a temperature that reflects the thermodynamic state of a weather site, but with a special temperature – like the daily maximum – that is set by a combination of competing factors.
Not all of these factors are climate related. Few of them can ever be reconstructed.
So it has to be said that the historic Tmax and Tmin, the backbones of land reconstructions, suffer from large and unrecoverable errors that will often make them unfit for purpose when purpose means reconstructing past temperatures for inputs into models of climate.
Tmax, for example, arises when the temperature adjacent to the thermometer switches from increasing to decreasing. The increasing component involves at least some of these:- incoming insolation as modified by the screen around the thermometer; convection of air outside and inside the screen allowing exposure to hot parcels; such convection as modified from time to time by acts like asphalt paving and grass cutting, changing the effective thermometer height above ground; radiation from the surroundings that penetrates necessary slots in the screen housing; radiation from new buildings if they are built.
On the other side of the ledger, the Tmin is set when the above factors and probably more are overcome by:- reduced insolation as the sun angle lowers; reduced radiation from clouds; reduction of radiation by shade from vegetation, if present; reduction of convective load by rainfall, if it happens; evaporative cooling of shelter and surroundings, if it is rained on at critical times.
It does not seem possible to model the direction and magnitude of this variety of effects, some of which need metadata that were never captured and cannot now be replicated. Some of these effects are one-side biased, others have some possibility of cancelling of positives against negatives, but not greatly. The factors quoted here are in general not amenable to treatment by homogenization methods currently popular. Homogenization applies more to other problems, such as rounding errors from F to C, thermometer calibration and reading errors, site shifts with measured overlap effects, changes to shelter paintwork, etc.
The central point is that Tmax is not representative of the site temperature as would be more the case if a synthetic black body radiator was custom designed to record temperatures at very fast intervals, to integrate heat flow over a day for a daily record with a maximum. Tmax is a special reading with its own information content; and that content can be affected by factors like the hot exhaust gas of a passing car. The Tmax that we have might not even reflect some or all of the UHI effect because UHI will generally happen at times of day that are not at Tmax time. And, given that the timing of Tmax can be set more by incidental than fundamental mechanisms, like time of cloud cover, corrections like TOBs for Time of Observation have no great meaning.
It seems that it is now traditional science, perceived wisdom, to ignore effects like these and to press on with the excuse that it is imperfect but it is all that we have.
The more serious point is that Tmax and Tmin are unfit for purpose and should not be used.”
Geoff.

Steve O
Reply to  Geoff Sherrington
January 16, 2019 10:26 am

The instruments and the methodology provide a metric that serves as a proxy. And it is important to understand that the measurements are just that — metrics serving as a proxy. The points you raise are valid, but I don’t believe they justify a conclusion that the metric is unfit.

Yes there are conditions that affect the readings artificially. If those conditions are relatively constant, then how do they affect the utility of the metric? If a parking lot gets built next to the instruments, and conditions change, then that’s a different story.

Reply to  Steve O
January 16, 2019 11:19 am

You may make the decision to use this data, but then you must also acknowledge the large error range that comes with doing so. You can’t dismiss the errors by saying that the law of large numbers or the calculation of the standard error of the mean makes them go away!

The error range that you have will undoubtedly be larger than any trend you find, meaning that you have no conclusion and that the data was really unfit for purpose.

Geoff Sherrington
January 16, 2019 1:49 am

Turning to the topic of the uses to which T data are put, one that has concerned me for years is this correlogram and similar ones from other studies.
http://www.geoffstuff.com/BEST_correlation.jpg
It is trite to remind readers that two straight lines inclined at 45 degrees to an axis will give a correlation coefficient of unity – but that result has little practical purpose. I am trying to strip away the low frequency parts of the separation response, to conduct correlations on the medium to high frequencies that have the bulk of the information content of interest.
From preliminary work using the geostatistical semivariogram approach, I am finding it hard to see pairs of stations more than (say) 300 km apart being useful T predictors for each other. This has consequences for those who construct elaborate adjustment procedures for homogenization, procedures that are wide open to subjective inputs and which have cited potential to affect estimates of global and regional warming/cooling rates over multi-decade time spans. Which is pretty much the name of the game for some like GISS.
Have other readers here worked with geostatistics in these ways?
The question is relevant to this post because all methods to clarify predictability require a proper understanding of accuracy and precision, including the consequences of Nyquist/Shannon sampling theory. Also, there are interesting conceptual overlaps between classical stats and geostats. Geoff.
http://www.geoffstuff.com/semiv_time_bases.xlsx
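
For anyone who wants to try the same thing, here is a minimal sketch of an empirical semivariogram in Python, on a smooth synthetic stand-in field plus noise rather than real station data; the station locations, the field, and the 100 km bins are all assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_st = 200
xy = rng.uniform(0, 1500, size=(n_st, 2))          # hypothetical station locations, km
# Smooth synthetic field plus noise, standing in for temperature anomalies (not real data)
z = np.sin(xy[:, 0] / 300) + np.cos(xy[:, 1] / 300) + 0.3 * rng.normal(size=n_st)

# All pairwise separations and squared differences
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
sqdiff = (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices(n_st, k=1)
d, sqdiff = d[iu], sqdiff[iu]

# Empirical semivariogram: gamma(h) = 0.5 * mean squared difference in each distance bin
bins = np.arange(0, 1600, 100)
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (d >= lo) & (d < hi)
    if sel.any():
        print(f"{lo:4.0f}-{hi:4.0f} km: gamma = {0.5 * sqdiff[sel].mean():.3f}  (n={sel.sum()})")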

Dr. Strangelove
January 16, 2019 6:40 am

William,
You said:
“it is a mathematical certainty that every mean temperature and derived trend in the record contains significant error if it was calculated with 2-samples/day.”

If the error is random, the probabilities of positive and negative errors are equal. In large samples (n >> 30) the errors may cancel out. For example, you can get a smaller error in the annual mean using daily data with larger errors. The probability of this happening is given by the binomial distribution:
P (k, n, p) = n!/(k! (n – k)!) p^k (1 – p)^(n – k)

where: n = 365 daily data, k = 182 positive or negative errors, p = 0.5 probability of +/- error
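
The formula can be evaluated directly, for example with scipy in Python (a sketch; the choice of a “near-even split” range is only illustrative):

from scipy.stats import binom

n, p = 365, 0.5
print(binom.pmf(182, n, p))                         # P(exactly 182 positive errors out of 365)
print(binom.cdf(200, n, p) - binom.cdf(164, n, p))  # P(165 <= k <= 200), a roughly even split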

tty
Reply to  Dr. Strangelove
January 16, 2019 7:11 am

“If the error is random”

That is a VERY big “if”

Reply to  tty
January 16, 2019 7:24 pm

tty January 16, 2019 at 7:11 am

“If the error is random”

That is a VERY big “if”

Actually, no, it is NOT an “if” at all. I looked at the daily errors from using hourly data for the calculation of the daily means. In EVERY city, the errors were Gaussian normal. Or as I said above but you didn’t read:

They are assuredly Gaussian normal. I took a look at 1882 days worth of daily errors for each of 30 cities. In all thirty cases, the Shapiro-Wilk normality test says the 1882 errors are Gaussian normal.

Regards,

w.

Paramenter
Reply to  tty
January 22, 2019 6:51 am

“If the error is random”
That is a VERY big “if”

I’ve run, for a couple of sites, a comparison between the monthly mean temperature calculated from 5-minute sampled data and the monthly mean calculated by averaging daily midrange values (Tmin+Tmax)/2. Normality tests on the resulting errors were done using the Shapiro-Wilk and D’Agostino tests. For some sites the monthly error is normal, for some not, with pretty bad examples here and here.
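
For reference, a minimal sketch of how such normality tests can be run in Python with scipy; the file name is a placeholder for a series of monthly errors (5-minute mean minus midrange mean), not a real data set.

import numpy as np
from scipy.stats import shapiro, normaltest

errors = np.loadtxt("monthly_errors.csv")        # placeholder: one error value per month

w_stat, p_sw = shapiro(errors)                   # Shapiro-Wilk test
k2_stat, p_da = normaltest(errors)               # D'Agostino-Pearson K^2 test
print(f"Shapiro-Wilk p = {p_sw:.3f}, D'Agostino p = {p_da:.3f}")
# Small p-values (e.g. < 0.05) argue against treating the errors as Gaussian.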

William Ward
Reply to  Dr. Strangelove
January 16, 2019 7:48 am

Dr Strangelove,

This reply is from a “smart” phone. My apology in advance if there are formatting/typo problems.

The error appears to track the shape of the signal, specifically whether the sine-ish wave spends more time near the max or the min. Of course, the shape is a function of the frequency content and vice versa. And, of course, that is a function of what the Earth, Sun and clouds serve up that day. Is this random? I think the question is what the distribution looks like. I saw someone analyzed it (Willis?) and said it was Gaussian, but the distribution didn’t look strictly Gaussian to me. It was close.

Refer to my Figure 4, where I show a year of error for Boulder. I have similar graphs for other stations in USCRN. They all seem to exhibit similar behavior. I think Fig 4 seems to favor error towards warming (positive error). Others seem to favor a cooling bias. When I then analyze the trends given by the 2 methods, the bias seems to match the tendency in the daily error plot. Perhaps someone can run an analysis to see how Gaussian the error is over time.

I propose that the rather small trend error compared to the large daily mean error is a function of this averaging effect. It might behave as a dither signal for an ADC. The trend biases are “small” (but similar to the trend magnitudes that cause panic), but there is absolute error – endpoint error that can be several degrees.

What do you think?

Reply to  William Ward
January 16, 2019 9:21 am

W Ward,
When I look at the curve provided by Willis at 9:40 pm, I get the impression that the upper half has an almost sinusoidal shape, which I believe is a result of the infinite heat sink to solar radiation. However the lower curve appears to flatten until heat is added. Temperature drops to the low point at the coldest part of the day. It slows due to the fact that as the temperature of the air decreases it approaches the temperature of the ground (the source of the heat?) and less energy is transferred, just like the graph of a cooling cup of coffee. Looks very similar to the discharge of a capacitor. The difference in the time between each Tmax and the time between each Tmin could be caused by the angle of the earth and the latitude of the measuring point.
Would like to see graphs for southern points taken during the same time period.

Reply to  William Ward
January 16, 2019 12:21 pm

“Refer to my Figure 4, where I show a year of error for Boulder. I have similar graphs for other stations in USCRN. They all seem to exhibit similar behavior. I think Fig 4 seems to favor error towards warming ( positive error).”
Well, I’ll refer to my figure here, also for Boulder, with 3 years of data. It shows annual smoothing, which takes out both daily and seasonal fluctuation. But more to the point, it shows the offsets you get when you variously choose to read the min/max at 8 am, 11 am, 2 pm, 5 pm etc., and they are different; some above, some below. All the talk here about min/max is meaningless unless you first specify that timing (not done) and then look at other possible choices.

Reply to  Dr. Strangelove
January 16, 2019 11:05 am

Your assumption is fine for measurements of the same thing. If you take 365 measurements of the length of the same block of wood, your error of the mean can be reduced by this method. Not only does this require random errors, but also a normal distribution of the errors.

You can’t use the same error rationale for measurements of different things, like temperature measurements taken hours, days, or months apart.

Answer these:

How can the temperature you read today affect the accuracy of the temperature you read yesterday?

If you compute an average, do you use the recorded temps + the recording errors, the recorded temps – the recording errors, or the recorded temps without any error adjustments? Why?

Why is one more accurate than the other two or do they all have equal probabilities?

Can the average of the temperatures be anywhere between the average of the highest figures and the average of the lowest?

Does a reading from a third day affect the accuracy of either of the prior two day?

Geoff Sherrington
Reply to  Jim Gorman
January 17, 2019 2:08 am

JG, “How can the temperature you read today affect the accuracy of the temperature you read yesterday?”
We cannot understand how/if it will affect it physically, because its value can be fixed by multiple inputs that are usually not measured; but it can afterwards be examined through statistics.
That is why I am delving again into geostatistics, which was originally developed to help estimate how much one assay value down a mining drill hole can predict for another assay, by looking at the changes in differences between pairs of assays separated by various distances.
With weather station sites, the first finding might be that one should not expect T at a site to have predictive value for another site if they are separated by more than a certain distance. What is that critical separation distance? Informally, my work so far is suggesting 300 km, which is a lot less than is used for conventional pair matching during homogenization, but I have not finished the work. Geoff.

Reply to  Geoff Sherrington
January 17, 2019 9:39 am

You are missing my point. You may judge the accuracy of a given thermometer by comparing it to others but you can’t use other thermometers to tell where a given recorded measurement falls within the range of recording error.

For example, you may determine that a given thermometer is reading two degrees low and adjust the reading by that factor. That is, move a reading of 48 +- 0.5 degrees to 50 degrees but the error range remains, i.e., 50 +- 0.5 degrees.

The error range of one individual measurement taken at one point in time is fixed. Earlier or later temperature readings will not affect the error range of that one measurement. This is an important distinction that applies when computing averages. You can not reduce the error range by averaging recordings from multiple days. In fact, if you average a recorded temp with one that has a smaller recording error you must use the larger error range as the limiting factor. Otherwise you will be crediting the far less accurate recording with a false error range.

That is why trends with different error ranges shouldn’t be spliced together without their error ranges being included on a graph. Temp readings with higher error ranges shouldn’t be averaged together with more accurate data in order to claim more accuracy than they are entitled to.

I’m not receiving many responses to my criticisms so I don’t know if people agree or not. Or maybe I’m just too far off topic. I do know that when trends of one tenth or one hundredth of a degree are quoted from temperatures that were only recorded to the nearest degree, something is not kosher with the treatment of errors. These are not trends at all; they are within the range of error and should be considered noise.

Editor
January 16, 2019 9:33 am

Quite a few comments, mostly from the Stats people, suggest that we are looking at the temperature record from the viewpoint of “But the only result that matters is the monthly average. “. If one only cares about a monthly average of Min/Max temperatures — just wants to have a number to play with and is willing to be honest about the uncertainty ranges, then all this is moot.

But for Climate Science, we are interested really in the energy in the atmospheric system for which temperature (sensible heat) is used as a proxy.

“Just a monthly average” does not inform us accurately or precisely about the energy — not even the sensible heat — when it is calculated from Min/Max. For rough back of the envelope figuring, it is probably accurate enough but comes with large uncertainty bars — uncertainty bars greater than the posited change in global average surface temperatures of the 20th century.

Claims that this year’s GAST is x.x degrees C over the 1890 values are not scientifically sound. We only can guess at the GAST of 1890, with an uncertainty range as large as the “calculated” change since then. Only in the post-WWII era were there finally enough weather stations operating in enough diverse places to get some kind of scientific idea of a global average — but even then the uncertainty is wide.

And certainly, going forward, automated weather station data, with its 5-min averages, are obviously superior to the old (but necessary) Min/max method. Min/Max should be discontinued altogether (regardless of one’s opinion about Nyquist).

Reply to  Kip Hansen
January 16, 2019 11:07 am

+1000

Reply to  Kip Hansen
January 16, 2019 12:12 pm

“If one only cares about a monthly average of Min/Max temperatures — just wants to have a number to play with and is willing to be honest about the uncertainty ranges, then all this is moot.”
It doesn’t matter what wishes you may have. What matters is what calculations are actually done with the numbers. And what happens is that these min/max numbers are aggregated into at least monthly averages. No-one is trying to use them to reconstruct a high frequency signal, which is what the Nyquist talk is about.

“Min/Max should be discontinued altogether (regardless of one’s opinion about Nyquist).”
It’s done where needed for consistency with the older record. With modern data, you can do whatever you feel is best.

William Ward
Reply to  Kip Hansen
January 16, 2019 4:14 pm

Kip,

The Stats People are so quick to dive into stats and averages that they miss the opportunity to study what is going on at one station. Analysis of individual stations may provide some interesting information. Averaging everything means you lose anything unique.

Speaking of averages, do you know where I can find the scientific (thermodynamic) justification for averaging temperatures from multiple locations? Also, what is the equation this average temperature is fed into? I have been looking and can’t find it.

Geoff Sherrington
Reply to  Kip Hansen
January 17, 2019 2:13 am

KH,
So should diurnal temperature range. Too much apples subtracted from oranges.
Sucking on an orange, summarising years of looking at Australian numbers, I would put a 2 sigma error envelope for DTR and Taverage (Tmedian?) at something more than +/- 2 deg C when all imaginable sources of error are included for the 1910-onwards historic raw temperature data.
There are many exercises that you cannot do when errors are as large as that.
The term “unfit for purpose” is used by some. Geoff.

January 16, 2019 9:56 am

@WW
” I think Fig 4 seems to favor error towards warming ( positive error).”

An error estimate is considered unbiased if its expected value (mean) is zero. In theory. In practice, it will never be exactly zero. My eyeball says it’s unbiased overall, especially since you also said the errors from different cities vary from slightly positive to slightly negative.

Do you still believe that undersampling causes errors in the temperature measurements or their statistics? I don’t see how that can change any measurement values, in the sense that the inverse Fourier transform always returns the same data which was input to the Fourier transform, even if the samples were “undersampled” or otherwise random values. And the _signal energy_ (sum of squared amplitudes) is not affected by aliasing. (See my post above for proof of that.)

Paramenter
January 16, 2019 11:33 am

Hey Johanus,

Do you still believe that undersampling causes errors in the temperature measurements or their statistics?

You should then refer to Figure 2 of the article and comment on it appropriately. According to it, decreasing the sampling rate increases the error magnitude, and vice versa.

And the _signal energy_ (sum of squared amplitudes) is not affected by aliasing.

How can that be? If you undersample you’re losing high-frequency components and the energy associated with them; in our case, fast changes in the daily temperature signal.

Reply to  Paramenter
January 16, 2019 4:46 pm

“decreasing sampling rate increases error magnitude and vice versa”

Yes, those are digitization errors, i.e. the residual errors between the actual temperature curve and the sampling intervals, which act as a series of linear approximations to the actual curve. These errors can be made arbitrarily small (up to quantization errors) by increasing the sampling rate.

This has nothing to do with undersampling, which has no effect on individual sample values, right?

“If you undersample you’re loosing high frequency components and energy associated with that, in our case fast changes in the daily temperature signal.”

The high frequencies are not lost, merely shifted in frequency. No change in amplitudes, so energy is conserved. (See my post above for proof of this).

So I still don’t see how undersampling causes erroneous temperature measurements, as WW claims:
“Sampling at a rate less than [the Nyquist limit) introduces aliasing error into our measurement.”

Paramenter
Reply to  Johanus
January 18, 2019 1:53 pm

Hey Johanus,

The high frequencies are not lost, merely shifted in frequency. No change in amplitudes, so energy is conserved. (See my post above for proof of this).

I’ve got a strange feeling that is not quite correct. Could you actually run a power spectral density against a few examples of real temperature data and share that? I bet that an aliased (heavily undersampled, say 2 samples per day) signal folds higher-frequency energy into lower frequencies, and therefore the total energy of the original and the aliased signal will differ.

Reply to  Paramenter
January 25, 2019 11:14 am

Deliberate undersampling, for the purpose of shifting signals from a passband down to baseband, is the basic idea behind “bandpass sampling” (also simply called undersampling):
https://en.wikipedia.org/wiki/Undersampling

Note that the undersampling is done on signals that are band-limited in both max and min frequencies, such that the aliased signal envelopes are not distorted, merely shifted down in frequency.
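
A minimal sketch of the frequency folding being described (Python; a pure tone stands in for a band-limited passband signal, which is an assumption): undersampling shifts the apparent frequency down, while the sampled values themselves keep their amplitude.

import numpy as np

fs_slow = 80.0                             # deliberately slow sample rate, Hz (Nyquist = 40 Hz)
f0 = 110.0                                 # tone above the Nyquist limit of the slow rate
t_slow = np.arange(0, 1, 1 / fs_slow)      # one second of undersampled data
x_slow = np.cos(2 * np.pi * f0 * t_slow)

# Locate the dominant frequency in the undersampled record
spec = np.abs(np.fft.rfft(x_slow))
freqs = np.fft.rfftfreq(len(x_slow), 1 / fs_slow)
print("apparent frequency:", freqs[np.argmax(spec)], "Hz")   # ~30 Hz: 110 Hz folded by the 80 Hz rate
print("sample amplitude  :", x_slow.max())                   # still ~1, amplitude preserved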

Editor
January 16, 2019 9:17 pm

William Ward January 16, 2019 at 2:49 pm

1sky1 says:

“1) Shannon’s sampling theorem applies to strictly periodic (fixed delta t) discrete sampling of a continuous signal”

Reply: max and min are samples, period = 1/2 day and with much jitter. Jitter exists on every clock. Nyquist doesn’t carve out jitter exceptions, nor limit their magnitude. Any temperature signal is continuous. You need to address and invalidate these points or your comments are not correct.

Actually, you are both wrong. 1sky1 is wrong that Shannon only applies to strictly periodic samples.

And William is wrong about max and min being just jittered signals.

Let me demonstrate. I have five years of hourly data for 30 US cities. Taking one at random, San Francisco, I calculate the monthly average of maxes and mins, as well as the true daily means. The error in the result is that the average monthly (min+max)/2 is 0.52°C warmer than the true monthly means.

Next, I do the same, but instead of taking the two daily temperatures as the maxes and mins, I take two hourly measurements at random from each day. When I repeatedly take the average of those, instead of 0.52°C, the average monthly error is 0.007°C.

This good result from the random picks is in agreement with Nyquist, as we are interested in monthly data and we have about sixty samples per month. Plenty of oversampling.

But that does NOT work with the (min+max)/2.

The problem that William appears not to see is that the min and max values are NOT just jittered samples. They are specially chosen samples, and as a result of the nature of the choosing and of the signal, additional error is introduced.

And this exemplifies what I have been saying. (Max + Min)/2 is a poor estimator of the true mean of a signal, and that is NOT a result of Nyquist. In my example above, in both cases we are sampling at sixty times the frequency of interest (sixty samples per month), but the (Max + Min)/2 still has huge errors and a true jittered sampling does not.

As I said at the start … William, you have the right problem (inaccuracy of the (Max + Min)/2) but the wrong reason (Nyquist).

w.

Scott W Bennett
Reply to  Willis Eschenbach
January 17, 2019 5:20 am

“the min and max values are NOT just jittered samples. They are specially chosen samples, – Willis Eschenbach”

I totally agree with Willis and I’m also starting to realise what very special samples they are! I think I may have been too hasty in bagging* Tmean** now!

Min/Max is a very special kind of sample/selection because they are self-“clocked”, as it were – I think Nick Stokes said as much but with more precise language – because you are not taking two random samples, you are picking the peak and trough of a “signal” that just happens to have, on average, a 12-hour half cycle (in the real world), and you are doing this twice a day – at its frequency! So you are, deliberately and perfectly accurately, measuring wave height. It doesn’t matter if they came from hourly samples or min/max thermometers; this “selection” of just two discrete values is the same process, of course.

By this simple triangle a “diurnal wave” is very well defined or should I say confined and it’s not quite as hard to imagine why the errors in Tmean are not as large as I first thought they should be. I’m not saying Tmean is right! I’m just wriggling a little bit! 😉

Anyway, it is more to think about or have explained to me by the better qualified, on this long and intriguing post!

*Aussie slang for denigrate it, i.e. I might have been rrrrrrr, rrrr…wrong!
**Tmean(min+max)/2

1sky1
Reply to  Willis Eschenbach
January 17, 2019 12:48 pm

1sky1 is wrong that Shannon only applies to strictly periodic signals.

I said nothing of the kind! What I said is that “Shannon’s sampling theorem applies to strictly periodic (fixed delta t) discrete sampling of a continuous signal.” This applies to all continuous signals, not just strictly periodic ones. Without strictly periodic sampling there can be no defined Nyquist frequency, 1/(2*delta t) and no bandlimited signal reconstruction, which is what Shannon’s Theorem is all about.

Of course random sampling of discrete ordinates will produce a closer estimate of the true mean of the signal than the mid-range value (Tmax + Tmin)/2, simply because the latter is a demonstrably BIASED estimator of the mean, due to the typically asymmetric wave-form of the diurnal cycle. The claim that “[t]his good result from the random picks is in agreement with Nyquist” is analytically nonsensical. Even a severely aliased data series will produce close, randomly sampled estimates of the signal mean, as long as that aliasing doesn’t extend into zero-frequency. Nyquist has little to do with the quality of UNBIASED estimates.
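
The bias claim is easy to check on a continuous waveform, with no sampling involved at all. A tiny sketch, using an arbitrary asymmetric 24-hour shape (time in days):

```python
# Mid-range vs true mean of a continuous asymmetric periodic waveform.
import numpy as np
from scipy.integrate import quad

f = lambda t: np.exp(np.cos(2 * np.pi * t))     # asymmetric daily shape, period 1 day
true_mean = quad(f, 0, 1)[0]                    # integral average over one day
t = np.linspace(0, 1, 100001)
mid_range = (f(t).max() + f(t).min()) / 2

print("true mean:", round(true_mean, 3))        # about 1.266
print("mid-range:", round(mid_range, 3))        # about 1.543 (biased high)
```

For this shape the mid-range sits above the mean regardless of how, or whether, the curve is ever sampled.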

BTW, what no one noted throughout this entire discussion is that the true daily extrema also define the daily range Tmax – Tmin. This physically significant, practically useful metric is not readily available from any but the most highly oversampled data series.

1sky1
Reply to  1sky1
January 17, 2019 1:06 pm

Oops! The mid-range value is really (Tmax + Tmin)/2.

Reply to  1sky1
January 17, 2019 1:34 pm

1sky1 January 17, 2019 at 1:06 pm

Oops! The mid-range value is really (Tmax + Tmin)/2.

Fixed.

w.

Reply to  1sky1
January 17, 2019 1:32 pm

1sky1 January 17, 2019 at 12:48 pm

1sky1 is wrong that Shannon only applies to strictly periodic signals.

I said nothing of the kind! What I said is that “Shannon’s sampling theorem applies to strictly periodic (fixed delta t) discrete sampling of a continuous signal.”

My bad, I wrote “signals” when I meant “samples”. I meant to say:

“1sky1 is wrong that Shannon only applies to strictly periodic samples.”

Regards,

w.

1sky1
Reply to  Willis Eschenbach
January 17, 2019 2:32 pm

What you wrongly meant to say I already covered, to wit:

Without strictly periodic sampling there can be no defined Nyquist frequency, 1/(2*delta t) and no bandlimited signal reconstruction, which is what Shannon’s Theorem is all about.

January 16, 2019 10:09 pm

Willis you said: “As I said at the start … William, you have the right problem (inaccuracy of the (Max + Min)/2) but the wrong reason (Nyquist)”

Quite so my friend.

I have pointed out to him that we all agree that (Tmax+Tmin)/2 as a substitute for mean is quite silly (your “right problem”), and that this Fundamental Flaw occurs in cases where we have not even sampled yet and may not ever, so aliasing as a cause would be – I guess, non-causal (your “wrong reason”).

Now he seems to say he has given up on both of us! (Agree to disagree?) Ha – that never got me out of a jam! He doesn’t seem to appreciate a friendly lifeline being tossed his way.

Stay well.

-Bernie

Bright Red
Reply to  Bernie Hutchins
January 16, 2019 11:24 pm

Sorry about the wrong place again.
Bernie said “To be even more clear, you would take two analog “peak detectors” (a diode, a capacitor, and an op-amp or two for convenience), one for (+) polarity relative to start and the other for (-) polarity. Reset both at midnight, and come back at 11:59:59 PM and read the outputs. There is no sampling.”

Have you written down a value or caused a value to be stored in a computer memory? If so, then you have sampled the signal. It really is that simple. Using an analog hold circuit to delay the point at which the sample is taken changes nothing, other than making it convenient for a human to read while still doing other tasks. I think the world has moved on with the invention of the microprocessor.

William Ward
Reply to  Bright Red
January 17, 2019 12:02 am

Bright Red,

You said: “Have you written down a value or caused a value to be stored in a computer memory? If so, then you have sampled the signal. It really is that simple.”

My reply: I wrote something similar to Bernie earlier but then deleted it. I already signed off with him and thought maybe it was bad form for me to come back with that. I was hoping you would! Thanks. It is so obvious. A peak detector is not much different than a sample and hold circuit but with the S&H it needs something to trigger it. An S&H is the first stage of many older ADC architectures. It was almost poetry that Bernie suggested his circuit, because he spelled out what we have been saying but he doesn’t seem to know it. Just like with the sample reduction point.

This word “sample” has people all spun up, but if you measure a signal you sample it.

Reply to  Bright Red
January 17, 2019 1:33 pm

Bright Red said just above: “Have you written down a value or caused a value to be stored in a computer memory? If so, then you have sampled the signal.”

Well – No. That would be the output of a peak detector, an extracted PARAMETER of the analog signal, quite a different thing than the samples of a proper time-series.
* * * * * * * * *

Bright Red also said on January 16, 2019 at 6:23 pm “What you have described is a classic case of aliasing that could be used as an example in 101.”

Can we possibly stop this insulting pseudo-condescending “signals 101” (I know it was William who started it). Having taught signal processing in a top-10 engineering school for over 40 years, and living in a college town, I have found it risky to assume that you are the smartest one in the room! Likewise for all posting on WUWT. [I myself am happy to try to explain to others material which I know VERY well.]
* * * * * * * * * *

On January 16, 2019 at 6:45 pm, I asked you to explain your position/thinking, and you didn’t even try. In fact, why don’t YOU just move on and answer my comment to William at Bernie Hutchins January 16, 2019 at 8:57 pm ?

William Ward
Reply to  Bernie Hutchins
January 16, 2019 11:49 pm

Bernie – it’s nearly 3 AM where I am; can I get a little time to sleep, work and come back to you tomorrow night? I have not given up on you. Or Willis – I’m going to try to come back to both of you tomorrow night. I’m at a critical point in a project and I have to make time to do this and do it with quality thinking. There is also the question of how prudent it is to continue when there is such a large gap in understanding. I have been thinking all night about a creative way to try to connect with you both on this. I’m not sure I’ll be successful, and I think we should all reserve the right to pull away if we think further communication is not constructive but destructive. Stand by for tomorrow evening, guys. Thanks and good night.

Reply to  William Ward
January 17, 2019 12:11 am

William, please reply at your convenience. We all have other things to do.

Best regards, get some sleep, we’ll pick this up again.

w.

Editor
January 17, 2019 12:26 pm

OK, new day. Lying sleepless last night, I realized that I could demonstrate that the problem with the traditional way of determining the mean is NOT the Nyquist limit.

Here is a plot of RMS errors of monthly temperature averages for 30 US cities with respect to the accurate values we get from sampling every 5 minutes (288 samples per day). As you may recall from above, averages of hourly data are only very slightly less accurate than averages of 5-minute data. Here’s that comparison:

So instead of comparing to 5-minute samples, which I only have for one location, I compared to hourly samples, which I have for 30 US cities. As shown above, this will lead to only negligible error.

First, here are the results for evenly spaced samples:

A few comments. First, if the sampling interval is an even divisor of 24 hours, the error is larger because every day the samples are taken at the same times. These are indicated by the dashed line. So if we take samples every twelve hours, the error is larger than if we sample at either 11-hour or 13-hour intervals.

Next, note the size of the traditional calculation of the mean, which is (minimum temperature + maximum temperature) divided by two. It’s the dot way up at the top of the graph … as I’ve said all along, the problem is NOT the Nyquist limit. It is that (min+max)/2 is a poor estimator of the true mean.

Now, could the problem with (min+max)/2 merely be because the samples are taken at different times during each day? Well, we can examine that too, by taking samples at random times. The figure below shows that result.

Once again, you can see that the problem is NOT the Nyquist limit. The (min+max)/2 error is still way larger than just taking two random samples every day. In fact, if we just took two random samples per day, that is about the Nyquist limit for monthly temperature averages … go figure. But the (min+max)/2 error is much worse than that.
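
A compact way to check the even-divisor effect is shown below, on synthetic hourly data (an asymmetric diurnal shape plus noise); real-station numbers will differ:

```python
# Error of the long-term mean for different fixed sampling intervals,
# compared with the (max+min)/2 estimate. Synthetic hourly data.
import numpy as np

rng = np.random.default_rng(1)
days = 360
t = np.arange(days * 24) / 24.0                         # time in days
temp = 5 * np.exp(np.cos(2 * np.pi * (t - 0.6))) + rng.normal(0, 1.5, t.size)
true_mean = temp.mean()

for k in (11, 12, 13):                                  # sampling interval in hours
    note = " <- divides 24, always the same clock times" if 24 % k == 0 else ""
    print(f"every {k:2d} h: error = {temp[::k].mean() - true_mean:+.3f}{note}")

daily = temp.reshape(days, 24)
mid = ((daily.max(axis=1) + daily.min(axis=1)) / 2).mean()
print(f"(max+min)/2: error = {mid - true_mean:+.3f}")
```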

My best to all, my thanks to William for both posting and defending his ideas.

w.

William Ward
Reply to  Willis Eschenbach
January 17, 2019 4:13 pm

Hey Willis,

It’s 7:00 PM EST, I’m back at my computer. I have been thinking for the past 18 hours about another way to try to bridge the gap. I’m going to go off now and try to write it up. I don’t know how long it will take. Are you willing to continue? If not, send up a flag.

Here is one problem you are introducing with your analysis. You are looking at multiple stations and doing more averaging and statistics. Maybe not a perfect analogy, but you are studying the tree by looking at the forest. I want to focus on the tree first. Then move on to the forest. If you leave now, you will sadly not learn some important things about signal analysis – that might come in handy later. If the error from sampling problems diminishes or disappears when looking at longer periods of time or when averaging a large sample of stations this is different from no error being there in the first place. I’m not saying that is where we are headed, just trying to persuade you to consider that angle.

Note: I’m not a statistics guy. Maybe you can try to return the favor in the future with statistical analysis.

Reply to  Willis Eschenbach
January 17, 2019 7:21 pm

The first thing I thought about when I saw William’s first WUWT graph (his Fig. 1) was whether this was in some sense a typical warm-up-cool-down cycle. The second thing was that it was OBVIOUS even from his one example that (Tmax+Tmin)/2 could not possibly be a reasonable estimate of the mean. This was a Fundamental Flaw, and since they had some 288 actual samples, why would they NOT calculate the mean as (sum-of-samples)/288 – the right way? OH – bureaucratic inertia – I forgot that.

Thirdly, I wondered what a typical daily temperature curve might look like. Willis obligingly (as is usual for him) crunched tons of data with averages provided for this thread here at January 15, 2019 at 9:40 pm. It clearly shows that the typical curve is more regular than we might have suspected, and that the (Tmax+Tmin)/2 mean is significantly above the true mean, and why (values on the positive side approach 1.5 while those on the negative side approach only -1.3).

What about the shape of the curve (Willis shows us two full cycles for clarity)? It looks a lot like a sine wave; but noticeably different. Here we could use a DFT (FFT) but since we already have a well-defined natural period (24 hours) a Fourier Series discussion is somewhat more familiar and easier to understand. http://electronotes.netfirms.com/AN364.pdf

Since it is clearly not a pure sine wave, but is periodic with a period of 24 hours, it has harmonics. Willis has also shown us here (at January 14, 2019 at 3:49 pm), using his preferred periodogram, that this is mostly fundamental, with a bit of 2nd and 3rd harmonics. That’s about it. So as he says, you would need to sample this at minimum sampling rate greater than 6 times/day (perhaps 24 times/day would satisfy most everyone).
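
A quick way to see this harmonic content is to take the Fourier coefficients of one (averaged) day. A sketch with a stand-in asymmetric diurnal shape; a real station’s mean day can be substituted:

```python
# Relative amplitudes of the first few harmonics of a daily curve.
import numpy as np

n = 288                                               # 5-minute points in one day
x = np.arange(n) / n
day = 5 * np.exp(np.cos(2 * np.pi * (x - 0.6)))       # stand-in mean diurnal curve

coeffs = np.fft.rfft(day) / n
amps = 2 * np.abs(coeffs[1:6])                        # harmonics at 1..5 cycles/day
for k, a in enumerate(amps, start=1):
    print(f"{k} cycle/day component: amplitude {a:.3f}")
```

For a shape like this the amplitudes fall off quickly after the 2nd and 3rd harmonics, which is the behaviour being described.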

So Willis has shown us the animals in the zoo. It could scarcely be more clear that any sampling issues can be controlled. It is further clear that the major error in using (Tmax+Tmin)/2 to represent the mean is a Fundamental Flaw (FF) that is the same with or without sampling, so the FF is not (CANNOT be) CAUSED by aliasing.

So, to be completely clear, the errors in William’s aliasing column (Tmean C) of his Fig. 2 ARE due to aliasing, since he downsamples without first doing the required pre-decimation filtering. If done properly, the numbers would be -3.3, the mean being preserved exactly, for the 8 rows above the -4.7.

I understand perfectly what William was trying to show (and thinks is true!) but he was up against an apples/oranges (FF vs. aliasing) problem at the very start.

William Ward
Reply to  Bernie Hutchins
January 17, 2019 10:23 pm

Bernie,

You said: “Since it is clearly not a pure sine wave, but is periodic with a period of 24 hours, it has harmonics. Willis has also shown us here (at January 14, 2019 at 3:49 pm), using his preferred periodogram, that this is mostly fundamental, with a bit of 2nd and 3rd harmonics. That’s about it. So as he says, you would need to sample this at minimum sampling rate greater than 6 times/day (perhaps 24 times/day would satisfy most everyone).”

My reply: When sampling occurs at 2-cycles/day, where does the spectral image shift? Answer: up and down by 2 cycles/day. What spectral image components land on (alias to) the trend components (near 0 Hz, or 0-cycles/day)? Answer: the near-2-cycle/day components.

Image from my Full paper (Fig8): https://imgur.com/DmXCBOt

What spectral image components alias to the daily trend (1-cycle/day)? Answer: the 1- and 3-cycle/day components. What did you say above about content at 2nd and 3rd harmonics?? How is it that you keep agreeing with me and then disagree with me? Perplexing.

Did you look at any FFTs of any signals? How far out in frequency do these signals go. Answer: infinity. But how much of that seems to affect things like mean calculation below 0.1C variability? Answer: 288-samples/day seems to be the rate at which we reach this limit so that means 144-cycles/day. If you just can’t handle that then let’s divide that by 4 and we have 36-cycles/day. That would mean we have to sample above 72-cycles/day. Why do you think you can ignore the energy above the 3rd harmonic?

This image shows the overlap at a sample rate of 12-cycles/day. Can you visualize how much more overlap there is at 2-cycles/day?

https://imgur.com/xaqieor

It’s not about “satisfying most everyone”; it is about capturing the frequency content that any day at any station can produce. Looking at what the average is doesn’t tell you what the maximum requirement is. Through experimentation that value can be decided upon, and anti-aliasing filters built into the sampling system to match.

Bernie said: “So, to be completely clear, the errors in William’s aliasing column (Tmean C) of his Fig. 2 ARE due to aliasing, since he downsamples without first doing the required pre-decimation filtering. If done properly, the numbers would be -3.3, the mean being preserved exactly, for the 8 rows above the -4.7.”

Bernie, you have this magical way of disagreeing with me while simultaneously proving my point. So you agree that my table in Fig 2 shows the aliasing error increasing as sample rate decreases to 2-samples/day. What I show is what would happen if you sampled all of the spectrum without using an anti-aliasing filter and a low sample rate. What if the max and min temps just line up perfectly with those 2 samples spaced at 12 hours apart? Is it aliasing?

If a SRC was used to downsample to 2-samples/day, you would be filtering out content. There would be no aliasing but you would not have the same signal. You would have a sine wave. Are you sure you would preserve the -3.3V mean through all steps? Would it even be close?

Finally, what is the “OBVIOUS” Theorem? How does one implement it? And what is the Fundamental Flaw (FF) identity? How does one go about learning it and using it?

Reply to  William Ward
January 18, 2019 10:28 pm

Continuing questions for William

William Ward said January 17, 2019 at 12:02 am ” It is so obvious. A peak detector is not much different than a sample and hold circuit but with the S&H it needs something to trigger it. An S&H is the first stage of many older ADC architectures. It was almost poetry that Bernie suggested his circuit, because he spelled out what we have been saying but he doesn’t seem to know it. “

[5] Of course I don’t “know it” – only you know things! Oh wait – I do know one thing: a PD and a S&H are quite DIFFERENT animals. I suggested two parallel-polarity PDs as being the same idea as a classic min/max thermometer. Did you miss that point, or is the hole in your knowledge a bit larger than you suppose?

William Ward said January 17, 2019 at 10:23 pm
“Finally, what is the “OBVIOUS” Theorem? How does one implement it?

[6] You do know the meaning of “obvious” I assume – you used it yourself (in error) as quoted above. I use it as in “obviously estimating a mean as (Tmax + Tmin)/2 is asking to be wrong whether the signal is continuous or discrete.”
”””””””””””””””””””””
“And what is the Fundamental Flaw (FF) identity? How does one go about learning it and using it?”
[7] It is recognizing that (Tmax+Tmin)/2 is already bogus for a continuous-time signal and remains essentially flawed for the same reason if sampled.
-Bernie

Clyde Spencer
Reply to  Willis Eschenbach
January 18, 2019 10:20 am

Willis
You said, “Lying sleepless last night, I realized …” I’m glad to see that I’m not the only one afflicted with the inability to shut my mind down. 🙂

Clyde Spencer
Reply to  Willis Eschenbach
January 18, 2019 11:06 am

Willis,
You said, “Next, note the size of the traditional calculation of the mean, which is (minimum temperature + maximum temperature) divided by two.” PLEASE don’t continue to call it a mean. Call it an average if you must, but “mid-range value” is more accurate.

https://sciencing.com/calculate-midrange-7151029.html

1sky1
January 17, 2019 2:14 pm

If the sampling interval is an even divisor of 24 hours, the error is larger because every day the samples are taken at the same times.

The truly effective reason is that such sampling rates alias some harmonic components of the asymmetric diurnal wave-form into zero frequency, i.e. into the mean value.
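
The mechanism is easy to demonstrate numerically: a zero-mean 2-cycle/day component, read twice a day at fixed clock times, turns into a constant offset in the record. A minimal sketch:

```python
# A 2-cycle/day harmonic aliases to zero frequency under 2 fixed-time samples/day.
import numpy as np

t = np.arange(0, 30, 1 / 288)                 # 30 days of 5-minute time stamps (in days)
h2 = np.cos(2 * np.pi * 2 * t + 0.7)          # zero-mean 2-cycle/day component

print("mean of the full record     :", round(h2.mean(), 4))         # ~0
print("mean of 2 fixed samples/day :", round(h2[::144].mean(), 4))   # ~cos(0.7), a constant bias
```

Every twice-daily sample lands at the same phase of the harmonic, so what was an oscillation shows up as a shift in the mean.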

Frank
January 17, 2019 3:03 pm

William: Interesting work, but far from demonstrating that a real problem exists.

You have demonstrated a significant difference between the mean temperature recorded every five minutes and the conventional average of Tmax and Tmin. However that difference will vanish when you take temperature anomalies. What we want to know is the temperature trend, not absolute temperature.

In Figure 7, you showed the bias in the trend removed by using the true average temperature rather than the conventional average of Tmin and Tmax. That bias averaged 0.066 K/decade over 26 stations. Is that a significant difference? You avoided showing us any useful information about trends at all.

If I go to Nick Stoke’s trend viewer and look at NOAA’s GLOBAL land temperature trends from 1/2007 to 12/2017, I find a trend of 0.43 K/decade with a 95% ci of 0.14 to 0.71 K/decade. Now the trends for the US could be very different, but local fluctuations are likely to be bigger than global fluctuations and the confidence interval would likely be wider. So you appear to have found a bias of 0.066 K/decade in trends with a confidence interval that is at least 0.5 K/decade wide. From this perspective, you haven’t come close to identifying a significant problem.

In Figure 6, if you had plotted individual annual temperatures, your readers would have seen year-to-year temperature changes of perhaps 1 K – in addition to the trend lines you presented. Readers would have immediately seen that the difference in trend was trivial compared to the noise in the data and uncertainty in the trend. Hopefully, the absence of such relevant information was an oversight. Note that the trend was greater than +1 K/decade at the location where you found the greatest bias – 0.24 K/decade; a bias that was a small fraction of the trend. And we probably would have seen some years where the average of Tmax and Tmin was lower rather than higher than the continuous average. This would be even more likely with monthly data, but using monthly data might require dealing with temperature anomalies.

If we had 2-4 decades of USCRN data, a bias of 0.066 K/decade could be comparable to the uncertainty in the trend and therefore significant. Over a century, the bias could amount to 0.7 K more warming when measured by the average of Tmax and Tmin. That would certainly be non-trivial, but extrapolation is somewhat absurd given the uncertainty. Which leads to the interesting question: how do AOGCMs calculate warming? They have 96 readings per day (every 15 min). Do they average all 96 readings per day (my guess) or average the highest and lowest?

Reply to  Frank
January 17, 2019 4:34 pm

Frank,
” Now the trends for the US could be very different”

I did some analysis of OLS trends for USCRN here. The sd of the trends is about 8 °C/Cen, giving an uncertainty of mean trend of about 0.8°C/Cen, on an OLS basis. They aren’t independent, so the true uncertainty would be higher. The uncertainty σ is higher than the trend difference, so the sign of that difference (bias) is not significant.

Clyde Spencer
Reply to  Frank
January 18, 2019 10:24 am

Frank,
You said, “What we want to know is the temperature trend, not absolute temperature. ” Well, actually, with the recent concern shifting to energy accumulation (and where it might be “hiding”), we do need to know about ‘absolute’ temperatures.

Editor
January 17, 2019 5:42 pm

William Ward January 17, 2019 at 4:13 pm

Hey Willis,

It’s 7:00 PM EST, I’m back at my computer. I have been thinking for the past 18 hours about another way to try to bridge the gap. I’m going to go off now and try to write it up. I don’t know how long it will take. Are you willing to continue? If not, send up a flag.

I’m willing to continue … kinda …

Here is one problem you are introducing with your analysis. You are looking at multiple stations and doing more averaging and statistics. Maybe not a perfect analogy, but you are studying the tree by looking at the forest. I want to focus on the tree first. Then move on to the forest. If you leave now, you will sadly not learn some important things about signal analysis – that might come in handy later.

Let me review the bidding here.

You came in and your first claim was that to sample temperature, Nyquist said that we have to sample it at 2X the highest frequency, viz:

The Nyquist-Shannon Sampling Theorem tells us that we must sample a signal at a rate that is at least 2x the highest frequency component of the signal. This is called the Nyquist Rate.

I pointed out that the climate is a chaotic signal with frequencies that have periods all the way down to seconds … so how are we supposed to sample that? And Nick Stokes objected as well, saying:

The Theorem tells you that you can’t resolve frequencies beyond that limit. But we aren’t trying to resolve high frequencies. We are trying to get a monthly average. It isn’t a communications channel.

Despite that, you have not yet admitted that your initial claim was wrong.

You next claimed that we need to sample at 288 cycles per day to be above the Nyquist limit. Then you said, well, no, we only need to sample at a frequency where further increases don’t provide a significant improvement.

I showed that 5-minute sampling is only trivially better than hourly sampling.

You have not yet admitted that your claim about needing to sample at 288 cycles per day was wrong.

You then claimed that Nyquist applied to the (max+min)/2 because it was just a jittered sample.

This morning I posted up a graph showing that no, jittered samples do much, much better than (max+min)/2. Not only that, but I also showed that two regularly spaced samples per day also do much, much better than (max+min)/2.

Which totally supports my claim, which was that the problem is real ((min+max)/2 is a poor estimator of the mean) but it is NOT because of your imaginary violation of Nyquist.

No reply.

And now you claim you have something to teach me?

Perhaps you do, I can learn something from most people …but so far all I’ve learned from you is that you are an arrogant man who thinks he knows more than everyone else, and who is unwilling to admit it when he makes an error.

And so far, I haven’t learned one damned thing from you about signal analysis. EVERYTHING that you’ve said so far I knew already. No surprises, nothing new.

You continue …

If the error from sampling problems diminishes or disappears when looking at longer periods of time or when averaging a large sample of stations this is different from no error being there in the first place. I’m not saying that is where we are headed, just trying to persuade you to consider that angle.


I know that, William … but again, we’re looking at CLIMATE here, which is generally taken to be the average of weather over 30 years or more. So yes, in general, we are indeed averaging a large number of stations over a long period. Which means that once again, your objection is a difference that makes no difference.

Both Nick Stokes and I tried unsuccessfully above to point out that we’re not trying to reconstruct a temperature signal. Reconstructing a signal is a very common purpose of A/D analysis – we want to sample a song so that we can reproduce it in a digital format.

But that’s not what we’re doing in climate. In climate, we are simply trying to get averages and trends out of a mass of error-ridden data with heaps of gaps and problems.

And as such, we don’t care much, and we don’t need to care much, about a lot of things that are critically important when you’re trying to reconstruct a signal.

I see this problem with signal guys all the time. You think that climate signals are like a simple superposition of sine waves. Nothing could be further from the truth. Climate signals have pseudo-cycles that appear and disappear at random, only to be replaced by some other pseudo-cycles. In addition, they are hugely damped, so the usual kinds of things like aliasing and resonance and cross-talk are either greatly reduced or absent altogether.

For example, in the lab you have signal amplifiers, and notch filters, and frequency doublers, and multiplexers, and regenerative circuits, and bandpass amplifiers, and beat-frequency oscillators, and heterodyne receivers.

But in nature, those are very, very rare.

And as a result, you waltz in here and start babbling about sampling at 2X the highest frequency in a temperature signal, and I just roll my eyes and think “Here we go again, another signals guy who thinks he’s God …”.

So yes, William, I’m willing to proceed … but only if you start by admitting that your claims to date have often been wrong. No, for our purposes we don’t have to sample at 2X the highest frequency in a temperature signal. No, the min plus max over two is not just another jittered signal. No, we don’t need to sample at 288 cycles per day, once per hour is quite adequate.

And finally, (min+max)/2 is a poor estimator EVEN IF YOU ARE SAMPLING WELL ABOVE THE NYQUIST RATE! Your fundamental claim is wrong—the problem with (min+max)/2 has nothing to do with Nyquist. It is inherent in the sampling method.

So … are YOU willing to continue? If not, send up a flag …

And as always, my thanks for your willingness to put forward your ideas and defend them,

w.

William Ward
Reply to  Willis Eschenbach
January 17, 2019 6:53 pm

1200 words below – the shortest I could make this explanation.

Hey Willis, Bernie,

My simple request is to open your mind to what I present here and let’s focus on the following before taking these concepts back to the bigger picture.

By definition, if you are working with an analog signal and you take a measurement of it, that measurement is a sample. It doesn’t matter how you get that discrete number. Analog signals are continuous and digital signals are comprised of samples. I see a lot of struggle with the definitions.

There has been a lot of talk about strict periodicity. Let’s discuss that further. Nyquist is about the equivalency of 2 domains: analog and digital. Specifically, it is about the equivalency of signals in the analog and digital domain. Nyquist is the bridge. What I think people are losing sight of – or better said: have never gained sight of, is that the digital samples are representative of a signal. The folly is thinking that a “frame of mind” (“…I’m just looking at extrema…”) dissolves that bond the digital samples have with a signal. Those samples will forever be associated with a signal. Math on signals MUST comply with laws governing signals – if you want that math to apply to the signal. If the samples are obtained through complying with Nyquist, then the samples will represent that signal in the digital domain equivalently to the analog signal in the analog domain. HOWEVER, if those samples were obtained in violation of Nyquist, then the samples DO NOT represent the original signal from the analog domain. They CANNOT. Let me develop this further.

When you have samples and you do any mathematical operation on them (adding, subtracting, dividing, multiplying, integrating) you are doing DSP on them. (I know, fancy word for just adding…) Your DSP in the digital domain must ALSO adhere to Nyquist! Nyquist isn’t just about getting from analog to digital. If you start with a signal, properly sample it, then in the digital domain you MUST use all of those samples in the same timing relationship or you introduce Nyquist related error. If you want to reduce your samples, then you are reducing your sample-rate. You must do so properly according to Nyquist, using a sample-rate converter. This involves filtering out frequency content that cannot be supported by your new sample-rate via digital filtering and then you can reduce the samples properly. Example: You have a signal composed of a 1Hz and a 2Hz sine wave mixed. You sample it at 20Hz (20sps) Then you have 20 samples in 1 sec. The bandwidth is set by the 2Hz signal with a period of 0.5s. You have 10-samples for each half-second period. You are complying with Nyquist. Your samples represent your analog waveform. With high quality converters you can convert back and forth between the domains multiple times before converter related performance starts to degrade the signal quality. For each half-second period, if you keep the 1st and 6th samples and simply discard the others and then try to do mathematics on those samples then you have just violated Nyquist. Those 2 samples per half-second period CANNOT and WILL NOT represent your original signal. If you digitally filter out the 2Hz tone, then you can reduce samples according to a process called decimation. You will need to stay at or above 3sps to not violate Nyquist, if your bandwidth is 1Hz.
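
A short numeric version of this example may help. The sketch below keeps the 1 Hz + 2 Hz mix at 20 samples/s, then keeps only two samples per half-second (the 1st and 6th of each block), and looks at what is left of the 2 Hz component at those instants:

```python
# 1 Hz + 2 Hz mix sampled at 20 sps, then reduced to 2 samples per half-second (4 sps).
import numpy as np

fs = 20
t = np.arange(0, 2, 1 / fs)                      # two seconds of samples
tone1 = np.sin(2 * np.pi * 1 * t)                # 1 Hz component
tone2 = np.sin(2 * np.pi * 2 * t + 0.3)          # 2 Hz component, amplitude 1
sig = tone1 + tone2

kept = sig[::5]                                  # samples 1 and 6 of each half-second block

# Seen only at the kept instants, the 2 Hz tone collapses to an alternating constant:
print(np.round(tone2[::5], 3))                   # roughly +0.296, -0.296, +0.296, ...
```

Its apparent amplitude and phase now depend on where the kept samples happened to fall, not on the signal itself, which is the ambiguity being described.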

Using the same example (1Hz + 2Hz sampled at 20sps) you discard all samples per half-second period except samples 1, 4, 5, 6 and 7. If you take this new set of samples and run them through a DAC, then you would get the equivalent signal that this aliased signal represents. Any math you do on the modified digital signal will give you the results for the analog signal you created with the DAC. The resulting new analog waveform could be fed into a spectrum analyzer and you could see what new frequencies and amplitudes you created. If you do math on this asymmetrical sample set, you will get mathematical results, but they will not have an accurate relationship to the original signal.

If we take 288-samples/day from USCRN and discard all but the max and min samples for each day, that step alone is a violation of Nyquist! Whether people reading this like the language, are comfortable with the language or not, your 288-samples/day are a signal. The 288-samples/day represents the actual event that took place on Earth in the analog domain. The moment you select the max and min samples you have just changed the signal you are working with! If you took those 2-samples/day and fed them through a DAC, then you would reveal to yourself the new signal you are working with!

Now, you might say, but I didn’t start with the USCRN data. I just read a max/min thermometer. Ok, let’s explore this. The thermometer did the work of “effing up” your signal so you didn’t have to! You might say, well how could I pass it through a DAC to find out what it looks like? What frequency would I use?!? Good question! You can’t do this experiment because the information required is forever lost. But that fact doesn’t erase the problem. The experiment I did with USCRN allows you to compare the results using examples from that database. You can take the 288-samples/day data and feed it into a DAC and see what the temperature signal looks like. You can analyze this with a spectrum analyzer. You can also analyze spectrum in the digital domain. In this case you know what the ADC is set to. You can take the 2 max min samples and run them back through a corresponding DAC and see the difference. The analog signals you get with 288-samples and the 2 max/min samples are very different. Their spectrums are different. DSP done on the samples give you different results. My study gives you mathematical proof of this. Figures 1, 4 and 5 speak to this. Integrate the error over any timeframe you wish, 1 day, 1 week, 1 month or 1 year. You will see the accumulated error. This error varies over time in the integration. The error on a daily basis can swing between +/-4C, at least for what I saw at a few stations. [Start with individual stations and don’t go for averaging everything from the start.]

Some have said that it isn’t Nyquist! It’s just a bad method to use (Tmax+Tmin)/2! Well then, why is it bad? I have not seen a competing answer. It is bad because the act of discarding all samples except the max and min is a violation of Nyquist. Calculating the mean value of the properly sampled signal and comparing it to (Tmax+Tmin)/2 shows this clearly. The Nyquist-compliant method is unassailably correct. It represents the original signal. The historical method is not correct because it represents some other signal.

When we plot long-term trends using both methods and USCRN data we see absolute value differences and trend differences. Some stations have larger error than others. There is a correlation between the shape of the time domain signal and the size and sign of the error. Signal shape means spectral content. Now, it might be the case that over time this error averages out or averages down in absolute value. I propose that it might also be that the error acts as a dithering signal (adding broadband Gaussian noise in return for reduced quantization error). And/or the spectral content of some/many signals doesn’t have enough content where it can cause aliasing damage to the long-term signal and/or the phase relationship of the aliasing results in minimal impact.

Summary: By definition, a digital value extracted from an analog signal is a sample. Nyquist is about signal equivalency across digital and analog domains. Digital signals must comply with Nyquist just as analog signals do. Discarding samples is a violation of Nyquist. All math on samples is DSP. (Tmax+Tmin)/2 is a violation of Nyquist. There is ample support from the 26 stations that aliasing error is present. Impact and significance of this error across the 26 stations and more stations is up for further study.

Comment added after reading Willis’ long complaint about my essay: Let’s see if this information helps. I didn’t write everything possible about signal analysis in the 1900 word essay. All that I said is correct and can be clarified if we get a basis of understanding and slow it down enough for me to address with quality responses. If the above doesn’t help we can conclude. Thank you.

Reply to  William Ward
January 17, 2019 8:56 pm

“When you have samples and you do any mathematical operation on them (adding, subtracting, dividing, multiplying, integrating) you are doing DSP on them. (I know, fancy word for just adding…) Your DSP in the digital domain must ALSO adhere to Nyquist! Nyquist isn’t just about getting from analog to digital. “

Just not true. Especially the last sentence. Nyquist is about getting from digital to analogue, only. Not analog to digital. You have a bunch of numbers derived from periodic readings of…something. You can add, multiply or whatever. Nothing yet about Nyquist. That comes when you try to derive some property that depends on relating those discrete values to a continuous function.

Now this is where your EE tunnel vision comes in. You want to demand that the conversion to continuous (analogue) happen via fitting periodic (trig) functions. Well, you can do that, but it’s far from the only way. If you do do it, you’ll probably do a DFT, and take the numbers to be coefficients of trig functions, inverting the DFT into the continuous trig function domain. Then you get the complication that the DFT maps into the sub-Nyquist domain, and so might alias.

But what is actually done? People calculate a month average of the discrete data. There is no direct assumption of conversion to analogue there. There is an indirect assumption that it isn’t just a result for those samples, but is a measure of temperature that would persist if you sampled some other way.

I disagree with Willis and others saying that (Tmax+Tmin)/2 is wrong because it is not a good estimate of mean T. No-one claimed it was. They are different indices of temperature. The point of my continuous linking to my Boulder analysis is that it is well understood that min/max isn’t even one index. There is a spread of answers depending on when the thermometer is read. So of course they can’t all agree with the integrated average. But the point is that they are all offset by fairly constant amounts. So they are pretty much equivalent when you take anomalies. And those offsets do not bias trends, or other things you might calculate from anomaly.
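
A toy illustration of the offsets-and-anomalies point (it assumes the offset between the two indices really is roughly constant, which is the part under dispute here):

```python
# Two temperature indices that differ by a near-constant offset give the same trend,
# and the offset largely disappears once each is expressed as an anomaly.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(40.0)
true_index = 0.02 * years + rng.normal(0, 0.15, years.size)       # integrated annual mean
mid_index = true_index + 0.5 + rng.normal(0, 0.05, years.size)    # min/max index: offset + scatter

anom_true = true_index - true_index[:30].mean()                   # anomalies vs a base period
anom_mid = mid_index - mid_index[:30].mean()

slope = lambda y: np.polyfit(years, y, 1)[0]
print("trend, integrated index:", round(slope(true_index), 4), "per year")
print("trend, min/max index   :", round(slope(mid_index), 4), "per year")
print("remaining offset in anomalies:", round((anom_mid - anom_true).mean(), 3))
```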

In fact, the task of getting that consistent monthly average is just numerical integration. And your Fourier method can do it, but is not the only way, nor even a commonly used one.

As 1sky1 keeps pointing out, there are two other big issues that you neglect:
1. The sampling isn’t periodic. In fact, we don’t even know the sample times, except within 24-hr bounds
2. The samples aren’t chosen by time, but by value. This completely messes up your “signal analysis” anyway.

I might add that strict periodicity is important. It creates the ambiguity by always sampling at the same point in the cycle, which is a special case. You can actually get a lot more cumulative information from the same number of points if you jitter the sample time, provided there is periodicity in the signal.

Reply to  Nick Stokes
January 17, 2019 9:00 pm

“is not a good estimate of mean T. No-one claimed it was.”
OK, I’d better clarify that. mean T means mean derived from integration – the limit of frequent sampling. The latter is an index, min/max is a different index. You need to see what their properties are, especially after taking anomalies.

William Ward
Reply to  Nick Stokes
January 17, 2019 10:33 pm

Nick said: “Just not true. Especially the last sentence. Nyquist is about getting from digital to analogue, only. Not analog to digital.”

My reply: Nick, did you actually say that Nyquist isn’t about getting from analog to digital? Did you have a typo? Did you mean to say that? Do you mean that when we sample analog signals we don’t have to comply with Nyquist?

Nick said: “You have a bunch of numbers derived from periodic readings of…something. You can add, multiply or whatever. Nothing yet about Nyquist. That comes when you try to derive some property that depends on relating those discrete values to a continuous function.”

Sure you can pull samples from a signal in a way that violates Nyquist and do math on them – just don’t expect your results to apply accurately to the original signal. If the aliasing that results is “small” then this is because the content+phase that aliases is “small”. It isn’t because the process is a correct one. If you tried the same process with another type of signal where the content at the potential alias frequencies was “large” then you would run into the reality of sampling.

Reply to  William Ward
January 17, 2019 11:08 pm

“Nick, did you actually say that Nyquist isn’t about getting from analog to digital? Did you have a typo?”
I was going to ask that of you. Nyquist applies to the mapping from a set of sampled numbers to a set of continuous functions, and not before. If you choose trig functions as your basis functions, you may run into a situation where what is really (based on information external to what you observe) a sinusoid beyond the Nyquist frequency is indistinguishable from an alias – a sinusoid of lower frequency. That is a problem with that particular mapping.

But there are many other ways you could do the mapping (discrete to continuous). You could use finite element basis functions. You could use LOESS. You could use wavelets. It’s true that all will have limitations in representing possible rapid changes between samples. The question then is, what harm does that limitation do to whatever it is you are trying to estimate.
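
As a small illustration of two such mappings on the same sparse samples (a synthetic daily curve is used; LOESS and wavelets are left out for brevity):

```python
# Map 12 samples of a daily curve back to a continuous-ish function two ways:
# piecewise-linear interpolation and a trigonometric (Fourier) resampling.
import numpy as np
from scipy.signal import resample

t_fine = np.linspace(0, 1, 288, endpoint=False)
curve = 5 * np.exp(np.cos(2 * np.pi * (t_fine - 0.6)))     # stand-in daily signal

t_sparse = t_fine[::24]                                    # 12 evenly spaced samples
samples = curve[::24]

linear = np.interp(t_fine, t_sparse, samples, period=1.0)  # piecewise-linear basis
trig = resample(samples, 288)                              # band-limited / trig basis

print("mean of the full curve  :", curve.mean().round(3))
print("mean via linear mapping :", linear.mean().round(3))
print("mean via trig mapping   :", trig.mean().round(3))
```

For the purpose of a mean, the choice of basis barely matters here; the differences show up in how the rapid changes between samples are represented.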

“just don’t expect your results to apply accurately to the original signal”

There is no original signal. One indicator of the fantasy of the EE approach here is the notion that you should pre-filter the analogue signal. How on Earth do you do that? You don’t have an analogue signal running down wires. The sampled values are your starting point.

Bright Red
Reply to  William Ward
January 18, 2019 12:19 am

Hi William

I have found your arguments sound and words well said but it seems to no avail.

I had to have a chuckle about how special some seem to think the air temperature is, with the word chaotic being used. So I offer this example of a vertical accelerometer in an off-road racing buggy. Some of the inputs to the sensor are engine vibration (rpm- and throttle-based, with lots of harmonics), terrain undulations at all 4 wheels applying what I would call chaotic input (0-10 g, 0-50 Hz) based on the terrain profile, lateral and longitudinal changes in motion, plus many more. Now let’s go for a drive around a track. The waveform will look like it repeats, but there are lots of variations as the driver takes a different line and hits different bumps, and, being off road, the surface itself changes every lap.
/sarc on
Now let’s use this vertical G data to work out if there is a change in gravity and rather than use the full Nyquist compliant data we will do it with just the min and max per lap without knowing where they occurred and expect the same result as using the full properly sampled data.
/sarc off and any replies to its content will be ignored.

William Ward
Reply to  Bright Red
January 18, 2019 5:33 pm

Hey Bright Red,

Thank you so much for your comments. If you were nearby I’d buy you a beer! (or whatever you prefer to drink.) Yes, an air temperature signal would be one of the least challenging signals from the natural world that I can think of regarding sampling or processing. I love your example. Especially the sarc-modulated comment! Is racing a hobby of yours or do you work on electronics for that application? Either way it sounds like fun. Here is another real-world application: Hybrid-Fiber Coax Cable (Cable TV). Take a coax cable with a 1GHz signal bandwidth, comprised of 6MHz channels, variable modulation profiles of 256-QAM to 8192-QAM. Add multi-carrier transmission support (OFDM) and then with 1 input and 1 integrated circuit, sample the entire spectrum well enough such that you can simultaneously tune, demux, demodulate and transcode 128 channels. What could be more chaotic than a scene change from day to night or a bunch of football players in motion on the screen?

Bright Red
Reply to  William Ward
January 18, 2019 6:29 pm

Hi William
Appreciate the thought but I doubt I am near you.
Unfortunately I am unable to use my real name or identify my industry, as I have sold my interest in an international company I co-founded and have agreements in place, as the new owner was paranoid about any comments I make online. Fortunately the restrictions expire later this year. I designed almost all the hardware and programmed a lot of the firmware of the products the company manufactured.
Another good example of real world signals. Yep temperature is at the yawn end of data acquisition.

Bright Red
Reply to  William Ward
January 19, 2019 1:39 am

Hi William
It should be me that buys you a quantity of your favourite beverage for the considerable effort you have put in to this important topic.

William Ward
Reply to  William Ward
January 20, 2019 10:49 pm

Hi Bright Red,

Congratulations on that! I hope it was a good deal for you and either set you up for your next company – or lots of time doing your favorite hobbies.

I hope to learn more about your work after you can “decloak” when the restrictions expire – should you choose to reveal more information.

Either way, I hope to see more of you here, and get the chance to interact with a fellow engineer.

I’m working on another paper (or 2) using basic thermodynamics to dispel Alarmists’ concerns about catastrophic ice caps melt.

Bright Red
Reply to  William Ward
January 21, 2019 3:04 am

Now I know I am mad for posting this, but as I was beaten by William in replying to the post that had the quote, and have no skin in the game, I thought why not, as it is this statement and the response to it that seem to be getting in the way of a robust debate about the actual topic.

Willis said: “Actually, the Nyquist theorem states that we must sample a signal at a rate that is at least 2x the highest frequency component OF INTEREST in the signal.”

Which, as it stands and without any further clarification, is not correct, and if data were collected in this way it would result in aliasing if the signal has frequency components greater than the highest frequency of interest. Adding a proviso would fix it, and this is my addition to what Willis said.
“Actually, the Nyquist theorem states that we must sample a signal at a rate that is at least 2x the highest frequency component OF INTEREST in the signal” provided we use a suitable ANTI-ALIAS filter to remove all frequencies above the highest frequency component OF INTEREST in the signal.

Now in reality there are always higher frequencies present in any signal so a good design would start with EMI (Electro Magnetic Interference) filtering to get the frequencies down to where normal filters will actually work. From there set the specification for your maximum frequency of interest and acceptable anti-aliasing level.
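
A minimal sketch of the difference an anti-alias filter makes when reducing the rate, using scipy’s decimate (which low-pass filters before downsampling) against a bare subsample:

```python
# An out-of-band tone folds into the band of interest unless it is filtered out
# before the sample rate is reduced.
import numpy as np
from scipy.signal import decimate, periodogram

fs = 1000
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 5 * t) + 0.8 * np.sin(2 * np.pi * 90 * t)  # 5 Hz wanted, 90 Hz not

q = 10                                     # reduce to 100 samples/s (Nyquist 50 Hz)
bare = sig[::q]                            # no anti-alias filter: 90 Hz folds to 10 Hz
filt = decimate(sig, q)                    # anti-alias filter applied first

for name, x in (("bare subsample", bare), ("filtered first", filt)):
    f, p = periodogram(x, fs=fs / q)
    print(f"{name}: spectral density near 10 Hz = {p[np.argmin(np.abs(f - 10.0))]:.2f}")
```

This is essentially the proviso in code: band-limit to the new Nyquist first, then discard samples.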

Flame me if you like but it would be good to see this topic back on track.

William Ward
Reply to  William Ward
January 21, 2019 7:02 pm

Hi Bright Red,

I’m replying to your post where you start: “Now I know I am mad for posting this…”

Thank you for the very clear summary. I agree with all that you said.

Paramenter
Reply to  Bright Red
January 20, 2019 3:02 am

Hey Bright Red,

I have found your arguments sound and words well said but it seems to no avail.

I reckon part of the issue comes from semantic exercises. A couple of people argue that the error has nothing to do with Nyquist but only with the particular way of averaging, namely calculating daily midrange values (Tmin+Tmax)/2. Well, you can say that this way of choosing daily samples and averaging them creates distortion in the original temperature signal. And this distortion is the source of the error, whereas using daily midrange values is the method by which the error is induced. So obviously it has something to do with Nyquist. Below is an example of the temperature readings every 5 minutes (red curve) superimposed on the daily midrange values (Tmin+Tmax)/2 – blue curve. The midrange curve somehow retains some attributes of the reference signal but is also clearly distorted compared with the original signal:

Midranges vs subhourly

So for me, the claim that ‘this error has nothing to do with Nyquist’ is not very convincing.

Reply to  Paramenter
January 20, 2019 12:23 pm

I would not say that the error has “nothing to do with Nyquist”. However, it is also not, as William claimed, just “jittered” samples.

The error comes from a couple sources, both Nyquist and the curious nature of the taking of the signals.

w.

Bright Red
Reply to  Paramenter
January 21, 2019 2:15 am

Hi Paramenter,
I have to say I’ve enjoyed reading your post on this topic.
There is no doubt in my mind that Nyquist comes into play when sampling a signal. The only question is by how much and is that amount at any time or location going to be a problem.
As an Electronic Design Engineer I find it unacceptable that there be any issue at all around meeting Nyquist when collecting or processing the data, as the additional cost of doing so is for all practical purposes zero. It also seems to me that if you can, at no additional cost, reduce one of the many potential errors involved in recording temperature data to zero, why wouldn’t you just do it? It seems that with 288 samples/day USCRN have the same thoughts.

Reply to  William Ward
January 18, 2019 2:08 pm

William – you misunderstand so many things I will just try to handle them a few at a time. Four here.

[1] William Ward January 17, 2019 at 10:23 pm said in part: “Did you look at any FFTs of any signals? How far out in frequency do these signals go. Answer: infinity”

You said “any signals”. No: the FFT (DFT), X(k), length N, is bandlimited to half the sampling frequency, and the time-series x(n) from which it is calculated is exactly periodic with period N. This is all it CAN do. For all its limitations, the FFT, when understood, is at least a pretty good estimation of spectrum. I suspect this is what you used. For a very short comparison of FFT to five other transform pairs, please see:
http://electronotes.netfirms.com/AN410.pdf

[2] William also said in the same comment: “If a SRC was used to downsample to 2-samples/day, you would be filtering out content. There would be no aliasing but you would not have the same signal. You would have a sine wave. Are you sure you would preserve the -3.3V mean through all steps? Would it even be close?”

Of course it would, as I already told you at Bernie Hutchins Jan 16, 2019 at 10:32 am, it is the DC value and is EXACTLY preserved; even to just one sample!

Here are some notes on rate changing:
http://electronotes.netfirms.com/AN317.PDF
http://electronotes.netfirms.com/AN358.pdf

[3] William Ward said at January 17, 2019 at 6:53 pm: “Using the same example (1Hz + 2Hz sampled at 20sps) you discard all samples per half-second period except samples 1, 4, 5, 6 and 7. If you take this new set of samples and . . . . . not have an accurate relationship to the original signal.”
William, are you NOT aware of “non-uniform” or “bunched” samples?
http://electronotes.netfirms.com/AN356.pdf
http://electronotes.netfirms.com/EN205.pdf

[4] Most importantly, I asked you (Bernie Hutchins January 16, 2019 at 8:57 pm):

(1) We both seem to agree that the historical use of (Tmax+Tmin)/2 as a measure of mean is nonsense that results in significant error (what I call the Fundamental Flaw – FF – in either the continuous or sampled cases).

(2) Can we also agree that if there is NO sampling done on a signal it is meaningless to suggest aliasing as a cause of any errors that are present already?

(3) Does it not follow that since the error is already in the unsampled case (FF), it is not caused by aliasing due to sampling that has not yet, and may never occur?

IF YOU CAN – please explain your position.

You never responded.

– Bernie

Reply to  William Ward
January 18, 2019 2:56 pm

William – you misunderstand so many things I will just try to handle them a few at a time. Four here.

[1] William Ward January 17, 2019 at 10:23 pm said in part: “Did you look at any FFTs of any signals? How far out in frequency do these signals go. Answer: infinity”

You said “any signals”. No: the FFT (DFT), X(k), length N, is bandlimited to half the sampling frequency, and the time-series x(n) from which it is calculated is exactly periodic with period N. This is all it CAN do. For all its limitations, the FFT, when understood, is at least a pretty good estimation of spectrum. I suspect this is what you used. For a very short comparison of FFT to five other transform pairs, please see:
http://electronotes.netfirms.com/AN410.pdf

[2] William also said in the same comment: “If a SRC was used to downsample to 2-samples/day, you would be filtering out content. There would be no aliasing but you would not have the same signal. You would have a sine wave. Are you sure you would preserve the -3.3V mean through all steps? Would it even be close?”

Of course, as I already told you at Bernie Hutchins Jan 16, 2019 at 10:32 am, it is the DC value and is EXACTLY preserved even to just one sample.

Here are some notes on rate changing:
http://electronotes.netfirms.com/AN317.PDF
http://electronotes.netfirms.com/AN358.pdf

[3] William Ward said at January 17, 2019 at 6:53 pm: “Using the same example (1Hz + 2Hz sampled at 20sps) you discard all samples per half-second period except samples 1, 4, 5, 6 and 7. If you take this new set of samples and . . . . . not have an accurate relationship to the original signal.”
William, are you not aware of “non-uniform” or “bunched” samples?
http://electronotes.netfirms.com/AN356.pdf
http://electronotes.netfirms.com/EN205.pdf

[4] Most importantly, I asked you (Bernie Hutchins January 16, 2019 at 8:57 pm):

(1) We both seem to agree that the historical use of (Tmax+Tmin)/2 as a measure of mean is nonsense that results in significant error (what I call the Fundamental Flaw – FF – in either the continuous or sampled cases).

(2) Can we also agree that if there is NO sampling done on a signal it is meaningless to suggest aliasing as a cause of any errors that are present already?

(3) Does it not follow that since the error is already in the unsampled case (FF), it is not caused by aliasing due to sampling that has not yet, and may never occur?
IF YOU CAN – please explain your position.

You never responded.

– Bernie

William Ward
Reply to  Bernie Hutchins
January 18, 2019 7:28 pm

Hi Bernie,

Sorry I didn’t respond directly, I responded to both you and Willis on another reply. One can easily get diluted with too many exchanges and that can affect the quality of response. I want to make sure I can give quality responses. You sent a reply to me today twice about an hour apart, were you aware of that? 2:08 and 2:56 PM. They seem like duplicates but there are some differences too. I’ll reply to the 2:56 message. For some questions I may point you to other responses I have provided to others.

For [2] on SRC, Bernie said: “Of course, as I already told you at Bernie Hutchins Jan 16, 2019 at 10:32 am, it is the DC value and is EXACTLY preserved even to just one sample.”

We agree that the DC value will be the same, but that does not resolve the difference we have. The DC value will not be affected during SRC because the content reduction is always on the high-frequency side, right? We are reducing the sample rate, so we have to remove content at high frequencies that will not be compatible with the new, slower rate. If we are converting up in frequency then the situation is different and not a subject that applies to this discussion. If you take a signal sampled at 288 and SRC down, then your mean will absolutely change because your content has changed. So we don’t agree.

Regarding [3] “bunched samples” and “non-uniform samples”. Thanks for the links Bernie. Cool stuff. But I hope you will agree that the articles show how to *recover* the full signal if you have “damaged” signals or there is some other reason you are not getting all samples the rate would provide. I don’t see anyone trying to use DSP to recover this information in climate science. I see climate science using 2 samples/day and those samples are “non-uniform”.

Regarding (1) on (Tmax+Tmin)/2: Yes, we do agree here that it doesn’t provide results as good as a higher rate like 288. I was being sarcastic in a previous post – but now I ask seriously: can you please explain the FF you mention? I understand the generic use of the term “fundamentally flawed”, but “FF” as you have presented it sounds like something with more discipline behind it.

Regarding (2) on NO sampling done: No I don’t agree. I wrote a detailed response to you and Willis yesterday explaining this. Willis found reason to be offended. I didn’t hear from you on that explanation. If you have a discrete value related to an analog signal then that is a sample. If you want to do operations on samples that apply to the original signal then you need to do so according to Nyquist. My post yesterday gives more detail. I thought it was a good explanation, and it included the restatement of many obvious things to be thorough (not to insult anyone’s intelligence). I would like a mathematical or scientific explanation from you about why max/min method doesn’t work. What makes it a FF specifically? I say the FF is that 2 “non-uniform” samples (borrowing from the terms you introduced) are not enough to do math that gives good results related to the original signal.

Regarding (3) error caused by aliasing or not: Well this is tied to the comments above. I don’t want to assume you had time to read the post I sent yesterday (Jan 17 6:53 PM). But that is the best I can do to explain this. It took 1200 words to say. I’d like to hear your thoughts if you read it.

Reply to  William Ward
January 19, 2019 2:46 pm

Replying to William Ward at January 18, 2019 at 7:28 pm

[8] You said: “ We agree that the DC value will be the same. But that is not resolving our difference we have. The DC value will not be affected during SRC because the content reduction is always on the high frequency side, right? We are reducing sample rate so we have to remove content at high frequencies that will not be compatible with the new slower rate. If we are converting up in frequency then the situation is different and not a subject that applies to this discussion. If you take a signal sampled at 288 and SRC down then your mean will absolutely change because your content has changed. So we don’t agree. ”

Your first sentence conflicts with the last two!!! Please clarify. What is wrong with what I said Jan 17 at 10:32 am?

If you are talking about up-conversion (we’re not) that is an interpolation problem which is completely different, and depends on a signal model (such as bandlimited, polynomial, etc.).

A good point to make here is that Nyquist/Shannon does not (NOT!) say that you have to sample at greater than twice the highest frequency. It says rather that you have to sample at greater than twice the one-sided bandwidth. For example, if a signal’s spectrum is zero except between 12 and 13, you do not need to sample at 26+ but rather at 2+, that is 2x(13-12)+. The band limiting is “bandpass” in this case. If the spectrum goes to DC, the one-sided bandwidth and the highest frequency are the same – hence the common misstatement of Nyquist! But pity the poor engineer who samples an AM radio signal broadcasting at 1 MHz at a 2.5 MHz rate when about 20 kHz will do. Of course the reconstruction is not low-pass, but bandpass in this example. It’s called “bandpass sampling”.
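
To make that concrete, here is a minimal Python sketch (the 12.3 Hz tone, the 4.5 Hz sample rate and the 100-second record are made-up numbers, chosen only to satisfy the bandpass-sampling constraints for a 12-13 band): a tone known to lie in 12-13 is sampled far below twice its frequency, yet its alias maps back to it uniquely because the band is known.

import numpy as np

f_true, fs, n_samp = 12.3, 4.5, 450          # tone (Hz), sample rate (Hz), samples (100 s of data)
t = np.arange(n_samp) / fs
x = np.cos(2 * np.pi * f_true * t)

# the sampled spectrum shows the tone folded into 0..fs/2
spec = np.abs(np.fft.rfft(x))
f_alias = np.fft.rfftfreq(n_samp, d=1/fs)[np.argmax(spec)]    # about 1.2 Hz

# because the band (12-13) is known, the alias maps back to one and only one frequency
candidates = [k * fs + s * f_alias for k in range(6) for s in (+1, -1)]
recovered = [f for f in candidates if 12.0 <= f <= 13.0]

print(f"apparent (aliased) frequency: {f_alias:.2f} Hz")
print(f"recovered in-band frequency : {recovered[0]:.2f} Hz")

The required rate is set by the 1-unit bandwidth (plus knowing where the band sits), not by the band's upper edge at 13.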

[9] You also said: “ Regarding [3] “bunched samples” and “non-uniform samples”. Thanks for the links Bernie. Cool stuff. But I hope you will agree that the articles show how to *recover* the full signal if you have “damaged” signals or there is some other reason you are not getting all samples the rate would provide. . . . . . “

That’s not what the notes say. If you originally sample just fast enough, and samples are lost (perhaps every 5th sample), you are just plain out of luck. If however your original bandwidth was only 4/5 of what would have been allowed, you can still reconstruct exactly, although not with the usual low-pass (sinc) interpolation. I found your example of 20 samples, of which you kept only 1, 4, 5, 6 and 7, reminiscent of bunched sampling. You had a bandwidth of 2 and 5 samples/cycle. Another lesser-known thing about the Nyquist rate is that it is the average sample rate that matters! But you do need to know what you are doing. Oh – and you do absolutely need to know the sample times.
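
Here is a rough Python sketch of that kind of recovery (the signal, the -3.3 offset and the two-second record are invented for illustration; the kept-sample pattern is your 1, 4, 5, 6, 7 out of each block of ten): a signal bandlimited to 2 Hz on a 20-samples/s grid, with only half the samples kept, comes back exactly from a least-squares fit against the known band.

import numpy as np

def signal(t):   # bandlimited to 2 Hz, riding on a -3.3 offset
    return -3.3 + 1.0*np.sin(2*np.pi*1*t) + 0.5*np.cos(2*np.pi*2*t + 0.3)

fs_grid = 20.0
t_grid = np.arange(0.0, 2.0, 1.0/fs_grid)                  # two seconds of the 20-sps grid
keep = np.array([0, 3, 4, 5, 6])                           # samples 1,4,5,6,7 of each block of ten (0-based)
idx = np.concatenate([keep + 10*b for b in range(len(t_grid)//10)])
t_b, y_b = t_grid[idx], signal(t_grid[idx])                # the "bunched" samples and their times

def design(t):   # model: DC plus cos/sin at 1 Hz and 2 Hz -> 5 unknowns
    cols = [np.ones_like(t)]
    for k in (1, 2):
        cols += [np.cos(2*np.pi*k*t), np.sin(2*np.pi*k*t)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design(t_b), y_b, rcond=None)   # least-squares fit to the known band
err = design(t_grid) @ coef - signal(t_grid)               # evaluate the fit back on the full grid

print(f"max reconstruction error: {np.max(np.abs(err)):.1e}")   # effectively zero
print(f"recovered DC term       : {coef[0]:+.4f}")              # -3.3000

It is the average rate (10 samples/s here, against a 2 Hz bandwidth) together with the known sample times that makes this work.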

[10] Continuing your thought: “ I don’t see anyone trying to use DSP to recover this information in climate science. I see climate science using 2 samples/day and those samples are “non-uniform” “

Oh, but climate science is all about attempts to recover information – many of them quite problematic.

[11] “ Regarding (2) on NO sampling done: No I don’t agree. I wrote a detailed. . . . . . If you have a discrete value related to an analog signal then that is a sample. ”

NOPE. Only if you give the time the sample was taken. Tmax and Tmin (as analog) are given without time, as with a min/max thermometer. If you HAD a daily file of dense samples, then any computational software would likely have functions for min and max that obligingly also return the time index. But if you had that file, you would compute the mean directly from it.

[12] “Regarding (3) error caused by aliasing or not: Well this is tied to the comments above. I don’t want to assume you had time to read the post I sent yesterday (Jan 17 6:53 PM). But that is the best I can do to explain this. It took 1200 words to say. I’d like to hear your thoughts if you read it ”

I did read it – twice. If (Tmax+Tmin)/2 is wrong in continuous time, it is still wrong, for the same fundamental reason, with any sampling, and sampling which may never happen can’t be a cause of the analog errors. The problem with what you wrote is that the reader comes to a halt very often with a question of logic or what is just a demonstrably dubious claim. Sorry to have to say this.

– Bernie

William Ward
Reply to  William Ward
January 19, 2019 9:28 pm

Bernie,

You said: “Your first sentence conflicts with the last two!!! Please clarify.” We were talking about DC after SRC.

My reply: I’m confused as to what our misunderstanding could be. I’m not trying to insult you by stating some basics – just trying to connect and see where the misunderstanding is. As I see it, the DC is the content that establishes the offset for the scale of measurement. If we are using degrees C, then the yearly signal rides on this and the daily signal rides on the yearly signal, visually speaking. While doing SRC I would not expect this “DC” value to change, but the higher frequencies will be filtered. We will see this in the features that distort the daily sinusoid; the more we filter, the less the distortion. Agreed so far? As that content changes – depending upon how far down you sample, how much content there is to remove and how much of it is removed – the mean will change. I’m not sure how my first sentence contradicts my last two, as you said. Can you elaborate please?

Bernie said: “A good point to make here is that Nyquist/Shannon does not (NOT!) say that you have to sample at greater than twice the highest frequency. It says rather that you have to sample at greater than twice the one-sided bandwidth. For example, if a signal’s spectrum is zero except between 12 and 13, you do not need to sample at 26+ but rather at 2+, that is 2x(13-12)+. The band limiting is “bandpass” in this case. If the spectrum goes to DC, the one-sided bandwidth and the highest frequency are the same – hence the common misstatement of Nyquist! But pity the poor engineer who samples an AM radio signal broadcasting at 1 MHz at a 2.5 MHz rate when about 20 kHz will do. Of course the reconstruction is not low-pass, but bandpass in this example. It’s called “bandpass sampling””

My reply: I wrote a 1900-word essay to introduce a concept to a broad audience. I didn’t write all that could be said on the subject. What you write about is really cool and I’d love to pick your brain about your experiences if we could meet someday. I understand what you are saying about bandpass sampling. It is also called “undersampling” by some. Over 20 years ago I worked on the design to bring the first integrated digital front end to the newly developing cable TV set-top box world. At the time, a bag of converters that would do the job in 5M unit/yr quantities was over $20 for those who could even pull off the performance in an IC. This bag of components had to be integrated and sold for under $4. The requirement was for 10 ENOB with crazy phase-error figures, and a 6 MHz 256-QAM signal had to be recovered from a low IF of 45 MHz. The technology to do this was not there at that time for an integrated solution at $4. But the trick was to undersample and essentially use aliasing to downconvert the 6 MHz channel down to baseband where it could be recovered. Yes, cool stuff, but… it is difficult to get across the simple messages in the paper – that kind of detail would have made this DOA for most. The basic statement of Nyquist is correct but there is more to it. I’m not sure that addition would enhance my paper, however.

Bernie, I’m sorry to have offended you with my strong pushback on some comments. I’m wondering how much of the ongoing friction is just a positive feedback loop of us both looking for respect from the other. I can see from what you have said that you have deep DSP knowledge. There are some fundamental issues on which we have been in contention, and our disbelief of the other’s positions perhaps makes us discount each other at times. I think there is usually an underlying misunderstanding, and I should probably be more patient and tease that out rather than jump into combat.

Reply to  William Ward
January 20, 2019 6:22 pm

William –

You said: “ We agree that the DC value will be the same. . . . . . If you take a signal sampled at 288 and SRC down then your mean will absolutely change because your content has changed. So we don’t agree. ”

I was supposing that you merely misspoke! The DC value is the same, but the mean changes? They are the SAME thing: (sum-of-samples)/(number-of-samples). I explained this on Jan 16 at 10:32 am.

Are you omitting the pre-decimation filter (allowing aliasing)? If so, it is not unlikely that some upper component(s) will alias to DC and corrupt the correct value, but BOTH the DC component AND the mean will change together.
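
A toy example of that, with made-up numbers: a -3.3 “DC” level plus a 24-cycle/day component held in a 288-sample/day record, taken down to 24 samples/day with and without a pre-decimation average.

import numpy as np

fs, ratio = 288, 12                          # samples/day, decimation factor
t = np.arange(fs) / fs                       # one day
x = -3.3 + 1.0 * np.cos(2*np.pi*24*t + 0.7)  # DC plus a 24-cycle/day component

naive = x[::ratio]                           # keep every 12th sample: no anti-alias filter
boxcar = x.reshape(-1, ratio).mean(axis=1)   # average each block of 12, then keep one value per block

print(f"mean of the 288-sample record: {x.mean():+.4f}")       # -3.3000
print(f"mean after naive decimation  : {naive.mean():+.4f}")   # shifted: the tone aliases to DC
print(f"mean after block averaging   : {boxcar.mean():+.4f}")  # -3.3000: tone removed before decimating

In the unfiltered case the corrupted DC bin and the corrupted mean are the same number, which is the point.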

Possible confusion: what do you mean by SRC? (I assumed Sample Rate Converter.) What is it? Is it not the “downsampler” used in the classic multi-rate texts (Vaidyanathan, Fliege)? It’s not just the box with the down-arrow and a number inside – is it? If it were, it would NOT even apply to the case where you keep only Tmax and Tmin, as these are not equally spaced (in actual time) at the input. Aliasing is not the fundamental reason that (Tmax+Tmin)/2 is a poor estimator.

-Bernie

William Ward
Reply to  Willis Eschenbach
January 17, 2019 8:12 pm

Willis,

There is nothing special about an air temperature signal. You are mystifying it. There is nothing in an air temperature signal that you can’t find in a music recording. You think I’m being arrogant. Well, I can understand why you say that. I don’t think I’m better than anyone. So, I would not call that arrogant. I do think I know a lot more about signal analysis than anyone who has spoken out against what I have presented. I have spent 35 years doing it. Thousands of hours looking at signals go back and forth between the domains. Designing board-level systems and integrated circuits. Then working with the best converter designers in the world and the best system designers in the world – with me being at the system/applications level of that. I’m listening to the responses and I see people who are very smart, very knowledgeable – in many areas I can’t even begin to follow what you are doing. But when I see people making the most fundamental mistakes and standing up so boldly proclaiming it, it kinda looks to me like the other person is the arrogant one. I don’t think that is what is going on, but when people get stubborn, that is the look. We all have to decide: is it better to look polite and not break through, or to push and try to break through? When pushing happens then egos can bristle. The other night we discussed personal attacks. Jeez, did you go back and read what you just said to me? Ok, I’ll be fine, but I think we have reversed roles tonight.

Back to the temp signal. If you mystify it, you’re lost. I don’t know how I can convince you. Air temp signals are really boring – really slow. Yawn. Try to sample millimeter-wave communications! Audio, video – they both have every element you mystify. And yes, air temp signals are just combinations of sine waves at the most fundamental level. The intermittent nature of some components does not change anything. Any of the pseudo-cycles you mention are just intermittent signals, and their frequency will be in a contained range. Sample to cover that range and it will be in the samples. It is really basic.

I have explained it, and I’m not sure why you keep stubbornly claiming that I have not clarified how a real-world implementation of Nyquist is done. Did you miss that, maybe? You start by explaining the theory and then you introduce the application of the theory. Hey, maybe my writing sucks and I didn’t communicate it well in the essay. The challenge I had was trying to keep to a word limit – so I condensed the paper but offered the full version. The full version is where more of the Nyquist theory is developed. But still only a few bits of what is possible to say. I said I was using 288-samples/day as the effective Nyquist rate because NOAA did so. It made sense because the error was small until the rate came down to around 72-samples/day, where it started to increase. I didn’t try enough example stations to make a strong statement that it could come down to 72. Maybe there are other stations out there with profiles that need all 288 samples that NOAA gives us. Is this really such a big point against my case?

Willis said: “I showed that 5-minute sampling is only trivially better than hourly sampling.”
Reply: Jeez Willis. I showed that it wasn’t trivial. I wrote the essay. I presented data. I would think you might try to analyze the data I presented before crafting your alternate data. I explained my methods. I’m not sure I understood your methods. Probably my limitation. But we have what I presented and what you presented. They disagree unless I’m just not understanding. I don’t really remember you citing my data and showing the error. I only remember you doing your alternate analysis and proclaiming mine incorrect.

Regarding “reconstructing the signal”: You have still not locked in on the importance of the ability to reconstruct. Maybe my other post tonight will help. That explains the things that you seem to be missing.

You quoted Nick: “The Theorem tells you that you can’t resolve frequencies beyond that limit. But we aren’t trying to resolve high frequencies. We are trying to get a monthly average. It isn’t a communications channel.”

You said: “Despite that, you have not yet admitted that your initial claim was wrong.”

My reply: It is not wrong. It is correct. I have said that you can’t alias and expect to eliminate the aliasing after sampling. If you are interested in only the monthly average, then you need to filter out faster signals before sampling. That is the theory and practice and it is proven. But if the aliasing effect is small because the frequency content is not large enough or of specific phase to cause problems then you are just lucky. It doesn’t mean the method is correct, just lucky based upon the spectrum.

Willis said: “And as a result, you waltz in here and start babbling about sampling at 2X the highest frequency in a temperature signal, and I just roll my eyes and think “Here we go again, another signals guy who thinks he’s God …”.

My reply: Ouch. It sounds like you had it in for me from the beginning… “…another signals guy… babbling… eyes rolling”. Perhaps this sentiment is evident in your approach to this. It feels like your goal is to snuff it out and move on. OK, you are here talking with me, thank you, so I’m probably wrong – but there is something to the feeling. No, I don’t think I’m God, but I’m very confident in what I’m presenting based upon real-world applications – and temperature signals are not any different.

Reply to  William Ward
January 17, 2019 10:24 pm

William, thanks for your answer. I did NOT have it in for you from the beginning. However, when you start by saying that we have to sample the temperature at higher than the highest frequency, and the highest frequency has a period on the order of seconds … do you seriously expect to be taken seriously?

This is especially true when very soon you start talking about a “practical Nyquist sample rate”, without ever defining what that is except to say that you think it is 288 samples per day … what happened to “higher than the highest frequency” that you defended so passionately? Now you say:

Sampling below a rate of 288-samples/day will be (and should be) considered a violation of Nyquist.

Where is your “highest frequency in the signal” in that claim? Do you see why I shake my head when I read your claims?

Next, is it a “violation of Nyquist” to only take two readings per day, the high and the low, at whatever time they might happen, and average them? Nope, because as far as I know, Nyquist says nothing about that situation. If you think Nyquist or Shannon discussed that situation, please point out where.

Everything I’ve read regarding Nyquist discusses evenly spaced samples, or samples with jitter, or random samples. So perhaps you’d be so good as to provide us with a citation that discusses your claim that taking just two samples, a high and low sample whenever they might occur, violates Nyquist …

Next, you say:

Willis said:

“I showed that 5-minute sampling is only trivially better than hourly sampling.”

Reply: Jeez Willis. I showed that it wasn’t trivial. I wrote the essay. I presented data. I would think you might try to analyze the data I presented before crafting your alternate data.

I read your essay. Your “data” is ONE VERY UNUSUAL DAY, a day in Alaska where for half the day the temperature did what it usually does: it pauses and hangs out at the freezing point.

And foolish me, I thought I was being kind by not embarrassing you by pointing out that you are drawing a big conclusion based on one stinking ridiculous day’s worth of extreme data.

So instead of busting you for that silliness, I figured I’d take the high road and just show how to do it. I averaged a number of days and months from a number of datasets to find out what the average RMS error is when calculating the average. You say:

I explained my methods. I’m not sure I understood your methods. Probably my limitation. But we have what I presented and what you presented. They disagree unless I’m just not understanding. I don’t really remember you citing my data and showing the error. I only remember you doing your alternate analysis and proclaiming mine incorrect.

So … I understood your methods. You analyzed one day. Instead, I looked at the average error in the signal of interest, which in climate is almost always the daily data that makes up the monthly data.

And now, now you tell me that you didn’t understand my methods? But did you ask for an explanation?

Hah. No chance. You know you are right, apparently, so there’s no need to ask for explanations.

Now, I freely admit that that may not be an accurate reflection of who you are, William. But it sure as hell is who you look like from this side of the screen.

Returning to the data, I showed that in calculating daily averages, hourly sampling is only very slightly better than sampling every five minutes. Here’s that graph again:

To calculate that, I took the average of the RMS error between calculating the daily average using 5-minute samples and using hourly samples. Here’s how.

I looked at a full year of days. I calculated each day’s average using 288 samples per day. I calculated each day’s average using 24 samples per day. I took the differences between the two and calculated the RMS error. That is the difference between 5-minute and one-hour samples, and it’s only four-hundredths of a degree. If you still don’t understand that, ASK!
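
If it helps, here is a bare-bones sketch of that calculation in Python (the file name and the column names are placeholders, not NOAA's actual file layout):

import numpy as np
import pandas as pd

# a year (or more) of 5-minute temperatures with a datetime index and one "temp" column (placeholder names)
five_min = pd.read_csv("uscrn_5min.csv", parse_dates=["time"], index_col="time")["temp"]

mean_288 = five_min.resample("D").mean()              # daily average from all 288 samples/day
mean_24  = five_min.iloc[::12].resample("D").mean()   # daily average from hourly samples only

err = (mean_24 - mean_288).dropna()
print(f"mean error: {err.mean():+.3f} °C")
print(f"RMS error : {np.sqrt((err**2).mean()):.3f} °C")
print(f"max / min : {err.max():+.3f} / {err.min():+.3f} °C")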

And that means that whatever aliasing may exist due to using the hourly data, it only has an RMS average error of 0.04°C when calculating daily averages. And that means that no, 288 samples is NOT necessary, and sampling slower than that is NOT violating Nyquist as you claim.

You said:

“Despite that, you have not yet admitted that your initial claim was wrong.”

My reply: It is not wrong. It is correct. I have said that you can’t alias and expect to eliminate the aliasing after sampling. If you are interested in only the monthly average, then you need to filter out faster signals before sampling. That is the theory and practice and it is proven.

First, you have already tacitly said that you were wrong without saying you were wrong by inventing a concept you call a “practical Nyquist rate” … which is NOT, in your words “at least 2x the highest frequency component of the signal”. You claim that sampling at 288 per day does NOT violate Nyquist, when you opened the discussion by clearly stating that it does violate Nyquist … and dissed me for questioning it.

Next, the issue is not whether or not there is aliasing. Unless you filter out the high frequencies, there will be aliasing, even at your “practical Nyquist rate”.

The issue you don’t seem to grasp is, is aliasing a difference that makes a difference? This is not the lab, and we’re not looking for spiritual signal purity. This is the real world.

Look, William. I consider you an expert in your field. But your field is obviously not the practical use of statistics and signal theory in the field of climate. So you invent things like a “practical Nyquist rate” and pick a number for that and then tell us that we’re all wrong to question it … sorry, but that dog won’t hunt.

Now, I’m more than happy to discuss this further. And I reckon you’re a good guy … just insufferable at times. But heck … so am I, so we have at least that in common. Well, plus one more thing we have in common. Although I’ve been called “Willis” all my adult life, my full legal first and middle name is William Ward … go figure.

So to move this discussion forwards, let me see if I can clarify my position:

1. Your “practical Nyquist limit” of 288 samples/day is just something you made up, based on highly inadequate data consisting of just one unusual day’s worth of samples.

2. There is no practical way to sample temperature data at “2X higher than the highest frequency” because we’d have to sample at microseconds to do that … and if nothing else, the lag in the thermal sensor would obviate that. Nor is there any reason to do so—for our purposes hourly data is quite adequate.

3. How bad (max+min)/2 is has nothing to do with Nyquist. It is a problem because of the unusual nature of the choice of times to sample. You could be taking max and min temperature samples at 6X the Nyquist rate and the average of them would still be inaccurate.

4. In the world of temperature, there is no practical difference between hourly samples and 5-minute samples.

5. Aliasing is only a problem when it actually is a problem, not when theory says it is a problem.

6. Not following Nyquist is only a problem when it actually is a problem, not when theory says it is a problem.

7. Unlike the sampling that I suspect you are used to doing, the main use of sampling in climate is to provide data for averages which will be used in turn to calculate trends. As such, it is the accuracy of these trends that is the important metric of the adequacy of the sampling rate, not whether the sampling rate fulfills theoretical requirements or contains aliasing.

I’m happy to discuss any of that with you. But if you’d be so kind … if you can’t follow what I’m saying, just ask.

My best to you,

w.

Clyde Spencer
Reply to  Willis Eschenbach
January 18, 2019 11:34 am

Willis
You said, “4. In the world of temperature, there is no practical difference between hourly samples and 5-minute samples.” I think that you should define “practical” as an acceptable tolerance in error to achieve a result sufficient to resolve trends in temperature changes, and/or estimates of energy content. “Practical” means different things to different people.

Reply to  Clyde Spencer
January 18, 2019 5:38 pm

Clyde Spencer January 18, 2019 at 11:34 am

Willis
You said, “4. In the world of temperature, there is no practical difference between hourly samples and 5-minute samples.” I think that you should define “practical” as an acceptable tolerance in error to achieve a result sufficient to resolve trends in temperature changes, and/or estimates of energy content. “Practical” means different things to different people.

Good question, Clyde. I just found some USCRN data on the web that hasn’t been blocked by the government shutdown. I got 13 years of data for the USCRN station nearest to where I grew up, in Redding, California. I calculated the average for each day first using all 288 daily samples, and then using 24 hourly samples. Here are the results:

Year   Mean Err RMS Err Max Err Min Err   (all in °C)
2006 -0.002     0.051   0.220  -0.216
2007  0.002     0.053   0.156  -0.221
2008  0.005     0.059   0.186  -0.275
2009 -0.003     0.049   0.183  -0.161
2010 -0.003     0.051   0.259  -0.127
2011  0.002     0.051   0.156  -0.141
2012  0.002     0.054   0.177  -0.165
2013  0.004     0.060   0.236  -0.211
2014  0.002     0.059   0.198  -0.177
2015  0.003     0.059   0.198  -0.217
2016  0.004     0.057   0.188  -0.170
2017  0.004     0.052   0.178  -0.156
2018 -0.001     0.051   0.151  -0.182

Some notes. The largest average error is five thousandths of a degree.

The largest RMS error of the daily errors is six hundredths of a degree.

The largest absolute error, both positive and negative, is a quarter of a degree.

That’s what I’m calling “no practical difference” …

w.

William Ward
Reply to  Clyde Spencer
January 18, 2019 5:51 pm

Clyde,

I have tried to understand Willis’ complaint about how I presented the theory and then the application of the theory. I have tried to restate and summarize, but this has not helped. I would like another opinion. Have you been confused by my presentation of that information? Let me briefly summarize again, as I have not changed or altered my position, but perhaps added the practical portion somewhere after the start of the discussion.

According to the Nyquist-Shannon Sampling Theorem, we must sample the signal at a rate that is at least 2 times the highest frequency component of the signal.

fs > 2B

Where fs is the sample rate and B is the bandwidth, or highest frequency component, of the signal being sampled (2B is the Nyquist rate). However, real-world signals are not limited in frequency. Their frequency content can go on to infinity. This presents a challenge to proper sampling, but one that can be addressed with good system engineering. When air temperature is measured electronically, electrical anti-aliasing filters are used to reduce the frequency components that are beyond the specified bandwidth B, thus reducing potential aliasing. Another method of dealing with real-world signals is to sample at a much faster rate. The faster we sample, the farther apart in frequency we space the spectral images, significantly reducing aliasing from undesired frequencies above bandwidth B. This is how Nyquist is applied practically. In the real world, a small amount of aliasing always exists when sampling, but careful engineering of the system will allow sampling to yield near-perfect results toward our goals.
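
As a small illustration of why this matters for a mean (this uses the standard folding formula for uniformly spaced samples; Tmax/Tmin are not even uniformly spaced), here is where the first few harmonics of the daily cycle land if you sample at 2-samples/day:

def alias_frequency(f: float, fs: float) -> float:
    """Apparent frequency of a component at f after uniform sampling at rate fs."""
    f_folded = f % fs
    return min(f_folded, fs - f_folded)

for f in (1, 2, 3, 4):   # harmonics of the daily cycle, in cycles/day
    print(f"{f} cycle/day component -> appears at {alias_frequency(f, fs=2.0):.1f} cycles/day")

The even harmonics fold straight onto DC, which is exactly the term that sets the mean.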

Experimentally, multiple *calibrated* and *matched* converters can sample the *same event* and the results can be compared. At some sample-rate you meet your required accuracy and beyond that there are increasingly diminishing improvements. NOAA uses 288-samples/day (divided down from 4,320 and I don’t have the official reason why they did this). You would design your system, not for the typical or “average” signal content but for the content with the most high frequency energy. You want your system to capture all data from every day and every station.

Does this sound confusing to you or like I’m presenting contradictory claims?

Thanks in advance.

William Ward
Reply to  Clyde Spencer
January 18, 2019 6:45 pm

Willis, Clyde,

Almost every station I examined shows “significant” error (many tenths to several degrees C) per day. The following graphs show, for each day, the difference between 288-samples/day and NOAA’s (Tmax+Tmin)/2 (“historical method”). The 288-sample mean is the reference and the historical method is subtracted from it; the result is the daily error.
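
For anyone who wants to reproduce the daily-error calculation, here is a minimal Python sketch (the file name and column names are placeholders for a USCRN-style 5-minute series):

import pandas as pd

five_min = pd.read_csv("uscrn_5min.csv", parse_dates=["time"], index_col="time")["temp"]

daily = five_min.resample("D").agg(["mean", "max", "min"])
daily["historical"] = (daily["max"] + daily["min"]) / 2      # (Tmax+Tmin)/2 from the same samples
daily["error"] = daily["mean"] - daily["historical"]         # 288-sample reference minus historical method

print(daily["error"].describe())                             # spread of the daily error, in °C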

https://imgur.com/QyfAonp

https://imgur.com/aoUX30R

https://imgur.com/hfqjMz5

These errors can be seen over years in both absolute value and trends.

https://imgur.com/cqCCzC1

https://imgur.com/IC7239t

I have 26 of these charts corresponding to the 26 stations in Fig 7. They are not all properly labeled for public consumption, but they are available and the labeling could be added.

Willis, you have not acknowledged any of the data I have presented.

As I said in my paper:

It is clear from the data in Figure 2, that as the sample rate decreases below Nyquist, the corresponding error introduced from aliasing increases. It is also clear that 2, 4, 6 or 12-samples/day produces a very inaccurate result. 24-samples/day (1-sample/hr) up to 72-samples/day (3-samples/hr) may or may not yield accurate results. It depends upon the spectral content of the signal being sampled. NOAA has decided upon 288-samples/day (4,320-samples/day before averaging) so that will be considered the current benchmark standard. Sampling below a rate of 288-samples/day will be (and should be) considered a violation of Nyquist.

The goal is to design a system to handle the worst case signals. The goal is to have a system that works equally well for all days at all stations. Finding stations that work well with 24-samples/day doesn’t mean you decrease the system performance to match those stations. You don’t design for the “average” condition, you design for worst case conditions.

Willis, USCRN hourly data is the 5-minute data integrated to hourly. Why would you assume that sampling hourly would be equivalent? It may produce similar results and it may not. Do not confuse the 2 scenarios. If there is higher frequency content then you will get different results between sampling hourly and integrating 20-second samples to hourly. Sampling hourly will alias content faster than that.

Clyde Spencer
Reply to  Clyde Spencer
January 19, 2019 12:10 pm

William
I anticipated your question and started to respond a couple of days ago. I decided I wasn’t really going to contribute anything and deleted what I wrote. However, since you asked, I’ll stick my neck out.

You and Willis both impress me as being bright, experienced people. However, my sense is that you are arguing at cross purposes. Willis seems to be comfortable with loosely defining “practical” as results that appear to have ‘relatively small’ (but not rigorously defined) errors. Further, he seems to focus just on temperatures and takes an approach of demonstrating with data whether something is outrageously wrong, rather than following through with how errors propagate and impact claims made by alarmists. You seem to be more concerned with a theoretical approach and focus on what an ideal sampling protocol should be like. Therein, I think, lies the essence of your cross purposes. To be flippant, it is the difference between “good enough for government work” and the attitude of a perfectionist.

Now, something that I think that you should have stressed is that the issue of under sampling may not be critical for getting acceptable temperature estimates (providing that an acceptable tolerance is specified), but it clearly results in distortion of the shape of the time-series. This is more important when calculating the area under the curve for energy calculations. (As an example, consider what happens when a cold front passes a station shortly after the daily high — the bottom drops out and the nice sinusoid disappears.)

I think that what is missing is something akin to a design specification that starts with just what the data collection is intended to address. That is, it should contain quantitative goals, with acceptable quantitative errors for each step in the data collection and analysis chain. If there is agreement on what the purpose is, and what acceptable error is, then one can proclaim whether existing data are fit for purpose or not. Short of that, it is two experts touching their favorite part of the elephant and holding their ground on what the ‘truth’ is.

William Ward
Reply to  Clyde Spencer
January 21, 2019 7:16 pm

Hi Clyde,

I’m replying to your post where you start: “I anticipated your question and started to respond a couple of days ago.”

I agree with your assessment of the different perspectives that are tripping up the communication between Willis and me (at least on the technical issues). I’m approaching it from an engineering perspective. Even that can bifurcate, adding confusion. If I’m referring to what NOAA has given us with USCRN, then the analysis is limited to what is possible with what they have already done. If we are discussing what the USCRN specifications should be or could be, that is another discussion. And of course, we start the discussion with the theory, which differs in that it starts with an ideal band-limited signal. We don’t really have those in the real world. Enter engineering, with the addition of filters to approach the ideal band-limited signal. The design needs to comply with a specification. When discussing USCRN I usually assumed NOAA had reasons for their specifications, so I deferred to those where I had nothing else to suggest in their place.

This provided opportunity for the communication to derail. Thanks for your insight. Your neck is safe.

Clyde Spencer
Reply to  William Ward
January 21, 2019 8:50 pm

William Ward
In case you missed it, I wanted to be sure that you saw the link I provided to Scott:
https://library.wmo.int/doc_num.php?explnum_id=3179
It is from a World Meteorological Organization document on automated weather stations. It has some interesting insights on how they think sampling should be done. I noted in particular that they recommended filtering the output of things like thermistors before digitization. There are competent people looking at the problems, but I get the feeling that the academics aren’t aware of it.

William Ward
Reply to  Clyde Spencer
January 21, 2019 11:27 pm

Clyde – thanks for the link to the WMO AWS guide. Great information.

Bright Red and Paramenter: See this guide referenced by Clyde. See section 1.3.2.2 Sampling and Filtering on pg 15 of the PDF (pg 539 of the document). Key points:

Considering the need for the interchangeability of sensors and homogeneity of observed data, it is recommended:

(a) That samples taken to compute averages should be obtained at equally spaced time intervals which:

(i) Do not exceed the time constant of the sensor; or

(ii) Do not exceed the time constant of an analogue low-pass filter following the linearized output of a fast response sensor; or

(iii) Are sufficient in number to ensure that the uncertainty of the average of the samples is reduced to an acceptable level, for example, smaller than the required accuracy of the average;

(b) That samples to be used in estimating extremes of fluctuations should be taken at least four times as often as specified in (i) or (ii) above.

Emphasis on note (b): samples should be taken at a minimum of 4x the rate specified in (i) or (ii) for “extremes of fluctuations” (I assume this means high frequencies and not max/min amplitude).

I started to calculate potential break frequencies for the anti-aliasing filter, but then remembered something more important that comes from pg 11 (PDF) [535 in doc]. See the Data Acquisition heading. It seems the architecture uses a switched ADC – so the converter is shared. The sensors for the various parameters (pressure, temp, humidity, etc.) are fed through their front-end and signal-conditioning circuits and then switched/muxed into the ADC. So 4,320-samples/day may be set at that speed for another parameter that has higher frequency content than temperature. All of these things are related, so I’m not sure what that would be. I’m just pointing this out as something to consider. If there is another variable that needs faster sampling, then this could explain why 4,320 was selected and why averaging down by 15:1 (4,320 to 288) is done for temperature. Other variables may not use this averaging, or may use a different averaging factor appropriate to that variable’s sampling needs.

That’s all for tonight. The sleep deprivation meter is whizzing around at amazing speeds.

Bright Red
Reply to  William Ward
January 21, 2019 11:57 pm

Hi William,
From the document:
(a) That samples taken to compute averages should be obtained at equally spaced time intervals which:
(i) Do not exceed the time constant of the sensor; or
(ii) Do not exceed the time constant of an analogue low-pass filter following the linearized output of a fast response sensor; or
(iii) Are sufficient in number to ensure that the uncertainty of the average of the samples is reduced to an acceptable level, for example, smaller than the required accuracy of the average;
(b) That samples to be used in estimating extremes of fluctuations should be taken at least four times as often as specified in (i) or (ii) above.

I expect that the 4,320 samples/day is in line with the time constant of the temperature sensor or the following filter they are using, as per (i) and (ii), and has nothing to do with multiplexing the A/D input.
It is also interesting that the recommended sample rate is about three times faster than required by Nyquist, which is a very reasonable/normal and practical design decision.

Reply to  Willis Eschenbach
January 19, 2019 4:51 pm

Clyde Spencer January 19, 2019 at 12:10 pm

William
I anticipated your question and started to respond a couple of days ago. I decided I wasn’t really going to contribute anything and deleted what I wrote. However, since you asked, I’ll stick my neck out.
 
You and Willis both impress me as being bright, experienced people. However, my sense is that you are arguing at cross purposes. Willis seems to be comfortable with loosely defining “practical” as results that appear to have ‘relatively small’ (but not rigorously defined) errors.

OK, here’s my rigorous definition. With respect to 288 samples per day, errors in daily average temperature with an average of less than 0.01°C and an RMS error of less than 0.1°C are acceptable.

However, I will note that this does NOT include the (min+max)/2 errors. Unfortunately, for the most part they are all that we have … I’ll have to take a look at the effect of those errors before commenting further on them.

w.

Clyde Spencer
Reply to  Willis Eschenbach
January 19, 2019 8:56 pm

Willis
I think that we are all in agreement that it is an unfortunate set of circumstances that we are saddled with an historical data set that begins with mid-range values, and that the paid professionals try to use them to make a silk purse out of a sow’s ear.

You said, “With respect to 288 samples per day, errors in daily average temperature with an average of less than 0.01°C and an RMS error of less than 0.1°C are acceptable.” Acceptable for what? Acceptable for everything that we might ever want to do with temperature data? Acceptable to justify mean annual temperatures to 0.001 deg C? Acceptable to calculate energy hiding under bushes?

Reply to  Clyde Spencer
January 20, 2019 12:46 am

William, my thanks to you for your complete and interesting analysis.

I realized yesterday that there is a huge difference between guys like me and signal guys like you.

You have access to the analog signal. We don’t.

This opens up a whole host of possibilities. You can filter your signal before sampling it. You can amplify or decrease certain parts of the signal. You can heterodyne it. Hosts of possibilities.

We don’t have that option. We get what we get—some data hourly, some data at 288 samples per day, some (min+max)/2. Not pretty.

I do think that we agree on most things. For example, the (max + min)/2 method of calculating the mean gives ugly errors. Here’s the data for Fairbanks 2015:

How does this affect the trends? Haven’t looked at that.

I think we have two remaining disagreements.

First, I’ve shown that the hourly errors are very small, and the hourly sampling does NOT contain aliased signals. I’ve also shown that it is past the “knuckle” in the graph where faster sampling gains us very little.

As a result, I’d say that the practical Nyquist limit is hourly. I’ve not seen anything to change my mind. You have NOT demonstrated that there is any aliasing in the hourly data. You have NOT given us an example where using 288 samples gives a significantly better result than hourly sampling, or an example where using hourly data leads to significant errors.

Here’s an oddity for you. The good folks at NOAA have averaged the 4,320 samples per day or whatever they are taking into 5-minute segments. In essence, this has filtered out any frequencies with periods shorter than 5 minutes.

Of course, this means that the highest frequency remaining in the digital signal is 288 samples per day … and Nyquist says we have to sample at twice that frequency.

But unfortunately, not having access to the analog signal, we can’t do that … which strictly speaking means that even the 288 samples per day is below the Nyquist limit. To which I can only say … so what? What practical difference does that make?

So that’s our first disagreement.

Second, my blood is still angrified by this exchange:

Willis said:

“As I said above, Nyquist does NOT mean that you have to sample at 2X the highest frequency in your data, just at 2X the highest frequency of interest. ”

Willis – this is fundamental signal analysis 101, first day of class mistake you are making here. I suggest you go read up on this before misinforming people.

Then you followed that insult up by saying that well, no, we DON’T have to sample at 2X the highest frequency in the data, we can sample far below that, 288 samples per day is just fine, we’ll call that the “practical Nyquist limit” … which is EXACTLY WHAT I HAD SAID and had been severely dissed for saying.

I’ve given you a couple of opportunities to apologize for that piece of ugly paternalistic nastiness, and you’ve shined them on.

So those are our two areas of disagreement. You think that there is something holy about 288 samples per day, and you think you are qualified to talk to people who disagree with you as though they were ignorant children.

So there we are. I’ll take a look at trends and see what I can find. I can tell you right now that there will be no significant difference between hourly data trends and 288 sample trends.

In addition, I strongly suspect that the difference in, say, 30-year trends using min/max versus 288 samples will be very small. We can do that Monte Carlo style, because the USCRN data lets us quantify the min/max error as to mean, skewness, kurtosis, and RMS. So all we have to do is add that error to any long-term set of daily data and convert that into monthly and then 30-year trends. I don’t think the difference will be large.
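
Something like this sketch of the idea (everything in it is a synthetic placeholder; the real run would use the measured USCRN error sample and a real long-term daily record, and would go through monthly means rather than fitting the daily series directly):

import numpy as np

rng = np.random.default_rng(42)

n_days = 30 * 365
daily_err = rng.normal(0.0, 0.5, size=3650)          # stand-in for the measured (min+max)/2 errors, °C
season = 8.0 * np.sin(2*np.pi*np.arange(n_days)/365.25)
trend_true = 0.01 * np.arange(n_days) / 365.25       # 0.1 °C/decade built in
daily = 10.0 + season + trend_true + rng.normal(0, 3, n_days)   # stand-in daily-mean record

def trend_per_decade(x):
    t = np.arange(x.size) / 365.25 / 10.0            # time in decades
    return np.polyfit(t, x, 1)[0]                    # OLS slope, °C/decade

base = trend_per_decade(daily)
perturbed = [trend_per_decade(daily + rng.choice(daily_err, size=n_days, replace=True))
             for _ in range(500)]

print(f"trend of the reference series           : {base:+.4f} °C/decade")
print(f"std. dev. of the perturbed trends       : {np.std(perturbed):.4f} °C/decade")
print(f"mean shift introduced by the added error: {np.mean(perturbed) - base:+.4f} °C/decade")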

As always, thanks for your perseverance,

w.

Reply to  Willis Eschenbach
January 20, 2019 1:36 am

OK, I looked at trends for the full Fairbanks data, 2007-2018. The max+min has an error of about 0.06°C per decade, which is large.

The hourly sampling, on the other hand, has the same trend as the 288-sample data to six decimal places.

More evidence that there is no problem with using hourly data …

w.

Reply to  Willis Eschenbach
January 20, 2019 1:59 am

OK, I looked at the trends in the full Fairbanks USCRN 2007-2018 record. Here are the decadal trends:

2.671°C/decade traditional (max+min)/2

2.6175°C/decade 288 sample

2.6184°C/decade hourly sample

The difference between the traditional and the 288 sample is 0.06°C/decade, a significant amount.

The difference between the hourly and the 288 sample is 0.0009°C/decade, a meaningless amount.

More evidence that hourly data is perfectly adequate …

w.

William Ward
Reply to  Willis Eschenbach
January 20, 2019 9:08 am

Hi Willis,

Thanks for your reply here on the exchange with Clyde. I sent you a post last night (Jan 19, 8:37 PM). That post, I hope, will close more of the gaps in our understanding. It also addresses some or most of your concerns with the way I have treated 288 samples, etc.

I think we are getting close to a harmonious understanding – but I’ll wait for you to reply to last night’s post. I’ll address a few things you said here. And I’ll have a reply to Clyde tonight after I return.

Willis said: “Here’s an oddity for you. The good folks at NOAA have averaged the 4,320 samples per day or whatever they are taking into 5-minute segments. In essence, this has filtered out any frequencies with periods shorter than 5 minutes.”

My reply: Willis, I agree with you. What NOAA did is strange. I struggled with how to comment about it in the paper without wasting too many words and distracting people on an already complex subject. As I see it, NOAA’s 4,320 samples averaged down to 288 is actually different from, and superior to, a pure sample rate of 288 from a design perspective. If you have a signal with cycles faster than 144-cycles/day, then 288-samples/day starts to alias it. If that content is large in amplitude, then the aliasing is large. For our air temp signal I think we agree that this is not the case, so I’m not trying to assert that. When NOAA averages, they lose the individual samples, but the information is still there in the average. While individual frequency components are lost in the averaging, the energy is there, and the mean calculated with 4,320 and with 4,320 averaged to 288 should be the same except for rounding. About using 4,320: this rate is still very slow for converters. It might be done to make the anti-aliasing filter easier to implement (lower cost, smaller components), and it won’t introduce as much phase shift or pass-band ripple. It is also possible that the averaging down is just to not have to deal with so much data. Not that it would be a lot of data by today’s standards.

Willis said: “Of course, this means that the highest frequency remaining in the digital signal is 288 samples per day … and Nyquist says we have to sample at twice that frequency.”

My reply: See my post from last night. Selecting the sample rate is to not alias any potential content in the signal that you want to keep. Anti-aliasing filters are aligned to this.

Willis said: “But unfortunately, not having access to the analog signal, we can’t do that … which strictly speaking means that even the 288 samples per day is below the Nyquist limit. To which I can only say … so what? What practical difference does that make?

So that’s our first disagreement.”

My reply: OK, I think I see your point. NOAA themselves have chosen to throw away data in their averaging. Well, once you sample properly, you can discard data digitally if it is done correctly. That isn’t aliasing. They lose frequency components, but the energy should be retained in the average, if I’m thinking clearly.

Regarding that entire tangle that started when you said something like “we can sample 2x the frequency of interest”: there were a few others, primarily Nick, who said it is okay to alias because we are only interested in the long-term trends. Others said it was not even possible to alias. If the aliased content is very small, then he is right that you might be able to get away with a violation. But if not, then it creates problems. Either way he was promoting an idea that violates the most basic requirements of sampling. When you said to sample the frequency of interest, I heard that as echoing what Nick said. My entire case depended upon people – other people reading our exchanges – taking in the concept of proper sampling. When 2 of the most respected people on the forum started to derail the most basic concept, I felt I had to speak strongly to counter that. There was no disrespect intended. Note I never went into personal attacks or countered any personal attacks on me. I did apologize to you and you accepted. I think it was after the incident you mentioned.

Now more on this “frequency of interest”. This detail matters. As an engineer, when designing the system, you can arbitrarily set the Nyquist limit. If you, or the climate scientist who sets the specifications, tells you that only frequencies below a certain point are valuable for the research and frequencies above this are noise, then that can guide the design. Should guide the design. Just like the audio examples we talked about for CD audio with sampling at 44.1 ksps.

You said that I thought there was something “holy” about 288-samples/day. No, I just think your perception of what I intended is not correct. As explained in my post last night, we are focusing on 2 different things. An engineer sets guard bands and captures all of the content, even if the frequency components are not commonly experienced. I acknowledge in that post that you appear to have provided convincing analysis that 24-samples/day captures most of the signals. As an engineer I’d like to capture all of the content I saw. So if more analysis showed that 72-samples/day gave a sufficient guard band, then I’d go with that. I thought the more conservative approach was to align with NOAA’s 288 – assuming they actually did research to come up with their system. And overkill in this instance is not a bad idea. Also, my real point in the entire paper was to get people to see that 2-samples/day, whether regularly timed or max/min, is inferior to a higher sample rate. I didn’t write a paper to get people to bow to 288-samples/day (said with a smile).

I’m eager to hear your thoughts after reading my reply from last night. I think we are getting close to agreeing – or at least a much more harmonious disagreement.

Reply to  Willis Eschenbach
January 20, 2019 12:05 pm

William, upon re-reading what I wrote I realized I owe you an explanation.

I have nothing but my reputation. I have no diploma in science. I took a total of two science classes in college—chem 101 and physics 101. I am 100% self-taught. I have nothing but my thousands and thousands of hours of study, my interesting ideas, my unquenchable honesty, my honor, and my reputation for admitting my mistakes when I make them as we all do. Plus half a dozen papers published in the peer-reviewed journals and over a hundred citations to those papers.

As a result, people think that they can take free shots at me. One such person was one of my scientific heroes, Dr. Roy Spencer. One day somebody must have pissed in his oatmeal, and he up and wrote a particularly ugly and untrue post attacking me. He falsely claimed that I was taking credit for another man’s discoveries, which was a damned lie.

I wrote a post in reply, explaining exactly where he was wrong … but the damage was done, he didn’t have the hair to apologize, and as a result, to this day I get fools and idiots telling me “Oh, we don’t have to believe a word you say, Dr. Spencer said so!”

Gotta say … my respect for Dr. Spencer took a dive that day …

That is why your ugly and untrue attack on me was so disturbing. And now, unless you finally get the balls to apologize, fools and idiots will no doubt say “Oh, Willis, we don’t have to believe you about signal analysis, William Ward said so!”

And that is why I have asked for a clear apology, to keep fools and idiots from believing you. You claimed I was foolish and ignorant to say that the practical Nyquist limit was NOT twice the highest frequency in temperature signals, and then shortly afterward you said the very same thing I’d said, that in fact 288 samples per day or fewer is the practical Nyquist limit for temperature signals.

Somebody showed that they were foolish and ignorant in that exchange, but it sure as hell wasn’t me.

Respectfully,

w.

Clyde Spencer
Reply to  Willis Eschenbach
January 20, 2019 1:25 pm

Willis
There are a couple of particularly obnoxious alarmist trolls that I have been exchanging comments with on Yahoo. They seem to think so highly of themselves that they behave as though they have a license to insult. I wouldn’t be surprised to discover that they have been banned from WUWT for their behavior. One of them – it doesn’t really matter which, because they use pseudonyms (and I suspect it is really one person with different personas) – had provided me, unsolicited, with what they thought was an accurate representation of your background after I had linked to one of your articles. I never bothered to try to confirm it because it didn’t matter to me. You are someone who has demonstrated numerous times that you are able to think outside the box, are facile with acquiring and processing data in an understandable way, and have made contributions to understanding climatology. As a colleague remarked to me once, a ‘sheepskin’ opens doors to certain jobs, but doesn’t guarantee that you can figure out when to come in out of the rain. You have no need to apologize for the lack of a degree. Now, having said that, as I have told Mosher, self-educated people often have gaps in their knowledge base that they are not even aware of. So, you do need to exercise some humility when it is possible that you are making claims that are outside your area of expertise. But that applies to everyone! I’ve worked with FFTs for years, but I don’t consider myself an expert in the subject.

Reply to  Willis Eschenbach
January 20, 2019 12:31 pm

Willis says (of Roy Spencer): “He falsely claimed that I was taking credit for another man’s discoveries”

He did not do that.

He pointed out that you were “re-inventing the wheel,” due to the fact that you did not research prior work.

Reply to  Willis Eschenbach
January 20, 2019 12:42 pm

Spencer’s exact words were: “But don’t assume you have anything new unless you first do some searching of the literature on the subject.”

http://www.drroyspencer.com/2013/10/citizen-scientist-willis-and-the-cloud-radiative-effect/

William Ward
Reply to  Willis Eschenbach
January 20, 2019 10:18 pm

Willis,

This is in reply to your post from Jan 20 at 12:05 PM.

I thought we were getting closer to agreeing on the technical issues, but I don’t know yet because the personal issues are in the way. I would like to resolve this, so I think that means we will have to discuss our personal processes a bit. You have already done that quite a bit and told me much about your thoughts about me. Now, I will share with you some of my thoughts about you.

First, I think it takes a lot of courage to reveal something that has caused you emotional pain. I don’t know the details about your interaction with Dr. Roy Spencer, but I understand it has marked you. I’m not taking sides in that – just acknowledging the impact the interaction had. Revealing that to someone (me) whom you think has been or is hostile to you takes even more courage. It was some time ago that I first started reading your posts on WUWT. You immediately stood out to me because of your insights and analytical capabilities. I was quite impressed, and I thought: “I’d like to meet that guy”. I started to wonder about your background because your ability to whip out all kinds of analysis was a real stand-out. I remember Googling your name and I did come across one of those (despicable) websites that catalogues all of the “deniers”. You were in there and there were criticisms of your work and your education. I remember reading that you were self-educated. It was written as if it were some kind of deficiency. I thought that if it was true that you were self-educated, with all of the analytical capabilities you demonstrated on WUWT, then my estimation of you just jumped up an order of magnitude. I know how hard it is to get through an engineering curriculum. But most do this at a young age, with the financial assistance of their parents – so they are not working, or not working much. They can focus on their studies. They have the benefit of being forced to be disciplined lest they fail out. They have the benefit of fellow students to bounce ideas off of, and TAs and professors and lab assistants to help them learn this crazy difficult stuff. Then they get out in the working world and have more senior co-workers to guide them along in their development. There is also the incentive of having to succeed or fail in your career. Going through this is difficult, but success is common. In contrast, I doubt 98% of people who succeed on this path could have “self-educated” themselves to the extent you have. Once I had all the tools of engineering at my disposal, and having been forced to “learn to learn”, I too now self-educate. I taught myself audio engineering and started a successful audio engineering and mastering company. I also started a record company – in parallel with working in the corporate world. 10 years ago, I got into building and renovating properties and got my general contractor’s license. I retired from the corporate world at a young age to run my businesses, and part of that is home building and real estate investing, along with the audio work. So, I appreciate someone who self-educates. I do not think I could have done what you have without first making it through the system. What you have done is something to be proud of, and I would not let the lack of a diploma have any meaning to you except the great satisfaction of your independence, self-reliance, tenacity and natural capabilities.

While “watching “you on WUWT I also noticed that you moved fast, thought fast, were quick to judge/evaluate, quick to share your opinions and your opinions (even if data based) were strong and sometimes overbearing. I’d also like to add quick to confront and not bashful about being blunt. These are not necessarily “bad” qualities, but ones that invite in-kind communication. Now, imagine for a minute my thoughts and feelings after putting together a well thought out and thoroughly reviewed paper based upon my career long experience/expertise, to have someone dash off a quick analysis and then publicly proclaim with CAPS that the most fundamental issue in the paper – (a fundamental concept that is “101” to the discipline) is not correct. The line was: “I stand by that. As I said above, Nyquist does NOT mean that you have to sample at 2X the highest frequency in your data, just at 2X the highest frequency of interest.” It just wasn’t any person saying this. It was the guy whom I admired for his tremendous analytical skills (you). The point of contention was very basic – definition of Nyquist theorem. So maybe you were telling me I was the idiot. 35 years of work, put hundreds of hours into a paper and I can’t even get the 101 right. Actually, I didn’t go to that place in my mind, but it did seem to threaten to shut down the progress of communication and exploration around the subject. If I were a timid person and didn’t push back, I think a lot of readers might have dismissed the concept of Nyquist. After all, Willis proclaimed it wrong. There is a lot of good stuff on WUWT. It would have been easy for readers to move on. I had plans to engage around this topic and your far too fast proclamation threatened that goal. I’m not a timid person so I pushed back. I did say that the error you were making was “101”, but I needed a counter force to equal your style of fast judgement. To make you pause and consider maybe the person who wrote this is qualified and knowledgeable too – maybe you should slow roll the conclusions. Maybe I overestimated the force needed. In hindsight I should have said something like: “Wait a minute Willis. The definition of Nyquist is pretty fundamental – we can read a text book together and get the definition. When you say you don’t need to sample 2x the frequency content and that you only need to sample 2x the highest frequency of interest, what do you mean by that? Can we explore and discuss this?” I didn’t. I missed an opportunity to do it much better. I expect I’ll do better next time. But it is almost ironic, Willis. What you did with your assessment of my work is very parallel to what Dr. Spencer did to you. However, seeing how you respond when angry and hurt, I’ll bet your response to him was stronger than mine was to you.

I don’t agree with your assessment that my statement was an “ugly attack”. I was saying you were fundamentally wrong on the issue. I was not therefore dismissing all of your other great qualities. I was not dismissing you as a person. I think your fears that your background (lack of diploma) will haunt you are unfounded, but I understand the emotions are real. My advice is to just tell anyone who downs you for lack of diploma to effe themselves. Now, you tried to shame me into apologizing to you: “And now, unless you finally get the balls to apologize…”. I don’t need to be shamed into doing what is right. I see you are upset so I’m overlooking a lot of behavior toward me that I think is worse than what I did in quality and quantity. I can’t apologize for making an “ugly attack” – because that isn’t what I did – not something I believe I did and not what I had in my mind or heart. But I see how upset you are – and since my words did that, I’m remorseful. Willis, I’m sorry that how I spoke to you made you feel bad about yourself and mad at me. If I could do it over I would. For you, for me and for everyone else reading.

I’m not a fan of how you can be when you are hurt and angry but I’m a fan of your capabilities and analysis. However, I recommend you think about slowing down your snap judgements and allow more time and space for contrary opinions. The speed and intensity you use can trample others at times – but even that doesn’t invalidate you or your capabilities. I just thought it might be appropriate for me to share my thoughts on this.

I really want to resolve this with you Willis because I don’t think there is any reason for either of us to harbor this. I’d also like to see how much honest harmony of understanding we can get on the technical subject. I value your input and your capabilities. Maybe all I said here wasn’t what you expected or wanted but it is the most honest and benevolent response I can offer.

Reply to  Willis Eschenbach
January 20, 2019 11:48 pm

William Ward January 20, 2019 at 10:18 pm

Now, imagine for a minute my thoughts and feelings after putting together a well thought out and thoroughly reviewed paper based upon my career long experience/expertise, to have someone dash off a quick analysis and then publicly proclaim with CAPS that the most fundamental issue in the paper – (a fundamental concept that is “101” to the discipline) is not correct. The line was: “I stand by that. As I said above, Nyquist does NOT mean that you have to sample at 2X the highest frequency in your data, just at 2X the highest frequency of interest.” It just wasn’t any person saying this. It was the guy whom I admired for his tremendous analytical skills (you). The point of contention was very basic – definition of Nyquist theorem. So maybe you were telling me I was the idiot. 35 years of work, put hundreds of hours into a paper and I can’t even get the 101 right. Actually, I didn’t go to that place in my mind, but it did seem to threaten to shut down the progress of communication and exploration around the subject. If I were a timid person and didn’t push back, I think a lot of readers might have dismissed the concept of Nyquist. After all, Willis proclaimed it wrong. There is a lot of good stuff on WUWT. It would have been easy for readers to move on. I had plans to engage around this topic and your far too fast proclamation threatened that goal. I’m not a timid person so I pushed back. I did say that the error you were making was “101”, but I needed a counter force to equal your style of fast judgement.

William, first, thanks for your explanation. Here’s the part I don’t get.

You started out by saying that we have to sample at twice the highest frequency in the temperature signal. This, as I pointed out, is a frequency with a period on the order of a second or fractions of a second. I said no, there was no reason to sample at that high a frequency, that we can sample at a lower frequency because we’re not interested in those high frequencies.

Then, after insulting me instead of just saying I was wrong, you went on to say well, no, we don’t really have to sample at milliseconds to be at twice the highest frequency, we can sample at a far lower frequency. You say we can sample at or above what you call the “practical Nyquist limit” of 288 cycles per day, which is far, far below 2X the highest frequency in the temperature signal. Here’s the discouraging part.

THAT IS EXACTLY WHAT I SAID, AND WHAT YOU CLAIMED WAS A FOOLISH NEWBIE ERROR!

Not only that, but I’ve shown that the error from sampling at a twelfth of that “practical Nyquist frequency” is trivially small—mean error on daily averages on the order of 0.005°C, RMS error of 0.05°C, maximum error 0.25°C. And obviously, since the errors are symmetrical, the error on the monthly averages is smaller than that.

So no, William, I fear I’m not buying your explanation. If your initial claim were true, you would still be saying that we need to sample on the order of seconds. But you’re not. You have agreed with me that sampling at 288 samples per day, or perhaps even hourly, is entirely adequate and satisfies your “practical Nyquist limit”.

And what is this “practical Nyquist limit” based on? For example, could we use your practical Nyquist limit of 288 samples per day if we were interested in the one-minute fluctuations in the signal?

Of course not. Those frequencies are way above the practical Nyquist limit.

But 288 cycles per day is above the frequencies of interest, which are almost exclusively the daily averages that are turned into monthly averages and long term trends.

Now, let’s recall, this is YOU saying we don’t need to sample at twice the highest frequency in the signal. Instead, you are saying it is OK to sample at something well below the highest frequencies but above the frequencies of interest … and in that context, please consider my statement:

“I stand by that. As I said above, Nyquist does NOT mean that you have to sample at 2X the highest frequency in your data …”

That’s what really frosted my banana. After dissing me for making a claim that we don’t have to sample temperature data at a frequency of milliseconds, YOU say we don’t have to sample temperature data at a frequency of milliseconds.

And your diss? That was the final straw. It wasn’t that I was wrong. That would have been fine. It was not even an emphatic statement that I was wrong. That would have been fine too—I’ve been wrong many times. Heck, I’ve even got a whole post called “Wrong Again”, posted because I was indeed wrong … and believe me, that’s not easy to admit in public. But it’s something I’m fanatical about—if someone can show I’m wrong, I will admit it with no hedging.

No, your insult was that I was wrong because I’ve never taken a college course in signal analysis … which makes you no better than the other jerks out there who think that my lack of a formal education is an ironclad mystical guarantee that I can be safely ignored.

So no, William, you are not the good guy in this. Yes, I probably over-reacted; but I’m really, really tired of the long line of pricks who claim that everyone can ignore me because I haven’t taken a college class in their favorite subject. And despite your good intentions and your good nature, it turned out that you are just another in that long line. As soon as the dispute started, you reached for that ever-present and well-worn personal attack, which is always the same bogus claim—that “Willis didn’t study this in college so all of you can and should ignore him totally.”

Can you understand now why I reacted as I did?

Look, William, I do respect your knowledge, as I respect that of all people with a deep and thorough understanding of their chosen subject. But when you started out, in your very first response to me, by ragging on my lack of formal education, I fear my respect for you as a person took a huge hit … and it is only my respect for your knowledge and my sense that your social skills aren’t that sharp that has kept me in the discussion.

As I said several times, it does seem that you really don’t understand the effect of your words, which in part is why I’ve stayed in the discussion. I don’t think you set out to join the aforementioned long line of pricks, and I don’t think you even realized you joined them … but join them you did, and emphatically so …

Now as I said before, I’m willing to reset and go forwards. I don’t like bearing grudges. And as you said, I don’t think our remaining disagreements are large.

But before we dive back into the science, I did want you to understand very clearly what your words look like and what effect your words have from this side of the silver screen …

And with that out of the way, returning to the science I still have not found any USHCN sites which have a significant difference between hourly samples and 288-samples, either in the mean results or in the trends.

And I still have not found any sites where there is any kind of significant aliasing of higher frequencies into the hourly samples. Yes, there is aliasing into the two-hour samples. But as I demonstrated above, I haven’t found any in the hourly samples.

So I’ll ask again—do you have any actual examples where using hourly results gives significantly different answers from 288 samples per day, and if so, what and where are they? I’m happy to be proven wrong, but it takes facts to do that, not accusations about my well-known lack of formal education.

My best regards to you, and I do regret that we got off on the wrong foot,

w.

William Ward
Reply to  Willis Eschenbach
January 22, 2019 8:43 pm

Hi Willis,

You ended your post with “Your Friend”. Well alright! Thanks Willis.

Willis said: “All the data that I’ve looked at give a mean daily error on the order of five-thousandths of a degree; an RMS daily error on the order of five-hundredths of a degree; and a maximum daily error on the order of ± a quarter of a degree. Together these add up to a trend error on the order of a few thousandths of a degree per decade. None of these are significant in the field of climate science.”

My reply: Can you clarify what you are comparing here? Is it 24-samples/day vs. 288-samples/day? Or is it one of those vs. max/min? I’m assuming the former, but please clarify so I can respond to the correct concept. I’m not hung up on the difference between 288 and 24. We have shown error between 288 and max/min. Paramenter has provided some good information in addition to mine. If 24-samples/day produces essentially the same result as 288-samples/day, this doesn’t really change the core message. I think we are in agreement.

Willis said: “I don’t see the “oscillation” that you mention so perhaps I don’t understand what you are referring to.”

My reply: Look at my Fig 2 and read the chart from the bottom up (increasing sample rate). If you were to plot the error vs. sample rate, it would decrease from 0.7 or 0.8 to 0.1, cross over zero to -0.1 and then come back up to 0. I didn’t plot other rates, so we don’t know what it does other than the ones I show for that example. Not exactly an oscillation, but a convergence with ripple. The error changes sign. Your analysis is RMS. Can you explain how you account for the sign of the error in your analysis?
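To make the signed-vs-RMS distinction concrete, here is a minimal sketch in Python (purely synthetic diurnal data, illustrative only, not the analysis from my paper) of mean signed, RMS and maximum daily-mean error against a 288-sample/day reference:

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(288) / 288.0                                   # one day in 5-minute steps
days = []
for _ in range(30):
    diurnal = 10 * np.sin(2 * np.pi * (t - 0.25))            # fundamental
    diurnal += 3 * np.sin(4 * np.pi * t + rng.uniform(0, 2 * np.pi))   # 2nd harmonic, random phase
    diurnal += rng.normal(0, 0.5, 288)                       # "weather" noise
    days.append(diurnal)
days = np.array(days)                                        # shape (30, 288)

ref = days.mean(axis=1)                                      # 288-sample/day daily means
for n in (2, 24, 72):                                        # subsampled rates, samples/day
    sub = days[:, :: 288 // n].mean(axis=1)
    err = sub - ref                                          # signed daily error
    print(f"{n:3d}/day  mean {err.mean():+.3f}  RMS {np.sqrt((err ** 2).mean()):.3f}  "
          f"max {np.abs(err).max():.3f}")

A signed mean shows the cancellation between positive and negative daily errors that an RMS figure hides.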

Willis said: “A much more important question is: what can we do about the errors that using min-max has created in the past?” And: “So let me invite you to consider that question, of how we might minimize the errors of the traditional method ex post, as a much more important puzzle than the exact reason that we get errors from the traditional method. I’d be very happy to hear your thoughts, particularly on removing the aliasing …”

My reply: An admirable goal! I wish I had a more optimistic reply to match the good intention of your goal. There are plenty of texts you can refer to. I found this brief paper to be convenient:

http://www.dataphysics.com/downloads/technical/Effects-of-Sampling-and-Aliasing-on-the-Conversion-by-R.Welaratna.pdf

Quoting the paper: “Aliasing is irreversible. There is no way to examine the samples and determine which content to ignore because it came from aliased high frequencies. Aliasing can only be prevented by attenuating high frequency content before the sampling process…”

Maybe if you study the individual station signals you can come up with some innovative way to reduce the daily mean error generated by max/min for days in those stations. If you can do this successfully for day after day in a station then maybe you are on to something. I’ll think about this some more…

William Ward
Reply to  Willis Eschenbach
January 21, 2019 8:36 pm

Willis,

I’m replying here to your post where you advised me I’m now cataloged under P for Prick. I’m just going to overlook all of that vitriol, not because it has no impact on me, but because my most honest response to you is to see the immense amount of visceral pain and hurt you seem to be in over this issue. I feel sad that you have experienced this in the past and that you carry this scar. Its a very human issue most of us can relate with and therefore easy to have compassion for. I’m sure I’m not going to be able to convince you that I wasn’t thinking at all about your education. Yes, I had read the info on that nasty blog, but 1) it was not forefront in my mind, 2) even if it were I had no confirmation from you that you were self-educated until after you revealed it to me, and 3) as I said before I deeply admire what you have done. I’m not the slightest bit critical of it. I won’t spend any more energy trying to convince you as you are not really open to it at the moment. Maybe these words will mean more in the future. I was simply fighting over the point in the discussion and I was trying to push you back hard to get you to pause and consider your position. As I said previously, there was a better way for me to do it – and I missed that opportunity.

I won’t try to further untangle the mess around “practical” implementation of Nyquist. I have restated and clarified my position at least twice, maybe 3 or 4 times if you include all of the people I talked to about it. I can’t undo the confusion. I can only ask that we try to understand that it was confusion and mutual impatience in communicating. My clarifications are there if you want to take them in. Otherwise, I don’t think I can say anything more that is constructive.

You said: “And with that out of the way, returning to the science I still have not found any USHCN sites which have a significant difference between hourly samples and 288-samples, either in the mean results or in the trends.”

My reply: I didn’t do much of a study between 24-samples/day and 288-samples/day. I focused on 2-samples/day or max/min and 288-samples/day. In my table of Fig 2 I show that 24, 36 and 72-samples per day are +/- 0.1C from 288-samples/day. I have other similar work I did that showed similar results. But my study of this was not exhaustive. It was not my focus, but I see the value of what you added with your analysis. I did say in my paper that 24-samples may give good results, depending upon the spectral content. But again, I went back to my engineering approach and thought while +/-0.1C is “small” to my thinking, 1) it seems to matter to climate science and 2) NOAA used 288 and 288 seemed to allow the engineering requirements to be satisfied. Seeing the error start to oscillate around +/-0.1C for 3 rates before reaching 288 suggested to me that we were converging toward a good design. Also, I was trying to expose people to a problem with the numbers we are “fed” in the narrative of alarmism. I wasn’t being asked to design the next generation system, so I thought going with the “reference” network NOAA came up with probably had some proper thinking that went into it. I accept your analysis about hourly vs 288 – it actually fits with the smaller sample in my analysis, that most days do well with hourly sampling. Again, the phrase “do well” is rather subjective without a specification or definition of that term.

Does this explanation satisfy and resolve our technical differences? I won’t ask about the interpersonal differences. Maybe time and new, more positive interactions will allow that to heal in the future.

I think and hope we can all take away that sampling theory plays a role – significant role in measuring temperature, and climate science seems to overlook/ignore that and there are mean errors and trend errors that result. There are obviously many other errors that factor in. (I provided my list of 12 errors/issues in a post to others.) But I had not really seen violating Nyquist/sampling theory as an issue in the conversation about climate. So I wanted to bring something new to the discussion.

Reply to  William Ward
January 21, 2019 11:56 pm

William Ward January 21, 2019 at 8:36 pm

Willis,

I’m replying here to your post where you advised me I’m now cataloged under P for Prick. I’m just going to overlook all of that vitriol, not because it has no impact on me, but because my most honest response to you is to see the immense amount of visceral pain and hurt you seem to be in over this issue. I feel sad that you have experienced this in the past and that you carry this scar. Its a very human issue most of us can relate with and therefore easy to have compassion for. I’m sure I’m not going to be able to convince you that I wasn’t thinking at all about your education. Yes, I had read the info on that nasty blog, but 1) it was not forefront in my mind, 2) even if it were I had no confirmation from you that you were self-educated until after you revealed it to me, and 3) as I said before I deeply admire what you have done. I’m not the slightest bit critical of it. I won’t spend any more energy trying to convince you as you are not really open to it at the moment. Maybe these words will mean more in the future. I was simply fighting over the point in the discussion and I was trying to push you back hard to get you to pause and consider your position. As I said previously, there was a better way for me to do it – and I missed that opportunity.

William, I absolutely do understand that you didn’t realize that when you accused me of a lack of formal education, you were talking about my lack of formal education. It’s the only reason that we’re still in this discussion.

But the fact remains that you were indeed talking about my lack of formal education … which was the point I was trying to make when I said that you really, really don’t see what your words do.

I won’t try to further untangle the mess around “practical” implementation of Nyquist. I have restated and clarified my position at least twice, maybe 3 or 4 times if you include all of the people I talked to about it. I can’t undo the confusion. I can only ask that we try to understand that it was confusion and mutual impatience in communicating. My clarifications are there if you want to take them in. Otherwise, I don’t think I can say anything more that is constructive.

I do understand that there is a difference between practical implementation of Nyquist and the theoretical implementation of Nyquist. That’s the difference that I was trying to point out when I got shut down by an insult to my education …

You said:

“And with that out of the way, returning to the science I still have not found any USHCN sites which have a significant difference between hourly samples and 288-samples, either in the mean results or in the trends.”

My reply: I didn’t do much of a study between 24-samples/day and 288-samples/day. I focused on 2-samples/day or max/min and 288-samples/day. In my table of Fig 2 I show that 24, 36 and 72-samples per day are +/- 0.1C from 288-samples/day. I have other similar work I did that showed similar results. But my study of this was not exhaustive. It was not my focus, but I see the value of what you added with your analysis. I did say in my paper that 24-samples may give good results, depending upon the spectral content. But again, I went back to my engineering approach and thought while +/-0.1C is “small” to my thinking, 1) it seems to matter to climate science and 2) NOAA used 288 and 288 seemed to allow the engineering requirements to be satisfied.

Thanks for that. Let’s start with the data. All the data that I’ve looked at give a mean daily error on the order of five-thousandths of a degree; an RMS daily error on the order of five-hundredths of a degree; and a maximum daily error on the order of ± a quarter of a degree. Together these add up to a trend error on the order of a few thousandths of a degree per decade.

None of these are significant in the field of climate science.

Seeing the error start to oscillate around +/-0.1C for 3 rates before reaching 288 suggested to me that we were converging toward a good design.

I’m sorry, but I have no idea what this means. My results show the following:

I don’t see the “oscillation” that you mention so perhaps I don’t understand what you are referring to.

Also, I was trying to expose people to a problem with the numbers we are “fed” in the narrative of alarmism. I wasn’t being asked to design the next generation system, so I thought going with the “reference” network NOAA came up with probably had some proper thinking that went into it. I accept your analysis about hourly vs 288 – it actually fits with the smaller sample in my analysis, that most days do well with hourly sampling. Again, the phrase “do well” is rather subjective without a specification or definition of that term.

I have not yet found any hourly data that does not compare very well to the 288-sample data. Nor have I found any of the aliasing in the hourly data that you warned about, although it certainly exists in the lower frequencies.

It’s an important question, because while we have very little 288-sample data, we have much, much more hourly data … and as I’ve said before, I respect your opinion in these matters.

Does this explanation satisfy and resolve our technical differences? I won’t ask about the interpersonal differences. Maybe time and new, more positive interactions will allow that to heal in the future.

My friend, as I’ve said, I don’t think you knew what you were stepping into. I’m not angry with you. I believe that you are a good guy who unknowingly stepped into the wrong long line … and yes, we are very close on the technical questions.

I think and hope we can all take away that sampling theory plays a role – significant role in measuring temperature, and climate science seems to overlook/ignore that and there are mean errors and trend errors that result. There are obviously many other errors that factor in. (I provided my list of 12 errors/issues in a post to others.) But I had not really seen violating Nyquist/sampling theory as an issue in the conversation about climate. So I wanted to bring something new to the discussion.

Given the size of the error from two samples per day, it is obvious that Nyquist plays a role in the genesis of the error. However, that’s only of theoretical importance. A much more important question is: what can we do about the errors that using min-max has created in the past?

I’ve had a couple of insights in that direction. The first is that the (max+min)/2 errors have a strong annual cycle. This offers the possibility of either basing our trends on the months with the least errors, or of subtracting the known error structure from the data. I suspect that the structure of the errors is in part due to the times when the temperature crosses the freezing point of water, which would allow for a more general application. But that’s just a guess at this point.
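Here is a minimal sketch of that “subtract the known error structure” idea (Python/pandas; the column names are assumptions of mine, not anything NOAA publishes). The daily (Tmax+Tmin)/2 error is estimated against the 288-sample mean and its mean annual cycle, taken as a monthly climatology, is removed:

import pandas as pd

def remove_error_annual_cycle(df):
    """Expects hypothetical columns 'date', 'tmax', 'tmin', 'tmean288'."""
    out = df.copy()
    out["date"] = pd.to_datetime(out["date"])
    out["midrange"] = (out["tmax"] + out["tmin"]) / 2.0
    out["err"] = out["midrange"] - out["tmean288"]            # historical-method daily error
    clim = out.groupby(out["date"].dt.month)["err"].transform("mean")   # monthly error climatology
    out["midrange_adj"] = out["midrange"] - clim              # annual error cycle removed
    return out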

The other plan of attack is that as you’ve pointed out, aliasing is a problem when we sample below the Nyquist limit. It seems to me that it might be possible to figure out the structure of the aliasing, at least the part of it due to being below the Nyquist limit and remove it …

So let me invite you to consider that question, of how we might minimize the errors of the traditional method ex post, as a much more important puzzle than the exact reason that we get errors from the traditional method. I’d be very happy to hear your thoughts, particularly on removing the aliasing …

My best to you, and as always, my thanks for your constructive tone.

Your friend,

w.

Clyde Spencer
Reply to  Willis Eschenbach
January 22, 2019 8:27 am

Willis
You said, “… how we might minimize the errors of the traditional method ex post, as a much more important puzzle than the exact reason that we get errors from the traditional method.”

It seems to me that if it is possible to correct ex post, the solution would be easier if we were certain of the “exact reason that we get errors from the traditional method.” Inasmuch as the mid-range value is acknowledged as not being as robust of an estimator of the central tendency as is the mean, I’m not optimistic about the probability of making acceptable adjustments.

Reply to  William Ward
January 22, 2019 11:10 am

Clyde Spencer January 22, 2019 at 8:27 am

Willis
You said, “… how we might minimize the errors of the traditional method ex post, as a much more important puzzle than the exact reason that we get errors from the traditional method.”
 
It seems to me that if it is possible to correct ex post, the solution would be easier if we were certain of the “exact reason that we get errors from the traditional method.” Inasmuch as the mid-range value is acknowledged as not being as robust of an estimator of the central tendency as is the mean, I’m not optimistic about the probability of making acceptable adjustments.

Thanks, Clyde. That assumes that Nyquist errors are not correctible. This is where beginner’s mind comes in. I start with the assumption that things are fixable, even Nyquist errors.

It seems to me that the real question is not the origin of the errors, it is the structure of the errors. In the Redding dataset, for example, the monthly error varies regularly over the course of the year from about 0.2 to 0.8 °C, with a minimum in the summer … and that seems to me to indicate that we could remove at least some of that error.

I also need to run a periodogram of the Redding errors, to see what is happening. That may also suggest some error correction methods.

Finally, my hope is that William or you or some other signal engineer will come up with some lines of attack. I mean, aren’t signal guys supposed to be the ones able to clean up messy, noisy signals? I suspect he (and others) know of methods I haven’t even dreamed of …

Always more to learn …

w.

Clyde Spencer
Reply to  William Ward
January 22, 2019 11:38 am

Willis

You said, “It seems to me that the real question is not the origin of the errors, it is the structure of the errors.”

I have been giving some thought to this and I think that the explanation for the mid-range value differing from the mean is a result of the daily temperatures being skewed or unsymmetrical. It is similar to the usual situation of mean, median, and mode being identical for a symmetrical, normal distribution, but the median and mean shift when there is a long tail on the distribution. Thus, I would speculate that the mid-range and mean temperature would be most similar about the time of the equinoxes (lag?) and would have the greatest difference about the time of the solstices. That is a generalization. Because the daily high temperatures usually occur in the late afternoon in the Summer, the peak insolation (noon) may not be driving the symmetry. Additionally, any cold front will probably distort the temperature distribution at any time of the year. It is the latter problem that leads me to believe that there is too little historical information to correct the mid-range values.

Reply to  William Ward
January 22, 2019 2:33 pm

Thanks, Clyde. For me, the oddity is that there is a trend inherent in the error between the traditional and the true daily means. It’s not clear to me why the error would change over time.

Sadly, after doing more work, I’m slowly coming to your conclusion, which is that we have too little data to determine what’s happening. We can improve the situation by removing the annual cycle of the trends. That will make the data more accurate, but it doesn’t fix the real issue, which is the trend in the error …

Seems to me that the error is related to certain weather conditions, and that if there is more or less of whatever that condition is, we get either more or less trend in the error … but that is handwaving which doesn’t easily translate into mathematical procedures.

I continue the investigation … and will report any findings of interest.

Best regards,

w.

Clyde Spencer
Reply to  Willis Eschenbach
January 22, 2019 4:09 pm

Willis,
You said, “… the oddity is that there is a trend inherent in the error between the traditional and the true daily means. It’s not clear to me why the error would change over time. … We can improve the situation by removing the annual cycle of the trends. That will make the data more accurate, but it doesn’t fix the real issue, which is the trend in the error … Seems to me that the error is related to certain weather conditions, and that if there is more or less of whatever that condition is, we get either more or less trend in the error.”

We know that the minimum temperatures are increasing more rapidly than the maximum temperatures. That is one clue. I have speculated, previously, that the different climate zones are experiencing different rates of temperature increase, which means that results may be sensitive to the selection of stations.

Fundamentally, the difference between the mean and mid-range temperatures is related to the shape of the daily temperature curve. If there is a change in the trend, that would suggest to me that there is an unidentified process shaping the curve, i.e. changing the skewness. That is a second clue. Yes, there is a dearth of high-quality data to unravel the mystery.

However, something that you might want to consider is to treat the daily temperature curve as though it were a distribution of the frequency of temperatures and calculate a pseudo-skewness, as an index to work with, and then see if there is a correlation with the mean/mid-range error.

I’ll be interested in hearing what you come up with.

Reply to  William Ward
January 23, 2019 1:54 am

Willis,
“It seems to me that it might be possible to figure out the structure of the aliasing, at least the part of it due to being below the Nyquist limit and remove it …”
Yes, you can do that. The main contribution is from the aliasing of harmonics of the average diurnal cycle with the sample frequency. So just calculate an average diurnal cycle, get its DFT (hourly sampling will do). If you start from midnight, each aliasing harmonic contributes to the sampled mean just its initial sampled value, which is the cos coefficient of the DFT for that harmonic of the sample frequency.

For example, if you sample twice a day, that will alias with the second harmonic of the average diurnal cycle d(t). Suppose the DFT is
d(t) = a1*cos(w*t) + b1*sin(w*t) + a2*cos(2*w*t) + …, where w is the angular diurnal frequency.
It also aliases with the 4th, 6th etc. harmonics, so the error due to aliasing is
a2 + a4 + a6 + …, which is close to a2.
If you sample 3x per day, then the error is a3 + a6 + a9 + …

I can produce more details, numbers etc.
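Here is a minimal sketch of the recipe (Python; the diurnal cycle below is synthetic and purely illustrative, so substitute a station’s average hourly diurnal cycle):

import numpy as np

hours = np.arange(24)
d = (15
     + 8.0 * np.cos(2 * np.pi * (hours - 15) / 24)     # fundamental
     + 1.5 * np.cos(4 * np.pi * (hours - 14) / 24)     # 2nd harmonic
     + 0.7 * np.cos(6 * np.pi * (hours - 20) / 24))    # 3rd harmonic

F = np.fft.rfft(d)
N = len(d)
a = np.zeros(N // 2 + 1)
a[1:-1] = 2 * F[1:-1].real / N          # cosine coefficients a1..a11
a[-1] = F[-1].real / N                  # Nyquist term (k = 12)

true_mean = d.mean()
for n in (2, 3, 4, 6, 12):              # samples per day, starting at midnight
    predicted = sum(a[k] for k in range(n, N // 2 + 1, n))   # a_n + a_2n + ...
    actual = d[:: N // n].mean() - true_mean
    print(f"{n} samples/day: predicted alias error {predicted:+.3f}, actual {actual:+.3f}")

For equally spaced samples starting at midnight, only the harmonics at multiples of the sampling rate contribute to the sampled mean, and each contributes exactly its cosine coefficient, so the predicted and actual errors agree.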

Reply to  William Ward
January 23, 2019 11:27 am

Nick Stokes January 23, 2019 at 1:54 am

Willis,

“It seems to me that it might be possible to figure out the structure of the aliasing, at least the part of it due to being below the Nyquist limit and remove it …”

Yes, you can do that. The main contribution is from the aliasing of harmonics of the average diurnal cycle with the sample frequency. So just calculate an average diurnal cycle, get its DFT (hourly sampling will do). If you start from midnight, each aliasing harmonic contributes to the sampled mean just its initial sampled value, which is the cos coefficient of the DFT for that harmonic of the sample frequency.

For example, if you sample twice a day, that will alias with the second harmonic of the average diurnal cycle d(t). Suppose the DFT is
d(t) = a1*cos(w*t) + b1*sin(w*t) + a2*cos(2*w*t) + …, where w is the angular diurnal frequency.
It also aliases with the 4th, 6th etc. harmonics, so the error due to aliasing is
a2 + a4 + a6 + …, which is close to a2.
If you sample 3x per day, then the error is a3 + a6 + a9 + …

I can produce more details, numbers etc.

Thanks, Nick, I knew someone would have an answer.

I also like Clyde’s idea of calculating the skewness of the daily temperature data and using that to unravel the Gordian knot …

w.

Clyde Spencer
Reply to  Willis Eschenbach
January 23, 2019 12:49 pm

Willis,
You said, “I also like Clyde’s idea of calculating the skewness of the daily temperature data and using that to unravel the Gordian knot …” I lost some sleep last night thinking about that off the top of my head remark. Because it has been decades since I took a statistics class, I had forgotten the details for calculating skewness. So, I did some background reading today to refresh my memory. While the calculation is rather straight forward, it involves cubing the difference between the mean and sample temperature and summing all the cubed differences. It appears that there is an issue of stability or robustness of the calculation, perhaps related to the cubing. In particular, at least one article suggested that this is a good example of needing to rely on the Law of Large Numbers to get a reliable estimate. That is, a sample of at least a couple thousand was necessary to get close to the result derived from 5,000 samples. What’s worse, it isn’t just a matter of asymptotically approaching the true value, but the numbers oscillate above and below the true value as the number of samples increases. The advice given was not to rely on calculations of skewness (or kurtosis) for samples under a few thousand and to instead rely on a histogram for a subjective estimate of the skewness. I’m not happy with a subjective estimate, nor am I happy with having to bin the temperature data to prepare a histogram because it reduces the accuracy and precision. Therefore, I’m thinking about a different approach to model the skewness using the shift in the mean compared to the median or mode obtained with binning. I’m going to have to think about this some more. I’ll get back to you if I think I have found a workable solution.

Reply to  Clyde Spencer
January 23, 2019 1:20 pm

Thanks, Clyde. The other problem with skewness is that you need all of the data to calculate it. My method calculates the relationship between the trend error and the max and min trends. Once that is calculated, I would think it could be applied to “nearby” stations, or in particular earlier records of the same station, without needing all of the data to calculate it.

However, it’s likely worth taking a look at skewness, or at a minimum the median/mean difference, to try to at least understand what is going on.

w.

Clyde Spencer
Reply to  Willis Eschenbach
January 23, 2019 4:44 pm

Willis
You said, “The other problem with skewness is that you need all of the data to calculate it.” I’m assuming that stations with high temporal resolution can be used to establish a relationship between the mid-range values and the true mean, based on the assumption that it is asymmetry in the daily time series that is responsible for the mid-range value being different from the mean. Then, that might be used to correct the mid-range values, IF we can come up with another descriptor or predictor of the skewness, such as the season.

Yet another problem is that while the skewness might be related to the seasons, I can imagine situations where unusual weather might give an unusual skewness. At this point (I’m basically thinking while typing) perhaps multiple regression of all the available meteorological data might reveal some single or combined correlations. The essence of the problem is that two temperatures per day isn’t a lot to work with, but typically other meteorological data such as humidity, wind speed and direction, and cloudiness might be helpful.

Reply to  William Ward
January 23, 2019 2:35 pm

OK, Clyde, I looked at skewness, and it’s ugly …

I generated 100,000 data points with a Poisson distribution. Skewness is 0.43, as you might expect.

From that Poisson data, I selected 288 data points at random and got the skewness. I repeated that 1,000 times. Here’s the summary of the results:

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   0.02    0.32    0.41    0.43    0.52    0.94 

The answers go from 0.02 to 0.94. The interquartile range is 0.32 to 0.52 … ooogh. Ugly.
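For anyone who wants to play with it, here is a minimal sketch of the experiment (Python; the Poisson mean used here is illustrative only, since any lambda near 5 gives a population skewness close to 0.43, the skewness of Poisson(lambda) being 1/sqrt(lambda)):

import numpy as np

def skewness(x):
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

rng = np.random.default_rng(42)
pop = rng.poisson(lam=5.4, size=100_000)          # illustrative lambda

draws = np.array([skewness(rng.choice(pop, size=288, replace=False))
                  for _ in range(1000)])
q = np.percentile(draws, [0, 25, 50, 75, 100])
print(f"population skewness: {skewness(pop):.2f}")
print(f"288-sample skewness: min {q[0]:.2f}  1stQu {q[1]:.2f}  median {q[2]:.2f}  "
      f"mean {draws.mean():.2f}  3rdQu {q[3]:.2f}  max {q[4]:.2f}")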

Well, it was fun while it lasted.

w.

Clyde Spencer
Reply to  Willis Eschenbach
January 23, 2019 4:48 pm

Willis
You have confirmed what I read in the online article today — conventional skewness is a poster child for the utility of the Law of Large Numbers. One needs much more than 288 samples; at least an order of magnitude more!

Clyde Spencer
Reply to  William Ward
January 24, 2019 8:14 am

Willis
You said, “OK, Clyde, I looked at skewness, and it’s ugly …” Inasmuch as it appears that the conventional calculation of skewness (third moment) is not robust or reliable, I’ve been thinking about an alternative metric.

For a symmetric frequency distribution, all the measures of central tendency are coincident. As a tail begins to stretch out, the mean moves away from the mode in the direction of the stretching. So, I propose a different metric for skewness. Find the difference between the mode and mean for a given time series, where the sign indicates which tail is skewed. To adjust for different means, divide by the standard deviation to transform the difference into z-scores. I haven’t explored the effect of the standard deviation increasing with the stretching of the tail, but I suspect it will just dampen the rate of change of the z-score. The bottom line is that all three metrics (mean, mode, and SD) are readily available in all statistics packages and even Excel. So, calculating “Spencer’s Skewness index” with high-temporal resolution temperature data (e.g. 288 samples/day) should give you something to plot mid-range values (or mid-range to mean error) against, to see how asymmetry in the temperature data affects the error.
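A minimal sketch of how the index might be computed (Python; estimating the mode from the fullest histogram bin, with an assumed 0.5 °C bin width, is just one illustrative choice):

import numpy as np

def spencer_skewness_index(temps, bin_width=0.5):
    """(mean - mode) / sample standard deviation for one day of samples."""
    temps = np.asarray(temps, dtype=float)
    edges = np.arange(temps.min(), temps.max() + bin_width, bin_width)
    counts, edges = np.histogram(temps, bins=edges)
    i = np.argmax(counts)
    mode = 0.5 * (edges[i] + edges[i + 1])         # center of the fullest bin
    return (temps.mean() - mode) / temps.std(ddof=1)

# e.g. compute spencer_skewness_index(one_day_of_288_samples) for each day and
# plot it against that day's (Tmax+Tmin)/2 minus true-mean error.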

I’d rather see something that measures the asymmetry of the daily temperature curve, but the only thing that comes to mind is to treat the daily temperatures as a frequency histogram in which time is replaced with a dummy value, with Tmax treated as the mode and assigned a value of zero.

Clyde Spencer
Reply to  Willis Eschenbach
January 18, 2019 11:24 am

Willis,
You said, “…but so far all I’ve learned from you is that you are an arrogant man who thinks he knows more than everyone else, and who is unwilling to admit it when he makes an error.” There is an old saying that when you point a finger at someone, there are three fingers pointing back at yourself. I think that your response is out of line. William has been quite the gentleman and doesn’t deserve that level of incivility.

Most of us who post here clearly have a high opinion of ourselves. But, you have basically lowered yourself to the level of an ad hominem attack on William. I think it would be better to just agree to disagree if you can’t provide an argument William is willing to accept. Such remarks do not become you!

You also remarked, “Climate signals have pseudo-cycles that appear and disappear at random, only to be replaced by some other pseudo-cycles.” What you are calling “pseudo-cycles” can be explained as constructive/destructive interference by out-of-phase sinusoids revealed by Fourier decomposition.

A little more civility, please!

William Ward
Reply to  Clyde Spencer
January 18, 2019 4:57 pm

Clyde, thank you for your comments.

Reply to  Clyde Spencer
January 18, 2019 5:51 pm

Clyde, I said quite clearly:

Now, I’m more than happy to discuss this further. And I reckon you’re a good guy … just insufferable at times. But heck … so am I, so we have at least that in common.

So I’m aware of my own faults …

However, when a man comes in and accuses me and everyone here of having closed minds, and claims that we’re ignorant, I’m sorry, but I’m gonna hit back twice as hard. And the sooner William notices that, the sooner he’ll quit that nonsense. It’s not doing him any good.

Finally, you say:

You also remarked,

“Climate signals have pseudo-cycles that appear and disappear at random, only to be replaced by some other pseudo-cycles.”

What you are calling “pseudo-cycles” can be explained as constructive/destructive interference by out-of-phase sinusoids revealed by Fourier decomposition.

I’m sorry, but that assumes that there are underlying sinusoids that are invariant in frequency, phase, and amplitude. If you can demonstrate that, please do. The daily sunspot data might be a good place to start. Until you or someone can do that, I’ll continue to call them “pseudo-cycles”.

Best regards,

w.

Clyde Spencer
Reply to  Willis Eschenbach
January 19, 2019 12:28 pm

Willis
It is entirely conceivable that there are forcings that come and go. That results in actual, real-world periodicities that are only transient.

However, it is my understanding that the utility of the Fourier Transform is that ANY varying signal can be represented by a decomposition into sinusoids of different amplitudes and phases. Doing the decomposition can provide insights on the frequency of what you are calling “pseudo-cycles.” From that, one might conclude what the nature is of the actual forcings. However, to do that, one might need to at least meet the Nyquist Criteria for sampling, which might be as high as every few minutes. Therefore, as Ward has been suggesting, we should probably be acquiring modern data towards the end of being able to faithfully reconstruct the time-series so that analyses above and beyond the temperature question can be addressed in the future. That is, we shouldn’t restrict ourselves so that at sometime in the future we have to invoke the Stokes Lament, “That’s all we have to work with!” What I read Ward as saying is, “Let’s do the best job we can so that we can move forward, and not just accept as adequate that which we have inherited.”

Reply to  Clyde Spencer
January 19, 2019 5:21 pm

Clyde Spencer January 19, 2019 at 12:28 pm

Willis
It is entirely conceivable that there are forcings that come and go. That results in actual, real-world periodicities that are only transient.
 
However, it is my understanding that the utility of the Fourier Transform is that ANY varying signal can be represented by a decomposition into sinusoids of different amplitudes and phases. Doing the decomposition can provide insights on the frequency of what you are calling “pseudo-cycles.”

Thanks, Clyde. If you want some fun, take a long natural signal, say daily sunspots. Do a Fourier decomposition on the first half of the data, and then another on the second half of the data … what you are very likely to find is that each half is made up of very different sinusoids.

Heck, I’ll save you the trouble. Here you go …

As you can see, the issue is not interference patterns from underlying stable cycles. It is that the underlying cycles themselves come and go, appearing, changing in frequency, phase, and amplitude, and then disappearing …

Which is why I call them “pseudo-cycles” …

Regards,

w.

Clyde Spencer
Reply to  Clyde Spencer
January 20, 2019 8:47 am

Willis
Thank you for spending the time to generate the periodogram. I’m not surprised by the results. Sunspot numbers and the shape of their envelope are known to vary with time. Taking the beginning half and the ending half effectively gives two different signals with the commonality of an approximately 11-year base signal with unknown influences superimposed. I’d be surprised if they looked identical.

However, a periodogram isn’t quite what I had in mind because it is a summary of the power of the apparent frequencies. Notably, it is missing the phase information of the composite sinusoids.

“The power spectral density, PSD, describes how the power of your signal is distributed over frequency whilst the DFT shows the spectral content of your signal, the amplitude and phase of harmonics in your signal.”
https://dsp.stackexchange.com/questions/24780/power-spectral-density-vs-fft-bin-magnitude

If you take two vibrating tuning forks in proximity, they will generate an apparent third tone that varies in amplitude commonly called a “beat.” What would a periodogram show?
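For a simple linear sum of the two tones, a quick sketch suggests the answer (Python, illustrative frequencies): the periodogram shows the two fork frequencies and nothing at the beat frequency, because the beat is an amplitude envelope rather than a spectral component.

import numpy as np

fs = 8000.0                                   # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)                 # two seconds of signal
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 443 * t)   # two forks; a 3 Hz beat is heard

spec = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(x.size, 1 / fs)
print(np.sort(freqs[np.argsort(spec)[-2:]]))  # -> [440. 443.]; no line at 3 Hz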

Editor
January 18, 2019 12:33 am

William Ward January 17, 2019 at 10:23 pm

Bernie,

You said:

“Since it is clearly not a pure sine wave, but is periodic with a period of 24 hours, it has harmonics. Willis has also shown us here (at January 14, 2019 at 3:49 pm), using his preferred periodogram, that this is mostly fundamental, with a bit of 2nd and 3rd harmonics. That’s about it. So as he says, you would need to sample this at minimum sampling rate greater than 6 times/day (perhaps 24 times/day would satisfy most everyone).”

 
My reply: When sampling occurs at 2-cycles/day, where does the spectral image shift? Answer: up and down by 2 cycles/day. What spectral image components land on (alias to) the trend components (near 0 Hz or 0-cycles/day)? Answer: the near 2-cycle/day components.
 
Image from my Full paper (Fig8): https://imgur.com/DmXCBOt
 
What spectral image components alias to the daily trend (1-cycle/day)? Answer: the 1- and 3-cycle/day components. What did you say above about content at 2nd and 3rd harmonics?? How is it that you keep agreeing with me and then disagreeing with me? Perplexing.
 
Did you look at any FFTs of any signals? How far out in frequency do these signals go? Answer: infinity. But how much of that seems to affect things like mean calculation below 0.1C variability? Answer: 288-samples/day seems to be the rate at which we reach this limit, so that means 144-cycles/day. If you just can’t handle that, then let’s divide that by 4 and we have 36-cycles/day. That would mean we have to sample above 72-cycles/day. Why do you think you can ignore the energy above the 3rd harmonic?
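A quick numeric sketch of that folding (Python, illustrative only):

def aliased_frequency(f, fs):
    """Where a component at f cycles/day lands when sampled at fs samples/day."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)              # folded into [0, fs/2]

for f in (1, 2, 3, 4, 5):
    print(f"a {f}-cycle/day component shows up at {aliased_frequency(f, 2)} cycles/day")
# 2 and 4 cycles/day fold onto 0 (the trend); 1, 3 and 5 cycles/day fold onto 1 cycle/day.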

 
We can ignore the energy above the 3rd harmonic (an 8-hour period) because it doesn’t alias into the hourly data. Absolutely, as you show, sampling at 12 cycles per day (every 2 hours) aliases the signal. Let me demonstrate. Here are six days of temperature data sampled at 288 cycles per day (every 5 minutes).

And here is the periodogram of the data. As you can see, there is little strength at anything faster than about 8 hours (3 cycles per day).

Now, you are 100% right that we get aliasing when we sample at 12 cycles per day (every 2 hours). Here’s that periodogram, with the 12 cycle per day periodogram (red) overlaid over the 288 cycle periodogram (blue).

As you can see, there is a very large aliased cycle at four hours. Ugly.

But let’s look at what happens when we sample 24 times daily, or every hour.

Note that there is very, very little evidence of aliasing in the hourly samples. In fact, the periodogram looks very much like the original periodogram at 288 cycles per day.

Which is another example of why, in a practical sense, I say that hourly data does NOT violate the Nyquist limit …
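Here is a minimal sketch of that comparison (Python; the signal is synthetic and purely illustrative, with a small 9-cycle/day component added only so the folding is visible):

import numpy as np

rng = np.random.default_rng(1)
n_days, per_day = 6, 288
t = np.arange(n_days * per_day) / per_day                     # time in days
x = (10 * np.sin(2 * np.pi * 1 * t)
     + 2 * np.sin(2 * np.pi * 2 * t)
     + 1.5 * np.sin(2 * np.pi * 9 * t)) + rng.normal(0, 0.3, n_days * per_day)

def periodogram(sig, samples_per_day):
    spec = np.abs(np.fft.rfft(sig - sig.mean())) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / samples_per_day)   # cycles/day
    return freqs, spec

for n in (288, 24, 12):                                       # samples per day
    freqs, spec = periodogram(x[:: per_day // n], n)
    strongest = np.sort(freqs[np.argsort(spec)[-3:]])
    print(f"{n:3d}/day  three strongest components (cycles/day): {strongest}")
# 288/day and 24/day both resolve 1, 2 and 9 cycles/day; at 12/day (Nyquist = 6
# cycles/day) the 9-cycle/day component folds to a spurious 3 cycles/day.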

My best to you both,

w.

Bright Red
Reply to  Willis Eschenbach
January 18, 2019 2:28 am

Hi Willis
Over the last few years I have enjoyed and learnt a few things by reading your numerous articles on WUWT. As this topic is of interest I have decided to join in and have some final questions.

Out of interest, could you tell me the maximum error I can expect in the daily mean from 24 samples/day compared to the 4,320 samples/day downsampled to 288 samples/day, for a new site to fill in a gap and for one being upgraded from old-fashioned min/max, if that makes any difference?

Are your periodograms capable of more short-period resolution? And given that the large daily variations dominate, can the Y axis be logarithmic as well? I believe this would add additional insight and allow the results from different sample rates to be better compared.

Best Wishes

Reply to  Bright Red
January 18, 2019 1:39 pm

Hey, Bright, interesting questions. First, nobody can say what a new site will do. It may or may not be where the old site was. It may be near to the old site but in a different enclosure. The old site may have used slightly inaccurate instruments.

Given that, however, the results from the 100,000 or so five-minute samples I’m using as a dataset are:

Average error, daily avgs. based on hourly vs. based on 5-minute samples = -0.001°C

RMS error, daily avgs. based on hourly vs. based on 5-minute samples = 0.048°C

Maximum error, daily avgs. based on hourly vs. based on 5-minute samples = 0.206°C

You can see why I say that hourly data is more than sufficient for almost all purposes.

Next, I’m not sure what you mean by “more short period resolution” for the periodograms. They resolve down to the sample rate …

Finally, no, I’m not interested in a log scale. Theoretically I’m opposed to log scales unless a) there is a valid reason for them and b) they don’t distort the results. When those conditions are met, no problem. For example, I use a log x-scale on my periodograms, so we can see the shorter time scales.

Unlike say FFT results, my scale compares cycles to the range of the original data. This shows visually just how much power there is in each period. If I use a log scale, it will appear visually as though the periods with almost no power have large power … bad idea. Such plots are one of the reasons that folks get confused about things like a solar effect on temperature. They see a microscopic cycle blown up big and say “See!”.

All the best,

w.

Bright Red
Reply to  Willis Eschenbach
January 18, 2019 7:16 pm

Hi Willis,
Thank you very much for taking the time to explain your decisions regarding the periodograms that you produce.

Sorry in advance as I am doing this on a mobile device and for some reason can not cut and paste.
The following is only in relation to the error between the 288/day and 24/day sampling.
Thanks for the demonstration that it is possible to determine the errors by post-processing the data for a site where the 288 samples are available. It seems to me that each site will likely yield a different error, and the same site a different error each day. My comment is that it is not possible to say in advance that for any given site 24 samples/day will always yield a suitably low error that would meet the requirements of an equipment specification that represents current best practice of, say, 288 samples/day.
BTW I fully understand your “for all practical purposes” comment as I spent most of my career doing just that but sometimes you just have to do what is best practice or at the maximum technology will allow for many reasons and not all of them technical. It is always hard to sell something with a lower specification than a competitor that sells a better spec at the same price.

Editor
January 18, 2019 1:14 am

William Ward January 17, 2019 at 6:53 pm

1200 words below – the shortest I could make this explanation.
 
Hey Willis, Bernie,
 
My simple request is to open your mind to what I present here and let’s focus on the following before taking these concepts back to the bigger picture.

Dang, gotta say, now that’s a clever way to open a discussion—claim that the people you are speaking with have closed minds … you really, really don’t see what you are doing, do you?

By definition: if you are working with an analog signal and you take a measurement of it that is a sample. It doesn’t matter how you get that discrete number. Analog signals are continuous and digital signals are comprised of samples. I see a lot of struggle with the definitions.

Seriously? You truly think that I and others don’t know that analog signals are continuous and digital signals are not? Are you not reading what we’re writing?

There has been a lot of talk about strict periodicity. Let’s discuss that further. Nyquist is about the equivalency of 2 domains: analog and digital. Specifically, it is about the equivalency of signals in the analog and digital domain. Nyquist is the bridge. What I think people are losing sight of – or better said: have never gained sight of, is that the digital samples are representative of a signal.

Oh, this just gets better. Now we’re all just innocents who have never gained sight of your brilliant wisdom … look, if you’re gonna insult us, at least have the balls to QUOTE WHAT WE’VE SAID that you are disparaging. And yes, digital samples are representative of a signal, duh … who ever said they weren’t?

Moving on, you say:

Math on signals MUST comply with laws governing signals – if you want that math to apply to the signal. If the samples are obtained through complying with Nyquist, then the samples will represent that signal in the digital domain equivalently to the analog signal in the analog domain. HOWEVER, if those samples were obtained in violation of Nyquist, then the samples DO NOT represent the original signal from the analog domain. They CANNOT.

First you lecture us about the Nyquist limit. Then you invent something you call a “practical Nyquist limit” and say hey, that’s OK, 288 samples per day doesn’t obey Nyquist but it’s close enough. Now you are back again to tell us that your “practical Nyquist” is nonsense because what it gives us CANNOT represent the original signal …

Say what? Make up your mind!

Meanwhile, I’ve shown immediately above that hourly sampling gives a result which is just about identical to 5-minute sampling, so obviously I’m violating your “practical Nyquist limit” by a factor of 12 without bad effects …

And you claim OUR minds are closed?

It appears that you are innocent of the concept called “for all practical purposes”.

I remember my wonderful high school math teacher, Mr. Hedji, explaining to us what “for all practical purposes” meant. He said, “You’ve heard of Zeno’s paradox?” Of course we had, he’d told us about it. Before an object can travel a given distance d, it must travel a distance of d/2. And in order to travel d/2, it must first travel d/4, etc. Since this sequence goes on forever, it therefore appears that the distance d cannot be traveled.

Mr. Hedji said “Suppose we line all the boys up against one wall of the classroom and all the girls on the other side. Every time I ring a bell, they move half the distance toward the middle. Now, of course, as Zeno points out, doing that they can never arrive at the middle of the room.”

“But before long, they’ll be close enough for all practical purposes” …

So yes, in my periodogram above of hourly sampling of a temperature signal, there is still some small residual aliasing at about 2 hours.

But hourly sampling is perfectly adequate for all practical purposes …

w.

William Ward
January 18, 2019 4:51 pm

Willis,

There was nothing in what I wrote that would have warranted that kind of hostile response from you. Asking you to take a look at something with an open mind was not saying or implying that you “have a closed mind”. There was no intent to insult you. It is pretty standard practice to restate fundamental facts when making a case. Stating basic facts to build up a case should not be viewed as an insult. I addressed the post to you – but I write to reach the full range of the audience. I had many people making the claim that discrete values (max and min) are not samples – and they made their case for a few reasons. My reply to you tried to address many of the claims against my paper – even if you specifically didn’t make all of the counterclaims.

You seem to have inferred tone and intent that were not there. I have gone back and re-read my post nearly a day later and I’m quite fine with it. I’m a fan of sarcasm. I have used it in several posts. But I don’t go into hostility with it. The post that triggered you is about as dry as it can get. I had no humor, sarcasm or tone in it at all. I don’t understand your response to it.

Reply to  William Ward
January 18, 2019 6:31 pm

William Ward January 18, 2019 at 4:51 pm

Willis,

There was nothing in what I wrote that would have warranted that kind of hostile response from you. Asking you to take a look at something with an open mind was not saying or inferring that you “have a closed mind”.

Say what? If you ask everyone to stand up, it implies they’re sitting or lying down. If you ask someone to help you, it assumes they are not helping you.

And if you ask people to look at things with an open mind, it assumes that their minds are closed. Otherwise, why would you have to ask them?

As I said before, it seems you really, really don’t know what you sound like from this side of the screen. Patriarchal and condescending are the words that come to mind.

Now, as I said, I can be insufferable as well … but the difference is, I know it, and you don’t seem to.

In any case, I’m happy to re-reset. But please, leave the things that sound like “open your minds to my wisdom” out of your comments, they don’t do you any good.

Here’s the kind of thing that drives a man mad. You’ve claimed that the “practical Nyquist limit” is 288 samples per day … but you haven’t provided a scrap of actual measurements to justify that. You seem to think that your mere word is enough to make it true.

And it’s bizarre, because you started by insisting that “we must sample a signal at a rate that is at least 2x the highest frequency component of the signal.”

I questioned this, saying:

The first problem with this is that climate contains signals at just about every frequency, with periods from milliseconds to megayears. What is the “Nyquist Rate” for such a signal?
 
Actually, the Nyquist theorem states that we must sample a signal at a rate that is at least 2x the highest frequency component OF INTEREST in the signal. For example, if we are only interested in yearly temperature data, there is no need to sample every millisecond.

And your response:

Willis – this is fundamental signal analysis 101, first day of class mistake you are making here. I suggest you go read up on this before misinforming people. I know you have a strong voice here and are respected – as you should be and as I respect you for your many informative posts. For this reason I’m being very direct, and maybe not very diplomatic. I mean no rudeness.

Bad start. Very bad start. When a man says “I mean no rudeness”, I just roll my eyes and go “Yeah, right. If you meant no rudeness, you wouldn’t be rude.”.

But then, having been very rude in your answer, you go on to claim that oops, you really didn’t mean it, and 288 samples per day were actually quite acceptable and were above the “practical Nyquist limit” … whatever that is …

I, on the other hand, have shown that hourly measurements give results with an average error in the daily mean w.r.t. 5-minute measurements of five-thousandths of a degree, with an RMS of the daily errors of five-hundredths of a degree. Those are meaninglessly small differences, so obviously, if 288 samples per day is above the “practical Nyquist limit”, then so are 24 samples per day.

But nooo … you refuse to hear that. You are welded to 288 samples per day, your mind is made up, and you don’t want to be bothered with facts … and if you don’t understand the facts and graphs I’ve presented, you don’t ask me what I meant. You just plow forwards.

In any case, today I finally found some USCRN data that’s not blocked by the government shutdown. I got 13 years of data for the USCRN station nearest to where I grew up, in Redding, California. I calculated the average for each day using first all 288 daily samples, and then using 24 hourly samples. Here are the results:

Year Mean Err RMS Err Max Err Min Err
2006 -0.002     0.051   0.220  -0.216
2007  0.002     0.053   0.156  -0.221
2008  0.005     0.059   0.186  -0.275
2009 -0.003     0.049   0.183  -0.161
2010 -0.003     0.051   0.259  -0.127
2011  0.002     0.051   0.156  -0.141
2012  0.002     0.054   0.177  -0.165
2013  0.004     0.060   0.236  -0.211
2014  0.002     0.059   0.198  -0.177
2015  0.003     0.059   0.198  -0.217
2016  0.004     0.057   0.188  -0.170
2017  0.004     0.052   0.178  -0.156
2018 -0.001     0.051   0.151  -0.182

Some notes. The largest average annual error in 13 years of daily data is five-thousandths of a degree.

The largest annual RMS error of the daily errors is six-hundredths of a degree.

The largest absolute error in 13 years, both positive and negative, is about a quarter of a degree.

Here’s the state of play. According to YOU, we do NOT have to sample a signal at a rate that is at least 2x the highest frequency component of the signal as you first claimed. Instead, you say 288 samples per day is acceptable under the Nyquist criterion for our practical purposes … and I’ve shown that there’s no practical difference between that and hourly sampling.

Finally, I’ve demonstrated that despite strong aliasing when you go from 288 samples per day to 12 samples per day, there is only negligible aliasing when you go from 288 samples per day to 24 samples per day. You keep making claims about aliasing, but I notice that I’m the only one measuring it in the actual data under discussion …

So … will you NOW admit that hourly samples are acceptable with respect to your “practical Nyquist limit”, and that I was correct in my opening statement that you were so rude to me about? To use your words, there was indeed a “first day of class mistake” made, but YOU were the one making it …

Best regards,

w.

Reply to  Willis Eschenbach
January 18, 2019 10:41 pm

Willis –
As the “other guy” who William considered close-minded, in order for outsiders to make a judgment in this regard, they have to know the subject – signal processing in this case. Many here know the basics well, but not the details such as we have here that come with experience. So not so good. But – us being worried about being called uncivil (even if it were true – and it’s not) takes a backseat to resisting those who are distressingly wrong, especially in engineering matters (a hard, logic-based science). We can (with likely little rest-of-world consequences) “agree to disagree” (a meme some favor) if we are voicing opinions of Beethoven vs. Stravinsky. Engineering? Feet to fire please. In my experience, it is the person who comes to see that he/she has lost (demonstrably wrong) who calls for peace. But you know that.
-Bernie


William Ward
Reply to  Bernie Hutchins
January 19, 2019 8:02 am

Bernie,

Asking you and Willis to open your minds to the particular points I was making is not the same as saying you are both closed-minded people. I didn’t intend that and I don’t see why a reasonable person would interpret it that way. And as you said in another post, it is not true that I think I’m the only person who knows anything. I agree with your comments: it is important to resist those who are distressingly wrong. I see your points as being distressingly wrong and obviously you think mine are. So you are making the case that we should keep up the debate. Great, I’m on board. But you can’t have it both ways. I can’t be the arrogant know-it-all for doing so any more than you can.

While I don’t mind the friction, and I support your point about fighting for correct math and science, I also completely support Clyde’s adult reminder to be our best selves while communicating. I would like a redo on a few things I have said, that is for sure! Where possible I have corrected myself, adjusted my tone or overtly apologized. But I have not had a public meltdown and I’m not joining in with any drama. I’m unfazed by the indictments of arrogance. I could easily lob back the counter-indictment, but I won’t. I won’t because it isn’t dignified and I actually don’t feel that way. I see people getting upset and, out of that, acting in an unpleasant manner. But I’m not hung up on that.

Tremendous heat can burn things to ash or it can forge great bonds. I have made good friends in the past that started out with a lot of friction. It doesn’t always end with the best of those possibilities, but I’ll do my part to aim for a positive outcome, even if we knock some things over along the way.

Reply to  William Ward
January 19, 2019 8:18 pm

William Ward at January 17, 2019 at 8:12 pm said in part to Willis:

“ You think I’m being arrogant. Well, I can understand why you say that. I don’t think I’m better than anyone. So, I would not call that arrogant. I do think I know a lot more about signal analysis than anyone who has spoken out against what I have presented. “

Are we to assume (1) that this sort of comment is in the past and (2) that you won’t take offense at those of us who are trying to help your understanding of signals and, in turn, to interact with what YOU say?

-Bernie

Scott W Bennett
January 18, 2019 11:28 pm

Where fs is the sample rate or Nyquist frequency and B is the bandwidth or highest frequency component of the signal being sampled. However, Real-world signals are not limited in frequency. Their frequency content can go on to infinity. – William Ward

William,

At the risk of getting bogged down in a wordy comment and not getting an answer from you, I will be brief and to the point.

Clearly, you don’t seem to be making the distinction between the “Real-world signal” in the abstract and that “real-world” accessible through physical measurement – which keeps tripping me up! And I would hazard a guess that this is at the bottom of most disagreement here – would you agree?

It is well known that thermometers – automatic and manual – suffer from a response lag (L) introduced by the Stevenson screen in which they are housed. It is over and above the L of the instrument itself coming to thermal equilibrium. This L is on the order of minutes. Moreover, “the screen’s lag time L lengthens with decreasing wind speed, following an inverse power law relationship between L and wind speed (u2). For u2 > 2 m s−1, L ∼ 2.5 min, increasing, when calm, to at least 15 min.”

Spectral response properties of the screen to air temperature fluctuations vary with wind speed because of the lag changes. Additionally there is a tendency towards more frequent low wind speeds as temperatures increase, and an associated increase in lag time and radiation error.

I’m no DSP expert but as a layman it is not hard to see that the screen is operating as a variable high-pass filter, with a ventilation-dependent frequency response.

It has been noted in many studies that the magnitude of the error depends not only on the wind speed but also on the wind direction relative to the azimuth angle of the sun. Most observational evidence puts the total error of the apparatus itself at 1C, although greater errors have been noted*.

Therefore, accessible real-world “signals” are composed of “samples”, the measurement of which is limited in frequency, and therefore the frequency content of the measured “signal” is finite. Would you agree with that statement (or one better worded)?

I’m certain that no-one here disputes the rules of DSP but is that what they are disputing?

*Note that the error also varies with station location and climate zone.

William Ward
Reply to  Scott W Bennett
January 19, 2019 7:36 am

Hi Scott,

Thanks, you bring up a very important point. One that I have touched upon briefly in some posts, but clearly it has not been the bulk of the discussion. So you give an opportunity to add a bit more on that subject. The subject I bring up with sampling is larger than can fit into 2,000 or even 20,000 words, but through the back and forth comments more can be brought out.

You are right, the characteristics of the thermal transducer (and screen) should be factored into the design. I assume the engineers NOAA commissioned to design USCRN did this but I don’t have any information saying either way.

There is the question of: “what is the signal?” Well, air moves and mixes, and has a spatial and vertical temperature profile. We measure at 6 feet off the ground, or wherever it is measured in the screen. But we would get different readings by moving the screen laterally or vertically. If you have an IR thermometer (IR gun), go around your living room or house and try to determine the temperature. Shoot the floor, walls, ceiling, objects at different locations. You get a range of numbers. So what is the temperature of your house or a room in the house? For personal comfort we don’t need this kind of nuance – what the temperature is at the thermostat works for most. Unless you have an older home that is not well balanced and you have hot rooms and cold rooms; then the thermostat reading doesn’t represent the house well. Relative to that analogy, how does our Stevenson screen method work related to the planet? Does it represent it well or not?

So, is the signal the air outside of the screen or inside of the screen? I always go back to a system-level definition. We are really trying to look at the temperature of Earth, so it starts outside the screen. The screen must be considered a component in the data acquisition system. I see the screen acting as a low-pass (high-cut) filter. You said high-pass – I’m not sure if you misspoke or meant to say that. If we see it differently, let me know what you think.

The engineers designing the system should have some way to study the signal outside of the screen with a low-mass/fast-response-time instrument and get an idea of the frequency content there. The transducer selected should be modeled to understand its characteristic equation – and this gets factored into the design. Also, understanding the characteristic equation, with properly sampled data one can calculate what is happening outside of the screen even though it is measured inside. It isn’t just the screen or transducer that needs to be factored; it is the entire circuit. The anti-aliasing filter used would only need to respond to what happens inside of the screen, as the screen could be viewed as the first stage of the filter network.

Mercury thermometers don’t really allow you as much freedom to work, but they can be/have been modeled. I just don’t think that information factors into the data analysis.

Regarding infinite vs finite frequency: I’ll use the example of electronics to illustrate. Frequency content is always infinite, but from a practical perspective it is finite. Why is it infinite? If you look at the frequency content of a signal and go out further and further, you will always measure content, but its magnitude gets smaller. Audio amplifiers specify a “signal to noise ratio” (SNR). The possible signal input to the amplifier is limited by the design spec. The components and the architecture are selected to minimize noise so you get a good clean signal. But when you go to measure the noise, if you have really good instruments you can see noise is there as you go out in frequency – it just gets really small. For example, the best instruments can measure to -160dB or lower. Every -20dB (with voltage) means you have gone down to 1/10th of the input [every +20dB means you get a gain of 10, or 10x]. So 1V of noise, if reduced by 20dB, would be 0.1V. Reducing it 40dB means it is down by 1/100th, so it is 0.01V. There are 8 20dB steps to get to -160dB, so the noise would be down by a factor of 10^8, or 0.00000001V. At some point the instrument itself generates the noise and you no longer know what is going on. We say it is infinite because it is beyond our ability to measure. I’m not sure if it is just a good guess that it is infinite or if theoretically it has been proven to be so. It doesn’t really matter for practical purposes.

Can you hear noise at -100dB? Yes – with the right music source and listening environment. The human brain/ear is amazingly sensitive – but sadly this ability diminishes with age. If listening to hip-hop through earbuds on a subway, then no, it won’t be audible. How about -120dB? Well, the best amplifiers shoot for -130 or -140dB of noise, so you know it is outside of human hearing even in the best environments. The key is to design for the practical need of the application. But do you design for the kid riding the train listening to hip-hop, or for the person in their listening room, listening to a 24bit/96ksps recording of the world’s finest orchestras on their $100k playback system? [Strangely enough, people can hear (in double-blind experiments) differences in amplifiers all specified with the same good specs.]
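
For anyone who wants to check those factors, here is a minimal sketch of the voltage-dB arithmetic (purely illustrative Python; every -20 dB is a factor of 10 in voltage, nothing here is specific to any particular instrument):

    # Voltage (amplitude) ratio corresponding to a dB value: ratio = 10^(dB/20).
    def db_to_voltage_ratio(db):
        return 10.0 ** (db / 20.0)

    for db in (-20, -40, -100, -160):
        print(f"{db:5d} dB -> factor {db_to_voltage_ratio(db):.0e}")
    # -20 dB  -> factor 1e-01  (1 V of noise becomes 0.1 V)
    # -40 dB  -> factor 1e-02  (0.01 V)
    # -160 dB -> factor 1e-08  (0.00000001 V)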

It is true that Nyquist requires sampling fs > 2B. But B can be altered by design – based upon knowing what is needed practically. The frequency content gets limited with filters, knowing what is being discarded by the filters. As I said in my Full version of the paper, aliasing always exists, but if the system is done right the aliasing does not impact our measurement, based upon the accuracy specified.

What is the dispute here? We all seem to agree that the max/min method is not good. The reason for this is in dispute, I think, at least with some, not with others. Also in dispute is “are max and min values samples?” and do we have to comply with Nyquist when using samples? Also in dispute is the sample rate that is required to satisfy Nyquist. But actually, from some of the latest responses, I’m getting some hope that we can gain some greater agreement. All of our stubborn determination on this subject can be a good thing or a bad thing. The communication gets frustrating and then feelings get hurt and tempers flare. But if we all stay with it then maybe we can gain some agreement, and if that happens the “stubborn determination” is good.

I think one sticking point is whether we should specify the system based upon the average or upon the worst case. So if we can get to some agreement that a system needs to be designed for worst case then maybe we can also agree that much of that performance is not needed in many or maybe most instances. Not sure yet.

Is this any help?

William Ward
Reply to  William Ward
January 19, 2019 7:47 am

I’ll edit myself. I said: “The frequency content gets limited with filters, knowing what is being discarded by the filters. As I said in my Full version of the paper, aliasing always exists, but if the system is done right the aliasing does not impact our measurement, based upon the accuracy specified.”

Clarification: Filters “roll off” or reduce frequencies; they can’t completely eliminate them. Filters can be designed to be more aggressive and remove more content faster, but this comes at a cost of “ripple” (adding error to the amplitude) or phase shift (frequency components don’t line up in time). So filtering reduces content but doesn’t eliminate it. What is left can still alias, hence my comment that “aliasing always exists”. But it is also true that this aliasing can be below anything that matters to us based upon our specifications and the design.
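
As a rough, generic illustration of roll-off (not a model of any particular screen, transducer or anti-aliasing filter), here is a sketch of the magnitude response of a first-order low-pass filter: content above the cutoff is attenuated at roughly 20 dB per decade – reduced, but never eliminated.

    import numpy as np

    # First-order low-pass magnitude response: |H(f)| = 1 / sqrt(1 + (f/fc)^2).
    fc = 1.0  # cutoff frequency, arbitrary units -- purely illustrative
    for f in (0.1, 1.0, 10.0, 100.0):
        mag = 1.0 / np.sqrt(1.0 + (f / fc) ** 2)
        print(f"f = {f:6.1f} x fc : |H| = {mag:.4f} ({20 * np.log10(mag):6.1f} dB)")
    # Even at 100x the cutoff the content is only down ~40 dB, not gone, which is
    # why "aliasing always exists" and must be budgeted against the accuracy spec.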

Editor
January 19, 2019 3:12 am

William Ward January 18, 2019 at 6:45 pm

Willis, Clyde,

Almost every station I examined shows “significant” error (many tenths to several degrees C) per day. The following graphs show for each day the difference between 288-samples/day and NOAAs (Tmax+Tmin)/2 (“historical method”). 288-samples is reference and historical method is subtracted. Result is daily error.

Thanks, William. Neither I nor Clyde have denied that there is a significant error between the 288-samples and the NOAA traditional method. I was unaware that there was any dispute about that.

Willis, you have not acknowledged any of the data I have presented.

My apologies, I was unaware that there was something I should have acknowledged.

As I said in my paper:

It is clear from the data in Figure 2, that as the sample rate decreases below Nyquist, the corresponding error introduced from aliasing increases.

It is clear from the ONE DAY’S WORTH of data in Figure 2 that ON THAT DAY as the sample rate decreases below Nyquist, the corresponding error introduced from aliasing increases. However, in larger samples that is not true.

It is also clear that 2, 4, 6 or 12-samples/day produces a very inaccurate result. 24-samples/day (1-sample/hr) up to 72-samples/day (3-samples/hr) may or may not yield accurate results. It depends upon the spectral content of the signal being sampled. NOAA has decided upon 288-samples/day (4,320-samples/day before averaging) so that will be considered the current benchmark standard. Sampling below a rate of 288-samples/day will be (and should be) considered a violation of Nyquist.

So far I have not found any 288-sample datasets that give inaccurate results when sampled hourly. Let me repeat my data for 13 years’ worth of 288-sample data from Redding.

I calculated the average for each day using first all 288 daily samples, and then using 24 hourly samples. Here are the results:

Year Mean Err RMS Err Max Err Min Err
2006 -0.002     0.051   0.220  -0.216
2007  0.002     0.053   0.156  -0.221
2008  0.005     0.059   0.186  -0.275
2009 -0.003     0.049   0.183  -0.161
2010 -0.003     0.051   0.259  -0.127
2011  0.002     0.051   0.156  -0.141
2012  0.002     0.054   0.177  -0.165
2013  0.004     0.060   0.236  -0.211
2014  0.002     0.059   0.198  -0.177
2015  0.003     0.059   0.198  -0.217
2016  0.004     0.057   0.188  -0.170
2017  0.004     0.052   0.178  -0.156
2018 -0.001     0.051   0.151  -0.182

Some notes. The largest average error is five thousandths of a degree.

The largest RMS error of the daily errors is six hundredths of a degree.

The largest absolute error, both positive and negative, is a quarter of a degree.

So no, William, there is no significant difference between 288 samples per day and one sample per hour.

The goal is to design a system to handle the worst case signals. The goal is to have a system that works equally well for all days at all stations. Finding stations that work well with 24-samples/day doesn’t mean you decrease the system performance to match those stations. You don’t design for the “average” condition, you design for worst case conditions.

Unless I missed it, to date I’ve presented 14 years of data showing only trivial differences between 288 samples per day and 24 samples per day. You haven’t presented a single station that does NOT “work well with 24-samples/day”. If you can show that such stations exist, you may have a point … but so far, so good. I will continue to examine other stations, now that I’ve found a source that is not closed due to the Gov’t shutdown.

Willis, USCRN hourly data is the 5-minute data integrated to hourly. Why would you assume that sampling hourly would be equivalent? It may produce similar results and it may not. Do not confuse the 2 scenarios. If there is higher frequency content then you will get different results between sampling hourly and integrating 20-second samples to hourly. Sampling hourly will alias content faster than that.

The real question is, why would you assume that I don’t know the difference between integrated hourly data and one sample per hour? Once again, you assume I don’t know what I’m doing, when I know very well. My data comparing 288 samples per day and 24 samples per day is NOT using integrated hourly data as you foolishly assume. I am doing exactly what I’ve described. I take the 288 samples per day. From those 288 samples, I select one sample on the half hour. That is what has given me the results.
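
For readers who want to reproduce this kind of comparison, here is a minimal sketch of the procedure described above (a synthetic (days, 288) array stands in for the real USCRN 5-minute file, whose format and noise level are not shown here; with real station data loaded into t5, the same code yields the kind of annual error statistics tabulated above):

    import numpy as np

    # Synthetic stand-in: 365 days x 288 five-minute samples (diurnal cycle + noise).
    rng = np.random.default_rng(0)
    hours = np.arange(288) / 12.0
    t5 = 15 + 8 * np.sin(2 * np.pi * (hours - 15) / 24) + rng.normal(0, 0.3, (365, 288))

    mean_288 = t5.mean(axis=1)      # reference daily mean from all 288 samples
    hourly = t5[:, 6::12]           # one sample per hour, taken on the half hour
    mean_24 = hourly.mean(axis=1)   # daily mean from 24 point samples per day

    err = mean_24 - mean_288
    print(f"mean err {err.mean():+.3f}  RMS {np.sqrt((err ** 2).mean()):.3f}  "
          f"max {err.max():+.3f}  min {err.min():+.3f}")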

And no, as my periodograms have shown, sampling hourly does NOT alias content in any meaningful way. Here is a periodogram of 6 months of 288-sample data.

As you can see, there is some energy in the frequencies with 12-hour, 8-hour, and 6-hour periods. Now, here is that same graph overlain with the periodogram of hourly sampling.

As you can see, other than a trivial bit of aliasing up near the highest frequencies, the two periodograms are nearly identical.

However, things get very different when we sample every two hours. Here is that comparison.

As you can see from the black line showing the two-hour sampling, there is extensive aliasing into the frequencies with periods of 4 hours and about 1.4 hours.
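
A sketch of how such a periodogram comparison can be made (again with a synthetic series standing in for the station data; the point is simply that any energy above the subsampled Nyquist frequency has to fold into lower frequencies):

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(180 * 288) / 288.0                 # 6 months of time, in days
    temp = (10 + 6 * np.sin(2 * np.pi * t)           # diurnal component
            + 1.5 * np.sin(4 * np.pi * t)            # 12-hour component
            + rng.normal(0, 0.5, t.size))            # broadband "weather" noise

    def periodogram(x, dt_days):
        # Basic FFT periodogram: frequencies in cycles/day and raw power.
        x = x - x.mean()
        return np.fft.rfftfreq(x.size, d=dt_days), np.abs(np.fft.rfft(x)) ** 2

    f288, p288 = periodogram(temp, 1 / 288)          # full 5-minute series
    f24, p24 = periodogram(temp[::12], 1 / 24)       # hourly subsample (Nyquist 12 c/d)
    f12, p12 = periodogram(temp[::24], 1 / 12)       # 2-hourly subsample (Nyquist 6 c/d)
    # Overlaying (f288, p288) with (f24, p24) and (f12, p12) shows where power from
    # above each subsample's Nyquist frequency reappears (aliases) at lower frequencies.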

Please allow me to suggest that you download and analyze a year or two of actual USCRN data. You can’t pull out one very unusual day as you’ve done and base your conclusions on that. If you think that there is aliasing going on, then please demonstrate that it is there and that it is significant.

Best regards, 3 AM, light rain falling, I’m off to bed …

w.

William Ward
Reply to  Willis Eschenbach
January 19, 2019 6:35 am

Willis,

Thanks for your reply from Jan 19, 3:12 AM. I’m working projects all day, but will thoroughly read (re-read) you post and work on a reply tonight.

Reply to  William Ward
January 19, 2019 11:26 am

William, thanks to you for continuing the discussion. As I said before, I know everyone has a life outside, and so if someone doesn’t answer right away, I figure they’re out in the world.

Best regards,

w.

Bright Red
Reply to  Willis Eschenbach
January 19, 2019 1:54 pm

Willis said: “As you can see, there is some energy in the frequencies with 12-hour, 8-hour, and 6-hour periods.”

One thing I noticed is that the predominant frequencies at this site all divide into 24. Given that the sample rate of 24/day is phase locked to the main frequency component, I think it is also likely phase locked to the 6, 8 and 12 hour components. Having the sampling phase locked to all of the signal’s frequency components often leads to some very interesting outcomes.
It would be interesting to see how other sample rates, such as 27/day, perform – or even non-integer rates.

Cheers
Red

Editor
January 19, 2019 3:18 am

Oh, yeah, I forgot … the location of the accessible USCRN 288-sample data is here.

Regards to all,

w.

Editor
January 19, 2019 12:44 pm

A new day, new data … to pick a location as different as possible from Redding California, my last analysis, I picked Fairbanks Alaska. Here is the error data from that USCRN station, showing the difference between sampling 288 times per day and sampling 24 times per day (hourly samples).

Yr    Mean    RMS Err  Max Err  Min Err
2007  0.003     0.058   0.177  -0.196
2008  0.000     0.053   0.237  -0.153
2009  0.000     0.054   0.278  -0.188
2010  0.003     0.054   0.209  -0.213
2011 -0.003     0.060   0.250  -0.151
2012  0.005     0.061   0.322  -0.232
2013  0.000     0.054   0.184  -0.202
2014  0.003     0.063   0.308  -0.223
2015  0.000     0.056   0.200  -0.180
2016  0.006     0.056   0.246  -0.159
2017 -0.003     0.054   0.167  -0.188
2018 -0.006     0.051   0.129  -0.226
MEAN  0.001     0.056   0.226  -0.193
MAX   0.006     0.063   0.322  -0.232

Once again, we have only very small differences between 288 and 24 samples per day …

More to come, I’ll post up the further analyses of the Fairbanks data as they are finished.

w.

1sky1
January 19, 2019 1:41 pm

Five days ago, Rud Istvan opined:

The wide and deep WUWT readership will sort out this potential guest post’s validity in short order.

This sanguine faith has not yet been justified. On the contrary, fundamental misconceptions about what constitutes frequency aliasing, and what inadequacies of sampled data are unrelated to it, stubbornly persist. Thus we have Willis stating:

It is clear from the ONE DAY’S WORTH of data in Figure 2 that ON THAT DAY as the sample rate decreases below Nyquist, the corresponding error introduced from aliasing increases.

Since the Nyquist frequency is DEFINED as 1/(2 delta t), where delta t is the fixed sampling interval, the sampling rate, 1/delta t, cannot decrease “below Nyquist.” It can only decrease below some independently determined highest frequency of appreciable spectral content in the continuous signal. In either event, barring aliasing of spectral content into zero frequency, the efficacy of the sampled data points in estimating the signal mean is entirely a matter of sample SIZE, not of any frequency aliasing. Sadly, the inept practice of plotting periodograms as a function of the logarithm of period, instead of simple frequency, distorts spectral content (area under the curve) and totally obscures what happens around zero-frequency.

Reply to  1sky1
January 19, 2019 3:25 pm

1sky1, it appears you may not have noticed, but that statement was not entirely mine. I was merely qualifying William’s statement. He said:

It is clear from the data in Figure 2 that as the sample rate decreases below Nyquist, the corresponding error introduced from aliasing increases.

I merely qualified it by saying, with my additions in capital letters, that:

It is clear from the ONE DAY’S WORTH of data in Figure 2 that ON THAT DAY as the sample rate decreases below Nyquist, the corresponding error introduced from aliasing increases.

So if there are errors in that statement, they are William’s, not mine.

Regards,

w.

1sky1
Reply to  Willis Eschenbach
January 20, 2019 4:44 pm

While the capitalized words qualify William’s statement, you seem to accept his false attribution of increased error in estimating the true mean to aliasing. Further on, you state:

Onwards … aliasing. William keeps claiming, again without evidence, that if we sample hourly, the higher frequencies will alias into the result. Now, just as with the Redding data, I find that his claim is true if we are sampling every two hours (12 samples per day)… Just exactly as with the Redding data, there is strong aliasing at the four hour and just under an hour and a half periods.

However, with bihourly sampling, the Nyquist frequency is 1/(4 hrs) – which is NOT aliased. Furthermore, unless there is aliasing into zero frequency, aliasing does NOT affect the estimation of the mean.

Editor
January 19, 2019 3:19 pm

Well, always more to learn. I took a look at how the error increases as the number of samples per day decreases in the Fairbanks data. Here is that graph. It compares the error at a given number of samples to the value at 288 samples per day. As you would imagine, the error at 288 samples per day is zero.

As you would expect, the more samples per day, the smaller the error. The graph shows a few interesting things.

First, the traditional way of calculating the daily mean, (min+max)/2, is NOT simply another kind of 2-samples-per-day as William has said. As you can see, it gives a daily RMS error about 12% larger than the error at two samples per day.

Next, compared to 288 samples per day, the RMS error using hourly samples is quite small, less than a tenth of a degree C. William above said that:

A practical way to determine the Nyquist frequency (for real-world applications) is to increase sample rate until the apparent gains diminish beyond any benefit.

He then claimed, without any evidence, that the practical Nyquist limit was 288 samples per day.

However, as the graph and the table in my previous comment shows, at hourly samples, the mean error is 0.001°C compared to 288 samples, and the RMS error is 0.056°C compared to 288 samples. We’re well past the “knuckle” in the graph, and so at this point, it is obvious that the “apparent gains are diminishing beyond any benefit” to be gained from faster sampling.
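
A sketch of how such an error-versus-sample-rate curve can be built from any 288-samples/day record (here t5 is a (days, 288) array as in the earlier sketch; only evenly spaced subsets whose counts divide 288 are used):

    import numpy as np

    def rms_error_vs_rate(t5, rates=(2, 4, 6, 8, 12, 24, 48, 96, 144, 288)):
        """RMS of the daily-mean error at each subsample rate, relative to 288/day."""
        ref = t5.mean(axis=1)                     # 288-sample daily means (reference)
        out = {}
        for n in rates:
            step = 288 // n
            sub = t5[:, step // 2::step]          # n evenly spaced samples per day
            err = sub.mean(axis=1) - ref
            out[n] = float(np.sqrt((err ** 2).mean()))
        return out
    # Plotting out[n] against n shows the "knuckle": the error falls quickly at first
    # and then flattens well before 288 samples/day.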

Finally, the results for the Fairbanks Alaska USCRN data in terms of mean and RMS error are indistinguishable from those of Redding California, or those of Chatham Wisconsin. To date, I’ve looked at 26 years of USCRN data. In no case does hourly sampling give any significant errors compared to 288 samples per day.

Onwards … aliasing. William keeps claiming, again without evidence, that if we sample hourly, the higher frequencies will alias into the result. Now, just as with the Redding data, I find that his claim is true if we are sampling every two hours (12 samples per day). Here is a periodogram of those results:

Just exactly as with the Redding data, there is strong aliasing at the four hour and just under an hour and a half periods. However, now take a look at the hourly data.

There is only a tiny bit of aliasing at just under three-quarters of an hour. Other than that, the results are basically identical to the results from sampling at 288 samples per day.

All of which supports my contention that hourly sampling of temperature data is more than adequate for practical purposes, and that hourly sampling is above or at the practical Nyquist limit.

Best to all,

w.

William Ward
Reply to  Willis Eschenbach
January 19, 2019 8:37 pm

Hi Willis,

You have written a few posts since I last wrote to you. I’m going to focus on your last post (Jan 19, 3:19 PM) first, and I will do this for a few reasons: 1) I’m gaining some optimism that maybe we can find a common understanding and your most recent post provides a potential platform for that, 2) I want to suspend and, if possible, move past the friction, and 3) I’d like to reply to you tonight and I don’t think I can process it all at once. I’ll pull in selected comments of yours from other posts.

On 2 separate posts Willis said: “Fig 3 again is just one day. Again there is no way you can talk about Nyquist or aliasing for a single period.” And: “Please allow me to suggest that you download and analyze a year or two of actual USHCN data. You can’t pull out one very unusual day as you’ve done and base your conclusions on that.”

My reply: I have over 1GB of USCRN data downloaded and this was used in my analysis. For Fig 1, I used a day that was representative of my case but not the worst I saw. Over 28 stations were used specifically in the paper but many more were analyzed in the study. It is not a correct assessment that I’m basing my case on just 1 or a few days or stations. More below will clarify approach and strategy with the presentation.

Willis, it was just over the past 24 hours that I could start to see through the chaos of our disagreement, and I started to see a way that might allow us to find a common agreement. It began with a better understanding of our disagreement. Let’s see if we can make progress. I think your case is against 288-samples/day. Would I be correct to say that at 24-samples/day you agree with my overarching assessment? Are we primarily divided by the number at this point? My thrust with this paper was to introduce Nyquist and sampling as the reason that the historical method has issues. Even 2 regularly timed samples/day has issues, based upon sampling and not complying with signal analysis requirements. It feels like the baby is circling the drain with the bath water if we are divided by the number. Are we aligned on the basic issue? Does the application of Nyquist shed light on problems and potential solutions?

Now, I approach this from an engineering perspective. Please allow me to develop this; I will get to a conclusion. I would like to use an example to illustrate. If you are tasked with recording high-fidelity audio and you begin your exploration with a concert grand piano, you will quickly discover that there isn’t much content above 10kHz. It’s there, but it is usually down below -60dB FS. If you assume all instruments are going to operate similarly, you might design your data acquisition system to not alias the grand piano. But if you look at violins, the frequency content is much higher. If you look at cymbal crashes, you will see 20Hz – 20kHz flat content across the spectrum. But cymbal crashes are 1) infrequent in most music and 2) usually brief in the program material. Your recording of the cymbal crashes would suffer if you designed the data acquisition system for just the piano.

As I did my investigation, I had a reference provided by NOAA and that was 288-samples/day. I went out and looked at graphical images of many, many days’ worth of signals. The distorted sinusoid is what I saw most often, but occasionally I saw very different daily profiles. Some were square-wave-like, and some had many large, fast transients. These types of profiles tell you that higher frequencies are present. I searched out a number of these types of days. I also searched for longer strings of days with these profiles (ex: Figure 5: 10 days at Spokane WA). I was looking for the upper limit of spectral content a day could throw at us. This is where I focused my efforts. Engineers design the system to handle the full range of possibilities. Like the cymbal crashes, we need to properly capture the days with higher frequency content, even if they are not frequent.
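
One way such worst-case, “cymbal crash” days could be screened for automatically is sketched below: score each day’s 288 samples by the fraction of spectral energy above some cutoff (the 6 cycles/day cutoff here is only an illustrative choice, not a figure from the paper).

    import numpy as np

    def high_freq_fraction(day_samples, cutoff_cpd=6.0):
        """Fraction of a day's non-DC spectral energy above `cutoff_cpd` cycles/day.

        day_samples: 288 five-minute temperature values for one day.
        """
        x = np.asarray(day_samples, dtype=float)
        x = x - x.mean()
        power = np.abs(np.fft.rfft(x)) ** 2
        freq = np.fft.rfftfreq(x.size, d=1.0 / 288.0)   # in cycles per day
        total = power[1:].sum()
        return float(power[freq > cutoff_cpd].sum() / total) if total > 0 else 0.0
    # Square-wave-like or transient-heavy days score high and are candidates for the
    # worst-case analysis described above.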

Now, let me quote myself from the paper (below Fig 2): “It is clear from the data in Figure 2, that as the sample rate decreases below Nyquist, the corresponding error introduced from aliasing increases. It is also clear that 2, 4, 6 or 12-samples/day produces a very inaccurate result. 24-samples/day (1-sample/hr) up to 72-samples/day (3-samples/hr) may or may not yield accurate results. It depends upon the spectral content of the signal being sampled. NOAA has decided upon 288-samples/day (4,320-samples/day before averaging) so that will be considered the current benchmark standard. Sampling below a rate of 288-samples/day will be (and should be) considered a violation of Nyquist.”

Notice that I acknowledge that 24-samples/day may produce accurate results – depending upon the spectral content. But from an engineering perspective, we select the sample rate to make sure we don’t alias any signals, and I had an abundance of information that spectral content was present that demanded more than 24. Since NOAA used 288, and the error at 24, 36 and 72 was only +/- 0.1C off from 288, I knew we were approaching the limit of accuracy needed. I had not done the statistical analysis you have done. I see that 24-samples seems to work for most situations based upon your data – which I have not studied, but for this discussion I’m assuming is correct. But it seems we are arguing 2 separate points – and therefore we may not be completely incompatible in our conclusions. As an engineer I would not recommend going with 24-samples/day without more research. I prefer to not have error from the days or periods that do have more content. Also, as I have explained several times, but I think was lost in the melee, sampling faster gives us guard band and it relaxes the requirements for the input anti-aliasing filters. Filters can add ripple and phase shift, and these problems tend to increase with increasing filter order. A higher sample rate allows a lower filter order (fewer stages).

Willis said: “First, the traditional way of calculating the error, (min+max)/2, is NOT simply another kind of 2-samples-per-day as William has said. As you can see, it gives a daily RMS error about 12% larger than the error at two samples per day.”

I said in my paper (below Fig 2): “It is interesting to point out that what is listed in the table as 2-samples/day yields 0.7 C error. But (Tmax+Tmin)/2 is also technically 2-samples/day with an error of 1.4C as shown in the table. How can this be possible? It is possible because (Tmax+Tmin)/2 is a special case of 2-samples per day because these samples are not spaced evenly in time. The maximum and minimum temperatures happen whenever they happen. When we sample properly, we sample according to a “clock” – where the samples happen regularly at exactly the same time of day. The fact that Tmax and Tmin happen at irregular times during the day causes its own kind of sampling error. It is beyond the scope of this paper to fully explain, but this error is related to what is called “clock jitter”. It is a known problem in the field of signal analysis and data acquisition. 2-samples/day, regularly timed, would likely produce better results than finding the maximum and minimum temperatures from any given day. The instrumental temperature record uses the absolute worst method of sampling possible – resulting in maximum error.”
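
A toy illustration of that “special case” point, on a synthetic and deliberately asymmetric daily profile (not station data): both the max/min midpoint and two regularly timed samples can miss the full-day mean.

    import numpy as np

    h = np.arange(288) / 12.0                        # hour of day, in 5-minute steps
    # Diurnal sinusoid plus a second harmonic, so the daily curve is asymmetric.
    day = 15 + 8 * np.sin(2 * np.pi * (h - 15) / 24) + 2 * np.cos(4 * np.pi * (h - 15) / 24)

    full_mean = day.mean()                           # stand-in for the 288-sample mean
    minmax_mid = (day.max() + day.min()) / 2         # historical (Tmax+Tmin)/2 method
    two_fixed = day[[36, 180]].mean()                # 2 samples at fixed times (03:00, 15:00)

    print(f"288-sample mean : {full_mean:.3f}")
    print(f"(Tmax+Tmin)/2   : {minmax_mid:.3f}  (err {minmax_mid - full_mean:+.3f})")
    print(f"2 fixed samples : {two_fixed:.3f}  (err {two_fixed - full_mean:+.3f})")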

I stand by my assertion that max and min are samples. This is based upon my experience and confirmation from people in my industry who have reviewed it on that point. But I’d like to suggest that we not continue to debate that point, as I think we have both made our cases to the other unsuccessfully. I think you have accepted 2-samples/day as samples, so I think we can proceed with analysis even if we don’t agree about the nature of the problem with max/min – we agree about the results of using max and min, I believe.

If I do a rewrite of my paper – and I might – I’ll consider adding some clarity around the fact that 24-samples/day appears to capture most of the situations. (Credit to you.) But I’ll keep my recommendation for a higher rate for engineering and system integrity purposes. Additionally, I’m working with the knowledge of what is available from a technology perspective. Sampling at 24, 288, or 4,320-samples/day is absolutely glacial compared to converter technology. Memory and storage are cheap. There would not be an engineering incentive to run that slow. All of those rates are considered a stand-still from a converter perspective.

I don’t want to put words in your mouth. Do you agree that some “significant” accuracy can be had by sampling faster than 2-samples/day? When I speak about Nyquist rate, it is from an engineering system design perspective. You were focusing on the minimum increase to rate that seems to capture all of the data from a statistical perspective. I think these are both very valuable perspectives and need not be completely incompatible. I do think we were misunderstanding each other and in the heat of debate missed an opportunity to agree. I’m not talking about a fake agreement to make the discomfort of debate go away. I see potential here to have a positive resolution for having pushed each other. What do you think?

I don’t think you addressed the trends I showed. Yesterday I put up links to figures not in my paper showing stations over 10-12 years.

https://imgur.com/cqCCzC1

https://imgur.com/IC7239t

https://imgur.com/SaGIgKL

They show the yearly average differences and linear trends between the 2 methods. [Note: these figures do not benefit from the full accuracy of processing all 288 daily samples. The data in the table of Fig 7 was generated using all of the samples – no intermittent averaging was done to calculate the linear trends. For the graphs in the links, I took the USCRN monthly averages generated from 288 daily samples. These graphs suffer from some rounding error from using the published USCRN monthly averages. Note in my paper I said (above Fig 6): “While no conclusions can be made by comparing the trends over 7-12 years from 26 stations in the USCRN to the currently accepted long-term or short-term global average trends, it can be instructive. It is clear that using the historical method to calculate trends yields a trend error and this error can be of a similar magnitude to the claimed trends. Therefore, it is reasonable to call into question the validity of the trends. There is no way to know for certain, as the bulk of the instrumental record does not have a properly sampled alternate record to compare it to. But it is a mathematical certainty that every mean temperature and derived trend in the record contains significant error if it was calculated with 2-samples/day.”]

Trend biases of 0.06 are small according to my way of thinking about numbers, but as others have said, long-term trends of this magnitude are causing serious concern. I said in point 7 of my conclusion: “More work is needed to determine if a theoretical upper limit can be calculated for mean and trend error resulting from use of the historical method.”

Maybe your data analysis skills can be utilized here. Are you willing to examine this? Also, what do you think about the absolute error we see propagating over time? As there seems to be more interest in the “missing” energy, maybe the absolute values are of greater importance. We need to move past the historical method if we want to study that with more accuracy.

I’ll await your reply. Thanks Willis.

Clyde Spencer
Reply to  Willis Eschenbach
January 20, 2019 9:18 am

Willis
You said, “Next, compared to 288 samples per day, the RMS error using hourly samples is quite small, less than a tenth of a degree C. … All of which supports my contention that hourly sampling of temperature data is more than adequate for practical purposes, …”

Alarmists are citing annual average differences of hundredths of a degree as justification for their concern about trends. There is still disagreement as to how much improvement in accuracy and precision can be justified by large numbers of samples. I think that to resolve the questions, raw data should be collected that is at least an order of magnitude more precise than what is being claimed as evidence for the alarm.

The use of “the point of diminishing returns” as a metric for design specifications usually has the implication that going beyond the ‘knee’ will have costs that aren’t justified. However, what I have read here suggests that there is no additional cost because the technology has advanced so far that hourly data is really an anachronism. That is, current, off-the-shelf A/D converters might actually be the cheapest solution because of the scale of volume production. There may be storage costs for large amounts of temperature data, but I think that the remote sensing industry (consider EROS Data Center) has made significant inroads on the cost of storing and accessing huge amounts of image data.

Reply to  Clyde Spencer
January 20, 2019 5:57 pm

Clyde Spencer January 20, 2019 at 9:18 am

Willis
You said,

“Next, compared to 288 samples per day, the RMS error using hourly samples is quite small, less than a tenth of a degree C. … All of which supports my contention that hourly sampling of temperature data is more than adequate for practical purposes, …”

Alarmists are citing annual average differences of hundredths of a degree as justification for their concern about trends. There is still disagreement as to how much improvement in accuracy and precision can be justified by large numbers of samples. I think that to resolve the questions, raw data should be collected that is at least an order of magnitude more precise than what is being claimed as evidence for the alarm.

Thanks, Clyde. While the RMS error is less than a tenth of a degree, the mean error is on the order of a few thousandths of a degree. So it’s within spec.

The use of “the point of diminishing returns” as a metric for design specifications usually has the implication that going beyond the ‘knee’ will have costs that aren’t justified. However, what I have read here suggests that there is no additional cost because the technology has advanced so far that hourly data is really an anachronism. That is, current, off-the-shelf A/D converters might actually be the cheapest solution because of the scale of volume production. There may be storage costs for large amounts of temperature data, but I think that the remote sensing industry (consider EROS Data Center) has made significant inroads on the cost of storing and accessing huge amounts of image data.

The problem is not in the collection of the data. It’s in the transmission and analysis of the data. A hundred years of 288-sample data for a hundred stations is over a billion integers, likely requiring a minimum of 9 bits per integer to store or analyze …

Finally, as Anthony has pointed out with the Surfacestations project, in many, perhaps most cases, the SOURCE of the data is hopelessly compromised by encroaching structures and roads, or growing trees, or air conditioner exhausts, or the like … and there is little point in collecting hyper-accurate data from hyper-inaccurate sources.

Best regards,

w.

Bright Red
Reply to  Willis Eschenbach
January 21, 2019 5:13 pm

Willis said: “The problem is not in the collection of the data. It’s in the transmission and analysis of the data. A hundred years of 288-sample data for a hundred stations is over a billion integers, likely requiring a minimum of 9 bits per integer to store or analyze …”

My reply: And the problem with that is? Current typical internet speeds of 50 Mb/s mean that 1 GByte takes about 160 seconds to download. Even my slow 15 Mb/s link would only need 533 seconds, or < 9 minutes. With computer power and data transfer speed increasing, I don’t think that future generations will have any issue with what they will consider a tiny amount of data.
Cheers

Bright Red
Reply to  Clyde Spencer
January 21, 2019 4:09 am

Hi Clyde,

Clyde Said ” However, what I have read here suggests that there is no additional cost because the technology has advanced so far that hourly data is really an anachronism. That is, current, off-the-shelf A/D converters might actually be the cheapest solution because of the scale of volume production. There may be storage costs for large amounts of temperature data, but I think that the remote sensing industry (consider EROS Data Center) has made significant inroads on the cost of storing and accessing huge amounts of image data.”

I thought I would expand a little on your comment and bring into perspective the cost and memory requirements of doing the temperature sampling at 288 samples/day.
You are correct that there is no additional cost in the acquisition hardware, and this would hold up to around 20,000 samples/second.
On the memory front, 18 bits will represent all air temperatures on Earth to 0.001C. We can round that up to 24 bits (3 bytes) to make life easier, so 288 samples/day gives 864 Bytes/day, or 315.36 KBytes/year, or 31.536 MBytes in 100 years of data collection. To put that in perspective, 31 MBytes is about 5 seconds of quality standard-definition video. This means your standard 32 GByte SD card, for $30 or less, could hold over 100,000 years of temperature data from a single station, or 100 years from 1,000 stations. Now, it seems the data is often stored in ASCII text file format, or 8 bytes/sample (+00.000Cr), which gives 2.304 KBytes/day or 840.9 KBytes/year. So a quality $300 hard drive of 4 TByte capacity could store 100 years of data from over 47,000 stations.
And that is before we compress the data which should reduce the storage requirements by a considerable amount.
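
A quick check of that arithmetic, as a minimal sketch (using the 3-byte binary and 8-byte ASCII per-sample figures from the comment above):

    samples_per_day = 288
    years = 100

    binary_bytes = samples_per_day * 3 * 365 * years     # 24 bits = 3 bytes per sample
    ascii_bytes = samples_per_day * 8 * 365 * years      # ~8 ASCII bytes per sample

    print(f"binary, 1 station, 100 yr : {binary_bytes / 1e6:.2f} MB")   # ~31.5 MB
    print(f"ASCII,  1 station, 100 yr : {ascii_bytes / 1e6:.2f} MB")    # ~84.1 MB
    print(f"stations on a 4 TB drive (ASCII, 100 yr) : {4e12 / ascii_bytes:,.0f}")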

Editor
January 20, 2019 3:41 pm

William Ward January 20, 2019 at 9:08 am

Regarding that entire tangle that started when you said something like “we can sample 2x the frequency of interest”: There were a few others, primarily Nick, who said it is okay to alias because we are only interested in the long-term trends. Others said it was not even possible to alias. If the aliased content is very small then he is right that you might be able to get away with a violation. But if not, then it creates problems. Either way he was promoting an idea that violates the most basic requirements of sampling. When you said sample the frequency of interest, I heard that as echoing what Nick said. My entire case was dependent upon people – other people reading our exchanges – taking in the concept of proper sampling. When 2 of the most respected people on the forum started to derail the most basic concept, I felt I had to speak strongly to counter that. There was no disrespect intended. Note I never went into personal attacks or countered any personal attacks on me.

As I said before, you really, really don’t see what you are doing.

When you tell a man “you are wrong”, that’s one thing. There is no disrespect intended.

On the other hand, when you tell a man ” this is fundamental signal analysis 101, first day of class mistake you are making here”, there is obvious disrespect intended. You are telling him that not only is he wrong, he is making a really, really ignorant mistake.

No disrespect?

Look, if you don’t wish to apologize just say so. But you can’t piss on my boots and then try to convince me it’s raining.

w.

Editor
January 20, 2019 3:53 pm

J. Philip Peterson January 20, 2019 at 12:31 pm

Willis says (of Roy Spencer):

“He falsely claimed that I was taking credit for another man’s discoveries”


He did not do that.

He pointed out that you were “re-inventing the wheel,” due to the fact that you did not research prior work.

J. Philip Peterson January 20, 2019 at 12:42 pm

Spencer’s exact words were: “But don’t assume you have anything new unless you first do some searching of the literature on the subject.”

Like I said, Dr. Roy’s crap keeps following me around.

First, there is no practical difference between claiming that I was “re-inventing the wheel” and claiming that I was taking credit for another man’s ideas. If you re-invent the wheel and take credit for it, you are taking credit for another man’s ideas.

Next, the person who didn’t “do some searching of the literature” was Dr. Roy himself. Had he done so, he would have realized that Ramanathan most assuredly did NOT make the claims that I am making. Ramanathan said that there is a “super-greenhouse effect” that acts as a thermostat to keep the temperature of the Pacific Warm Pool from going over about 30°C.

I, on the other hand, say that a host of emergent phenomena act as a thermostat keeping the entire planetary temperature within a narrow band (e.g. a variation of only ± 0.3°C during the 20th century).

Other than the fact that the word “thermostat” appears in both hypotheses, there is no similarity between the two. Which is why Dr. Roy was wrong when he said I was “re-inventing the wheel”—he hadn’t done his homework.

But as it turns out, there are lots of credulous folks out there who are willing to do what Dr. Roy himself did, to believe Dr. Roy without “searching the literature” to determine the truth of the matter … if you are wondering who they are, J. Phillip, grab a mirror …

w.

Paramenter
January 21, 2019 1:23 am

Hey Willis,

I would not say that the error has “nothing to do with Nyquist”. However, it is also not, as William claimed, just “jittered” samples.

The error comes from a couple sources, both Nyquist and the curious nature of the taking of the signals.

Thanks for clarifying that. As for jitter, again, at the basic level I understand it simply as sampling at irregular intervals, which itself induces additional error. And as far as I can see, William’s text does not claim that this is the main problem. Jitter is mentioned as something that can additionally distort the signal.

William Ward
Reply to  Paramenter
January 21, 2019 7:42 pm

Paramenter,

Thanks for your comments. On the issue of jitter, I have a thought exercise (for an atmospheric air temperature signal).

If we start with 2 ideal samples/day (perfectly spaced/timed) I think we have pretty good agreement that these are actually samples and that these samples lead to sampling error – evident when calculating means. Now if we move to a real-world converter, they all have clock jitter. The best are in the range of picoseconds of jitter. The error produced can be important for some applications like audio, where jitter is audible at some levels. For other applications this level of jitter is inconsequential. But I think we will get agreement that we still have samples, and sampling error based upon not having enough samples for air temperature. Now what if we use a converter that gives microsecond jitter? How about millisecond jitter? I’m not aware of any converters that give jitter in the second range, but let’s take the progression for the thought exercise. Now how about minutes, and finally hours? I think a case can be made that the error grows but we still have samples and sampling-related error. It would be interesting if someone could provide some mathematical reasons why a time limit would apply and what that would be. Otherwise, I think a logical deduction is that max and min are samples – just very bad ones. 2 discrete values representing an analog signal are samples, and at 2/day they are insufficient to represent an atmospheric air temperature signal.

What do you think about this approach?
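(A minimal R sketch of the thought exercise above, using a synthetic diurnal signal rather than real station data; the signal shape, the nominal 06:00/18:00 sample times and the jitter values are all assumptions made purely for illustration.)

set.seed(1)
fs    <- 288                                    # fine grid: 288 samples/day
t     <- (0:(fs - 1))/fs                        # one day, in days
truth <- 15 + 5*sin(2*pi*t - 2) + 2*sin(4*pi*t + 1)   # assumed diurnal shape
true_mean <- mean(truth)                        # the "integrated" daily mean

# daily mean from 2 samples/day taken near 06:00 and 18:00, with timing jitter
two_sample_mean <- function(jitter_sd_days) {
  times <- c(0.25, 0.75) + rnorm(2, sd = jitter_sd_days)
  idx   <- pmin(pmax(round(times*fs) + 1, 1), fs)      # nearest fine-grid sample
  mean(truth[idx])
}

for (sd_days in c(0, 1e-6, 1/1440, 1/24, 3/24)) {      # 0, ~0.1 s, 1 min, 1 h, 3 h
  cat(sprintf("jitter sd = %8.6f days, mean error = %+.3f C\n",
              sd_days, two_sample_mean(sd_days) - true_mean))
}

# Even with zero jitter the 2-sample mean is off (the 2-cycle/day term aliases);
# growing jitter changes the size of the error, but they remain samples throughout.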

Paramenter
Reply to  William Ward
January 22, 2019 12:44 pm

Hey William,

Sure, the sampling process is never perfectly periodic, due to clock inaccuracies. But we don’t need millisecond precision for temperature records. I reckon some people here objected to interpreting temperature series in the light of Nyquist because recording daily max/min is not strictly periodic. We record daily extremes without knowing exactly when they happened. But that only makes things harder in the context of Nyquist; it does not invalidate the requirement to reconstruct the signal in a reliable way. As you and Clyde quoted from the classic textbook:

Bright Red and Paramenter: See this guide referred by Clyde. See section 1.3.2.2 Sampling and Filtering on pg 15 of PDF (pg 539 of document). Key points:

Considering the need for the interchangeability of sensors and homogeneity of observed data, it is recommended:
(a) That samples taken to compute averages should be obtained at equally spaced time intervals which:

Here it is stated expressis verbis that samples obtained via an irregular sampling process are still samples, with the additional burden of error.

Precisely.

Reply to  Paramenter
January 22, 2019 6:51 pm

Paramenter – you said: “obtained via an irregular sampling process are still samples”

You are not given the TIMES at which Tmax and Tmin are taken. You can only guess. You said as much. So they are not samples to which Nyquist could possibly apply.

I can’t find any link to the “Guide” you mention (please post if you care to) but wonder if “irregular” does not refer to what I call “bunched”
http://electronotes.netfirms.com/AN356.pdf
http://electronotes.netfirms.com/EN205.pdf

-Bernie

Paramenter
Reply to  Bernie Hutchins
January 23, 2019 5:27 am

Hey Bernie,

You are not given the TIMES at which Tmax and Tmin are taken. You can only guess. You said as much. So they are not samples to which Nyquist could possibly apply.

There are kinds of signals where you actually don’t need a timestamp per sample (William pointed to audio signals). All you have is an array of equally spaced values, which allows you to reconstruct the shape of a signal. If you introduce irregularity into the sampling procedure you are likely to distort the reconstructed signal, even one sampled at a sufficient average rate. The more irregularities, the more probable it is that your recovered signal will be distorted severely (depending on the nature of the signal). Exactly as with daily min/max. I don’t see why this is controversial.

Furthermore, from your daily min/max you derive essentially a new signal – the daily midrange (which differs significantly from the original one, see here: red curve the original signal, blue based on daily midranges). And here you’ve got everything canonical: discrete values (data points) for each equally spaced day. So sweet!
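(A small R sketch of that derived signal, using synthetic data in place of the linked chart: the series of daily midranges sits offset from the series of daily integrated means.)

set.seed(2)
days <- 30
fs   <- 288                                     # samples/day on the fine grid
t    <- (0:(days*fs - 1))/fs
x    <- 10 + 6*sin(2*pi*t - 2) + 1.5*sin(4*pi*t) + rnorm(length(t), sd = 0.5)

day        <- rep(1:days, each = fs)
daily_mean <- tapply(x, day, mean)                             # integrated daily mean
daily_mid  <- tapply(x, day, function(v) (max(v) + min(v))/2)  # daily midrange

summary(daily_mid - daily_mean)   # the midrange 'signal' is offset from the mean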

I can’t find any link to the “Guide” you mention (please post if you care to)

Clyde referred to this WMO guide, section 1.3.2.2. The good thing is that it was written by weather gurus, not pure DSP wizards, which avoids accusations of imposing DSP practices on the alien field of temperature acquisition.

I can’t find any link to the “Guide” you mention (please post if you care to) but wonder if “irregular” does not refer to what I call “bunched”

Thanks for the resources – yes, it looks like they call it ‘bunched’ or non-uniform sampling. So it looks like Mr Nyquist is quite happy with non-uniform sampling. He is saying that whatever sampling method we use, uniform or not, it still needs to obey the limits he defined.

Paramenter
Reply to  Bernie Hutchins
January 23, 2019 10:04 am

Hey Bernie,

I can’t find any link to the “Guide” you mention (please post if you care to) but wonder if “irregular” does not refer to what I call “bunched”
http://electronotes.netfirms.com/AN356.pdf
http://electronotes.netfirms.com/EN205.pdf

Initially I hadn’t noticed that those articles were authored by yourself. Congratulations – impressive stuff! So, from your research and simulations you know better than I do that ‘bunched’ sampling may introduce significant distortion into the recovered signal. We may recover a signal to some extent by different interpolation techniques, but how much depends on the exact nature of the signal and of the irregular sampling.

Reply to  Bernie Hutchins
January 23, 2019 10:14 am

Paramenter said January 23, 2019 at 5:27 am: “ There are kinds of signal where you actually don’t need timestamp per sample (William pointed audio signals). All you have is an array of equally spaced values which allows you to reconstruct shape of a signal. If you introduce to the sampling procedure irregularity you are likely to distort reconstructed signal, even sampled with sufficient, on average, sampling rate. More irregularities, more probable is that your recovered signal will be distorted severely (depending on nature of a signal). Exactly as per daily min/max. I don’t why this is controversial. “

Yes, and I have been pointing just that out to my students for 40 years! We are talking here of knowing (or NOT knowing) the sample time RELATIVE to a regular spacing. That is the “time” we don’t know. For example, in the case where you HAVE actually sampled at 288 samples/day, is a particular value of Tmax reported at n=0 or n=287 or in-between? Potentially HUGE obvious errors, and unrelated to Nyquist or to “jitter”.

Thanks for the WMO link. I will look at it.

You also said : “ So, looks like Mr Nyquist is quite happy with non-uniform sampling. ”

Funny, because I know an EE named H. Nyquist! Don’t know if the original wrote about non-uniform spacing. There is a modest little (!) book of 900 pages titled Nonuniform Sampling: Theory and Practice, edited by F. Marvasti (Kluwer 2001). Tough going, and a WAD of cash!

-Bernie

Reply to  Bernie Hutchins
January 23, 2019 11:54 am

Paramenter said, January 23, 2019 at 10:04 am:
“ So, from your research and simulations you know better than me that ‘bunched’ sampling may introduce significant distortion to the recovered signal. We may recover a signal to some extent by different interpolation techniques but how much – that depends on exact nature of a signal and irregular sampling.”

Well no – Actually I am talking about recovering the full signal EXACTLY from bunched samples. Here is an example. Suppose I have a signal of bandwidth B which I sample at greater than 2B. The sampling train function is [ . . . . . 1 1 1 1 1 1 1 1 1 1 1 1 . . . . . ]. All is well, and I can recover the signal with low-pass filtering (sinc interpolation).

If I then notice that the bandwidth was actually only B/2, then I can obviously resample by throwing away every other sample. That is [ . . . . . 1 0 1 0 1 0 1 0 1 0 . . . . . ]. Keep one, toss one. Wider sinc interpolation functions.

With bunched sampling, it means I can, for example, keep two, toss two. That is [ . . . . . 1 1 0 0 1 1 0 0 1 1 0 0 . . . . . ]. And so on. NOTHING IS LOST because the bandwidth was actually smaller. To reconstruct, you need to calculate the interpolation functions, which are “sinc-like”, and you have to know the (relative) times of the samples kept.

-Bernie
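(A sketch in R of the keep-two/toss-two idea, on an assumed synthetic signal. A least-squares fit on an assumed frequency grid is used here in place of the sinc-like interpolation functions Bernie describes, but the point is the same: if the content really is confined to B/2, the bunched samples still determine the whole signal.)

fs  <- 16                                   # original rate (> 2B for B = 4)
dur <- 10
t   <- (0:(fs*dur - 1))/fs
x   <- 1.0 + 0.8*cos(2*pi*0.5*t) + 0.5*sin(2*pi*1.5*t)   # content below B/2 = 2

keep <- rep(c(TRUE, TRUE, FALSE, FALSE), length.out = length(t))  # keep 2, toss 2
tk <- t[keep]; xk <- x[keep]

freqs <- seq(0.25, 1.75, by = 0.25)         # assumed grid of possible frequencies
basis <- function(tt) cbind(1, sapply(freqs, function(f) cos(2*pi*f*tt)),
                               sapply(freqs, function(f) sin(2*pi*f*tt)))
co    <- qr.solve(basis(tk), xk)            # least-squares fit to the kept samples
x_hat <- drop(basis(t) %*% co)              # rebuild on the full original grid
max(abs(x_hat - x))                         # ~0: nothing lost by the bunching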

Reply to  William Ward
January 22, 2019 7:10 pm

William said: ” If we start with 2 ideal samples/day (perfectly spaced/timed) I think we have pretty good agreement that these are actually samples and that these samples lead to sampling error – evident when calculating means. ”

Really! If these are “ideal samples” as you postulate, the anti-aliasing measures assure that you have only a DC value and one cycle of a sinusoid, and the two samples average EXACTLY to the correct mean.

What sampling error are you talking about?

-Bernie
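(A two-line R check of that claim, with arbitrary assumed values: for DC plus a single 1-cycle/day sinusoid, two samples half a day apart average exactly to the DC value.)

dc <- 12; A <- 7; phase <- 0.9; t0 <- 0.37      # arbitrary values, t in days
f  <- function(t) dc + A*sin(2*pi*t + phase)    # DC plus one cycle per day
(f(t0) + f(t0 + 0.5))/2 - dc                    # 0 (to rounding), whatever t0 is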

Clyde Spencer
Reply to  Bernie Hutchins
January 22, 2019 7:33 pm

Bernie Hutchins
You said, “… and the two samples average EXACTLY to the correct mean.” Not so. If you were dealing with a single frequency, that would be true. But, it should be obvious that the daily temperatures do not follow a pure, single frequency, but only appear to do so because that is all that can be extracted from two samples. Therefore, it should be obvious that the mean is not going to be correct if it is based on a fictional single frequency. In extreme cases, the real world time series may not even resemble a sinusoid, but be more like a saw-tooth form, in which case the steep rise or decline needs a very high sampling rate to be captured.

Reply to  Clyde Spencer
January 22, 2019 8:38 pm

Clyde – thanks

I was responding to William’s particulars: “2 ideal samples/day (perfectly spaced/timed).” Assuming no one neglected to implement the required anti-aliasing filters for this rate, this can only be a DC term plus a fundamental – a mean and a single sinewave cycle. Two such samples, half a day apart, are symmetric in amplitude about the mean and average to the exact mean.

I have previously posted (Jan 16 at 10:32 a) an argument that you can even decimate an additional step – to just one sample – and still get an error-free mean.

You even get the correct mean of your sawtooth.

Please explain.

– Bernie

Clyde Spencer
Reply to  Bernie Hutchins
January 23, 2019 10:30 am

Bernie
You said, “I have previously posted (Jan 16 at 10:32 a) an argument that you can even decimate an additional step – to just one sample – and still get an error-free mean.” That is effectively what calculating the mid-range value is doing. Willis has shown that errors ARE introduced compared to the actual mean by using fewer samples than are required by the high-frequency components. Additionally, an unstated assumption is that one is dealing with low frequencies so that a linear interpolation is a best guess. However, if high frequencies are present, it is possible that the true value of the original signal at the position of the interpolated value is VERY different from the interpolated value! If it is a singular noise spike in the original data, then the interpolated value will be close to being the correct average. However, if the signal is characterized by many noise spikes, then the interpolated value will be lower than the mean. One has to be careful to define all of the circumstances, and properties of the data before making generalizations.
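(A small R illustration of that point, with an assumed smooth diurnal curve plus one synthetic spike: the value linearly interpolated to noon from the 06:00 and 18:00 samples can be far from the true value at noon.)

fs     <- 288                                # 5-minute grid over one day
t_fine <- (0:(fs - 1))/fs
x      <- 15 + 5*sin(2*pi*t_fine - 2)        # smooth diurnal part (assumed shape)
i_noon <- fs/2 + 1
x[i_noon] <- x[i_noon] + 8                   # one brief synthetic spike at noon

i_06 <- fs/4 + 1; i_18 <- 3*fs/4 + 1         # samples at 06:00 and 18:00
interp_noon <- (x[i_06] + x[i_18])/2         # linear interpolation to noon
c(interpolated = interp_noon, actual = x[i_noon])   # very different values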

Reply to  Clyde Spencer
January 23, 2019 1:17 pm

Clyde Spencer at January 23, 2019 at 10:30 am said: “ Bernie You said, “I have previously posted (Jan 16 at 10:32 a) an argument that you can even decimate an additional step – to just one sample – and still get an error-free mean.” That is effectively what calculating the mid-range value is doing. “

You can’t be suggesting that the mean and the mid-range are the same! They can be drastically different. What are you saying? It is true that the mean and the mid-range along with Tmax and Tmin are all single-number PARAMETERS of the particular cycle although they are not samples of the signal, since no sampling time is associated with any of them. Note that a “running mean” or “moving average” (rectangular tap FIR digital filter) would almost certainly be a signal.

-Bernie

Clyde Spencer
Reply to  Bernie Hutchins
January 23, 2019 4:28 pm

Bernie
No, I’m NOT suggesting that the mean and the mid-range are the same. I’m one of the people who first questioned the utility of the mid-range being used as though it were a mean.

I was responding to your claim that you could decimate two readings and get a single value that would be an accurate representation of the mean. I was pointing out that collapsing two samples to one was what the mid-range calculation does, and it has been demonstrated to not be equivalent to the mean.

William Ward
Reply to  Bernie Hutchins
January 22, 2019 8:56 pm

Bernie, I think you misunderstood the drift of that thought exercise. “Ideal” was just referring to perfectly timed. I developed the idea that if you start with a real-world, best-in-class performing converter you get jitter in the picosecond range. No one seems to say that this small clock error invalidates Nyquist. I progressed to greater and greater jitter times and ended with the scenario of Tmax/Tmin. I asked those claiming max and min are not samples to explain where along the path from ideal, to best in class, to the absurd max/min we stop calling them samples. What is the limit and what is the math or science that justifies that claim? For those who say there is no timing with max and min, think of the case where we extract max and min from 288-samples/day. Timing is there. If we got the same values from a max/min thermometer we don’t have the timing, but it is still a Nyquist/sampling issue: not enough data to reconstruct the original signal. When we reconstruct with a DAC we don’t feed the timing information along with the sample. The timing is implied by the reconstruction rate and the ordering of the samples. Sample times are not recorded at all for most applications (like audio). The timing is inferred from the rate and the position of the sample in the stream.

Clyde Spencer
Reply to  William Ward
January 23, 2019 9:15 am

William
In my dictionary, any observation and recording of data is a “sample.” However, for this topic, a distinction has to be made for multiple random samples versus periodic or continuous sampling with a uniform temporal sampling interval.

Hlaford
January 21, 2019 2:04 am

Nyquist works equally for faster and slower sampling frequencies; the only difference is the measured spectrum. If you go after a fast phenomenon, which temperature is not, then you need faster sampling. When your goal is the monthly average temperature, the 2 samples per day are just fine. The important detail here is that noise must also be considered in relation to the Nyquist criterion.
Why 4,320-samples/day? Because they can. It kinda obsoletes the old data… not.

Scott W Bennett
January 21, 2019 5:59 am

William Ward,
I’ve read all your writings here and your full paper. You are one of the most intellectually dishonest writers I’ve ever witnessed. Have you ever had your psychopathy measured? No matter how carefully commenters worded their posts to you, you never got it. Many times writers here, made honey traps for you and you just burnt them down. You aren’t even aware of what this speaks about you, let alone your theory. I went out of my way to understate and downplay the obvious in my comments, particularly regarding the real world spectral analysis of temperature time series. And yet you bull-shitted your way through wordy answers that avoided responsibility for your own statements. You know there is a difference between signal theory and real measurement and you now know that measurement systems can not capture frequency information below about 10 minutes. And yet here you are still flogging that dead horse.

Clyde Spencer
Reply to  Scott W Bennett
January 21, 2019 9:57 am

Scott W Bennett
This response seems out of character compared to your previous dozen comments. Is it really you, or did someone hijack your name? In any event, it is an ad hominem attack that contributes little to the technical discussion. Are you purposely trying to start a flame war? You are not being a good ambassador for Australians.

You concluded your personal attack by stating (without citation or evidence) “… you [William] now know that measurement systems can not capture frequency information below about 10 minutes.” I don’t know why you would say that. Automated weather systems routinely REPORT summary averages in the 1 to 10 minute range, and collect samples much more frequently. See, especially, the discussion of sampling at pages 539 and 540 at the following link:
https://library.wmo.int/doc_num.php?explnum_id=3179

Scott W Bennett
Reply to  Clyde Spencer
January 24, 2019 8:47 pm

Lag is the word, is the word…

I wrote a simple thoughtful and succinct summation and got a long wordy prevarication from William in return.

At some point you have to call BS on the navel gazing of abstraction and return to the real world.

Lag (L) is the real world problem in the existing record. The historic, the traditional, the long term records we “actually have” were all made with thermometers in enclosed screens. Those thermometers were relatively slow to come to thermal equilibrium but the whole apparatus can lag by 15 mins. This places a real world constraint on the time constant of the system and the resulting measurements themselves. Thus, there is a true and identifiable spectral limit for any historical time series and/or any potential “signal” analysis made from them!

So that’s my bottom line! And talk of the comparison of those records with higher sampling rates is pointless!

Newer systems have introduced their own unique problems (and screens), such as a systematic bias resulting from the rapid response to micro temperature fluctuations that are more likely to be anthropogenic in origin* and/or unrepresentative of a natural system**, and therefore are not comparable with the existing records.

What is the thermal L of the real system being measured? Are you measuring a homogeneous temperature field that is at thermal equilibrium, or are you sampling momentary fluctuations of dubious relevance?

Interestingly, studies of the spectral range of temperature information using automated – fast response type – electronic thermometers in Stevenson screens find a diffuse forcing around 12h and turbulent-like behaviour for smaller scales***. More interestingly perhaps, from 1h up to around 9h, air temperature behaves in a predictable way. Beyond this scale, Hurst values become smaller than 0.5, indicating a decreasing predictability.

*i.e. Exhausts, vehicular wake etc
**Having more thermal inertia and a slower rate of change
***No smaller than 10 minutes due to screen L!

William Ward
Reply to  Scott W Bennett
January 24, 2019 9:42 pm

Scott,

I saw your last hostile comments and I see this one. Why do you conclude that I’m prevaricating? Why isn’t it a reasonable conclusion that I didn’t understand your comment or question and therefore missed your point? I took the time to write a longer post with the attempt to be helpful. If it wasn’t helpful or insightful, well ok, that is something I can handle. But it sounds like your actual goal was to set a “honey trap.” Is it really necessary to bring that kind of hostility to the conversation? Constructive feedback is something I will consider. Hostility is just dismissed. Your hostility speaks to your character – or lack thereof – not to mine. I’m not going to engage you in some adolescent spat on a forum. I have not seen any sincerity from you and therefore your attacks have no significance to me. I don’t think your hostility shows anything good about you. If something I said or did got under your skin, if you tell me about it constructively I’ll try to address it. If my personality or style is not to your liking then just stay away from me because it is not going to change for someone that has no significance to me and comes at me with hostility.

And though you don’t deserve for me to dignify any other response from you, I will address your specific technical comments for anyone else who may be reading.

Thank you for sharing some detailed information about the kinds of lag that the Stevenson screen and thermometers introduce. Why didn’t you just make that point without the hostility?

Scott said: “At some point you have to call BS on the navel gazing of abstraction and return to the real world.” And: “So that’s my bottom line! And talk of the comparison of those records with higher sampling rates is pointless!”

My reply: Why would you say that sampling and sampling properly is not the real world? It is how it is done in every other application I can think of except climate science. Why is it pointless to compare the correct method to the method currently used that doesn’t give us accurate information? Maybe I don’t fully get your drift, but it sounds like you are saying the problem is the lag. Well why can’t there be more than 1 problem? Mercury in glass thermometers can be replaced with other faster instruments. Screens can be redesigned. (Is this what you are recommending?) The max/min method will still not give you what is correct. From an engineering perspective, capture all of the content available and once sampled properly you are free to filter out what you don’t want or need. When you start talking about exhausts and vehicular wakes then aren’t we now speaking about improperly sited stations? (Yet another problem with the record).

Scott, if I missed your point maybe I’m just slow on comprehending what you are saying or maybe you just don’t write clearly. Neither scenario needs to be an indictment of us personally. I’m looking past your hostility to open a door to see if a better version of Scott might want to come forward. I’ll allow some space for you to come around, but just know that I’ll be just fine giving you the silent effe-you if you want to keep coming at me with hostility. Don’t confuse dignity with weakness. I tend to be rather forgiving – and I’d rather not have you as an enemy. The choice is yours.

Scott W Bennett
Reply to  William Ward
January 25, 2019 2:03 am

William,

Again, you did not address what I’ve said. It is very hard to believe that you are being genuine by not conceding a single point that disrupts your narrative!

In Australia we already have over 500 AWS, mostly housed in existing Stevenson screens, that record three measurements a second (highest, lowest and last). However, these are not compliant with WMO guidelines, which recommend averaging over time to smooth out rapid fluctuations:

“The natural small-scale variability of the atmosphere, the introduction of noise into the measurement process by electronic devices and, in particular, the use of sensors with short time-constants make averaging a most desirable process for reducing the uncertainty of reported data.

In order to standardise averaging algorithms it is recommended:

(a) That atmospheric pressure, air temperature [And others] be reported as 1 to 10 min averages, which are obtained after linearization of the sensor output…

These averaged values are to be considered as the “instantaneous” values of meteorological variables for use in most operational applications and should not be confused with the raw instantaneous sensor samples or the mean values over longer periods of time required from some applications. One-minute averages, as far as applicable, are suggested for most variables as suitable instantaneous values. (1.3.2.4 Instantaneous meteorological values)”

You can see the two main issues above are, one, to reduce the error of the reported data introduced by oversampling,* and two, to make the new system compliant with the old without introducing further uncertainty.

Probably the single most important reason the WMO insists on numerical averaging of electronic sensors is that mercury and alcohol thermometers have both longer and different time constants, which makes mirroring the behaviour of liquid-in-glass thermometers an intractable issue. To be very clear and to restate this particular point:

“Alcohol thermometers (that measure temperature minima) have longer time constants than mercury thermometers (that measure temperature maxima).”

So again, to restate: there are two real-world problems pulling in opposite directions, and they cannot be solved by higher-frequency sampling even in the most ideal situation, because this simply introduces its own problems!

1-10 minute averaging is ‘world’s best practice’; going lower or higher is not recommended by anybody measuring real-world temperature “signals”!

*To infinity and beyond as you keep repeatedly recommending!

Bright Red
Reply to  Scott W Bennett
January 25, 2019 2:37 am

Scott said “You can see the two main issue above are one, to reduce the error of the reported data introduced by over sampling!*”

Let me start by saying that one man’s noise is another man’s data.
It is easy to downsample what you call oversampled data. So please explain the error introduced by oversampling?

Scott W Bennett
Reply to  William Ward
January 27, 2019 2:23 am

William,

I hope somebody has actually had contact with you in the flesh because you sound a lot like a bot to me; I say this sincerely and with no hostility. Synthetic or cyber personalities – if you prefer – act a lot like the way you present here. Lots and lots of words but with very little meaning.

Here, in your very last response to me, you wrote 10 paragraphs, one 200 words long, that managed to say nothing approaching a logical debate, let alone a scientific discourse!

Here they are, in all their synthetic banality. You manage to say nothing of substance in paragraph one:

I saw your last hostile comments and I see this one. Why do conclude that I’m prevaricating? Why isn’t it a reasonable conclusion that I didn’t understand your comment or question and therefore missed your point? I took the time to write a longer post with the attempt to be helpful. If it wasn’t helpful or insightful, well ok, that is something I can handle. But it sounds like your actual goal was to set a “honey trap.” Is it really necessary to bring that kind of hostility to the conversation? Constructive feedback is something I will consider. Hostility is just dismissed. Your hostility speaks to your character – or lack thereof – not to mine. I’m not going to engage you in some adolescent spat on a forum. I have not seen any sincerity from you and therefore your attacks have no significance to me. I don’t think your hostility shows anything good about you. If something I said or did got under your skin, if you tell me about it constructively I’ll try to address it. If my personality or style is not to your liking then just stay away from me because it is not going to change for someone that has no significance to me and comes at me with hostility.

Three paragraphs of nothing follow:

And though you don’t deserve for me to dignify any other response from you, I will address your specific technical comments for anyone else who may be reading. Thank you for sharing some detailed information about the kinds of lag that the Stevenson screen and thermometers introduce. Why didn’t you just make that point without the hostility?  Scott said: “At some point you have to call BS on the navel gazing of abstraction and return to the real world.” And: “So that’s my bottom line! And talk of the comparison of those records with higher sampling rates is pointless!”

Five paragraphs now and 300 words in and “you” are yet to say anything of substance. 

Finally – six paragraphs in – you may have actually asked a question but only by restating the initial “argument” for FFS! Are you actually thinking about (Or computing) what I said:

[1] My reply: Why would you say that sampling and sampling properly is not the real world? It is how it is done in every other application I can think of except climate science. Why is it pointless to compare the correct method to the method currently used that doesn’t give us accurate information? Maybe I don’t fully get your drift, but it sounds like you are saying the problem is the lag. [2]Well why can’t there be more than 1 problem? Mercury in glass thermometers can be replaced with other faster instruments. Screens can be redesigned. (Is this what you are recommending?) The max/min method will still not give you what is correct. [3] From an engineering perspective, capture all of the content available and once sampled properly you are free to filter out what you don’t want or need. [4] When you start talking about exhausts and vehicular wakes then aren’t we now speaking about improperly sited stations? (Yet another problem with the record).

Let’s agree to say that you are slow – though very, very hard to believe – I’ll give you the benefit of the doubt and will go through pg.5 point-by-point:

[1] It’s not a sampling-limited issue! It’s an uncertainty-limited issue! Lag – the time constant – of the real system is the constraining parameter. Temperature isn’t heat! Heat is a measure of flux. Temperature itself is an index of heat, but only when the measuring device is itself in equilibrium with the system being measured. There is an inherent L in the real world being “measured”.

[2] It’s hard to answer this without first repeating – again – are you even thinking about what I’ve already said several times? Liquid-in-glass thermometers have been replaced, screens have been redesigned, and if you mean the “Tmean” that equals (Max + Min)/2, I’ve said ad nauseam that it is incorrect, but that Min & Max – alone as singular selections – are much more “correct” than any 2 random samples!

[3] Here you make my own point exactly, and I quote: “capture all of the content available and once sampled properly you are free to filter out what you don’t want or need.” Of course, capturing all “available” content is – to be very generous – the million-dollar question, and the same one I’ve been raising here!

[4] No, you are talking about ephemeral phenomena whose fluctuations pollute the record because they are only picked up by the “super sampling” of electronic sensors available today. No known real-world climate- or weather-related heat fluxes change as fast as these sensors are capable of recording! By anthropogenic, I mean things like black bitumen roads with thermal bubbles disrupted by vehicles that create sudden inflows. These types of changes are not possible anywhere in the natural world except for specific landscapes such as the magma fields of a volcano.

William, 

You then end your output, with another 100-odd words of nothing!

cheers,

Scott

Bright Red
Reply to  Scott W Bennett
January 27, 2019 3:31 am

Hi Scott
Scott said: “I mean things like black bitumen roads with thermal bubbles disrupted by vehicles that create sudden inflows. These types of changes are not possible anywhere in the natural world except for specific landscapes such as the magma fields of a volcano.”

Clearly you have never experienced what a slight wind change can do near an inland lake or river.

From
http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT_Observation_practices_WEB.pdf
“The primary AWS sampling rate is 1 Hz, and mean, maximum temperature statistics are generated from the valid 1 Hz samples over the period of interest. The AWS stores the previous 72 hours of data in the form of statistics for ten minute periods in a circular buffer.”

Saying that faster sample rates cause errors is misleading, as they are just capturing the reality of the site. If this is a problem then the site location is the problem, not the sampling. It seems to me this additional higher-sample-rate data could be used as a site diagnostic.

Some general comments and not in reply:
Clearly, to compare data collected with different system filter characteristics it is first necessary to match the system filter characteristics by putting one or both signals through a suitable, proven transfer function. And yes, having commentators compare a figure collected using one system to one collected using another system is a big problem.
It would be a big improvement if future researchers did not have as many valid reasons to complain about the data collected from now on as there are for data collected in our past, and collecting it according to Nyquist is a big step in the right direction. There are also many fields where standardised filtering is used for data comparison, such as EMI measurements, so it is about time the collection of climate data caught up with what industry has been doing correctly.

Paramenter
January 21, 2019 8:20 am

Hey Bright Red,

Thanks for the kind words. As for data acquisition, I also cannot see any reason not to use high-quality measurements that include suitable sampling procedures. Sticking to daily midranges seems to be an artifact of the historical record, where daily min/max is usually all we have. The usual answer to the objections discussed here is that by doing monthly averaging of daily midranges we effectively construct a new signal, so we don’t need to recover any daily frequencies. Well, again, that’s fine provided you accept the errors inherited from such treatment of the underlying data, which, as William’s posts point out, can be significant.

A few years ago there was a discussion on Judith Curry’s blog about the potential impact of aliasing on the averaged temperature record. The author of that article, Dr Richard Saumarez, attacked the problem from a different angle. He argues that the yearly temperature record is most likely heavily aliased. Trends may be much more immune to the consequences of that (though not completely resistant), but any model built on an aliased signal is in grave danger. The discussion under that post is in some respects similar to the discussion under this post. As Dr Saumarez pointed out, when confronted with the problem of undersampling the usual response is:

“I’ve got a lot of data, I’ve analysed with ‘R’ packages and I don’t think there isn’t a problem.”

Priceless.

Now it seems the data is often stored in ascii text file format

NOAA stores their subhourly data in plain text, fixed-width delimited. A file contains, along with the air temperature, several other attributes such as solar radiation and so on. The usual size of each file containing a year of data is ~14 MB; thus for, say, 10,000 stations across the globe that would be ~137 GB for the whole globe per year. That’s a piece of cake – on one decent laptop we could store global data for several years.

Bright Red
Reply to  Paramenter
January 21, 2019 1:30 pm

Hi Paramenter

“I’ve got a lot of data, I’ve analysed with ‘R’ packages and I don’t think there isn’t a problem.”

Yep Priceless.

Not eliminating one error item that you have full control over simply makes no sense to me.

It would be interesting to have some of the commentators here at a design meeting where the specification of the data collection system was being decided and the topic of error due to potential aliasing came up.
Q) Is there aliasing at 24-samples/day in the limited examples we have looked at?
A) Yes, but it is small.
Q) Can you give the daily maximum error due to aliasing for all station locations in the world over the 30-year design life?
A) No.
Q) What sample rate would reduce this error to practically 0, being at least an order of magnitude lower than the resolution we are recording, for all stations at all times?
A) Current best practice and available data indicate 288-samples/day should be plenty.
Q) Is there any additional cost in implementing 288-samples/day?
A) No.
Then 288 is the minimum. Next topic.

William Ward
Reply to  Bright Red
January 21, 2019 7:23 pm

Bright Red,

I love the Q&A. I have been in a few meetings like that.

William Ward
Reply to  Paramenter
January 21, 2019 7:27 pm

Hi Paramenter,

You said: “That’s piece of cake – on the one decent laptop we could store global data for several years.”

When you consider that the fate of humanity and life as we know it on the planet is in jeopardy from climate catastrophe, you would think incurring the cost of a few laptops would be warranted.

1sky1
January 22, 2019 5:48 pm

We are now more than 500 comments deep and nothing resembling a compellingly clear view of adequate sampling has emerged. Two quite distinct problems are still being unduly conflated or confused:

1. The relatively simple task of closely estimating the daily or monthly mean.

2. The much more demanding objective of preserving the spectral structure of the continuous signal in the discrete, periodic samples, so that the bandlimited interpolation of Shannon’s Theorem can closely reconstruct the original signal.

While potential aliasing of high frequency components into lower (baseband) frequencies is a major concern in the latter case, it’s quite irrelevant (barring any aliasing into zero-frequency) in the former. After all, in climatic-scale analyses, we’re not trying to reconstruct the ever-varying diurnal cycle, whose highly atypical example William produces to over-dramatize the difference between the daily mid-range value and the mean. His misleading “thought exercise” about “clock-jitter” ignores the fact that the daily Min and Max tend to occur near dawn and in mid-afternoon, with a highly irregular separation of considerably less than half a day. That’s simply not explainable by any periodic, semi-diurnal “sampling” with random jitter.

What is overlooked almost entirely throughout the discussion is the intimate dependence of the great discrepancy between the mid-range value and the true mean upon the sharp daytime rise to the Max followed by the gradual, night-long decline to the following day’s Min near dawn. This asymmetry turns out to be quite stable at each station in the long run, resulting in good low-frequency coherence between the two distinctly different metrics. In fact, other analytically-derived estimates of the true monthly mean based solely upon the recorded extrema can reduce that discrepancy to nearly negligible levels. The value of historical Min/Max observations should not be dismissed out of hand.

William Ward
Reply to  1sky1
January 22, 2019 9:55 pm

Hello 1sky1,

1sky1 said: “We are now more than 500 comments deep and nothing resembling a compellingly clear view of adequate sampling has emerged.”

My reply: I don’t agree with that statement. I think Willis has shown that hourly sampling (24-samples/day) seems to be the rate beyond which error is measured in hundredths or thousandths of a degree C. From a system engineering perspective I would still go with the USCRN rate of 288 (averaged from 4,320). If a more detailed study showed the system requirements could be lower than 288, then I have no objection to that.

1sky1 said: “Two quite distinct problems are still being unduly conflated or confused: 1. The relatively simple task of closely estimating the daily or monthly mean. 2. The much more demanding objective of preserving the spectral structure of the continuous signal in the discrete, periodic samples, so that the bandlimited interpolation of Shannon’s Theorem can closely reconstruct the original signal.”

My reply: I think #1 has been a point you have been emphasizing for a while and maybe one we should take up in more detail. Am I correct that you don’t think the daily and/or monthly means are affected by the aliasing from working with the historical method (max/min)? What do you think of the recent post by Paramenter where he showed the distribution of monthly mean error? I showed a year’s worth of daily mean error for at least 3 locations. Regarding your #1, can you put a figure on “closely estimate” and explain what you mean by simple task? 1sky1 I’m trying to have a more open conversation with you so I want to understand your point more thoroughly. Maybe we can get your analysis for the data Paramenter or I provided or you can provide your counter analysis.

1sky1 said: “While potential aliasing of high frequency components into lower (baseband) frequencies is a major concern in the latter case, it’s quite irrelevant (barring any aliasing into zero-frequency) in the former.”

My reply: Do you agree that at 2-samples/day, the content at or near 2-cycles/day will alias to zero frequency?
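(A short R illustration of that question, on a synthetic 2-cycle/day component with assumed amplitude and phase: its true contribution to the mean is essentially zero, yet at 2 samples/day it shows up as a constant, i.e. it aliases to zero frequency.)

t_fine <- (0:(30*288 - 1))/288                # 30 days on a 5-minute grid
semi   <- 1.5*sin(2*pi*2*t_fine + 0.8)        # 2-cycle/day component only
mean(semi)                                    # true mean contribution: ~0

t_2 <- seq(0.25, 29.75, by = 0.5)             # 2 samples/day, half a day apart
mean(1.5*sin(2*pi*2*t_2 + 0.8))               # constant offset: 1.5*sin(pi + 0.8)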

1sky1 said: “His misleading “thought exercise” about “clock-jitter” ignores the fact that the daily Min and Max tend to occur near dawn and in mid-afternoon, with a highly irregular separation of considerably less than half a day. That’s simply not explainable by any periodic, semi-diurnal “sampling” with random jitter.”

My reply: What if you designed a system to sample electronically at dawn and mid-afternoon? Day after day you would get samples of the analog signal. Can these samples be used to accurately reconstruct the original signal? Not likely. Can they be used to accurately calculate the mean (daily or monthly)? It doesn’t appear to be so, based upon all of the analysis done so far. Do you have information that shows something different? Why can’t they be used to determine the mean? Unless the signal is symmetrical about an axis, we need to integrate the entire signal. Why can’t we integrate the entire signal? Because we don’t have samples that comply with Nyquist. Therefore, I conclude that the historical method is a sampling/Nyquist problem. The point about jitter was just to show that imperfect timing doesn’t invalidate Nyquist; it violates it. I guess I could be convinced to abandon the use of the word jitter in this application, but I’m still not convinced by any arguments that the historical method is not a violation of Nyquist.

1sky1 said: “What is overlooked almost entirely throughout the discussion is the intimate dependence of the great discrepancy between the mid-range value and the true mean upon the sharp daytime rise to the Max followed by the gradual, night-long decline to the following day’s Min near dawn. This asymmetry turns out to be quite stable at each station in the long run, resulting in good low-frequency coherence between the two distinctly different metrics. In fact, other analytically-derived estimates of the true monthly mean based solely upon the recorded extrema can reduce that discrepancy to nearly negligible levels. The value of historical Min/Max observations should not be dismissed out of hand.”

My reply: It sounds like you have done some work around this – studying the stability of the asymmetry and the good low-frequency coherence. Can you tell us more? What are you comparing to what? What is your reference? It seems to me that without a properly sampled signal (24 to 288-samples/day) to use as a reference, it would be difficult to really know anything. I noticed the signal shape you mention. But how can max and min values get you to where you want to go? Is there a formula that can be used so that, based upon this typical shape, we can get the correct mean? It is similar to a capacitor charge/discharge. Maybe with a time constant and the max and min, means could be more accurately calculated? That would be interesting. If so, then we could reevaluate the record with more accuracy. Overall, I’m struggling to see the value of max/min when properly sampled signals seem to show much different means.

Also, I don’t think anyone has gotten around to critically analyzing the 26 trends I show. In the discussions I think I also provided 3-5 charts showing the yearly mean differences between 288 and historical and the associated linear trend over 10-12 years. 1sky1, what do you think about the trends and the charts showing the yearly mean errors? I can grab the links and provide them again if they have been lost in the shuffle.

Clyde Spencer
Reply to  William Ward
January 23, 2019 9:43 am

William
You said, “I think Willis has shown that hourly sampling (24-samples/day) seems to be the rate beyond which error is measured in hundredths or thousandths of a degree C.” True enough, but I think that there is another concern with 24 samples per day. As a rule of thumb, something like 20 to 30 samples are recommended as a minimum number of samples to be able to demonstrate statistical significance. So, 24 samples are right on the edge of the minimum for statistically analyzing daily data. Now, it is assumed that what happens at the daily interval is mostly meteorological noise. However, what if it isn’t? What if there is a signal or trend that could be teased out at the daily level that tells us something about climatological changes? It would be better to have a more statistically robust data set to work with to explore that. We would never find it with only mid-range samples, and 24 samples would not allow the rigor that 100+ samples would. So, standardizing on what NOAA has selected (288/day) gives us something to work with should someone want to pursue a path down Heresy Lane. If we settle on hourly data, then future researchers won’t have historical data to work with that would allow them to go beyond what we know today.

Incidentally, Willis’ data suggest that the error in the mean asymptotically approaches zero at a sampling rate around an order of magnitude higher than hourly. Why wouldn’t we want to eliminate a potential source of error or uncertainty for a trivial cost increase? Lastly, even though hourly data allows a good estimate of the mean, which is the primary use to which it is being put today, Nyquist-compliant sampling assures the ability to reconstruct the time series faithfully, which future researchers may thank us for if they want to go beyond where we are, such as looking for trends in the standard deviation or daily energy exchanges.

1sky1
Reply to  William Ward
January 23, 2019 4:02 pm

I have neither the time nor the interest to keep dispelling the same misreadings and basic analytic misconceptions over and over in stubbornly fixated minds.

In a nutshell, the periodic sampling rate required for close signal RECONSTRUCTION is very much greater than that for accurate determination of the signal mean. 288 samples/day from fast-response thermistors is not enough to avoid aliasing when there are significant temperature variations produced by 3-sec gusts in strong winds associated with frontal passages. Conversely, hourly readings of LIG thermometers are more than adequate to establish the monthly means for CLIMATIC investigations. The details of diurnal wave-forms are not only irrelevant in the latter case, they constitute an obstacle to perceiving the low-frequency climate signal. Since the spectral content of those forms is almost always negligible beyond the fourth harmonic, even periodic sampling at 3-hour intervals is sufficient to prevent significant aliasing into the monthly mean – provided the data are properly decimated.

A singular property of the much-maligned diurnal mid-range metric is the total absence of ANY aliasing, because that metric is determined not from discrete samples, but from the CONTINUOUS signal. While the (usually positive) offset from the true mean is a significant discrepancy, it can be greatly reduced by empirically determining for each station (and for each of 12 months) the coefficient 0 < eta < 0.5 in a much more effective estimate, (1 – eta)*Tmin + eta*Tmax, of the true signal mean.

That pretty much sums everything up. Farewell!
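(Illustrative only, and not 1sky1’s own procedure: a small R sketch, on synthetic data, of an estimate of that form, with eta chosen by least squares; the diurnal shape and noise level are assumptions.)

set.seed(3)
days <- 600                                   # many days of one synthetic "station"
fs   <- 288
t    <- (0:(days*fs - 1))/fs
x    <- 10 + 6*sin(2*pi*t - 2) + 2*sin(4*pi*t + 1) + rnorm(length(t), sd = 0.3)

d     <- rep(1:days, each = fs)
Tmean <- tapply(x, d, mean)                   # true (integrated) daily means
Tmax  <- tapply(x, d, max)
Tmin  <- tapply(x, d, min)

eta <- coef(lm(I(Tmean - Tmin) ~ 0 + I(Tmax - Tmin)))   # one coefficient, no intercept
est <- (1 - eta)*Tmin + eta*Tmax
c(eta          = unname(eta),
  rms_midrange = sqrt(mean((Tmean - (Tmax + Tmin)/2)^2)),
  rms_eta      = sqrt(mean((Tmean - est)^2)))           # much smaller discrepancy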

Reply to  1sky1
January 23, 2019 1:24 am

“The value of historical Min/Max observations should not be dismissed out of hand.”
Indeed. One thing I’ve been emphasising is that the Min/Max measure tends to be offset from the integrated mean by a fairly constant amount. That constant depends on the time at which the min/max is read (this is the basis of the TOBS adjustment). So at one level, there is an apparent discrepancy of up to a degree or so, and min/max indices don’t even agree with each other, let alone with the integrated mean.

But this doesn’t take into account that what is sought is the monthly mean of the anomaly. That is, the mean is subtracted, and so these offsets will disappear.

The same is actually true of periodically sampled averages. It is true that if you sample twice a day, say, that will alias with the second harmonic of the diurnal to give an offset, which could be up to a degree, as Willis has shown. But for a given month, the diurnal cycle doesn’t change much from year to year, so it is a fairly constant offset, and again disappears when you calculate temperature anomalies, since it is also present in the reference value.
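(A small R sketch of that point, with made-up numbers: a constant offset between the min/max-based series and the integrated series cancels exactly in the anomalies, while a drifting offset does not.)

set.seed(4)
months       <- 240
true_monthly <- 10 + 0.002*(1:months) + rnorm(months, sd = 0.4)   # integrated means
minmax_based <- true_monthly + 0.8                                # constant offset

anom <- function(x, base = 1:120) x - mean(x[base])   # anomaly vs its own baseline
max(abs(anom(minmax_based) - anom(true_monthly)))     # ~0: the offset cancels

drifting <- minmax_based + 0.001*(1:months)           # offset that drifts over time
max(abs(anom(drifting) - anom(true_monthly)))         # nonzero: drift leaks into trends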

Editor
January 22, 2019 10:33 pm

William Ward January 22, 2019 at 8:43 pm

Hi Willis,

You ended your post with “Your Friend”. Well alright! Thanks Willis.

I was perfectly serious in that, and you are welcome.

Willis said:

“All the data that I’ve looked at give a mean daily error on the order of five-thousandths of a degree; an RMS daily error on the order of five-hundredths of a degree; and a maximum daily error on the order of ± a quarter of a degree. Together these add up to a trend error on the order of a few thousandths of a degree per decade. None of these are significant in the field of climate science.”

My reply: Can you clarify what you are comparing here? Is it 24-samples/day vs. 288-samples/day? Or is it one of those vs. max/min? I’m assuming the former, but please clarify so I can respond to the correct concept. I’m not hung up on the difference between 288 and 24. We have shown error between 288 and max/min. Paramenter has provided some good information in addition to mine. If 24-samples/day produces essentially the same result as 288, this doesn’t really change the core message. I think we are in agreement.

Yes, it was comparing hourly versus 288 samples per day.

Willis said:

“I don’t see the “oscillation” that you mention so perhaps I don’t understand what you are referring to.”

My reply: Look at my Fig 2 and read the chart from the bottom up (increasing sample rate). If you were to plot the error vs sample rate, it would decrease from 0.7 or 0.8 to 0.1, cross over zero to -0.1 and then come back up to 0. I didn’t plot other rates, so we don’t know what it does other than the ones I show for that example. Not exactly an oscillation, but a convergence with ripple. The error changes signs. Your analysis is RMS. Can you explain how you account for the sign of the error in your analysis?

I believe from looking at Figure 2 that you are looking at one day’s data. I’m looking at years of daily data.

Next, the RMS error, as the name implies, is the square root of the mean of the squares of the errors, so it always has a positive value. Since I was looking at more than one day, I had lots of errors, and I wanted to know how well they were doing on average.
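(For concreteness, the three statistics in R for a hypothetical vector of daily errors named err; the made-up values show that sign matters for the mean error but not for the RMS or the maximum.)

err <- c(-0.21, 0.05, 0.13, -0.02, 0.08)   # made-up daily errors, deg C
mean(err)                                  # signed errors can partly cancel
sqrt(mean(err^2))                          # RMS error: always non-negative
max(abs(err))                              # worst single day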

Willis said:

“A much more important question is, what we can do with the errors that using min-max has created in the past?” And: “So let me invite you to consider that question, of how we might minimize the errors of the traditional method ex post, as a much more important puzzle than the exact reason that we get errors from the traditional method. I’d be very happy to hear your thoughts, particularly on removing the aliasing …”

My reply: An admirable goal! I wish I had a more optimistic reply to match the good intention of your goal. There are plenty of texts you can refer to. I found this brief paper to be convenient:

http://www.dataphysics.com/downloads/technical/Effects-of-Sampling-and-Aliasing-on-the-Conversion-by-R.Welaratna.pdf

Quoting the paper: “Aliasing is irreversible. There is no way to examine the samples and determine which content to ignore because it came from aliased high frequencies. Aliasing can only be prevented by attenuating high frequency content before the sampling process…”

Yeah, I was afraid of that … however, I’m not sure it is completely true in the larger sense. It seems to me that it might only be true if we sample the signal at a single sampling rate.

But suppose we sample it at 288, 287, 286, 285, etc. samples per day. In your opinion, could there be information in that set of samples which would allow us to distinguish between aliased and non-aliased signals?

For example, my periodograms show very little aliasing in the hourly samples, but strong aliasing in the 4-hour band of the two-hour samples … shouldn’t that tell us something about the signal?
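(A rough R sketch of that kind of periodogram check, on a synthetic series rather than station data: a 10-cycle/day component sits at its true frequency in hourly samples but folds down to 2 cycles/day when the series is naively thinned to 2-hourly samples.)

n_days <- 64
t_hr   <- (0:(n_days*24 - 1))/24
x_hr   <- 10 + 5*sin(2*pi*1*t_hr) + 1*sin(2*pi*10*t_hr)   # diurnal + 10 cycles/day

sp24 <- spec.pgram(ts(x_hr, frequency = 24), taper = 0, plot = FALSE)   # hourly
x_2h <- x_hr[seq(1, length(x_hr), by = 2)]                # naive 2-hourly thinning
sp12 <- spec.pgram(ts(x_2h, frequency = 12), taper = 0, plot = FALSE)

# frequency (cycles/day) of the strongest non-diurnal peak in each periodogram
sp24$freq[which.max(sp24$spec * (sp24$freq > 1.5))]       # ~10: the true component
sp12$freq[which.max(sp12$spec * (sp12$freq > 1.5))]       # ~2: its alias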

Maybe if you study the individual station signals you can come up with some innovative way to reduce the daily mean error generated by max/min for days in those stations. If you can do this successfully for day after day in a station then maybe you are on to something. I’ll think about this some more…

Curiously, what I realized today is that we don’t really need to reduce the absolute error. What we need to do is to reduce the trend error. I haven’t worked out yet what that might mean … I know that a proper combination of max and min data can likely do that, and I’ve done it for the Redding data. But how stable that might be over time is a question …

I suspect that the eventual trend of the traditional method is related to the trends of the max and the min. By that I mean that say if the max has no trend and the min is warming, it pushes both the true trend and the traditional trend in the same direction, but by different amounts. So we may be able to use that to reduce the trend error.

Anyhow, those are my thoughts. I’m working as usual on about three projects right now (Argo data, buoy data, and this one), so as time and the tides permit I’ll post up what I’m finding.

Finally, you said to 1sky1:

I think Willis has shown that hourly sampling (24-samples/day) seems to be the rate that beyond which error is measured in hundredths or thousandths of a degree C. From a system engineering perspective I would still go with USCRN rate of 288 (averaged from 4,320). If a more detailed study showed the system requirements could be lower than 288, then I have no objection to that.

I have no problem with the system requirements being 288. My question is more practical—is the error in hourly sampling small enough to get solid results? I am looking at that because we have lots of hourly data and little 288 sample data.

My best to you, and my personal thanks for fighting through the headwinds in order to get back to the science … much appreciated.

w.

Clyde Spencer
Reply to  Willis Eschenbach
January 23, 2019 10:00 am

Willis,
In response to William, you said, “Curiously, what I realized today is that we don’t really need to reduce the absolute error. What we need to do is to reduce the trend error.” That is true with respect to answering the prevailing question of the times. But, what if future researchers want to go beyond what we are worried about today? They would be quite thankful if their inheritance from us would be data that allowed faithful reconstruction of daily temperature data. As an example, there are claims that warming has been resulting in more extreme weather. Weather is what happens on a daily basis. What if researchers had a historical data set that allowed rigorous analysis of meteorological parameters on a 5-minute basis to see if there really is a change in extremes?

Reply to  Clyde Spencer
January 23, 2019 12:52 pm

Clyde, as I said, “I have no problem with the system requirements being 288.” I’m a data guy, and the more facts we have the better off we are.

w.

Bright Red
Reply to  Clyde Spencer
January 23, 2019 6:29 pm

Clyde said “That is true with respect to answering the prevailing question of the times. But, what if future researchers want to go beyond what we are worried about today? They would be quite thankful if their inheritance from us would be data that allowed faithful reconstruction of daily temperature data. As an example, there are claims that warming has been resulting in more extreme weather. Weather is what happens on a daily basis. What if researchers had a historical data set that allowed rigorous analysis of meteorological parameters on a 5-minute basis to see if there really is a change in extremes?”
and Clyde said
” Why wouldn’t we want to eliminate a potential source of error or uncertainty for a trivial cost increase? Lastly, even though hourly data allows a good estimate of the mean, which is the primary use to which it is being put today, Nyquist-compliant sampling assures the ability to reconstruct the times-series faithfully, which future researchers may thank us for if they want to go beyond where we are, such as looking for trends in the standard deviation or daily energy exchanges.”

I agree with what you have said above, as we do not know what the data collected now will be used for in the future. Further, we only get one chance to take the measurements, unlike in a laboratory.
The amount of data and the transmission time are excuses, not reasons. We should be doing the best job possible, and in my view 288 samples/day is at the very bottom end.
These readings SHOULD be on the record UNALTERED for as long as there is someone or something to look at them, and that will hopefully be a very, very long time.

To coin a phrase
Anyone who claims to know all that will be required from the data in the year 2100 is blowing smoke up your fundamental orifice …

William Ward
Reply to  Clyde Spencer
January 23, 2019 8:52 pm

Clyde,

As to your “inheritance” thoughts… right on! I would like to see what we could learn about climate with better use of data and available technology. I have an interesting example from the process control industry. As you know, some factories can generate revenue from operations that can be measured in the millions of dollars/hr. If a line goes down because a machine goes down, then revenue and profit can suffer. If a company misses earnings estimates because the line is down for a prolonged period of time, then stock prices can take a big hit, causing misery for the executives, and even the employees if their compensation is tied to company stock. If the company runs its factory 24/7, then there is no way to make up the time lost to a shut-down line. So systems have been developed to study the frequency content from sensors on machines with motors. By gathering the right data and analyzing it, we now have frequency profiles of what a bearing starts to “sound” like when it starts to go bad. Using this information, a bearing that is starting to go bad can be identified and replaced prior to failure with only a brief service interruption that doesn’t shut down the line.

What could we learn about weather (and therefore climate) with better tools and data? The mindset of climate scientists has to change first.

Ps – for the record – Willis and I are much better in sync now. My comments to you are generic and not related to anything Willis said. Just clarifying.

Editor
January 23, 2019 12:00 am

Well, after my thought that the difference in trends between the traditional and the true trends might be related to the trends of the max and min values, I thought of a quick test of that. I took the trends of the true, traditional, max, and min monthly values for each of the 12 years of the Redding USCRN record. I then used linear regression with the difference between the true and the traditional trends as the predictand and the max and min trends as the variables. In other words, I was seeing if I could predict the trend error from the min and max trends. Here is that result.

Interesting result, huh? The encouraging part is that the one set of parameters from the linear regression applies well to all of the individual years.

Hmmm … I’ll have to take a look at some other datasets. I’ve seen far too many one-off good fits to get too excited. Lots of flashes in the pan, not a lot of actual successes …
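
For anyone who wants to retrace that test on another station, here is a minimal R sketch of the procedure; the data-frame layout is hypothetical, and this is not Willis’s actual script:

# monthly: hypothetical data frame with one row per month and columns
#   year, true, trad, tmax, tmin  (monthly mean temperatures)
slope <- function(y) coef(lm(y ~ seq_along(y)))[2]       # within-year trend, degrees per month

trends <- do.call(rbind, lapply(split(monthly, monthly$year), function(d)
  data.frame(true   = slope(d$true),  trad   = slope(d$trad),
             themax = slope(d$tmax),  themin = slope(d$tmin))))

# Predict the trend error (traditional minus true) from the max and min trends
summary(lm(I(trad - true) ~ themax + themin, data = trends))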

Best to all,

w.

Reply to  Willis Eschenbach
January 23, 2019 1:13 pm

I neglected to mention an interesting part of the equation. When the max trend goes up, the trend error increases … but when the min trend goes up the trend error decreases by about the same amount. Here’s the regression:

Call:
lm(formula = (trad - true) ~ themax + themin)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.12079 -0.09099 -0.04888  0.08142  0.25382 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)  0.07488    0.12109   0.618  0.55167   
themax       0.07933    0.01715   4.626  0.00124 **
themin      -0.08135    0.03666  -2.219  0.05367 . 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1347 on 9 degrees of freedom
Multiple R-squared:  0.715,	Adjusted R-squared:  0.6517 
F-statistic: 11.29 on 2 and 9 DF,  p-value: 0.003522

Go figure …

w.

Clyde Spencer
Reply to  Willis Eschenbach
January 23, 2019 8:23 pm

Willis
You said, “When the max trend goes up, the trend error increases … but when the min trend goes up the trend error decreases by about the same amount.” I think that I have an explanation for you. As the “max trend goes up” the difference between min and max increases, meaning an interpolation (mid-range) over a larger difference. However, when the min trend goes up, the difference is decreased, meaning that the interpolation is over a smaller difference with less chance for error. Not unlike calculating the slope of a line by letting the limit approach zero.

Paramenter
January 23, 2019 2:01 pm

Hey Bernie,

[Bernie] “We are talking here of knowing (or NOT knowing) the sample time RELATIVE to a regular spacing. That is the ‘time’ we don’t know.”

Quite correct but that only makes things worse from the signal recovery point of view. Or starting from another end: what if you actually have timestamp per each daily min and max? You have array of not equally spaced values and you can somehow interpolate. Not splendid, but smaller error. Now, remove the timestamp? More uncertainty and bigger error then.

If you’ve got 2 measurements per day without knowing their exact times, what options have you got? One is to assume equal spacing and interpolate. Another option is to assign both values to the same point (a day) and then calculate the average. Then you’ve got regular spacing between samples. So, for me, not knowing the exact sample timing causes further degradation of the signal-recovery procedure and a bigger error. All aligned with Nyquist.

[Bernie] “For example, in the case where you HAVE actually sampled to 288 samples/day, is a particular value of Tmax reported at n=0 or n=287 or in-between? Potentially HUGE obvious errors, and unrelated to Nyquist or to ‘jitter’.”

Where does the error between the true mean and the daily midrange value come from? It comes from the fact that the midrange is often a poor estimator. Why is the midrange often a poor estimator? Because for certain variables, such as temperature, the shape of the changes (the signal form) means the midrange value is dragged away from the true mean. Because you cannot reliably recover the signal, you introduce an error. Precisely because of Nyquist. For me it is as simple as that.

[Bernie] “Well no – Actually I am talking about recovering the full signal EXACTLY from bunched samples.”

Yes, under certain circumstances such interlaced or ‘bunched’ samples would work. Unfortunately, not for all. For example, if you’ve got bursts of dense sampling followed by gaps – all at the irregular intervals signal recovery becomes highly problematic.

Reply to  Paramenter
January 23, 2019 6:10 pm

Paramenter at January 23, 2019 at 2:01 pm; excerpt here:

“ . . . . .
[Bernie] Well no – Actually I am talking about recovering the full signal EXACTLY from bunched samples.
[Paramenter] Yes, under certain circumstances such interlaced or ‘bunched’ samples would work. Unfortunately, not for all. For example, if you’ve got bursts of dense sampling followed by gaps – all at the irregular intervals signal recovery becomes highly problematic. . . . . .”


WRONG – it works for all. Do you suppose it works for [ . . . . 1 1 0 0 1 1 0 0 . . . . .] (bursts followed by gaps, to use your words), but not for [. . . . . 1 1 1 0 0 0 1 1 1 0 0 0 . . . . .] or for [ . . . . . 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 . . . . .] or even for [ . . . . . 1 1 0 1 0 0 1 0 1 1 0 1 0 0 1 0. . . . .]? It in fact always works as long as the bandwidth is reduced by the fraction: (samples kept)/(total samples), which is ½ in these examples. At some point you need to BELIEVE THE MATH! What have I not made clear? Sampling theory can be subtle, but it is not subject to what anyone “thinks should be true”. Enjoy it.

-Bernie
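
As a numerical check of that bandwidth condition (an illustrative R sketch, not Bernie’s own demonstration): keep half the samples of a test signal in a “bunched” 1 1 0 0 pattern, limit the signal to half the original bandwidth, and recover the discarded samples by least-squares fitting the allowed harmonics to the kept ones.

set.seed(1)
N <- 64                                     # samples per period
n <- 0:(N - 1)
K <- 15                                     # harmonics 0..15 only, i.e. half the original bandwidth

basis <- function(t)                        # cosines and sines of the allowed harmonics
  cbind(sapply(0:K, function(k) cos(2 * pi * k * t / N)),
        sapply(1:K, function(k) sin(2 * pi * k * t / N)))

x <- basis(n) %*% rnorm(2 * K + 1)          # random bandlimited test signal

keep <- rep(c(TRUE, TRUE, FALSE, FALSE), length.out = N)   # "bunched": keep 1 1, drop 0 0

coefs <- qr.solve(basis(n[keep]), x[keep])  # least-squares fit to the kept samples only
max(abs(basis(n) %*% coefs - x))            # recovery error: zero to rounding

Let harmonics above N/4 into the signal and the same fit no longer reproduces it; that is the (samples kept)/(total samples) condition at work.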

Bright Red
Reply to  Bernie Hutchins
January 23, 2019 6:34 pm

Bernie said: “It in fact always works as long as the bandwidth is reduced by the fraction: (samples kept)/(total samples).”

Something we can agree on

Clyde Spencer
Reply to  Bernie Hutchins
January 23, 2019 8:15 pm

Bernie
You said, “At some point you need to BELIEVE THE MATH! What have I not made clear?” My first naive reaction is that you are peddling a perpetual motion machine. You say if you are missing data, the solution is to throw away some data. That sounds to me like sampling a ‘bunch’ at the beginning of a transmission, sampling a ‘bunch’ at the end, and then by throwing away some more data you can fill in the missing middle part of the transmission. OK, I’m being a little facetious, but I’ve never been comfortable with the “Less is more” mantra.

I do need to have it explained in more detail.

Paramenter
Reply to  Clyde Spencer
January 24, 2019 7:01 am

Hey Clyde,

I reckon Bernie refers here to interlaced sampling where, as far as I understand it, we can choose any number of distinct points within the Nyquist intervals. If samples are then taken from this pool, throwing away some of them still allows you to restore the signal reliably. But that only works under particular circumstances and still has to follow strict rules. I don’t know why Bernie brought up this concept – except for proving that non-uniform sampling is still sampling, I cannot see any relevance to daily max/min. Those bad boys have nothing to do with interlaced sampling.

Clyde Spencer
Reply to  Paramenter
January 24, 2019 7:29 am

Paramenter

It is obvious to me that if a signal has been over-sampled, then one has the luxury of compressing it by sub-sampling. Or, alternatively, if one is willing to live with aliasing or other loss of fidelity, then I can understand throwing away some data. But, in general, Bernie did not make a convincing case (at least to me) that you can get something for nothing.

Most of this discussion has centered on capturing an accurate representation of a temperature time-series, using audio recording as an example of how it should be done to prevent distortion. My experience with FFTs comes from the image processing field. I’m not sure whether the ear or the eye is better at detecting corruption in the signal, but I do know from personal experience that aliasing or ringing is evident in images when signal processing rules are violated.

1sky1
Reply to  Bernie Hutchins
January 24, 2019 5:41 pm

At some point you need to BELIEVE THE MATH!

Amen! Those who patently fail to grasp the math invent the most tediously tortured arguments for disbelieving it.

Reply to  Paramenter
January 23, 2019 6:44 pm

Paramenter said January 23, 2019 at 2:01 pm: “Hey Bernie,
[Bernie] We are talking here of knowing (or NOT knowing) the sample time RELATIVE to a regular spacing. That is the “time” we don’t know. [Paramenter] Quite correct but that only makes things worse from the signal recovery point of view. Or starting from another end: what if you actually have timestamp per each daily min and max? You have array of not equally spaced values and you can somehow interpolate. Not splendid, but smaller error. Now, remove the timestamp? More uncertainty and bigger error then.”

(1) You say “what if you actually have timestamp”. If you don’t – you don’t – no “what if”.

(2) What is “somehow interpolate” (bandlimited, polynomial, min norm)?

(3) You say “Now, remove the timestamp? More uncertainty and bigger error then.” True. But in what sense are you “removing” that which you never had? So why not just compute the true mean. Where is the “glory” in doing things wrong?

-Bernie

William Ward
Reply to  Bernie Hutchins
January 23, 2019 8:29 pm

Hello Bernie,

The following statement is not meant to be a criticism. I’m stating it as a possible explanation of our disagreements. I have found with almost every post of yours I have read that I come away confused about what you are intending to say. If there is a problem here it may be mine, as it relates to my ability to track what you are intending. (So please, no offense meant here.) Maybe if we were talking in the same room this would resolve itself, because of aspects of communication that are hard to fit into writing. I just mention it to offer a possible explanation as to why we seem to be at odds over some of this. I have the repeated experience of feeling like you are disagreeing, and then you say something that sounds like you are agreeing, or vice versa. Here is an example. Above in (3) you said: “You say ‘Now, remove the timestamp? More uncertainty and bigger error then.’ True. But in what sense are you ‘removing’ that which you never had? So why not just compute the true mean. Where is the ‘glory’ in doing things wrong?”

My thrust on this subject – and, at the risk of speaking for Paramenter, his too – is to criticize the way max and min data are used. We are debating with you (I think) whether or not the methods used with max and min are violations of signal analysis or something else. You seem to agree max and min are not good for determining the mean, but you think that is not because of signal analysis violations. When you say “where is the glory in doing things wrong?” I’m confused about why you say this. We are not recommending that anything be done wrong. We are recommending that things be done right. We are trying to illustrate that max and min are samples, and that the way they are used violates sampling requirements. So let me try another approach.

Here is a hypothetical to illustrate. #1: We have a data acquisition system that delivers 288-samples/day of a real-world analog signal. This analog signal can be recorded on magnetic tape. The analog signal can simultaneously be recorded on a chart recorder. So we have magnetic tape, a chart recorder and digital samples. The chart recorder is not of much use, but it captures the signal with an analog representation. The magnetic tape could be played back through an amplifier to recreate the electrical signal that represents the original. The digital samples can be played back through a DAC to recreate the electrical signal that matches the original. You could do an experiment whereby (if levels are set correctly) you could subtract one of these from the other to null them out, proving that they are equivalent. You could take the recording from the magnetic tape and run it through an amplifier with a gain of -6dB, cutting the amplitude in half. In the digital domain, you can digitally divide the samples by 2 and then feed this through the DAC. Aside from quantization error from the math operation, this signal matches the one from the tape. If the operations we do on the digital signal comply with allowed signal analysis operations, then the digital version will always equal a similarly processed analog signal.

Now what if we start to do DSP operations that are not supported by signal analysis theorems? What if we digitally search through the samples and identify the maximum and minimum values? (Of course, the max and min values may occur more than once each day, but for this exercise assume they only appear once each day.) Your algorithm can then discard all samples except these 2 (max and min). You have just done something that is not supported by signal processing. Your operation results in a digital signal that no longer represents the original. I call this a signal analysis violation. I think it is appropriate to call it a Nyquist violation because we don’t have periodic samples or the number of samples required to reconstruct the signal.

Now, continuing. The timing of these 2 samples is known, since they came from our 288-samples/day. But what if in parallel we also had a max/min thermometer there capturing the same event, and suppose the max/min thermometer and the ADC system are matched such that they yield the same values. Someone could have been paid to stand there, watch the thermometer and write down the times that the max and min occurred. So we have 2 different methods of obtaining the same sample values and timing. Are these 2 scenarios signal analysis violations? Can either of them be used to get to the original signal? Will mathematical operations on these 2 samples with their corresponding timings ever relate to the original signal? Now, what if no one was there to capture the times the max and min were reached? How does this differ from the scenario of starting with 288 samples and discarding all but the max and min? Furthermore, what if we discard or don’t use the timing from the 288 samples? Where in this process do you say we are not dealing with a signal analysis/Nyquist problem? And what is the mathematical or technical explanation that justifies it?

Additionally, sample clock jitter does not invalidate Nyquist. Do you agree? It adds error to the sampling process. Modern ADC jitter is very small: picoseconds. If we extract max and min from 288 samples we have sampled values and sample times. The max and min values occur 2x/day, so there is a periodic rate, but there is variability in that rate. While the magnitude of the variability between max and min and the magnitude of the variability of a jittered clock are many, many orders of magnitude apart, conceptually they are the same. At what time limit does jitter invalidate Nyquist? From a signal analysis perspective, what is the math that differentiates the scenarios? Could you differentiate a set of samples with max and min from a set with 2-samples/day and an irregular clock?

Unless these questions can be addressed, I conclude that the practice of using max and min to accurately calculate anything about the original signal is a violation of signal analysis/Nyquist requirements.

I think it is rational to say that if you are working with discrete values that come from an analog signal then you have samples. The following things will mean your samples do not represent, or do not well represent, your original analog signal: 1) not enough samples per unit time, 2) deviation from a regular sampling interval, or 3) loss of timing information (or failure to record it).

Please only bring in bunched samples if you think bunched sampling can validate the use of max and min.
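
One way to picture the last two questions is to sweep the (unknown) phase of a 2-sample/day scheme over a synthetic curve and compare the spread of errors with the midrange of the same curve. The harmonic amplitudes below are assumptions chosen only for illustration:

temp_fun <- function(h) 10 + 6 * sin(2 * pi * (h - 9) / 24) + 2 * cos(2 * pi * (h - 3) / 12)
grid  <- seq(0, 24, length.out = 289)[-289]          # 5-minute grid over one day
truth <- mean(temp_fun(grid))                        # stand-in for the 288-sample mean

phase <- seq(0, 12, by = 0.25)                       # unknown sampling phase, in hours
err   <- sapply(phase, function(h0) mean(temp_fun(c(h0, h0 + 12))) - truth)
range(err)                                           # spans roughly +/- the aliased second harmonic

# The max/min midrange is just one more pair of sample times, picked by the signal itself
(max(temp_fun(grid)) + min(temp_fun(grid))) / 2 - truth

With two fixed samples per day the answer depends on where the clock happens to sit; timing jitter moves you around inside that band, and the max/min pair is one more point in it, chosen by the signal rather than by a clock.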

Ps – I will try to look through your Electronotes. Bernie, this is an impressive body of work!!! Nearly 50 years of publishing I see! (It has been a while since I have been reminded of the typewriter… seeing some of your older notes, I see you probably wore out many ribbons.)

Paramenter
Reply to  Bernie Hutchins
January 24, 2019 3:05 am

Hey Bernie,

[Bernie] “(3) You say ‘Now, remove the timestamp? More uncertainty and bigger error then.’ True. But in what sense are you ‘removing’ that which you never had?”

To illustrate the difference between a better and a worse situation. If you have a timestamp for each daily min/max, your approximation of the recovered signal will be better. If you don’t (and we don’t), your approximation will be worse. Not knowing the timestamps of the daily min/max makes the situation worse from the signal-recovery standpoint.

[Bernie] “So why not just compute the true mean.”

Because for most of the instrumental temperature record we have only the daily min and max, so we cannot compute the true mean.

[Bernie] “Where is the ‘glory’ in doing things wrong?”

Indeed.

[Bernie] “(2) What is ‘somehow interpolate’ (bandlimited, polynomial, min norm)?”

Whatever you fancy that is suitable for the purpose. When I was in contact with this stuff I liked cubic spline interpolation (smooth first and second derivatives) and Akima interpolation.
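
A toy sketch of that route in base R, with made-up timestamps, just to show the mechanics; whether it actually beats the midrange depends on the signal:

# Hypothetical three days of Tmin/Tmax with known timestamps (hours from start)
obs_t <- c(5.2, 15.0, 29.5, 38.8, 53.1, 63.4)        # alternating min / max times
obs_T <- c(4.1, 14.6,  5.0, 16.2,  3.2, 13.9)        # and the temperatures at those times

f    <- splinefun(obs_t, obs_T, method = "natural")   # cubic spline through the six points
grid <- head(seq(0, 72, by = 1/12), -1)               # 5-minute grid over the three days

est_mean <- tapply(f(grid), floor(grid / 24), mean)   # spline-based daily means
midrange <- (obs_T[c(1, 3, 5)] + obs_T[c(2, 4, 6)]) / 2
rbind(est_mean, midrange)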

[Bernie] “WRONG – it works for all.”

Probably I was unclear. And because I want to enjoy it, let’s consider this use case: I synthesize a signal and then sample it in a non-uniform fashion where each section of dense sampling can have a variable length. The distances between the sections of dense samples also vary. Between the high-density sampling sections there is sparse sampling or none at all. Now, because I’m a bad boy, in the segments of the signal where sampling is sparse or absent I generate a high-amplitude, high-frequency signal. Where sampling is dense I introduce a quiet signal with low amplitude and frequency. Now I send you the signal sampled this way and ask you to recover the original one. And, I’m afraid, there is not the slightest chance of doing so.

If you’re really desperate I can make a graph illustrating this situation but I believe you’ve got the picture.

Paramenter
January 24, 2019 8:33 am

Hey Clyde,

Also, for me the errors associated with the traditional method of recording temperatures by daily min/max come from a poorly restored original signal. The bigger the distortion, the bigger the error we’ve got. Furthermore, the relatively poor underlying data force us to talk mainly about trends. But that’s only one aspect. Climate models are very complex, taking into account energy transfers, dynamic feedback mechanisms and so on. Here, a reliable signal is vital to get a clear picture. Marrying those two worlds and assuming that they are fully compatible may be wishful thinking.

Clyde Spencer
Reply to  Paramenter
January 24, 2019 10:46 am

Paramenter
I hope it isn’t just wishful thinking! To get a better estimate of the daily net energy transfer we need more than just two temperatures. If all stations were to ultimately transition to high-temporal resolution we could treat the historical data as proxy data for the true mean. The transition period could be used to confirm what, if any, error exists in historical trends and to then correct them in the long-term analysis.

William Ward
Reply to  Clyde Spencer
January 24, 2019 8:59 pm

Hi Clyde, Paramenter,

Until climate science graduates to feeding full signals into transfer functions and then getting back and analyzing the resulting full signals, I think our understanding is stagnant.

I also think this needs to build up from smaller regional transfer functions. But the goal of modeling climate may never be realized. Some who have studied climate modeling their entire lives have come away with the belief that it is not possible to model climate, as climate is a non-linear, chaotic system of coupled feedbacks. (I think one of the IPCC reports admits as much.) Today there are dozens of climate models – perhaps over 100. The exact number of them that backtest is zero.

January 24, 2019 6:09 pm

1sky1 said in part January 23, 2019 at 4:02 pm: “A singular property of the much-maligned diurnal mid-range metric is the total absence of ANY aliasing, because that metric is determined not from discrete samples, but from the CONTINUOUS signal. While the (usually positive) offset from the true mean is a significant discrepancy, it can be greatly reduced by empirically determining for each station (and for each of 12 months) the coefficient 0 < eta < 0.5 in a much more effective estimate, (1 – eta)Tmin + eta Tmax, of the true signal mean.”

Bernie replies: Very possibly, you have the best answer.
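
For concreteness, one way such an eta could be fitted where overlapping 5-minute data exist is sketched below in R; the sub_hourly data frame is a hypothetical stand-in, not NOAA’s actual file layout, and this is only an illustration of the idea:

# sub_hourly: hypothetical data frame with columns date, month, temp (288 rows per day)
daily <- do.call(rbind, lapply(split(sub_hourly, sub_hourly$date), function(d)
  data.frame(month = d$month[1], tmean = mean(d$temp),
             tmin  = min(d$temp), tmax  = max(d$temp))))

# Solve tmean = (1 - eta)*tmin + eta*tmax for eta by least squares, one eta per month
eta <- sapply(split(daily, daily$month), function(d)
  coef(lm(I(tmean - tmin) ~ 0 + I(tmax - tmin), data = d))[1])
round(eta, 3)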

William Ward said in part January 23, 2019 at 8:29 pm: “ Please only bring in bunched samples if you think bunched sampling can validate the use of max and min. “

Bernie replies: That is the reason for, and the implication of the suggestion.

Paramenter said in part at January 24, 2019 at 7:01 am: “I reckon Bernie refers here to interlaced sampling where, as far as I understand it, we can choose any number of distinct points within the Nyquist intervals. If samples are then taken from this pool, throwing away some of them still allows you to restore the signal reliably. But that only works under particular circumstances and still has to follow strict rules. I don’t know why Bernie brought up this concept – except for proving that non-uniform sampling is still sampling, I cannot see any relevance to daily max/min. Those bad boys have nothing to do with interlaced sampling.”

Bernie replies: The “strict rules” are just proper basic sampling. If you HAD the times of Tmax and Tmin, you can (I think) recover the mean exactly (not the entire record!).

Clyde Spencer said in part at January 24, 2019 at 7:29 am : “ It is obvious to me that if a signal has been over-sampled, then one has the luxury of compressing it by sub-sampling. Or, alternatively, if one is willing to live with aliasing or other loss of fidelity, then I can understand throwing away some data. But, in general, Bernie did not make a convincing case (at least to me) that you can get something for nothing. “

Bernie replies: It’s not something for nothing (last sentence). You paid up-front (your first sentence).

* * * * * * * * * *
GENERAL IDEA I HAVE IN MIND
You want the true mean. All you have is Tmax and Tmin. Above, 1sky1 pointed out that you can make a useful correction to the trial value (Tmax+Tmin)/2, based on empirical observations. Can we do better? Would it make a difference if we actually knew the times of these measurements? That is, we have nmax and nmin in addition to Tmax(n=nmax) and Tmin(n=nmin) for some supposed T(n). Let’s suppose we have a dense set of potential sample positions for any one day – perhaps n = 0, 1, 2, 3, …, 47, which we could start with. We install Tmax at nmax and Tmin at nmin (0 for all other n). From these two non-uniform samples we “reconstruct” the full 48 samples, and filter back to DC.

Perhaps –

– Bernie

William Ward
Reply to  Bernie Hutchins
January 24, 2019 8:50 pm

Bernie,

Bernie said: “GENERAL IDEA I HAVE IN MIND…”

Would you like some USCRN data files to run your proposed algorithm on? The website that stores the data is not available due to the government shutdown, but one of us can get files to you. I recommend the “Sub-Hourly” data. You can extract Tmax and Tmin for each day and also get the timestamps of those samples. It would be interesting to see if you can do any DSP on those max and min samples and get a mean value that matches the 288-samples/day mean, and furthermore to see if you can reconstruct the original signal that the 288-samples/day data capture. Just let us know how to reach you by email, or if you prefer I can load some files on DropBox and give you a link.

Reply to  William Ward
January 26, 2019 7:53 pm

William –

Thanks for the offer – but in truth I am very far from trying the method on actual temperature data. I would have to work out the computational details first, and test them on some “toy” sequences.

Since this thread has quieted down, this is probably a convenient place to park the idea. This is based on a previous reference: http://electronotes.netfirms.com/EN200.pdf and a one-page “tape-up” summary is here:

http://electronotes.netfirms.com/BunchedSamples.jpg

The basic goal is to obtain a theoretical basis for any useful correction factor from (Tmax+Tmin)/2 to the true mean. 1sky1 suggested above such a correction (he/she called it “eta”) based on empirical data. I was hoping for a value for this factor based on the “offset” of Tmax and Tmin relative to a nominal ½ day separation.

The top portion of the single page is ordinary (uniform) sampling and the corresponding use of sinc functions as time-domain interpolators (shown for three samples: 1, 0.6, and -0.4). Nothing unusual here. Suppose the sampling rate is 1/T = 288 (samples per day, say). Such sampling could support a bandwidth approaching 144 cycles/day. The temperature curve might be much more bandlimited – perhaps to 3 or 4 cycles/day. We don’t need 288 samples/cycle – perhaps 24 samples/day would do.
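
That top-panel construction can be reproduced in a few lines of R; the three sample values are the ones quoted above, and the rest is just the standard sinc-interpolation formula:

sinc <- function(x) ifelse(x == 0, 1, sin(pi * x) / (pi * x))

samples <- c(1, 0.6, -0.4)           # the three sample values
n  <- 0:2                            # their uniform sample indices (T = 1)
tt <- seq(-2, 5, by = 0.01)          # continuous time axis

x_t <- rowSums(sapply(seq_along(n), function(k) samples[k] * sinc(tt - n[k])))

plot(tt, x_t, type = "l", xlab = "t / T", ylab = "x(t)")
points(n, samples, pch = 19)         # the reconstruction passes through the samples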

But we are kind of working right here toward a rate as low as 2 samples/day.

If we wanted to use two equally spaced samples, we would have to bandlimit to perhaps 0.5 cycles/day. This would surely cut into the fundamental (1 cycle/day) and we would have no hope of recovering the original temperature curve (as we could have with 288 or even 24 samples). However, the mean (DC) would be recoverable.

Note that if we had the full temperature curve, and we just threw out all except two samples, we would have severe aliasing (including overlap of images*). But we do not need to resample with an analog antialiasing filter at 0.5. We just reduce the bandwidth with a basic low-pass digital filter (usually called, in this use, a “pre-decimation” filter).
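
A trivial illustration of the pre-decimation idea (an illustrative sketch, not taken from the app note): a one-day boxcar low-pass followed by decimation to one sample per day is exactly the day’s 5-minute average.

set.seed(1)
x <- 10 + rnorm(288 * 10)                               # stand-in for ten days of 5-minute data

lp <- stats::filter(x, rep(1 / 288, 288), sides = 1)    # one-day boxcar: crude pre-decimation low-pass
daily_filtered <- lp[seq(288, length(x), by = 288)]     # then decimate to 1 sample/day

daily_mean <- colMeans(matrix(x, nrow = 288))           # straight average of each day's 288 samples
all.equal(as.numeric(daily_filtered), daily_mean)       # TRUE

A sharper FIR in place of the boxcar would preserve more of the sub-daily band if the output rate were higher than once per day.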

Onward to unequal spacing (non-uniform or “bunched” sampling). What we have said is that if the bandwidth is sufficiently low, a much lower (even on average) sampling rate is what matters. The two samples per cycle (per day here), perhaps Tmax and Tmin, ALONG WITH THEIR TIME INDICES, would be enough, not to recover the full temperature curve of course, but likely the DC (mean), as with the equally spaced samples. That is, we recover the bandlimited (to 0.5) curve with the expectation that, except close to DC, it is aliased beyond use.

How hard is it to handle the non-uniform case? Well, there is a fair amount of discussion in my app note:
http://electronotes.netfirms.com/BunchedSamples.jpg
although actually coding it in general, or up to the size of just two arbitrary samples in 288, seems tedious. There are no theoretical issues, however. The discussion in the note is entirely frequency (spectrally) based. I’m not sure that is a problem if we just want the value of the spectrum at zero. More illuminating at the moment is perhaps the bottom portion of the jpg here – the bunched sampling in continuous time. The graph there shows the interpolation functions for the bunched case. Compare to the sinc of the uniform case – there are two such functions, a(t) and b(t), as shown: one for the even samples, and the other for the odd samples as displaced. Note that these two interleaved sub-sequences are generally called “polyphases”.

Note three things: (1) the interpolation functions are not sinc functions but are built from sinc(2t) and t·sinc^2(t); (2) these are for continuous time t, not for discrete time n, but it might basically involve the corresponding “periodic sincs”; (3) the top shows the weighted sum of three sinc functions while the bottom shows just the interpolation functions themselves – since there are two of them.

PROSPECTS: I might hope to find that, for some typical range of the time indices of Tmax and Tmin, the true mean can be estimated from (Tmax+Tmin)/2 by some theoretical factor. Myself, I am retired (with some emphasis on “tired”). If I were still working I would look at offering this as a proposed “senior project”, likely for 3 or 4 credits.

-Bernie hutchins@ece.cornell.edu

*the term “aliasing” sometimes refers to the case where we are really talking about normal “spectral images” about multiples of the sampling rate which may not actually overlap populated regions of the original spectrum and/or other images, and hence still offer a recovery opportunity (e.g. bandpass sampling).

Reply to  Bernie Hutchins
January 26, 2019 8:20 pm

Link to App Note should have been – sorry

http://electronotes.netfirms.com/AN356.pdf

William Ward
Reply to  Bernie Hutchins
January 27, 2019 9:41 pm

Bernie,

Thanks for your detailed reply. I agree that maybe we can park this discussion for now. It is not always easy to completely understand technical detail in these kinds of exchanges, and misunderstanding can sidetrack what would be an otherwise stimulating discussion if we were in the same room interacting. Although we have bumped horns a few times in the discussions, I have enjoyed talking with you. I’m fascinated by your massive library of Electronotes – I can see you have had a rich career! I hope you are enjoying retirement. I too retired – at least from my primary industry – but “retire” is not really in my vocabulary; I call it “Phase 2”. Now I focus on the projects and work of my choosing – if and when I want. I appreciate you sending your contact information. I’ll email you to reciprocate. Maybe one day I’ll look you up to pick your brain about a subject aligned with your experiences. Ps – one of my businesses is an audio engineering company. This grew into a record company with in-house engineering. I was a full member of the AES for years but let my membership lapse. I used to enjoy the AES shows, especially in NYC.