SIGNAL CONVOLUTION, MIDPOINT OF RANGE, AND ALL THAT

KEVIN KILTY

Introduction

A guest blogger recently¹ made an analysis of the twice per day sampling of maximum and minimum temperature and its relationship to the Nyquist rate, in an attempt to refute some common thinking. This blogger concluded the following:

(1) Fussing about regular samples of a few per day is theoretical only. Max/Min temperature recording is not sampling of the sort envisaged by Nyquist because it is not periodic, and has a different sort of validity because we do not know at what time the samples were taken.

(2) Errors in under-sampling a temperature signal are an interaction of sub-daily periods with the diurnal temperature cycle.

(3) Max/Min sampling is something else.

The purpose of the present contribution is to show that the first two conclusions are misleading without further qualification, and that the third could use fleshing out to explain what makes Max/Min values “something else”.

1. Admonitions about sampling abound

In the world of analog to digital conversion, admonitions to bandlimit signals before conversion are easy to find. For example, consider this verbatim quotation from the manual for a common microprocessor regarding use of its analog to digital (A/D or ADC) peripheral. The italics are mine.

“…Signal components higher than the Nyquist frequency (fADC/2) should not be present to avoid distortion from unpredictable signal convolution. The user is advised to remove high frequency components with a low-pass filter before applying the signals as inputs to the ADC.”


Date: February 14, 2019.

¹ Nyquist, sampling, anomalies, and all that, Nick Stokes, January 25, 2019

2. Distortion from signal convolution

 

What does “distortion from unpredictable signal convolution” mean? Signal convolution is a mathematical operation. It describes how a linear system, like the sample-and-hold (S/H) capacitor of an A/D, attains a value from its input signal. For a specific instance, consider how a digital value would be obtained from an analog temperature sensor. The S/H circuit of an A/D accumulates charge from the temperature sensor input over a measurement interval, 0 → t, between successive A/D conversions.

(1) \displaystyle v(t)=\int_{0}^{t}{s(\tau )\,h(t-\tau )\,d\tau }

where v(t) is the value attained by the S/H circuit from the input signal s(t) and the response function h(t).

Equation 1 is a convolution integral. Distortion occurs when the signal (s(t)) contains rapid, short-lived changes in value which are incompatible with the rate of sampling with the S/H circuit. This sampling rate is part of the response function, h(t). For example the S/H circuit of a typical A/D has small capacitance and small input impedance, and thus has very rapid response to signals, or wide bandwidth if you prefer. It looks like an impulse function. The sampling rate, on the other hand, is typically far slower, perhaps every few seconds or minutes, depending on the ultimate use of the data. In this case h(t) is a series of impulse functions separated by the sampling rate. If s(t) is a slowly varying signal, the convolution produces a nearly periodic output. In the frequency domain, the Fourier transform of h(t), the transfer function (H(ω)), also is periodic, but its periods are closely spaced, and if the sample rate is too slow, below the Nyquist rate, spectra of the signal (S(ω)) overlap and add to one another. This is aliasing, which the guest blogger covered in detail.
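To make Equation 1 concrete, here is a minimal numerical sketch of a discrete convolution; the input, the smoothly decaying response function, and all values are illustrative assumptions of mine, not anything taken from an actual A/D or AWOS record.

import numpy as np

# Minimal sketch of Equation 1 in discrete form: the output of a linear
# system is the convolution of the input s with the response h.
# All values below are arbitrary illustrations.
t = np.arange(0.0, 20.0, 1.0)
s = np.sin(2 * np.pi * t / 10.0)      # a slowly varying input signal
h = np.exp(-t / 2.0)                  # an assumed decaying response function
h = h / h.sum()                       # normalize so the output keeps the input's scale

v = np.convolve(s, h)[:len(t)]        # discrete analog of the integral of s(tau) h(t - tau)
print(np.round(v[:5], 3))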

From what I have just described, several things should be apparent. First, the problem of aliasing cannot be undone after the fact; it is not possible to recover the numbers making up a sum from the sum itself. Second, aliasing potentially applies to signals other than the daily temperature cycle. The problem is one of interaction between the bandwidth of the A/D process and the rate of sampling. It occurs even if the A/D process consists of a person reading analog records and recording them by pencil. Brief transient signals, even if not cyclic, will enter the digital record so long as they are within the passband of the measurement apparatus. This is why good engineering seeks to match the bandwidth of a measuring system to the bandwidth of the signal. A sufficiently narrow bandwidth improves the signal to noise ratio (S/N) and prevents spurious, unpredictable distortion.
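As a minimal illustration of why aliasing cannot be undone (the frequencies and sample rate below are arbitrary choices, not temperature data), a signal above the Nyquist frequency yields exactly the same samples as a lower-frequency signal, so no after-the-fact processing can tell which one was present:

import numpy as np

fs = 4.0                       # assumed sample rate (samples per unit time)
t = np.arange(8) / fs          # sampling instants
f_high = 3.0                   # above the Nyquist frequency fs/2 = 2
f_low = abs(f_high - fs)       # folds to |3 - 4| = 1

x_high = np.cos(2 * np.pi * f_high * t)
x_low = np.cos(2 * np.pi * f_low * t)

# Identical sample sequences: after sampling, the 3-cycle signal is
# indistinguishable from a 1-cycle signal.
print(np.allclose(x_high, x_low))    # True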

One other thing not made obvious in either my discussion, or that of the guest blogger, concerns the diurnal signal. While a diurnal signal is slow enough to be captured without aliasing by a twice per day measurement cycle, it would never be adequately defined by such a sample. One would be relatively ignorant of the phase and true amplitude of the diurnal cycle with twice per day sampling. For this reason most people sample at least as fast as two and one-half times the Nyquist rate to obtain usefully accurate phase and amplitude measurements of signals with frequencies near the Nyquist frequency.

3. An example drawn from real data


Figure 1. A portion of an AWOS record.

As an example of distortion from unpredictable signal convolution, refer to Figure 1. This figure shows a portion of the temperature history drawn from an AWOS station. Note that the hourly temperature records from 23:53 to 4:53 show temperatures sampled on schedule varying from −29F to −36F, but the 6-hour records show a minimum temperature of −40F.

Obviously the A/D system responded to and recorded a brief duration of very cold air which has been missed in the periodic record completely, but which will enter the Max/Min records as Min of the day. One might well wonder what other noisy events have distorted the temperature record. The Max/Min temperature records here are distorted in a manner just like aliasing: a brief, high-frequency event has made its way into the slow, twice per day Max/Min record. The distortion amounts to about a 2F difference between the Max/Min midpoint and the mean of 24 hourly temperatures, a difference completely unanticipated by the relatively high sampling rate of once per hour, if one accepts the blogger’s analysis uncritically. Just as obviously, had such an event occurred coincident with one of the hourly measurement times, it would have become part of the 24-samples-per-day spectrum, but at a frequency not reflective of its true duration. So there are two issues here: first, distortion from under-sampling; and second, transient signals may not be represented at all in some samples yet be quite prevalent in others.
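The mechanism can be sketched with synthetic numbers (an illustration only, not the AWOS data of Figure 1): a smooth diurnal cycle plus a 30-minute cold pocket barely moves the dense-sample mean, is missed entirely by an hourly schedule, yet pulls the Max/Min midpoint down by a couple of degrees.

import numpy as np

# Synthetic day: assumed baseline diurnal cycle plus a brief cold excursion.
t = np.arange(0, 24, 1 / 12.0)                          # 5-minute steps, in hours
temp = -32.0 + 4.0 * np.sin(2 * np.pi * (t - 9) / 24)   # illustrative diurnal cycle, deg F
temp[(t > 3.0) & (t < 3.5)] -= 5.0                      # 30-minute cold pocket

dense_mean = temp.mean()                    # "true" daily mean from dense samples
hourly_mean = temp[::12].mean()             # once-per-hour subset misses the pocket
midrange = 0.5 * (temp.max() + temp.min())  # Max/Min midpoint catches it

print(round(dense_mean, 2), round(hourly_mean, 2), round(midrange, 2))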

In summary, while the Max/Min records are not the sort of uniform sampling rate that the Nyquist theorem envisions, they aren’t far from being such. They are like periodic measurements with a bad clock jitter. It is difficult to argue that a distortion from unpredictable convolution does not have an impact on the spectrum resembling aliasing. Certainly sampling at a rate commensurate with the brevity of events like that in Figure 1 would produce a more accurate daily “mean” than does the midpoint of the daily range; or, alternatively, one could use a filter to condition the signal ahead of the A/D circuit, just as the manual for the microprocessor suggests, and just as anti-aliasing via the Nyquist criterion, or improvement of S/N, would demand. Removing the impact of aliasing from digital records after the fact is impossible. The impact is not necessarily negligible, nor is it mainly an interaction with the diurnal cycle. This is not just a theoretical problem; especially considering that Max/Min temperatures are expected to detect even brief temperature excursions, there isn’t any way to mitigate the problem in the Max/Min records themselves. This provides a segue into a discussion about the “something otherness” of Max/Min records.

4. Nature of the Midrange

The midpoint of the daily range of temperature is a statistic. It belongs to the group known as order statistics, as it comes from data ordered from low to high value. It serves as a measure of central tendency of temperature measurements, a sort of average, but it is different from the more common mean, median, and mode statistics. To speak of the midpoint of range as a daily mean temperature is simply wrong.

If we think of air temperature as a random variable following some sort of probability distribution, possessing a mean along with a variance, then the midpoint of range may serve as an estimator of the mean so long as the distribution is symmetric (skewness and the higher odd moments are zero). It might also be an efficient or robust estimator if the distribution is confined between two hard limits, a form known as platykurtic for having little probability in the distribution tails. In such a case we could also estimate a monthly mean temperature using a midrange value from the minimum and maximum temperatures of the month, or even an annual mean using the highest and lowest temperatures for a year.
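A small simulation makes the point about efficiency; the two distributions below (a hard-limited uniform and a heavy-tailed Student t) are illustrative stand-ins of mine, not fitted to any station’s data.

import numpy as np

# Compare the spread (standard deviation over many trials) of the sample mean
# and the midrange as estimators of the population mean.
rng = np.random.default_rng(0)
n, trials = 48, 5000

def estimator_spread(draw_samples):
    x = draw_samples((trials, n))
    means = x.mean(axis=1)
    midranges = 0.5 * (x.max(axis=1) + x.min(axis=1))
    return round(means.std(), 4), round(midranges.std(), 4)

# Bounded, flat (platykurtic) distribution: the midrange does very well.
print("uniform [-1, 1]  (mean sd, midrange sd):", estimator_spread(lambda size: rng.uniform(-1, 1, size)))
# Broad-tailed distribution: the midrange is far worse than the mean.
print("Student t, 3 dof (mean sd, midrange sd):", estimator_spread(lambda size: rng.standard_t(3, size)))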

In the case of the AWOS station of Figure 1, the annual midpoint is some 20F below the mean of daily midpoints, and even a monthly midpoint is typically 5F below the mean of daily values. The midpoint is obviously not an efficient estimator at this station, although it could perhaps work well at tropical stations where the distribution of temperature is more nearly platykurtic.

The site from which the AWOS data in Figure 1 was taken is continental; and while this particular January had a minimum temperature of −40F, it is not unusual to observe days where the maximum January temperature rises into the mid 60s. The weather in January often consists of a sequence of warm days in advance of a front, with a sequence of cold days following. Thus the temperature distribution at this site is possibly multimodal with very broad tails and without symmetry. In this situation the midrange is not an efficient estimator. It is not robust either, because it depends greatly on extreme events. It is also not an unbiased estimator as the temperature probability distribution is probably not symmetric. It is, however, what we are stuck with when seeking long-term surface temperature records.

One final point seems worth making. Averaging many midpoint values together probably will produce a mean midpoint that behaves like a normally distributed quantity, since the conditions of the central limit theorem seem to be present. However, people too often assume that averaging fixes all sorts of ills: that averaging will automatically reduce the variance of a statistic by the factor 1/n. This is strictly so only when samples are unbiased, independent, and identically distributed. The subject of data independence is beyond the scope of this paper, but here I have made a case that the probability distributions of the maximum and minimum values are not necessarily the same as one another, and may vary from place to place and time to time. I think precision estimates for “mean surface temperature” derived from midpoint of range (Max/Min) are too optimistic.
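A toy Monte Carlo illustrates the caveat (the AR(1) correlation and the bias value are arbitrary assumptions of mine): a constant bias never averages away, and positively correlated samples shrink the variance of the average far more slowly than 1/n.

import numpy as np

rng = np.random.default_rng(1)
n, trials, phi, bias = 30, 4000, 0.8, 0.5
iid_means, ar1_means = [], []

for _ in range(trials):
    iid = rng.normal(0.0, 1.0, n) + bias          # biased but independent samples
    ar1 = np.empty(n)                             # correlated samples, AR(1) with unit variance
    ar1[0] = rng.normal()
    for i in range(1, n):
        ar1[i] = phi * ar1[i - 1] + rng.normal(0.0, np.sqrt(1 - phi**2))
    iid_means.append(iid.mean())
    ar1_means.append(ar1.mean())

print("iid  : mean of averages %.2f (bias survives), sd %.3f vs 1/sqrt(n) = %.3f"
      % (np.mean(iid_means), np.std(iid_means), 1 / np.sqrt(n)))
print("AR(1): sd of averages %.3f, well above 1/sqrt(n)" % np.std(ar1_means))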

224 Comments
steven mosher
February 15, 2019 2:13 am

“In summary, while the Max/Min records are not the sort of uniform sampling rate that the Nyquist theorem envisions, they aren’t far from being such. They are like periodic measurements with a bad clock jitter.”

precious.

With respect to your AWOS.. post a link.

And say his name.

It’s Nick Stokes.

Reply to  steven mosher
February 15, 2019 5:07 am

Eh? There is a link, and a name. Or did the author add these after posting?

Anthony Watts
Reply to  DaveS
February 15, 2019 5:49 am

I checked the edit log. The name and link were there from the get-go. It was just Steven Mosher playing out his role as drive-by hack again.

Matthew Drobnick
Reply to  Anthony Watts
February 15, 2019 6:47 am

I still don’t understand his smug attitude. Is this just a character flaw or has the actor become stuck in a role?

There is no need for such arrogance, especially when being so painfully misguided

Bill Powers
Reply to  Matthew Drobnick
February 15, 2019 7:20 am

Arrogance is a learned behavioral trait that becomes a permanently ingrained personality flaw of the pseudo-intellectual babbling class.
It is how they talk to each other about the great unwashed who have yet to be indoctrinated. It is an off-putting attempt to gain an advantage by talking down to others. They confuse arrogance for wit, and in truth they don’t really like each other very much once the others turn their back.

Matthew Drobnick
Reply to  Matthew Drobnick
February 15, 2019 9:51 am

Bill, that is exactly what I was thinking! Thanks for putting it into print.

I used to be that way for a brief stint in my 20s, until I recognized how obnoxious and conceited it is.

Caligula Jones
Reply to  DaveS
February 15, 2019 9:48 am

Funny: Mosher pretends to be a real scientist who knows numbers and stuff, even though his entire background is in words and stuff.

http://www.populartechnology.net/2014/06/who-is-steven-mosher.html

So here, instead of agreeing/disagreeing with the numbers, and showing his work (as he demands, constantly, of others), he uses his Awesome English Skillz, “proofreads” a post, finds it lacking in something that is actually there, farts into the wind, then leaves.

Not exactly bringing the A game there…

Kevin kilty
Reply to  steven mosher
February 15, 2019 6:09 am

I didn’t want this to look like an attack on Nick Stokes, himself, who I have regard for. I just wanted to fill in things I thought were vague, and correct what I saw as mistakes in his thinking.

MarkW
Reply to  steven mosher
February 15, 2019 8:07 am

Once again, Mosh only sees what his paycheck requires him to see.

The final sentence of the article:

“I think precision estimates for “mean surface temperature” derived from midpoint of range (Max/Min) are too optimistic.”

I don’t know if Mosh just stops reading when he sees something he can use to support his paycheck, or if he really isn’t able to understand these papers.

John Endicott
Reply to  MarkW
February 15, 2019 9:30 am

He only understands what his paycheck lets him understand, anything else he just sneers at.

John Endicott
Reply to  steven mosher
February 15, 2019 9:27 am

The name and link were there the whole time. with respect to your drive-by – learn to read before posting.

Reply to  steven mosher
February 15, 2019 9:54 am

Mosher
What percentage of all surface “data”
are actually not “sampled” at all —
they are numbers wild guessed
by government bureaucrats,
and calling them “infilled” data
does not change that fact.

Do you know the percentage ?

Do you even care ?

If you don’t care, then why not ?

Reply to  Richard Greene
February 17, 2019 12:22 pm

I expected the usual silence from
Steven al-ways clue-less Mosher,
so I was not disappointed !

tom0mason
February 15, 2019 2:17 am

“Thus the temperature distribution at this site is possibly multimodal with very broad tails and without symmetry. In this situation the midrange is not an efficient estimator. It is not robust either, because it depends greatly on extreme events. It is also not an unbiased estimator as the temperature probability distribution is probably not symmetric. It is, however, what we are stuck with when seeking long-term surface temperature records.”

So what can be said of using such ‘averages’ for homogenizing temperatures over very wide areas?

Reply to  tom0mason
February 15, 2019 2:51 am

So what can be said of using such ‘averages’ for homogenizing temperatures over very wide areas?

I believe the technical term for what may be said regarding this is the ‘square root of Sweet Fanny Adams’.

Sadly the whole of climate science – and it is prevalent on the ‘denier’ side, as well as endemic on the ‘warmist’ side – is pervaded by people using tools and techniques whose applicability they do not understand, beyond their (and the tools’) spheres of competence…

What we have really is a flimsy structure of linear equations supported by savage extrapolation of inadequate and often false data, under constant revision, that purports to represent a complex non-linear system for which the analysis is incomputable, and whose starting point cannot be established by empirical data anyway. And that, in the end, can’t even be forced to fit the clumsy and inadequate data we do have.

Frankly, those who think they can see meaningful patterns in it might as well engage in tasseography…

There are only two things that years of climate science have unequivocally revealed, about the climate, and they are firstly that whatever makes it change, CO2 is a bit player, as the correlation between CO2 and temperature change of the sort we allegedly can measure, is almost undetectable, and secondly that we don’t have any accurate or robust data sets for global temperature anyway.

Other things that it has revealed are the problems of science itself in a post-truth world. What, after all, is a ‘fact’? If nobody hears or sees the tree fall, has it in fact fallen? (Schrödinger etc.)

Has the climate ‘got warmer’? By how much? What does it mean to say that? How do we know that it has? How reliable is that ‘knowledge’? Is some ‘knowledge’ more reliable than other ‘knowledge’? Is there any objective truth that is not already irrevocably relative to some predetermined assumption? I.e. is there such a thing as an objective irrevocable inductive truth?

[I.e. why, when faced with a spiral-grooved horn, some blood on the path and equine feces, do I assume that someone has dropped a narwhal horn, a fox has killed a pigeon and a pony has defecated there, rather than assuming the forces of darkness killed a unicorn?]

I think there are answers to these questions, but they will not, I fear, please either side in this debate.

Since both are redolent of the stench of sloppy one-dimensional thinking.

R Shearer
Reply to  Leo Smith
February 15, 2019 5:31 am

Chiefly important, ” There are only two things that years of climate science have unequivocally revealed, about the climate, and they are firstly that whatever makes it change, CO2 is a bit player, as the correlation between CO2 and temperature change of the sort we allegedly can measure, is almost undetectable, and secondly that we don’t have any accurate or robust data sets for global temperature anyway.”

Good comment.

rbabcock
Reply to  Leo Smith
February 15, 2019 6:00 am

Oh, I don’t know, Leo, we are able to measure the global temperatures to the tenth of a degree (doesn’t matter if it is C or F). I see this published all the time.

And better yet we can actually measure temperatures of vast areas of the Arctic and Antarctic with just a couple of thermometers to high precision. How impressive is that?

And let’s include knowing exactly what the temperature of the Pacific Ocean 1581 meters deep, 512 km east of Easter Island is to the tenth of a degree C!!

I mean with an internet connection, a super computer and a few thousand lines of code, we can do pretty much anything.

Terry Gednalske
Reply to  rbabcock
February 15, 2019 10:01 am

And of course we also know the temperature of the Pacific Ocean at 1581 meters deep, 512 km east of Easter Island to a tenth of a degree … 50 years ago, so we can “prove” a warming trend.

James Clarke
Reply to  Leo Smith
February 15, 2019 6:25 am

“I think there are answers to these questions, but they will not, I fear, please either side in this debate.

Since both are redolent of the stench of sloppy one-dimensional thinking.”

Ouch!

I am not sure why you are concerned with ‘pleasing’ one side or the other. If there are answers to your questions, please share, and damn the torpedoes. I, for one, wish to be enlightened, not ‘pleased’.

In the meantime, Mr. Smith, please pardon the stench of our ‘sloppy, one-dimensional thinking’.

Farmer Ch E retired
Reply to  Leo Smith
February 15, 2019 8:42 am

Pardon my ignorance and off topic. Is anyone measuring and monitoring global heat content of the atmosphere? This seems the better parameter to follow than temperature when looking for a GHG signal.

Paul Penrose
Reply to  Farmer Ch E retired
February 15, 2019 10:26 am

In a word, no.

Ian W
Reply to  Farmer Ch E retired
February 15, 2019 6:02 pm

They could do but they won’t.

Due to the presence of water vapor, the enthalpy of the various volumes of air varies considerably. The heat content should be reported in kilojoules per kilogram. Take air in a 100% humidity bayou in Louisiana at 75F: it has twice the heat content of a similar volume of close to zero humidity air in Arizona at 100F.
Averaging the intensive variable ‘air temperature’ is a physical nonsense. Claiming that infilling air temperatures makes any sense is a demonstration of ignorance or malfeasance.

Farmer Ch E retired
Reply to  Ian W
February 15, 2019 6:42 pm

“Averaging the intensive variable ‘air temperature’ is a physical nonsense.”

Well said!

David Stone CEng MIET
February 15, 2019 2:40 am

The contents of this article are quite complex, but do point to potential difficulty with temperature records, particularly if they are not continuously sampled and then the data collected subjected to the correct processing (which might be called averaging, but this opens a whole can of worms). Strictly speaking this is not an alias problem addressed by Nyquist, which makes sampled frequencies above half the sampling frequency appear as low frequency data, but the simple question of “what is the average temperature”? Measurements at a normal weather station are sampled at a convenient high rate, but taking peak low and high readings is clearly not right to get a daily temperature record. Even averaging these numbers (or some other simple data reduction) will not give the same result as correct sampling of the temperature with the data low pass filtered before sampling. However the data from the A/D converter can be digitally filtered to give a true correctly sampled result, but this raw data is not normally available. The difference between averaged min and max numbers and correctly sampled data may not be large, but when one is looking for fractional degree changes may well be important. As usual the climate change data is not the same as simple weather, but often assumed to be the same. Note that satellite temperatures will suffer from a sampling problem as the record is once per orbit at each point, and again we do not know how this is processed as the “temperature” readings!

Reply to  David Stone CEng MIET
February 15, 2019 3:29 am

I would like to see a simple, cleverly illustrated explanation of how the temperature is measured and averaged. Having lived in seven climatic zones, I have found the differences between the max and min, the times at which these occur, and the fluctuations to all be different. Even working out an average for the seven areas where I have lived is a major headache. How anyone can be so confident of the average temperature of our whole world, of the average increase, of the relationship of increases and decreases at various times in different areas and how this impacts on the world average, baffles me.

Lloyd Martin Hendaye
Reply to  Michael in Dublin
February 15, 2019 5:30 am

Of course, valid statistical methodology is key to measurement in detail. But on broad, long-term semi-millennial and geologic time-scales, climate patterns are sufficiently crude-and-gruff to distinguish 102-kiloyear Pleistocene glaciations (“Ice Ages”) from median 12,250-year interstadial remissions such as the Holocene Interglacial Epoch which ended 12,250+3,500-14,400 = AD 1350 with a 500-year Little Ice Age (LIA) through c. AD 1850/1890.

In this regard, aside from correcting egregiously skewed official data, recent literature makes two main points:

First: NASA’s recently developed Thermosphere Climate Index (TCI) “depicts how much heat nitric oxide (NO) molecules are dumping into space. During Solar Maximum, TCI is high (Hot); during Solar Minimum, it’s low (Cold). Right now, Earth’s TCI is … 10 times lower than during more active phases of the solar cycle,” as NASA compilers note.

“If current trends continue, Earth’s overall, non-seasonal temperature could set an 80-year record for cold,” says Martin Mlynczak of NASA’s Langley Research Center. “… (pending a 70-year Grand Solar Minimum), a prolonged and very severe chill-phase may begin in a matter of months.” [How does this guy still have a job?]

Second: Australian researcher Robert Holmes’ peer reviewed Molar Mass Version of the Ideal Gas Law (pub. December 2017) definitively refutes any possible CO2 connection to climate variations: Where Temperature T = PM/Rp, any planet’s near-surface global Temperature T equates to its Atmospheric Pressure P times Mean Molar Mass M over its Gas Constant R times Atmospheric Density p.

Accordingly, any individual planet’s global atmospheric surface temperature (GAST) is proportional to PM/p, converted to an equation per its Gas Constant reciprocal = 1/R. Applying Holmes’ relation to all planets in Earth’s solar system, zero error-margins attest that there is no empirical or mathematical basis for any “forced” carbon-accumulation factor (CO2) affecting Planet Earth.

As the current 140-year “amplitude compression” rebound from 1890 terminates in 2030 amidst a 70+ year Grand Solar Minimum similar to that of 1645 – 1715, measurements’ “noise levels” will certainly reflect Earth’s ongoing reversion to continental glaciations covering 70% of habitable regions with ice sheets 2.5 miles thick [see New York City’s Central Park striations]. If statistical armamentaria fail to register this self-evident trend from c. AD 2100 and beyond, so much the worse for self-deluded researchers.

OweninGA
Reply to  Lloyd Martin Hendaye
February 15, 2019 7:37 am

I see a definite problem with this formulation. While temperature is related to those parameters, without an external heat source (i.e. a star), all those theoretical (and actual) planets would very quickly approach the 4K of space. There has to be a term for the heating of the atmosphere by external radiation or the whole thing collapses.

Johann Wundersamer
Reply to  OweninGA
February 16, 2019 12:15 am

OweninGa, your 4K of space

is https://www.google.com/search?client=ms-android-samsung&ei=28RnXNHZFa2FrwTih7rgDA&q=+space+min+temperature&oq=+space+min+temperature&gs_l=mobile-gws-wiz-serp.

Every planet IN THE UNIVERSE maintains its own VERY SPECIAL base temperature by

The air column above the ground floor generates heat by its own weight / pressure.

Johann Wundersamer
Reply to  Lloyd Martin Hendaye
February 15, 2019 11:19 pm

OweninGa,

the pressure / heat problem again:

The air column above the ground floor generates heat by its own weight / pressure.

ever worked with air pressure operated machines?

you go in Bermuda shorts and Hawaii shirts into the machine hall.

Johann Wundersamer
Reply to  Lloyd Martin Hendaye
February 15, 2019 11:31 pm

pressure –> compression

OweninGa,

the pressure / heat problem again:

The air column above the ground floor generates heat by its own weight / pressure.

ever worked with air compression operated machines?

you go in Bermuda shorts and Hawaii shirts into the machine hall.

And need lots of beverages during working hours.

Johann Wundersamer
Reply to  Lloyd Martin Hendaye
February 16, 2019 12:53 am

My fault :

OweninGa, your 4K of space

is https://www.google.com/search?client=ms-android-samsung&ei=28RnXNHZFa2FrwTih7rgDA&q=+space+min+temperature&oq=+space+min+temperature&gs_l=mobile-gws-wiz-serp.

Every planet IN THE UNIVERSE maintains its own VERY SPECIAL base temperature by

UNIVERSE BASE TEMPERATURE +

The air column above the ground floor generated heat by the planets own weight / pressure

air column above the ground floor.

Richard Saumarez
Reply to  Michael in Dublin
February 15, 2019 11:15 am

One analysis is here. It looks at Hadcrut and shows that it is simply awful.

https://judithcurry.com/2011/10/18/does-the-aliasing-beast-feed-the-uncertainty-monster/

Greg F
Reply to  David Stone CEng MIET
February 15, 2019 5:38 am

Even averaging these numbers (or some other simple data reduction) will not give the same result as correct sampling of the temperature with the data low pass filtered before sampling. However the data from the A/D converter can be digitally filtered to give a true correctly sampled result, but this raw data is not normally available.

Using a digital filter post sampling will not “give a true correctly sampled result”. The information has already been lost.

Reply to  David Stone CEng MIET
February 15, 2019 5:52 am

The difference between averaged min and max numbers and correctly sampled data may not be large, but when one is looking for fractional degree changes may well be important.

Fractional degree changes, yes, this has been the point that I have tried to emphasize on several occasions in my peon comments.

All this highly technical mathematical tooling seems ludicrous, with regard to seemingly tiny fractional differences that it produces, … in general and in relation to the precision of instrumental measurements.

Micky H Corbett
February 15, 2019 2:52 am

You said the magic words: “identically distributed”. When considering a signal to be measured you need to consider if your tools are able to produce the necessary condition for analysis. For identically distributed data, each data point in the sample needs to have sufficiently less uncertainty compared to the variation of their nominal values (nominal being the 4 in say 4 +/- 0.5 units for example). Otherwise you cannot determine the distribution to sufficient resolution to achieve the conditions to meet the Central Limit Theorem, which underpins the Standard Error of the Mean.

The majority of textbook examples you see of this use populations with discrete sample elements, not individual measurements with their own intrinsic uncertainty. And if you read the scientific papers by people such as the Met Office, they take the same approach considering uncertainties in the data. They have to make assumptions about the sample measurement distribution. A wet finger in the air basically.

Bottom line: the derived temperature anomaly data is a hypothetical data set, since determination of identical distributions cannot be obtained from the intrinsic measurement apparatus tolerances. The original measurement apparatuses (I’ll stick to English plurals) were never designed to give this level of uncertainty. From the get-go climate science was about creating supposition from data with little possibility of it ever being definitive.

And yet results from exercises are taken as real results applicable to the real world.

Make no mistake climate science advocacy is very little to do with logic and fact, or ethics for that matter. Climate scientists who push their results as being indicative of Nature have no skin in the game or accountability.

As I have said before, if they think their methods are sufficient then they should spend a month or two eating food and drinking water deemed safe for consumption under the same standards. Standards where the certification equipment has orders of magnitude more uncertainty than the variation of the impurities they are trying to minimise. And where a “safe” value is obtained by averaging many noisy measurements to somehow produce lower uncertainty.

I’m pretty sure physical reality would teach them some humility. They may even lose a few pounds.

Mariano Marini
February 15, 2019 3:10 am

I wonder why we don’t record WHEN the temperature changes!
Say we record the time whenever the temperature changes by 1°C. That way it would be easy to follow what really happens.
Of course there would be BIG DATA to deal with, but …
Just an idea.

R Shearer
Reply to  Mariano Marini
February 15, 2019 5:33 am

?

MarkW
Reply to  Mariano Marini
February 15, 2019 8:16 am

The temperature is always changing. 24 hours a day.

John Endicott
Reply to  Mariano Marini
February 15, 2019 12:50 pm

Temperature changes constantly, from moment to moment. Historically, it was a manual process (a person would read the thermometer and jot down the results a handful of times a day) before it was automated. Today, in theory, modern equipment could make a near continuous recording of temperature (to the tune of however fast the computer is that is doing the recording), but you’d quickly run out of storage space at the sheer volume of data, making such continuous recording impractical for little to no gain. Somewhere between those two extremes is a happy medium. I leave it to others to figure out where that is.

William Ward
February 15, 2019 3:18 am

Kevin,

Thank you for the very interesting post. I would like to think about it more and then discuss a few things with you. Nick Stokes’ essay that you refer to was a response to the essay I presented. I would like to calibrate with you – can you tell me if you read my post as well? It is located here:

https://wattsupwiththat.com/2019/01/14/a-condensed-version-of-a-paper-entitled-violating-nyquist-another-source-of-significant-error-in-the-instrumental-temperature-record/

If you have not, and you would like to read it, then I recommend the full version of the paper located here:

https://wattsupwiththat.com/wp-content/uploads/2019/01/Violating-Nyquist-Instrumental-Record-20190101-1Full.pdf

The full version covers some basics about sampling theory, which you clearly do not need – but there are some comments and points made that might be lost if you just read the short version published on WUWT. The full paper version is a superset of the WUWT post. Because I cover a few points in the full version not covered in the WUWT post I have a few differences in the conclusions as well. For example, in the full version I detail what frequencies in the signal are critical for aliasing and where they land in the sampled result. (For 2-samples/day, the spectral content at 1 and 3-cycles/day aliases and corrupts the daily-signal and spectral content near 2-cycles/day aliases and corrupts the long-term signal.)
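For readers who want to check where those frequencies land, here is a small folding calculation (the helper name is mine, not from the paper): with 2 samples per day, content at 3 cycles/day lands at 1 cycle/day, and content near 2 cycles/day lands near zero frequency.

# Hypothetical helper illustrating where spectral content at frequency f
# appears after sampling at rate fs (simple folding about multiples of fs).
def alias_frequency(f, fs):
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

fs = 2.0   # samples per day
for f in (0.9, 1.0, 1.9, 2.0, 2.1, 3.0):
    print(f, "cycles/day ->", alias_frequency(f, fs), "cycles/day")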

In my paper I stated that the max and min samples, with their irregular periodic timing, are related to clock jitter, practically speaking – although the size of the jitter is quite large compared to what you would see from an ADC clock. This was a point of much discussion between Nick and me.

We also debated whether or not Tmax and Tmin were technically samples. My position was/is that Nyquist provides us with a transform for signals to go between the analog and digital domains. The goal is to make sure that the integrity of the equivalency of domains is maintained. In the real world the signal we start with is in the analog domain. If we end up with discrete values related to that analog signal then we have samples, and they must comply with Nyquist if the math operations on those samples are to have any (accurate) relevance to the original analog signal.

In the discussion there was also some misunderstanding that needed to be cleared up, and that was just what the Nyquist frequency means: is it what nature demands or what the engineer designs to? There were discussions about what was signal and what was noise – did that higher frequency content matter? My position as an engineer is that the break frequency of the anti-aliasing filter needs to be decided – perhaps by the climate scientist. Then the sample rate needs to be selected to correspond to Nyquist based upon the filter.

Another key point was that sampling faster is good, to ease the filter requirements and lessen aliasing – and practically speaking, the rates required for an air temperature signal are glacial; all good commercial ADCs can do that with no additional cost. The belief is that the cost of bandwidth and data storage are also relatively low, so if we sample adequately fast then accurate data can be obtained, and after sampling any DSP operation desired can be done on that data. Sample properly and the secrets of the signal are yours to play with. Alias and you have error that cannot be corrected post sampling. No undo button available – despite many creative claims to the contrary.

Anyway, if you wouldn’t mind letting me know if you have read my paper (and which version), it will help me to communicate with you most efficiently. I’m pleased that this subject is getting more attention.

Thank you,
William

Kevin kilty
Reply to  William Ward
February 15, 2019 6:18 am

I’d be happy to cooperate on this topic. Beyond issues of sampling, I was hoping to raise some awareness about why Max/Min temperatures, by design, might not be well suited to the purpose of “mean surface temperature” calculated to the hundredth of a degree. Let me go read the papers you have referenced, which I am not familiar with. I’m not sure how to make contact.

William Ward
Reply to  Kevin kilty
February 15, 2019 4:27 pm

Thanks for the reply Kevin – I appreciate it. Also, thanks for taking the time to read my paper. Let me know what you think.

I have some thoughts about your Figure 1 and the “missing” -40 reading.

You said: “Obviously the A/D system responded to and recorded a brief duration of very cold air which has been missed in the periodic record completely, but which will enter the Max/Min records as Min of the day.”

I agree with most of what you said in your paper but if I’m understanding you correctly I don’t think I agree with your assessment about this point. At this particular station, samples are taken far more frequently than 1 hour intervals. Consulting the ASOS guides (thanks to Clyde and Kip for supplying them recently) we see that samples occur every 10 seconds and these are averaged every 1 minute and then the 1 minute data is further averaged to 5 minutes. (A strange practice – but this is what is done.) So from this data we achieve the ability to get the max and min values for a given period. The data is also presented in an “hourly sample package”, and I would not expect to see any of those hourly samples contain the max min values – except by luck. I don’t see this as a problem. I see the hourly packaging as the problem – well maybe “problem” is too strong of a description – hourly packaging seems to be an unnecessary practice. We have more data – just publish all of it and let the user of the data process it appropriately. I don’t think we have a loss of data in your example – just a presentation of a subset that doesn’t include the max/min. What do you think?

For any properly designed system, whatever the chosen Nyquist rate is, assuming the anti-aliasing filter is implemented properly, we can use DSP to retrieve the actual “inter-sample peaks”. It is a common operation performed when mastering audio recordings as the levels are being set for the final medium (CD, iTunes, Spotify, etc.). A device called a “look-ahead peak limiter” is used on the samples in the digital domain. Through proper filtering and upsampling, the “true-peak” values can be discovered and the gains used in the limiting process can be set according to the actual true-peak instead of the sample peak values. For a sinusoid, the actual true-peak can be 3dB higher in level than the samples show. For a square wave the true-peak can be 6dB or more above the samples! This can, of course, be observed by converting the samples properly back into the analog domain. However, the problem in audio is that the DAC (Digital to Analog Converter) has a maximum limit to the analog voltage it can generate. If the input samples are max value for the DAC but the actual true-peak is +3 or +6dB higher than the DAC’s limit then the DAC “clips” and the result is an awful “pop” or “crackle” sound in the audio. The goal when mastering is to set levels with enough margin to handle the inter-sample peaks not visible from the samples – so that the DAC is never forced to clip.

In my paper (partly due to trying to keep the size of it under control) I do not mention inter-sample peaks or true-peak. In my analysis I simply select the highest value sample from the 5-minute samples and call this the max. This is because that is what NOAA does. A more accurate second pass analysis would actually show that the real “mean” error between 5-minute samples and max/min samples is even greater than I presented. The proper method would be to do DSP on the samples to integrate upsampled data (equivalent to analog integration of the reconstructed signal) and compare this to the retrieved inter-sample true-peaks. I suspect that analysis would yield even more error than I showed in my paper. This is because the full integrated mean would not change much as the energy in that content is small. But the max and min values could swing by several °C, more greatly influencing the midrange value between those 2 numbers.
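A minimal sketch of the inter-sample-peak idea, assuming scipy is available (the test signal and rates are arbitrary choices, not temperature data): FFT-based upsampling recovers a peak value much closer to the true peak than the largest raw sample.

import numpy as np
from scipy.signal import resample

fs, f0, n = 8.0, 3.0, 64                 # band-limited test tone; its peak falls between samples
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t + 0.4)

x_up = resample(x, 16 * n)               # 16x FFT-based upsampling

print("largest raw sample :", round(x.max(), 4))    # noticeably below 1.0
print("true-peak estimate :", round(x_up.max(), 4)) # close to the actual peak of 1.0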

What do you think?

Reply to  William Ward
February 15, 2019 10:54 am

William, good to see your response. Kevin, thanks for a most interesting post and your reply to William. I’ve learned much from both of your posts.

I particularly enjoyed your example of the six-hour min-max being different from the hourly. It made the issues clear.

Onwards,

w.

David Stone CEng MIET
February 15, 2019 3:20 am

I missed out a point which is the low pass filter cutoff frequency, which should obviously match the day length, in other words something close to 1/24 hours. This will produce a true power accumulated temperature reading, to be measured at the same time each day. We do not want short period change readings for the climate data, and this filter will reduce all the HF noise variations to essentially zero, and so be able to be measured with great accuracy. Sampling noise reduction can be by averaging many samples taken over a time interval significantly less than a day length, so be capable of reproducing very accurate change data, if not absolute data due to instrument inherent accuracy.

Greg F
Reply to  David Stone CEng MIET
February 15, 2019 6:10 am

I missed out a point which is the low pass filter cutoff frequency, which should obviously match the day length, in other words something close to 1/24 hours. This will produce a true power accumulated temperature reading, to be measured at the same time each day.

An analog filter with that low a cutoff frequency would be pretty hard to do. It’s simply not practical. It can be done in the digital domain by downsampling. The signal is first run through a digital low pass filter, then is resampled at a lower frequency. For example, to reduce the number of data points to half, you apply a digital filter to the original data to remove all the frequency components above 1/4 of the original sampling rate. You would then decimate (resample) the data by dropping every other sample.
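For concreteness, a sketch of that filter-then-downsample step using scipy.signal.decimate, which applies an anti-aliasing filter before discarding samples (the signal and rates below are arbitrary illustrations, not station data):

import numpy as np
from scipy.signal import decimate

fs = 288                                       # e.g. 5-minute samples per day
t = np.arange(0, 10, 1 / fs)                   # ten "days" of synthetic data
x = np.sin(2 * np.pi * t) + 0.3 * np.sin(2 * np.pi * 100 * t)   # slow cycle + fast component

x_down = decimate(x, 12, ftype='fir')          # low-pass filter, then keep every 12th sample
print(len(x), "->", len(x_down))               # 2880 -> 240 samples (24 per "day")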

Clyde Spencer
Reply to  Greg F
February 15, 2019 9:03 am

Greg F
One approach is to design a collection system with a specific thermal inertia to dampen high-frequency impulses. Actually, Stevenson Screens already do that, but they may not be the optimal low-pass filtering for climatology.

Clyde Spencer
Reply to  Greg F
February 15, 2019 9:12 am

Greg F
Decimating the sampled data set exacerbates the aliasing problem by making the sampling rate effectively a fraction of the original sampling rate. If the initial sampling rate meets the Nyquist Criteria, to capture the highest frequency component(s), thereby preventing aliasing and assuring faithful reproduction of the signal, then and only then, digital low-pass filters can be applied to suppress the high-frequency component(s).

Greg F
Reply to  Clyde Spencer
February 15, 2019 9:39 am

Decimating the sampled data set exacerbates the aliasing problem by making the sampling rate effectively a fraction of the original sampling rate.

Perhaps I didn’t explain it well enough as you appear to think I didn’t LP filter the raw data before decimating it.
analog signal –> sampled data –> digital filter (low pass) –> decimate
I thought my example was simple and clear enough.

One approach is to design a collection system with a specific thermal inertia to dampen high-frequency impulses.

All temperature sensors have some thermal inertia. A/D converters and processing are sufficiently cheap that adding thermal inertia to the sensor (to effectively low pass filter the signal) is likely not the most economical solution.

William Ward
Reply to  David Stone CEng MIET
February 15, 2019 12:51 pm

David Stone,

I suggest that we should be careful to assess what is signal and what is noise. How do we know that higher frequency information doesn’t tell us important things about climate? There is energy in those frequencies and capturing them properly allows us to analyze it. I recommend erring on the side of designing the system to capture as wide a bandwidth as possible. If properly captured then all options are at our disposal. We can filter away as much of it as we desire or as much as the science supports. Likewise we can use as much as desired or as much as the science supports. If we don’t capture it then the range of options is reduced. It is difficult to make a case that there are any benefits (economic or technical) to capturing less data up front. But we do agree that the system design must have integrity: the anti-aliasing filter and the sample rate must comply with the chosen Nyquist frequency. Standardized front-end circuits should also be used with specifications for response times, drift, clock jitter, offsets, power supply accuracy, etc, etc. And with that of course the siting of the instrument. I’m not advocating any specific formulation – just stating that these are important parameters that should be standardized so that we know each station is common to those items. The design should be done to allow a station to be placed anywhere around the world and not fail to capture according to the design specifications.

Others have said correctly that there is a lot of hourly data out there and we should use it. I support that and an effort to understand how the way it was captured could contribute to inaccuracies. That data is likely a big improvement over the max/min data and method. There may or may not be much difference between that and a system engineered by even better standards. At this point I don’t know, but if any new designs are undertaken, then they should be undertaken with best engineering practices.

February 15, 2019 3:21 am

David Stone, note that satellites will suffer from sampling issues as they sample the same spot on (or over) the earth once every 16 days, approximately twice per month.

You can read more here:
https://atrain.nasa.gov/publications/Aqua.pdf

Look at page one, ‘repeat cycle’.

You can see the list of other satellites within the A-train here:

https://www.nasa.gov/content/a-train

Obviously all of the instruments on board each satellite will pass over once per fortnight…

It is all very complex and clever, but the data is very ‘smeary’.

tom0mason
February 15, 2019 3:44 am

And of course there is that pervading idea from some climate scientists that they ‘know’ the average temperature of the world to hundredths of a degree centigrade!

February 15, 2019 4:26 am

“It describes how a linear system …” And here lies a problem in climastrology: many times they use linear-systems methodology that is not applicable to the non-linear system they study. Ex falso, quodlibet.

Reply to  Adrian
February 15, 2019 10:02 am

+100

Jim

February 15, 2019 4:57 am

Theoretically, the equations (“models”) of physics precisely and exactly represent only what we _believe_ is Reality.

In practice, observations tend to corroborate these models, or not. We are happy if our observations are approximately in agreement with the models, more or less.

For example, Shannon’s sampling theorem states that a band-limited signal may be perfectly reconstructed from its samples if the Nyquist limit is satisfied. In the same sense that a line may be perfectly reconstructed from two distinct points.

But we all know that Bias and Variance dominate our observations, explaining why draftsmen prefer to use at least three points to determine a line.

And using MinMax temps is a reasonable “sawtooth” basis for mean temp, if that is the only data you have.

commieBob
February 15, 2019 5:13 am

Nick was replying to William’s article.

In William’s article is “Figure 1: NOAA USCRN Data for Cordova, AK Nov 11, 2017”. He does a numerical analysis and shows an error of about 0.2C comparing “5 minute sample mean vs. (Tmax+Tmin)/2”.

The point that (Tmax+Tmin)/2 produces an error is adequately demonstrated by William’s numerical analysis. The question is about how often you get a temperature profile like the one shown in Fig. 1. In my neck of the woods, within 25 miles of Lake Ontario, in the winter, anomalous temperature profiles are very common. In fact, the winter temperature profile looks nothing at all like a nice tame sine wave.

So, why does it matter whether the temperature profile looks like a sine wave? The simplest way to think about the Nyquist rate is to ask what is the waveform that could be reconstructed from a data set that does not violate the Nyquist criterion. The answer is that you could reconstruct an unchanging sine wave whose frequency is half the sampling rate.

What happens when you don’t have a sine wave? The answer is that the wave form contains higher frequencies. The Nyquist rate applies to those and their frequencies go all the way to infinity. Thus, you have to set an acceptable error and set your sample rate based on that.

The daily temperature profile is nothing like a sine wave (where I live) and does not repeat. Nyquist is a red herring. William’s numerical analysis is sufficiently compelling to make the point that (Tmax+Tmin)/2 produces an error.
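A quick way to see those higher frequencies (the piecewise-linear daily profile below is synthetic, chosen only to be non-sinusoidal like a real diurnal cycle, not measured data):

import numpy as np

# One synthetic "day" at 5-minute resolution: minimum near dawn, maximum
# mid-afternoon, linear warming and cooling legs.
t = np.arange(0, 1, 1 / 288.0)
profile = np.interp(t, [0.0, 0.25, 0.625, 1.0], [-5.0, -8.0, 7.0, -5.0])

amplitudes = np.abs(np.fft.rfft(profile - profile.mean())) * 2 / len(profile)
for k in range(1, 6):                       # harmonics at k cycles/day
    print(k, "cycles/day, amplitude ~", round(amplitudes[k], 2))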

commieBob
Reply to  commieBob
February 15, 2019 8:09 am

Oops.

… what is the waveform …

Should be.

… what is the highest frequency waveform …

Clyde Spencer
Reply to  commieBob
February 15, 2019 9:20 am

CommieBob
You said, “The Nyquist rate applies to those and their frequencies go all the way to infinity.” That is why it is necessary to pre-filter the analog signal before sampling. We can’t have a discretized sample with an infinite number of sinusoids.

commieBob
Reply to  Clyde Spencer
February 15, 2019 10:09 am

Exactly so. Of course, when you do the LPF, you lose information and that produces an error.

William Ward
Reply to  commieBob
February 15, 2019 1:01 pm

Thank you commieBob.

February 15, 2019 5:30 am

That convolution discussion is very confusing.

I’m sure that what the author meant to say is true. But when he says, “In this case h(t) is a series of impulse functions separated by the sampling rate,” it sounds as though he wants us to convolve a signal consisting of a sequence of mutually time-offset impulses with the signal being sampled.

I haven’t spoken with experts on this stuff for awhile, so I’m probably rusty. But what I think they said is that in this case the convolution occurs in the frequency domain, not in the time domain.

Reply to  Joe Born
February 15, 2019 8:54 am

“… convolution occurs in the frequency domain, not in the time domain”
Actually, convolution is strictly a mathematical operation and attaches no specific physical interpretation to the integration variable.

Or perhaps you are referring to the so-called ‘convolution theorem’, in Fourier analysis, which states that a convolution of two variables in one domain is equal to the product of their Fourier transforms on the other domain

Reply to  Johanus
February 15, 2019 11:56 am

Or perhaps you are referring to the so-called ‘convolution theorem’, in Fourier analysis, which states that a convolution of two variables in one domain is equal to the product of their Fourier transforms on the other domain

That is indeed what I’m referring to. But if, as the author seems to say, h(t) is a sequence of periodically occurring impulses, then sampling is the product of h(t) and s(t), not their convolution. So this situation is unlike the typical one, in which h(t) is a system’s impulse response and convolution therefore occurs in the time domain, with multiplication occurring in the frequency domain. In this situation the frequency domain is where the convolution occurs.

Reply to  Joe Born
February 15, 2019 12:37 pm

As Jim Masterson pointed out below, convolving any function with the unit impulse is mathematically identical to the function itself.
But such ideal unit impulses do not exist in Nature. What we really have is a noisy, short interval in which a signal is present.

As I pointed out above the sampling theorem guarantees, in theory, perfect reconstruction of band-limited sampled signals, if the Nyquist limit is obeyed. In practice it means you can reconstruct a sampled signal with arbitrarily small error.

Reply to  Joe Born
February 15, 2019 9:14 am

Any time function can be represented as a summation or integral of unit impulses as follows:

\displaystyle f(t)=\int\limits_{-\infty }^{\infty }{f(\lambda )\delta (t-\lambda )d\lambda }

The \delta function is the unit impulse. Of course, you must be dealing with a linear system where the principle of superposition holds.

Jim

Reply to  Jim Masterson
February 15, 2019 12:00 pm

Yes, the convolution of a function with the unit impulse is the function itself. But I don’t see how that makes sampling equivalent to convolving with a sequence of impulses

Reply to  Joe Born
February 15, 2019 12:54 pm

https://www.google.com/url?sa=t&source=web&rct=j&url=http://nms.csail.mit.edu/spinal/shannonpaper.pdf&ved=2ahUKEwjPh5atzr7gAhUHVd8KHaZyBiwQFjAAegQIBRAB&usg=AOvVaw0_2SG_yumr7ovVqV5JR3R1

Equation 7 is the working version of the reconstruction part of the sampling theorem.

Proof is based on constructing Fourier series coefficients.

Reply to  Johanus
February 15, 2019 1:36 pm

Yes, yes, yes. All of Shannon’s papers are sitting in my summer home’s basement, so, although I can’t say I carry around all of his teachings in my head, you needn’t recite elementary signal-processing results. Doing so doesn’t address Mr. Kilty’s apparent belief that the output of a sampling operation is the result of convolving the signal to be sampled with a function h(t) that consists of a sequence of unit impulses.

I appreciate your trying to help, but you seem unable to grasp what the issue is.

Reply to  Joe Born
February 15, 2019 2:05 pm

The function above is not a convolution integral. It’s an identity. The identity can be used to solve convolution integrals. A convolution integral involves two signals as follows:

\displaystyle {{f}_{1}}(t)\bullet {{f}_{2}}(t)=\int\limits_{-\infty }^{\infty }{{{f}_{1}}(\lambda ){{f}_{2}}(t-\lambda )d\lambda }

Essentially you are taking two signals and multiplying them together. The second signal is flipped end-for-end and multiplied with the first one–one impulse pair at a time. You start at minus infinity for both signals and go all the way to plus infinity. (Some convolution integrals start at zero.)

This is useful for solving the response to a network with a specific input. The network is reduced to a transfer function–f1 above. The input signal is f2. The convolution is the response of the network to the input signal. Using Laplace transforms, convolution is simply multiplying the transforms together. You then convert the Laplace transforms to their time domain equivalent. It takes some effort, but it’s a lot easier than doing everything in the time domain.

Jim

Reply to  Jim Masterson
February 15, 2019 2:23 pm

Again, I know what convolution is. I have for over half a century. The issue isn’t how convolution works. I know how it works.

The issue is Mr. Kilty’s apparent belief that sampling is equivalent to convolving with an h(t) (i.e., your f_2) that consists of a sequence of impulses that recur at the sampling rate.
I hope and trust that he doesn’t really believe that, but that’s how his post reads.

If you have something to say that’s relevant to that issue, I’m happy to discuss it. But I see no value in responding to further comments that merely recite rudimentary signal theory.

Reply to  Jim Masterson
February 15, 2019 4:06 pm

>>
But I see no value in responding to further comments that merely recite rudimentary signal theory.
<<

I don’t mean to insult you with rudimentary theories. But convolution requires linear systems. Weather, and by extension climate, are not linear systems. Therefore, convolution can’t be used for these systems. And as I and many others have said repeatedly, temperature is an intensive thermodynamic property. Intensive thermodynamic properties can’t be averaged to obtain a physically meaningful result. (However, you can average any series of numbers–it just may have no physical significance.)

After further consideration–your initial concern is a valid point.

Jim

Reply to  Jim Masterson
February 16, 2019 10:55 pm

Joe – I don’t get it either. Is it clumsy writing or clumsy understanding (or both)?

Just in the first paragraph of the top post:

“. . . . . . . . . . This sampling rate is part of the response function, h(t). For example the S/H circuit of a typical A/D has small capacitance and small input impedance, and thus has very rapid response to signals, or wide bandwidth if you prefer. It looks like an impulse function. . . . . .”

No – the Sample/Hold response is a rectangle in time. If that is h(t), you convolve a delta-train weighted by the signal samples with the rectangle to get the S/H output (the held value) prior to A/D conversion. But this is NOT what he says h(t) is a few lines lower!

“. . . . . . . . . . In this case h(t) is a series of impulse functions separated by the sampling rate. . . . . . . .”

As suggested, h(t) is a rectangle of width 1/fs. A “series of impulse functions separated by the sampling rate” (a frequency) would be a Dirac delta comb in frequency. That is the function with which the original spectrum is convolved (in frequency) to get the (periodic) spectrum of the sampled signal. But he says h is a function of time h(t)!

“. . . . . . . . . . If s(t) is a slowly varying signal, the convolution produces a nearly periodic output. . . . . . . . “

What convolution? Equation (1)? What domain? He must mean the periodic sampled spectrum. Why “nearly periodic?” Is it the sinc roll-off of the rectangle due to the S/H?

“. . . . . . . . . . In the frequency domain, the Fourier transform of h(t), the transfer function (H(ω)), also is periodic, but its periods are closely spaced. . . . . . . .”

Why does he say “also is periodic?” What is ORIGINALLY periodic in this statement? In what sense are the periods “closely spaced”? It depends on fs.

The rest of the paragraph is correct.

Bernie

Reply to  Joe Born
February 15, 2019 6:33 pm

“Doing so doesn’t address Mr. Kilty’s apparent belief that the output of a sampling operation is the result of convolving the signal to be sampled “
It isn’t a convolution. Sampling multiplies the signal by the Dirac comb. This becomes convolution in the frequency domain, also with a Dirac comb (one comb transforms into another). As convolving with δ just regenerates the function, convolving with a comb generates an equally spaced set of copies of the spectrum, which then overlap. That overlapping is another way to see aliasing.
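As a concrete illustration of that folding (a sketch with invented frequencies, not anything from the post): a component at 3 cycles/day sampled twice a day produces exactly the same sample values as a component at 1 cycle/day, i.e. the copy of the spectrum centred at fs = 2/day folds it back to |3 - 2| = 1.

import numpy as np

fs = 2.0                               # samples per day
n = np.arange(0, 20)                   # ten days of sample indices
t = n / fs                             # sample times in days

true_f = 3.0                           # cycles/day, above the Nyquist rate fs/2 = 1
alias_f = abs(true_f - fs)             # folded-back frequency: 1 cycle/day

x_true = np.cos(2 * np.pi * true_f * t)
x_alias = np.cos(2 * np.pi * alias_f * t)

print(np.allclose(x_true, x_alias))    # True: the two are indistinguishable from the samples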

1sky1
Reply to  Nick Stokes
February 17, 2019 4:41 pm

Sampling multiplies the signal by the Dirac comb. This becomes convolution in the frequency domain, also with a Dirac comb

Indeed, but only for strictly periodic sampling in the time domain. Otherwise, there’s no Nyquist frequency defined and there can be no periodic spectral folding in the frequency domain, a.k.a. aliasing.

Sadly, entire threads have been wasted here on egregious misconceptions.

William Ward
Reply to  Nick Stokes
February 17, 2019 7:02 pm

1sky1,

You said: “Indeed, but only for strictly periodic sampling in the time domain. Otherwise, there’s no Nyquist frequency defined and there can be no periodic spectral folding in the frequency domain, a.k.a aliasing.”

Can you give us a definition of “strictly periodic”? In that, can you explain how sampling works with semiconductor ADCs, which operate with jitter? I assume you agree that no clock pulse in the history of clock pulses has ever triggered on planet Earth without some amount of jitter. So how do you resolve your statement? What are the limits of jitter allowed for Nyquist to be defined? It sounds like you are saying the value is exactly 0 in every possible unit of time. If zero is the answer, then how do you explain the function of an ADC in the real world? What does an imperfect clock mean for the operation? Does aliasing (spectral folding) get eliminated in reality? What then is the cause of the imperfections of sampling when the clock frequency is < 2BW?

Reply to  Nick Stokes
February 17, 2019 9:43 pm

Responding to William at Feb 17, 2019 at 7:02 pm:

The suggestion that timing discrepancies due to the substantial ignorance of the actual times of Tmax and Tmin are analogous to the tiny “jitter” in familiar sampling situations (such as digital audio) is a stretch, and then some.

Might I suggest that you consider the practice of “oversampling” (OS) that is pretty much universal in digital audio. This (with “noise shaping” (NS)) allows one to trade expensive additional divisions in amplitude (more bits) for inexpensive additional divisions in time (smaller sampling intervals). [An OSNS CD player (one-bit D/A) can be purchased for less than the cost of even a single 16-bit D/A chip.] Note that digital audio uses something like a 44.1 kHz sampling rate (very low frequency as compared to things like radio signals). We are limited, for practical purposes, in amplitude resolution, but can easily work at higher and higher data rates.

The timing jitter and amplitude quantization errors (round-off noise) are very similar effects, handled as random noise. This can be seen by considering that, for a bandlimited signal, any sample might be taken slightly early or slightly late, perhaps with no error at all after quantization, and in any case with an error unlikely to be much more than the LSB. That is, given an error (noise), the choice of assigning it to timing jitter, or to limited amplitude resolution, is yours. For the most part, you CAN arguably assume that the sampling grid is PERFECT and the signal slightly more noisy.

The point is that we have an IMMENSE amount of “wiggle room” with respect to precise timing of samples. Even a millisecond of error in weather data recording would be unusual. In contrast, the unknown times of Tmax and Tmin are hours.
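A rough numerical check of that argument (the signal and jitter figures here are invented, purely for scale): for a once-per-day sinusoid, a millisecond of timing error produces an amplitude error on the order of 1e-8 of the signal.

import numpy as np

rng = np.random.default_rng(0)
f = 1.0 / 86400.0                             # one cycle per day, in Hz
t = np.arange(0, 86400, 300.0)                # nominal 5-minute sample times (seconds)

jitter = rng.uniform(-0.001, 0.001, t.size)   # +/- 1 ms of timing error
s_ideal = np.sin(2 * np.pi * f * t)
s_jittered = np.sin(2 * np.pi * f * (t + jitter))

print(np.std(s_jittered - s_ideal))           # ~3e-8: utterly negligible for millisecond jitter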

Jitter seems a very poor analogy here.

Bernie

William Ward
Reply to  Nick Stokes
February 17, 2019 11:11 pm

Hello Bernie,

Consider taking 5-minute samples and discarding all but the max and min. For this discussion we retain the timing of those samples.

Jitter is when a clock pulse deviates from its ideal time by some amount ∆t. Assuming the clock pulse happens inside of its intended period, what are the limits of ∆t before it is no longer jitter and becomes something else? And what is the something else?

Please be as succinct as possible because I think this should be answerable with brevity.

I’m open to being convinced otherwise. I know the amount of ∆t is large relative to the period as compared to modern ADC applications, but I could not find a limit so I went with it as an analogy. It has garnered a lot of attention. I’m not sure if it is or isn’t critical to anything I have said. So my intention at this point is to have some fun to see what we can learn on this.

My thinking so far is that the noise/error just scales with what I’m calling jitter. It’s larger than we normally encounter, but nothing else in the theory breaks down.

What do you think?

Reply to  Nick Stokes
February 18, 2019 9:20 am

Replying to William Ward at February 17, 2019 at 11:11 pm

An explanatory model, or theory, usefully applies (or breaks down) according to overall circumstances, not to some threshold.

Jitter is small (perhaps 1% of a sampling interval), usually adequately modeled as random noise. For a substantial fraction of the sampling interval or greater, we need your “something-else”.

Jitter is when you show up for a party 5 minutes early or 5 minutes late (different watch settings, traffic, expected indifference). “Something-else” is when you show up the wrong evening!

Taking just two numbers (Tmax and Tmin), when much more and better data (with actual times included) is possible, is definitely a something-else.

Bernie

William Ward
Reply to  Nick Stokes
February 18, 2019 2:02 pm

Hello Bernie,

I hope you are well.

You said: “An explanatory model, or theory, usefully applies (or breaks down) according to overall circumstances, not to some threshold.”

Ok, you are on record here, as you were in our previous discussions, with your opinion about the applicability of jitter. But you offer no empirical evidence, math, or rationale, nor do you cite any source to refute it.

I’m open to being convinced otherwise, but so far my rationale has stood up to the critique. I will continue to use it until someone can show me how it breaks down.

I’m still not sure that it is worth arguing about. 2 samples/day, whether poorly timed or not, don’t provide an accurate daily mean. Most people seem to think the trends are the more important issue. I have shown good evidence of trend error that results as well. This trend error is small by my standards of measurement, but then again so are the claimed warming trends. The fact that they (trend error and trends) are of similar magnitude is what is most important in my assessment. No one yet has shown that the trends Paramenter and I have presented are not correct. There have been a lot of claims to that effect, and alternate forms of analysis that contradict it, but no findings of error in our analysis. And that is where we are with it.

The decision to measure max and min daily was probably an intuitive one, not one based upon science or signal analysis. It is a fortunate thing that the spectrum of the temperature signal does allow for sampling at 2-samples/day without completely destructive effects. If there were more energy at 1, 2, or 3-cycles/day, then the result would be different. It turns out to work ok, but not so well that we can claim records to 0.1C or 0.01C or trends of 0.1C/decade.

1sky1
Reply to  Nick Stokes
February 19, 2019 3:15 pm

Ok, you are on record here, as you were in our previous discussions, with your opinion about the applicability of jitter. But you offer no empirical evidence, math, or rationale, nor do you cite any source to refute it.

I’m open to being convinced otherwise, but so far my rationale has stood up to the critique. I will continue to use it until someone can show me how it breaks down.

For those with even a modicum of comprehension of DSP math, the rationale is perfectly clear: there’s a categorical difference between periodic, clock-DRIVEN sampling of the ordinates of a continuous signal and clock-INDEPENDENT recording of phenomenological features (e.g. zero-upcrossings, peaks, troughs) of the signal wave-form. The former may contain some random clock-jitter in practice (e.g., imprecise timing of thermometer readings at WMO synoptic times), whereas the latter is entirely signal-dependent (although some features may be missed in periodic sampling).

There’s simply no way that the highly asymmetric diurnal wave-form that produces daily Min/Max readings at very irregular times, clustered near dawn and mid-afternoon, can be reasonably attributed to any clock-jitter of regular twice-daily sampling. Jitter smears, but does not change the average time between samples. Ward starts with a highly untypical diurnal wave in Figure 1 of his original posting and proceeds to insist against all analytic reason that kiwi-fruit are apples.

Reply to  Nick Stokes
February 19, 2019 6:31 pm

Thanks 1sky1 – you said: “Jitter smears, but does not change the average time between samples. . . . . . . “ Good point.

On the “In search of the standard day” thread (Feb 16, 2019) I had a related observation (Feb 16 at 6:13 pm) while illustrating the problem of fitting a sine wave to min-max values that were not 12 hours apart! The illustration is here:

http://electronotes.netfirms.com/StandDay.jpg

Below at 5:41 pm I just minutes ago posted an interesting result of sampling a one-day cycle at two samples a day where aliasing is “undone” by well-known FFT interpolation.

Bernie

William Ward
Reply to  Nick Stokes
February 19, 2019 10:49 pm

Jitter is defined simply as the deviation of an edge (sample) from where it should be.

Nothing more and nothing less.

Total Jitter can be decomposed into bounded (deterministic jitter) and unbounded (random jitter). Bounded jitter can be decomposed into bounded correlated and bounded uncorrelated jitter. Bounded correlated jitter can be decomposed into duty cycle distortion (DCD) and intersymbol interference (ISI). Bounded uncorrelated jitter can be decomposed into periodic jitter and “other” jitter. Other jitter – meaning anything else not described that explains a sample deviating from its correct time.

DCD fits well with our temperature signal max/min issue. Feel free to call it whatever you prefer – it will not change the results. We have sample values that deviate from where they need to be for Nyquist. There is no other mathematical basis for working with digital signals except Nyquist. There is no Phenomenological Sampling Theorem. If you are working with discrete values from an analog signal, then they have to comply with Nyquist.

It does not matter if you get the sample values through an ADC or through a max/min thermometer or a crystal ball. It does not matter what provides the timing to get the digital samples. **It doesn’t matter if we obtain or keep the timing information. **

Here is the reason. Nyquist requires perfect correspondence between the analog and digital domains. We can demonstrate this correspondence by reconstructing back to the analog domain. Nyquist insists upon the perfect timing and perfect values for reconstruction. Nyquist provides the timing by forcing it to the rate inferred by the samples. We provide the values of the samples obtained. Jitter in the conversion in either direction deviates from the “strict periodicity” required for perfect reconstruction. This is seen as quantization error when we compare the sampled signal to the original signal at the perfect sampling times.

If you want to find the actual analog signal you are working with digitally then you simply run the sample values through a DAC at the sample rate inferred by your samples.

Examples: Assume typical air temperature signal and ADC with anti-aliasing filter compatible with 288-samples/day.

1) We sample at 288-samples/day (5-minute samples) with no perceptible jitter. If we run the samples back through a DAC set at 288-samples/day we get the original signal.

2) We sample at 2-samples/day, clocked with no perceptible jitter. We must run the samples back through the DAC at 2-samples/day. The reconstructed signal will differ from the original as a function of the aliasing.

3) We sample at 2-samples/day, clocked with DCD (jitter). The samples happen to correspond to the timing and values of Tmax and Tmin. To reconstruct, Nyquist requires we run this back through the DAC with a clock rate of 2-samples/day. The reconstructed signal will differ from the original as a function of the aliasing *and* as a function of the quantization error resulting from measuring the analog signal at the wrong time compared to what Nyquist required. The reconstruction places the samples at the time location they should have come from originally if the sampling was “strictly periodic.” However, the sample values used in the reconstruction are not what they should have been for that time.
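For what it’s worth, example 3 can be put into a few lines of Python (a sketch with invented numbers, using Whittaker-Shannon sinc interpolation; it illustrates the idea rather than reproducing the exact procedure above). Sample values are read at jittered times but reconstructed as if they sat on the ideal uniform grid, so the timing error carries straight through to the reconstruction.

import numpy as np

def sinc_reconstruct(samples, fs, t):
    # Whittaker-Shannon reconstruction of uniformly spaced samples, evaluated at times t
    n = np.arange(samples.size)
    return np.sum(samples[:, None] * np.sinc(fs * t[None, :] - n[:, None]), axis=0)

fs = 288.0 / 86400.0                           # 288 samples/day, in Hz
n = np.arange(288)
t_ideal = n / fs                               # ideal sample times (s)
t_actual = t_ideal + np.random.default_rng(1).uniform(-60, 60, n.size)  # +/- 1 minute of timing error

signal = lambda t: np.sin(2 * np.pi * t / 86400.0)
x_jittered = signal(t_actual)                  # values read at the actual (jittered) times
x_ideal = signal(t_ideal)                      # values the ideal grid "expected"

t_dense = np.linspace(0, 86400, 2000)
err = sinc_reconstruct(x_jittered, fs, t_dense) - sinc_reconstruct(x_ideal, fs, t_dense)
print(np.max(np.abs(err)))                     # on the order of 4e-3: error due to timing alone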

1sky1
Reply to  Nick Stokes
February 20, 2019 2:43 pm

There is no other mathematical basis for working with digital signals except Nyquist. There is no Phenomenological Sampling Theorem. If you are working with discrete values from an analog signal, then they have to comply with Nyquist.

It does not matter if you get the sample values through an ADC or through a max/min thermometer or a crystal ball. It does not matter what provides the timing to get the digital samples. **It doesn’t matter if we obtain or keep the timing information. **

This is an egregious display of illogic developed by blind fixation upon ADC hardware operation. With Min/Max thermometry we are NOT “working with digital signals,” but with phenomenological features of the CONTINUOUS temperature signal. It’s a straightforward matter of waveform analysis (q.v.), no different than registering the elevation of crests and troughs of waves above prevailing sea level or the directed zero-crossings. The number of such features observed per unit time has nothing to do with the discrete sampling frequency and everything to do with the original signal.

If the tortured rationale that claims this to be a DSP problem were ever submitted to any IEEE-refereed publication, it would elicit only head-shaking and laughter.

Reply to  Nick Stokes
February 20, 2019 6:05 pm

William Ward at February 19, 2019 at 10:49 pm said:
“. . . . . . . . . . DCD fits well with our temperature signal max/min issue. . . . . . . . . “

It would seem logical, but you don’t know the duty-cycle nor is it constant, so what is gained as opposed to just saying you don’t know the two times? You just postulate one of the times and postulate the wait till the second time. Right?

“ . . . . . . . . . . There is no other mathematical basis for working with digital signals except Nyquist. There is no Phenomenological Sampling Theorem. If you are working with discrete values from an analog signal, then they have to comply with Nyquist. . . . . . . . . . . .”

Of course there are – alternative signal models. For example, I curve-fitted a sinusoid of known frequency to non-uniform min-max. Four equations in four unknowns.
http://electronotes.netfirms.com/StandDay.jpg
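A hedged sketch of that idea in Python (a least-squares variant, not necessarily the exact four-equation construction linked above; the times and temperatures are invented): fit an offset plus a single known-frequency sinusoid, T(t) = a + b*cos(2*pi*t/24) + c*sin(2*pi*t/24), through min/max readings taken at non-uniform hours.

import numpy as np

t_obs = np.array([7.0, 16.0, 31.0, 40.0])     # 7 AM and 4 PM on two consecutive days (hours)
T_obs = np.array([50.0, 80.0, 52.0, 78.0])    # Tmin, Tmax readings

w = 2 * np.pi / 24.0                          # known diurnal frequency
A = np.column_stack([np.ones_like(t_obs), np.cos(w * t_obs), np.sin(w * t_obs)])
coef, *_ = np.linalg.lstsq(A, T_obs, rcond=None)

print(coef)                                   # offset, cosine and sine amplitudes of the fitted curve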

“. . . . . . . . . . 3) We sample at 2-samples/day, clocked with DCD (jitter). The samples happen to correspond to the timing and values of Tmax and Tmin. . . . . . . . . . “

You have to describe this (happen to correspond) better. Is the temperature accommodating the physical clocking or the clocking accommodating the physical temperature max-min? How would you expect this to happen? Thanks.

Bernie

William Ward
Reply to  Nick Stokes
February 20, 2019 9:05 pm

Bernie,

You said: “You have to describe this (happen to correspond) better. …”

I think you misunderstand my point. In the example (#3), I’m showing that the max and min values are not special. The ADC samples can hypothetically land on the max and min values, via some clocking jitter. Or they can land on other values similarly far from the ideal “strictly periodic” sample time. All that matters is the time deviation from the ideal, which results in quantization error for that sample.

Nyquist requires clocking to be strictly periodic when we convert from analog and when we convert to digital. Tmax and Tmin are periodic (2-samples/day) but not strictly periodic. But no sample is strictly periodic in reality. Every sample has some timing deviation from the ideal.

What happens when sampling analog at times that deviate from the ideal? (This is jitter by definition). We measure a value that is correct for the moment of sampling, but not correct for when Nyquist is expecting the sample to be taken. If we reconstruct, the DAC uses the correct sample-rate and sample-time (because Nyquist requires it) but with the wrong value. This is quantization error when we view any individual sample.

See this image for an illustration: https://imgur.com/KSgcDxm

In this example, the clock is supposed to trigger a sample at point #1, which corresponds to x=π/3. The correct value of the sample at x=π/3 is 0.866. But what if the clock pulse arrives early? Regardless of the cause, what if it arrives at point #2, which happens to correspond to π/4? The ADC samples and reads the correct value for x=π/4, which is 0.707, but not the correct value for what Nyquist expects (0.866). We have quantization error of 0.866-0.707 for this sample, due to the jitter. If reconstruction is performed, we get error compared to the original signal. The DAC will sample at x=π/3 but with the value from x=π/4. So reconstruction uses 0.707 instead of 0.866. Again, quantization error for that sample.
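Spelled out in code (using just the numbers from the illustration):

import math

ideal = math.sin(math.pi / 3)     # 0.866: the value expected at the ideal sample time
early = math.sin(math.pi / 4)     # 0.707: the value actually read when the clock fires early
print(round(ideal - early, 3))    # 0.159: the per-sample error attributed to the timing slip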

If we just sampled temperature twice a day (every 12-hours), we would have aliasing due to higher frequency components between 1 and 3-cycles/day, but no jitter related quantization error. If we move the sample times (perhaps such that they line up with where max and min occur – just to make the point), then this increases jitter and now we add jitter related quantization error.

Whether we know the times of these samples doesn’t change the fact that the reconstruction takes place where they were supposed to have happened. Digital audio is a good example. There is no sample timing information included with the sample values – just the overall sample rate required.

If you think some of your methods can be used to reconstruct the signal using max and min, then we have USCRN data you can use. The timing is available for these samples. You also have the 5-minute samples to use as your comparison. To test your results you can simply invert one of the signals and sum them. The closer to a null the closer the match.

Reply to  Nick Stokes
February 24, 2019 6:19 pm

The problem of unknown times corresponding to Tmax and Tmin can be approached by guessing likely times, perhaps 7AM for a min and 4 PM for a max. Assuming these times will do, we still have non-uniform sampling – the times are not 12 hours apart. If they were equally spaced, VERY standard recovery (interpolation) would be achieved with sinc functions. Instead, different interpolation functions (due to Ron Bracewell) are necessary, but work in VERY much the same way the sincs do:

http://electronotes.netfirms.com/BunchedSamples.jpg

After trying a number of approaches with the FFT, I succeeded instead with the continuous-time case as shown here:

http://electronotes.netfirms.com/NonUniform.jpg

The top panel shows one day with Tmin=50 at 7AM and Tmax=80 at 4PM. The interpolated (black) curve goes exactly through both samples. In order to get a better idea as to what an ensemble of days might look like, this day is placed between two other days in the bottom panel. Again, the interpolation is exact. Note that all other days are Tmax=0, Tmin=0 by default (so avoid the ends!).

The daily sinusoidal-like interpolated curve is evident, but the min and max of this black curve are not Tmin or Tmax, but the actual timing of these suggests better guesses for a second run (etc.) of an iterative program.

-Bernie

Kevin kilty
February 15, 2019 6:07 am

Moderator: I see I made one mistake by saying “variance reduced in a statistic by the factor 1/(sqrt n)”. It should read (1/n). Also, why does the font switch here and there?

February 15, 2019 6:52 am

This is just one problem with the temperature databases.

1) Precision – For an example, look at the BEST data. They take data from 1880 onward and somehow get precision out to one one-hundredth of a degree. In other words, when they average 50 and 51, they keep the answer of 50.5. This is adding precision that is not available in the real world. Any college professor in chemistry or physics would eat a student’s lunch for not using the rules for significant digits when doing this. If recorded temperatures are in integer values, then every subsequent mathematical operation needs to end up with integer values!

2) The Central Limit Theorem and the uncertainty of the mean have very specific criteria for using them to increase the accuracy of measurements. The measurements must be random, normally distributed, independent, and OF THE SAME THING. You simply can’t take measurements of different things, i.e. temperature at different times, average them, and say you can increase the accuracy and precision because you can divide the errors by sqrt(N). They are different things and the measurement of one simply cannot affect the accuracy of the other.

3) Temperature is a continuous function. It is not a discrete function with only certain allowed values. Consequently, one must have sufficient sampling in order to accurately recreate the continuous function. As the author describes, what we have now is not something that accurately describes what the actual temperature function does.

Clyde Spencer
Reply to  Jim Gorman
February 15, 2019 9:23 am

Jim Gorman
+1

Reply to  Jim Gorman
February 15, 2019 9:43 am

Excellent points.

William Ward
Reply to  Jim Gorman
February 15, 2019 12:59 pm

Thank you Jim. I agree with Clyde, and will raise him one. +2 for your comments.

Reply to  Jim Gorman
February 15, 2019 2:53 pm

Jim, ++++10000. From an industrial chemist.

Clyde Spencer
Reply to  Macha
February 17, 2019 9:28 am

Macha
It’s people like you who cause ‘Thumb Up’ inflation! 🙂

Steve O
February 15, 2019 7:48 am

The best way to resolve the discussion is with a simulation.

You can create a made-up temperature history spanning a 100-year period, knowing the continuous function of each day as if you measured once every 5 seconds. Against this set of data you can apply whatever sampling methods you choose and evaluate the usefulness and reliability of each.
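A scaled-down sketch of that simulation (one year of synthetic 5-minute data rather than 100 years at 5 seconds; the temperature model is entirely made up) might look like this, comparing the integrated daily mean against (Tmax + Tmin)/2:

import numpy as np

rng = np.random.default_rng(0)
dt_days = 5.0 / (24 * 60)
t = np.arange(0, 365, dt_days)                                    # one year of 5-minute times (days)

diurnal = 6 * np.sin(2 * np.pi * t) + 2 * np.sin(4 * np.pi * t)   # asymmetric daily cycle
seasonal = 10 * np.sin(2 * np.pi * t / 365)
temps = 15 + seasonal + diurnal + rng.normal(0, 1.5, t.size)      # degC, with weather noise

days = t.astype(int)
true_mean = np.array([temps[days == d].mean() for d in range(365)])
midrange = np.array([(temps[days == d].max() + temps[days == d].min()) / 2 for d in range(365)])

diff = midrange - true_mean
print(diff.mean(), diff.std())     # bias and day-to-day scatter of the max/min method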

William Ward
Reply to  Steve O
February 15, 2019 1:35 pm

Hello Steve O,

I did some of that in my paper:

https://wattsupwiththat.com/2019/01/14/a-condensed-version-of-a-paper-entitled-violating-nyquist-another-source-of-significant-error-in-the-instrumental-temperature-record/

Using USCRN data I was able to do it for up to 12 years using 5-minute sample data and compare it to the max/min data method. I presented the trend errors in a table at the end.

Here are some charts not presented in the paper, showing yearly offset and long term linear trend errors. Note, these graphs were never intended for publication so some labeling might be below my normal standards.

https://imgur.com/xA4hGSZ

https://imgur.com/cqCCzC1

https://imgur.com/IC7239t

https://imgur.com/SaGIgKL

What do you think?

Rick
February 15, 2019 8:02 am

Leo Smith sums things up pretty well IMO:
“What we have really is a flimsy structure of linear equations supported by savage extrapolation of inadequate and often false data, under constant revision, that purports to represent a complex non linear system for which the analysis is incomputable, and whose starting point cannot be established by empirical data anyway, that in the end can’t even be forced to fit the clumsy and inadequate data we do have”

MrZ
February 15, 2019 8:12 am

The time aspect is missing from this analysis.
Even though any specific station’s daily value can (and will) be way off, it is hard to see how this can create any lasting bias over longer time periods. The error will be random and stay within boundaries and hence not affect the overall long-term trend. Moreover, any possible bias should cancel out between multiple stations.
There is a much bigger problem in how multiple station data records are aggregated to faithfully represent a larger area, especially when locations and area coverage constantly change. There you have the real “sampling error”.

Clyde Spencer
Reply to  MrZ
February 15, 2019 9:28 am

MrZ
You said, “The error will be random and stay within boundaries and hence not affect the overall long term trend.”

Not strictly random. Most of the Tmins will be at night and most of the Tmaxes will be in the daytime. That is, over a 24-hour day, the bulk of the lows will be during the nominal 12-hour night and the bulk of the highs will be when the sun is shining.

MrZ
Reply to  Clyde Spencer
February 15, 2019 10:52 am

Hi Clyde,

True, I cannot object to that, but how could that fact affect any average bias over time?
Are you saying this in the context of sampling theory or TOBS adjustments? I am asking because the mercury MIN/MAX is an almost perfect sample for those two points (the article discusses how relevant those are) but it depends very much on WHEN you read and reset.

MarkW
Reply to  MrZ
February 15, 2019 11:01 am

Your error is your assumption that the error is truly random.
That has not been shown to be the case.
Indeed there is evidence that it is not the case.

MrZ
Reply to  MarkW
February 15, 2019 11:27 am

OK MarkW,
If so, please take me through how that creates a consistent and growing bias across stations and time. My main point was simply that it does NOT affect the overall trend.
The next step, i.e. station aggregation, DOES affect those trends.

MarkW
Reply to  MrZ
February 15, 2019 5:13 pm

It may or may not create a trend. That’s the problem, the data is so bad that there is no way to tell for sure.
The reality is that it is absurd to claim that we can measure the temperature of the planet to within 0.01C today. It is several orders of magnitude more absurd to claim we can do it with the records from 1850.

MrZ
Reply to  MarkW
February 16, 2019 9:50 am

Hi Mark!
Agree, 0.001 precision is a joke.
The main error, however, does not originate from sampling; the aggregation of stations is the main source of error.

William Ward
Reply to  MrZ
February 15, 2019 8:57 pm

MrZ,

I provide you 4 links here (also provided elsewhere in other replies, but you seem to have not seen them). Using quality data from USCRN, I plot ~12 years of data using 5-minute samples. Yearly averages are provided along with the corresponding linear trend. From these samples, the daily max and min values are obtained. This is essentially the same as we would get with a well calibrated max/min thermometer. Yearly averages and trends are plotted for the max/min values. You can see the differences for yourself. Note: TOBS plays no role in this as the max and min values are determined by bounds of midnight to 11:55PM in a local calendar day.

https://imgur.com/xA4hGSZ

https://imgur.com/cqCCzC1

https://imgur.com/IC7239t

https://imgur.com/SaGIgKL

MrS
Reply to  William Ward
February 16, 2019 8:45 am

Hi William!
Thanks for this. You are right, it’s hard to check all links when you join the thread afterwards.
Look at the trends in your graphs. They basically state what I said. Any shift in trend is coincidental because the last or first years differ; expand by a few years and the trend difference is gone.

William Ward
Reply to  William Ward
February 16, 2019 8:51 pm

MrZ (MrS),

You said: “Look at the trends in your graphs. They basically state what I said. Any shift in trend is coincidental because the last or first years differ; expand by a few years and the trend difference is gone.”

My reply: I don’t agree with your assessment. You would need to find a way to demonstrate that.

Reply to  William Ward
February 16, 2019 11:07 pm

William
“Note: TOBS plays no role in this as the max and min values are determined by bounds of midnight to 11:55PM in a local calendar day.”
TOBS plays a role. There is no special exemption for those arbitrary times. Midnight is a human convention, not anything physical. The bias created by double counting happens at any reading time, and causes an offset.

I see you have Boulder, CO there. Again in this post I showed this plot of annually smoothed Boulder data, from 2009 to 2011. It shows that every reset time creates an offset; I don’t have midnight, but 11 pm has an offset of about 0.2°C, with min/max being lower than integrated. That pretty much agrees with your plot.

Since it is a near constant offset, you wouldn’t expect it to have much effect on trend, and your plots show that. The small changes seen are statistically quite insignificant, and so entirely to be expected when you change the method of calculation.

William Ward
Reply to  William Ward
February 17, 2019 8:29 pm

Hello Nick,

You said: “TOBS plays a role. There is no special exemption for those arbitrary times. …”

I would like to better understand each other on this to see what we can learn (or at least what I can learn). I read the post you recommended and to the extent I fully digested it I have some thoughts and questions. First, let me detail what I did. For the record, Paramenter did some of the work on trends and this data was presented in my paper.

** All 5-minute data for 12 years was used, without intermediate averaging, and a linear trend was calculated.

** For the max/min method (MMM) [I’m tired of writing (Tmax+Tmin)/2! 🙂 ] we took NOAA’s stated monthly avg and fed 12 years of monthly avg into the calculation to determine a linear trend.

** The alternate (less precise) method was: for the 5-minute data, each year was integrated and yearly mean values were used to determine the trend. For the MMM, 12 monthly averages provided by NOAA were averaged to a yearly data point and these were used to calculate a linear trend.

With the alternate method, we don’t see a consistent delta between the yearly datapoints and there is a trend difference for most stations between the methods.

Why are we seeing different results through our different methods (ours and yours)? I thought it was important to you to use monthly averages, but in your post on TOBS you seem to be using daily deltas. I must be misunderstanding something because, for Boulder CO, when I look at the day to day mean error it varies daily by quite a bit:

https://imgur.com/QyfAonp

NOAA uses midnight to 11:55 PM and we have data sampled automatically, so this is why I said there are no TOBS issues. But you are saying (I think) that if we (or NOAA) change the definition of a day then the results change. Yes? Also, I understand you are defining a notional reset time and looking back to find the max and min for the 24 hours prior, correct? Am I correct that this doesn’t present the exact same set of problems we get with a max/min thermometer? With a max/min thermometer, if reset when cooling and a cold front comes through, then the next day we may actually read the previous day’s max – but this can’t happen when using the NOAA samples, correct?

Also, to be clear, if we integrate 5-minute samples then there really is no possible TOBS, correct? This was also factoring into my statement.

Over the next day or so, I will try to do the following to see what results: from the 5-minute samples select 2-samples spaced 12 hours apart. Repeat this for a number of different starting times. Ex: 12-12, 3-3, 6-6, 9-9. It would be very similar to what you did except the point would be to capture samples periodically vs. max and min.
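Here is a sketch of that experiment with synthetic 5-minute data (a stand-in signal, not the USCRN files): take two samples 12 hours apart at several starting hours and compare each resulting “daily mean” to the integrated mean.

import numpy as np

rng = np.random.default_rng(3)
dt = 5.0 / (24 * 60)
t = np.arange(0, 365, dt)
temps = 15 + 8 * np.sin(2 * np.pi * (t - 0.2)) + 2 * np.sin(4 * np.pi * t) + rng.normal(0, 1, t.size)

per_day = 288                                     # 5-minute samples per day
daily = temps[: 365 * per_day].reshape(365, per_day)
true_mean = daily.mean(axis=1)

for start_hour in (0, 3, 6, 9):                   # the 12-12, 3-3, 6-6, 9-9 pairs
    i = start_hour * 12                           # 12 samples per hour
    two_sample = (daily[:, i] + daily[:, i + 144]) / 2
    print(start_hour, round((two_sample - true_mean).mean(), 3))

# The offset changes with the starting hour because the sub-daily harmonics
# alias differently depending on where the two samples land.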

I’m curious about what this will show – have some ideas but will let the results guide my analysis.

Thanks for engaging Nick. Good stuff.

Reply to  MrZ
February 15, 2019 11:23 am

The problem arises because the temperatures, especially up to the not too distant past, are recorded as integer values. That means the error is +/- 0.5 degrees. You cannot reduce the error contained in each reading because you are measuring different things each time you make a measurement. That means each reading is stand-alone, with that error. Averaging will not reduce it, i.e. the average will have that error also.

What does that mean? So the most accuracy you can claim is +/- 0.5 degrees regardless of how you apply statistical methods. So, you tell me how come climate scientists and the media run around whining about 0.001 degree changes. That is simply noise and they won’t admit it. Too many programmers and mathematicians who have no idea about the real world. Is it any wonder we have no reliable, physical experiments validating any of the claims?

MrZ
Reply to  Jim Gorman
February 15, 2019 11:41 am

Hi Jim!

Try a long series of 1 decimal measurements and average them.
Then try the same series rounded without the decimal. You will be surprised. The longer the series, the closer they are.
I do agree 0.001 precision is ridiculous though.
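One way to run that experiment in Python (synthetic readings, since no data set is attached to the comment):

import numpy as np

rng = np.random.default_rng(0)
true = rng.normal(15.0, 5.0, 10_000)        # a long series of "true" temperatures
tenths = np.round(true, 1)                   # readings recorded to 0.1 deg
whole = np.round(true)                       # the same readings rounded to whole degrees

print(tenths.mean(), whole.mean(), abs(tenths.mean() - whole.mean()))
# The two means typically agree to within a few hundredths of a degree, and the
# gap shrinks as the series grows.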

Reply to  MrZ
February 16, 2019 12:16 pm

I have done this. Look up the problem with rounding numbers using the traditional method. You will find a bias. Why? Because there are 9 possible digits you might use to round. Four of them (1, 2, 3, 4) go down. Five of them (5, 6, 7, 8, 9) go up. You get an automatic bias toward higher numbers. This is a very well known problem. Most computer languages use this when you utilize a standard function.

The digit zero is a special case. The only way to get it is to have all even integer numbers or numbers with 0 as the decimal value.
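The bias described above, and the “round half to even” rule that numpy and IEEE 754 use by default to avoid it, can be checked directly (a small sketch of my own):

import numpy as np

vals = np.arange(1000) / 10.0            # 0.0, 0.1, ..., 99.9: every one-decimal value

half_up = np.floor(vals + 0.5)           # traditional round-half-up
banker = np.round(vals)                  # round-half-to-even

print(half_up.mean() - vals.mean())      # ~ +0.05: the upward bias from always rounding .5 up
print(banker.mean() - vals.mean())       # ~ 0: the even/odd rule cancels it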

MrZ
Reply to  Jim Gorman
February 16, 2019 12:45 pm

When you tried it, did the result get closer or deviate more as the number of records increased?

Reply to  Jim Gorman
February 15, 2019 1:24 pm

” Averaging will not reduce it, i.e. the average will have that error also.”
Just not true, as demonstrated with real temperature data here. But you can do it yourself. Just take any set of data; get some monthly averages. Round it to 1° or 2° or whatever. Take the averages again. The averages will agree with the unrounded versions to a lot better than 1°.

Reply to  Nick Stokes
February 15, 2019 1:59 pm

You’re missing the point. The averages are not the point, the errors are! If the original values are integer values, you must round averages to integer values. Your average simply cannot contain more precision than the original measurements. That means your average, at best, has an error range of +/- 0.5 degrees. It does not matter how many data points with an error of +/- 0.5 degrees you average together; the end result will not have any less error!

MrZ
Reply to  Jim Gorman
February 15, 2019 2:12 pm

Jim,

Please try the simple example above! You appear as non-adjucated until….

Reply to  Jim Gorman
February 15, 2019 2:49 pm

“Your average simply can not contain more precision than the original measurements”
Much dogmatism, as often expressed here, but no authority quoted. And as I said, it just isn’t true, as is easily demonstrated.

In the link, I took daily readings for Melbourne, given to 1 decimal. So you’d say the monthly averages could only be accurate to 0.1°C. I calculated the 13 monthly averages (Mar-Mar):

22.72 19.24 17.13 14.43 13.29 13.85 17.26 24.33 22.73 27.45 25.98 25.1 24.86

Then I rounded the daily data to 1°C. So, you’d say, the average can only be accurate to 1°C. Then I average that:

22.77 19.27 17.13 14.37 13.29 13.84 17.33 24.35 22.67 27.48 26 25.17 24.84

The differences were:
0.05 0.03 0 -0.06 0 -0.01 0.08 0.03 -0.06 0.03 0.02 0.08 -0.02

That’s a lot better than ±0.5°C accuracy. It’s very close to the theoretical, which is ±0.05.

Reply to  Jim Gorman
February 15, 2019 2:51 pm

Sorry, those were Melbourne daily max temperatures

MrZ
Reply to  Jim Gorman
February 15, 2019 2:54 pm

He is not missing the point. You are…

Udar
Reply to  Jim Gorman
February 16, 2019 6:34 am

Nick,

You are not doing it correctly. As many already said, your underlying accuracy is not what you assume in your tests.

What you need to do is to generate random error of +/- 0.5 deg, add it to your accurate data and then round it out to integer and then average it.
Can you show that this will result in no error?

Reply to  Jim Gorman
February 16, 2019 12:51 pm

MrZ –> If you’ll notice I said integer values, not numbers that have decimal precision already.

Reply to  Jim Gorman
February 16, 2019 4:15 pm

Udar,
“What you need to do is to generate random error of +/- 0.5 deg, add it to your accurate data and then round it out to integer and then average it.”
I did almost that here, and with the whole data set. Noise was normally distributed, ±1.0°C, applied to monthly averages. It made almost no difference to the global average. That did not include rounding, but there is no reason to expect that random noise and rounding would have an effect together when they separately have none.
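For completeness, the experiment as Udar posed it can be run in a few lines (synthetic “accurate” data, since the global set isn’t reproduced in the comment):

import numpy as np

rng = np.random.default_rng(42)
accurate = 15.0 + 10.0 * np.sin(np.linspace(0, 20 * np.pi, 10_000))   # stand-in "true" series

noisy = accurate + rng.uniform(-0.5, 0.5, accurate.size)   # add +/- 0.5 deg of random error
recorded = np.round(noisy)                                  # then round to whole degrees

print(accurate.mean(), recorded.mean())
# The two averages typically differ by a few thousandths of a degree, even though
# each individual recorded value can be off by as much as a full degree.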

Micky H Corbett
Reply to  Jim Gorman
February 16, 2019 11:34 pm

Okay Nick, I can see your usual flimflam.

All of the differences you calculated need to have the +/- 0.5 degC error added.

So really all we have are 1 degree ranges. Basically useless for determining changes of 0.1 deg C.

And that’s a real value.

The real values are magnitude +/- uncertainty.

Micky H Corbett
Reply to  Jim Gorman
February 17, 2019 12:30 am

Nick, I’ll use the example in the link you gave.

If you take March’s data, read to 0.1 degC (+/- 0.05 degC uncertainty), the average is 24.86 +/- 0.05 degC, using the 0.1 degC as reference

If you round the data (using a standard spreadsheet round) and average that you get 24.90 +/- 0.05 degC

If you add an offset of 0.3 degC, simulating an offset (either fixed or some slow varying one) and round, the average is 25.26 +/- 0.05 degC.

Okay. But now consider the differences between the offset and non-offset in terms of the system stated uncertainty for the rounded data (1 degree so +/- 0.5 degC)

25.3 +/- 0.5 degC,

where the “real” value in this case would be 24.9. Even though we see there is a drift, which in actuality would be caught by characterisation and calibration, there is still no significant difference, primarily as the system is designed to allow for a certain drift, having a larger uncertainty.

However if you assume your averaging creates a lower uncertainty then you may believe the 25.3 degC because you assume that the errors are random. You take the system to somehow have lower uncertainty.

This is the point. The uncertainty in a measurement system includes variations from normal distributions. It relates to reliability and insensitivity of results to small variations and drifts below the stated uncertainty.

You, however, think differently and seem to believe that the Law of Large Numbers is a catch-all. You are making the mistake of reading too much into the data and only considering the magnitudes, not the magnitudes plus error.

Reply to  Jim Gorman
February 17, 2019 5:09 am

Nick Stokes –> “Much dogmatism, as often expressed here, but no authority quoted. And as I said, it just isn’t true, as is easily demonstrated. ”

You want authority, here are three sources.

http://www.occonline.occ.cccd.edu/online/ahellman/Sig%20Fig,%20%20Rounding%20&%20Scientific%20Notation%20%20WS.pdf

• Use significant figures to deal with uncertainty in numbers and calculations.

You’ll notice the word “uncertainty”, it is important in measurements.

https://engineering.purdue.edu/~asm215/topics/calcrule.html

Any properly recorded measurement can be said to have a maximum uncertainty or error of plus or minus one-half its last digit. Significant figures give an indication of the precision attained when taking measurements.

Again, please note the reference to uncertainty.

https://physics.nist.gov/cuu/pdf/sp811.pdf

B.7.2 Rounding converted numerical values of quantities

The use of the factors given in Secs. B.8 and B.9 to convert values of quantities was demonstrated in Sec. B.3. In most cases the product of the unconverted numerical value and the factor will be a numerical value with a number of digits that exceeds the number of significant digits (see Sec. 7.9) of the unconverted numerical value. **Proper conversion procedure requires rounding this converted numerical value to the number of significant digits that is consistent with the maximum possible rounding error of the unconverted numerical value.**

Example: To express the value l = 36 ft in meters, use the factor 3.048 E−01 from Sec. B.8 or Sec. B.9 and write l = 36 ft × 0.3048 m/ft = 10.9728 m = 11.0 m.

The final result, l = 11.0 m, is based on the following reasoning: The numerical value “36” has two significant digits, and thus a relative maximum possible rounding error (abbreviated RE in this Guide for simplicity) of ± 0.5/36 = ± 1.4 %, because it could have resulted from rounding the number 35.5, 36.5, or any number between 35.5 and 36.5. To be consistent with this RE, the converted numerical value “10.9728” is rounded to 11.0 or three significant digits because the number 11.0 has an RE of ± 0.05/11.0 = ± 0.45 %. Although this ± 0.45 % RE is one-third of the ± 1.4 % RE of the unconverted numerical value “36,” if the converted numerical value “10.9728” had been rounded to 11 or two significant digits, information contained in the unconverted numerical value “36” would have been lost. This is because the RE of the numerical value “11” is ± 0.5/11 = ± 4.5 %, which is three times the ± 1.4 % RE of the unconverted numerical value “36.” This example therefore shows that when selecting the number of digits to retain in the numerical value of a converted quantity, one must often choose between discarding information or providing unwarranted information. Consideration of the end use of the converted value can often help one decide which choice to make.

(bold by me)

Please note, the temptation to make some snide comments almost overcame me, but I’ll be nice since you are trying. Chemists, physicists, machinists, and surveyors all have training in the physical world and understand that any measurement is inaccurate for any number of reasons. The use of significant digits has been refined over the years both to help eliminate errors and to ensure that measurements are repeatable over future experiments or operations.

One simply can not add additional digits of precision by using averaging. By doing so you are, in essence, adding precision to measurements that was not there to start with and you are misleading others into thinking you used better measurement devices than you really had.

Reply to  Jim Gorman
February 17, 2019 5:57 am

Nick Stokes –> Let’s use some numbers to illustrate what can happen. The temps I am using are made up, but similar to what you used.

Day 1 22.7 19.2 17.1 14.4 13.2 all +/- 0.05
Day 2 21.3 22.2 16.9 13.2 15.7 all +/- 0.05

Please note the following numbers include the uncertainty and are taken out to 4 significant digits

Max temps by adding 0.05
Day 1 22.75 19.25 17.15 14.45 13.25
Day 2 21.35 22.25 16.95 13.25 15.75

Min temps by subtracting 0.05
Day 1 22.65 19.15 17.05 14.35 13.15
Day 2 21.25 22.15 16.85 13.15 15.65

Average of Max Temps
22.05 20.75 17.05 13.85 14.50

Average of Min Temps
21.95 20.65 16.95 13.75 14.40

Average of Recorded with proper significant digits
22.0 20.7 17.0 13.8 14.4 all +/- 0.05

This all looks normal except for the last one. It does not lie within the error range of the average; it lies at the lower boundary. This does occur, and is a well known artifact of rounding. I have done this with a lot of temps and some will match the upper figure and some will match the lower. It doesn’t occur often, but it does show why it is necessary to deal with uncertainty in a proper and consistent manner. The NIST document I referenced above discusses this in terms of percentage error.

Reply to  Jim Gorman
February 17, 2019 8:41 am

Let me finish up my example. As you can see, most of the time the average (to the correct number of significant digits) will appear to be within the range of error, i.e. +/- 0.05 degrees. However, the range of error is not reduced. This is because you don’t know for sure what the real value to the one one-hundredth place actually was. Any value you choose between -0.05 and +0.05 is as equally possible as any other value between these numbers. This is where measurement uncertainty arises. As a consequence, it propagates throughout any calculations you make. The end result is that you cannot claim that a temperature of 0.01 is different from 0.09; either value (or any other value in that range) is equally likely to be the true one.

Clyde Spencer
Reply to  Jim Gorman
February 17, 2019 10:21 am

Jim and Mickey
I get the feeling that mathematicians are so used to dealing with exact numbers that they get careless when dealing with real-world measurements that inherently have uncertainty associated with them.

Reply to  Jim Gorman
February 17, 2019 1:10 pm

Clyde –> +1.049

Reply to  Jim Gorman
February 17, 2019 1:25 pm

Clyde –> Actually, I have been there. When deep into coding and trying to build a program to do something, it is easy to forget you are dealing with measurements. Truncating, rounding, multiplying, dividing, etc., i.e. building the software, becomes your focus and it’s easy to lose sight of what the actual end result is. When you declare a variable to be a certain size, you can forget to check the number of decimal places in each calculation, just like using a calculator and writing down all the digits of a number multiplied by pi.

Reply to  Jim Gorman
February 17, 2019 2:34 pm

Jim,
“You want authority, here are three sources.”
If you want to cite authority, it isn’t enough to give a link. You need to quote what they say that supports your claim, which was “Your average simply can not contain more precision than the original measurements.”. In fact, none of your links says anything at all about averages. The nearest is the Purdue doc, which says that a sum cannot be more precise than its vaguest component. True. But an average is not a sum, it is a sum divided by N. And that divides the error by N.

“Max temps by adding 0.05”
That is calculating the uncertainty by assuming that every deviation is the same, and in the same direction. That is exceedingly improbable. The thing about averaging is that they go both ways, and cancel to a large extent. If they all go one way, that is not an expression of uncertainty. It is an error in calibration. You can’t attach probabilities to that. And in fact, for this purpose it doesn’t matter. Anomalies subtract the historic mean, which includes the same predictable error.

MrZ
Reply to  Nick Stokes
February 15, 2019 2:07 pm

That is exactly what I am saying above, but you cannot reach 0.001.
How are you doing, Nick? Your feedback is really valuable to me.

Micky H Corbett
Reply to  MrZ
February 16, 2019 3:17 am

Nick and MrZ

You are assuming that the measurement of the data contains an intrinsic (zero) error signal and that averaging (which adds pseudo resolution) will improve accuracy. Also, you are comparing a more accurate measurement (0.1 degC) to a less accurate one (1 degC), which is itself just digitised 0.1 degC data.

You are missing the point metrologically. Does the intrinsic measurement have sufficient accuracy to allow the sample distribution to be determined to a lower uncertainty?

No.

If I have 100 measurements stated to have an uncertainty of 1 degree C then this is the baseline uncertainty. All uncertainties will be greater. The ONLY way to improve it is to do what Rutherford has said: Build a better experiment.

You are performing a hypothetical analysis.

To further explain. You have an average of decimal points as 22.72. By rights it should be 22.72 +/- 0.05. If this was taken with a stated uncertainty of +/- 0.5 it would be 22.7 +/- 0.5. But you didn’t. You took the second as 22.77. Where does the 0.07 come from?
You “invented” it.
You created a better reading by assuming the underlying reading captured an intrinsic value. This is what the CLT does when you have i.i.d. It removes the individual samples and maps them on to sets of distributions.
In reality, one reading is 22.72 +/- 0.05 deg C as a minimum (real uncertainty would be higher as the errors can chain).
The other reading if the original system was reading to 1 degree is 22.7 +/- 0.5 degC as a minimum. One reading system has higher uncertainty. The first could be used to calibrate the second.

Reply to  MrZ
February 16, 2019 8:10 am

“You are assuming that the measurement of the data contains an intrinsic (zero) error signal and that averaging (which adds pseudo resolution) will improve accuracy.”
I’m not assuming it. I’m showing it.

“You took the second as 22.77. WHere does the 0.07 come from?”
It’s just the result when I averaged the rounded (to 1C) data.

Micky H Corbett
Reply to  MrZ
February 16, 2019 8:34 am

Nick
“I’m not assuming it. I’m showing it.”

The data only shows it because you took a higher resolution value and artificially made it lower resolution (higher uncertainty) and then showed that the average of the lower resolution matches the higher resolution. So what?

And by resolution here I mean that it terms of uncertainty of a real measurement.

In a real system with higher uncertainty, especially one with levels an order above the variation you are trying to measure, you know much less if anything about the shape of the sample distribution with respect to the variation you are trying to show. Most measurement systems skew and have bimodal and other strange and unique shapes. The only way you catch it is by recharacterisation and frequent recalibration. Even then you apply more tolerance.

A simple analogy is taking a picture with a 4K camera and a 256-bit CCD, seeing something in the centre of the image (actually a cat) but only having the 4K show you that it’s a cat. You assume the blob on the 256-bit CCD is a cat and then continue to use this system to track the “cat’s” movements. You make proclamations about cat movements and recommend legislation to curtail it.

Only after recalibration and characterisation with the 4K again do you find that after a few measurements the system was following any small animal. You never checked because you ran off with the theory.

Bottom line: You cannot magically extract higher resolution/lower uncertainty from a high uncertainty series of measurements just by assuming that the underlying data is distributed a certain way. You have to show it.

It’s the basis of philosophy of the scientific method. You build the tools for the job, otherwise all you have is hypothetical.

As Jim stated, the values are less important than the range of error (uncertainties)

Reply to  MrZ
February 16, 2019 12:40 pm

“In a real system”
This is a real system, and has the real distribution of temperatures that we are talking about.

Curious George
February 15, 2019 8:28 am

Much ado about nothing?

Ferdberple
February 15, 2019 8:52 am

Once the data has been homogenized, the sampling problem is largely irrelevant because you have removed the independence of the data points and the central limit theorem no longer applies.

Each time you homogenize already homogenized data the independence of the data points further degrades, destroying the value of the data for statistical analysis.

Simply put, rewriting temperature history to try and “fix” the data is a fool’s errand and a crime against data because you can no longer trust any statistical results.

In data sciences you NEVER “correct” the underlying data as that is your single version of the truth. Yes it contains errors, but these errors are still part of the truth. “Fixing” the data obscures the truth. It does not establish truth.

The place to make your “corrections” is in the summary data. AND if you have not messed around trying to fix the raw data the “correction” is relatively trivial, because the CLT guarantees convergence.

In other words, if you don’t screw around changing the original data, the CLT provides you pretty good assurances that you can calculate a representative mean and variance from the raw data using relatively trivial random sampling, without any need for error correction.

Screw around “fixing” the raw data and you have no assurance that your mean and variance is representative.

RPercifield
February 15, 2019 8:59 am

Looking at the article I see a few things some I agree with some not.

1. The sampling method he describes is basically Pulse Code Modulation (PCM). Any periodic changes at frequencies above half the sampling rate will be aliased and folded into the resultant output. And very short-term changes that occur between samples are lost. This is demonstrated by the low of -40 being missed in the hourly report sampling period. All of this is understandable.

2. What the Min and Max results are is nothing more than signal processing performed on the continuous temp dataset. Because it is a processed signal, the only Nyquist requirement that must be met is for the sampling rate to be fast enough to detect the high and low peaks. After that Nyquist does not apply. By doing signal processing the values have lost all of the intervening data and we no longer have a pure sampled system.

3. The use of the midpoint between min and max analytically has very little real value since it carries no information about the amount of time spent at a particular temp. However, since historically we have not characterized the atmosphere with anything other than this, it is difficult to change at this point to something else, but new methodology should be applied to make these measurements more accurate from now and into the future. There is no reason for us to be reporting temps this way any more, except for comparison.

4. To me the real issue is that in this sampled system much data is lost to antiquated signal processing methods, poor siting leading to significant noise and bias, and the need to have a single number to measure temperature regardless of the meaning of the data behind it. To think we want to change the world on this measurement methodology……

William Ward
Reply to  RPercifield
February 15, 2019 2:42 pm

Hi RPercifield,

Regarding your #2: A minor but very important clarification to make sure the concepts are well understood. We don’t need to sample such that we actually catch the exact peaks. If we sample according to Nyquist, then we capture all of the information available about the signal. Using DSP algorithms the exact peak values can be determined from the Nyquist compliant samples, even if those peak values do not directly avail themselves in a particular sample.
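As a hedged illustration of that point with an invented signal (not temperature data): a bandlimited signal sampled above its Nyquist rate, where no sample lands on the peak, still yields the peak value by FFT (zero-padding) interpolation.

import numpy as np

n = 32
t = np.arange(n) / n
x = np.sin(2 * np.pi * (t + 0.013))        # one-cycle signal; the phase shift keeps samples off the peak

print(x.max())                             # ~0.997: the best any raw sample does

X = np.fft.fft(x)
pad = 64                                   # interpolate 64x by zero-padding the spectrum
Xp = np.zeros(n * pad, dtype=complex)
Xp[: n // 2] = X[: n // 2]
Xp[-(n // 2):] = X[-(n // 2):]
x_up = np.fft.ifft(Xp).real * pad

print(x_up.max())                          # ~1.0000: the true peak, recovered from the samples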

Furthermore, Nyquist never stops applying if your goal is to operate digitally on the original analog signal. If you violate Nyquist, for example by discarding all samples except your max and min values, then any operation on those max and min values no longer (accurately) relates to the original signal.

The beauty of Nyquist is that it gives us guard rails – we can stay on the road while enjoying the benefits of the digital domain.

I agree with your #3 and #4.

I’ll address #1 with Kevin in a separate reply – as it relates to the -40 value being missed.

February 15, 2019 10:35 am

Applies to any function, not just time. Essentially you are describing a moving average filter of width 1, a tautology.

Reply to  Johanus
February 15, 2019 10:51 am

Oops misplaced reply to Masters comment February 15, 2019 at 9:14 am

Reply to  Johanus
February 15, 2019 11:18 am

>>
. . . Masters comment . . . .
<<

The name is Masterson–but since we’re talking about signals, I restricted it to time functions. (My text on signal analysis does the same.) Convolution has many uses in applied mathematics, but for electrical engineers, we usually restrict its use to signal processing–time domain, frequency domain, real, complex, Fourier transforms, Laplace transforms, etc.

Jim

old engineer
February 15, 2019 10:37 am

Kevin Kilty-

Thank you for a most interesting post. The recent discussions here at WUWT on the use of Tmin and Tmax have been very enlightening.

Two things stand out to me: (1) the daily temperature distribution is almost never symmetrical, and (2) the distribution does not have the same form day-to-day. Therefore, how to use daily Tmin and Tmax for temperature trends needs to be very carefully thought out. Which it hasn’t been.

When I started my career in applied research, a wise statistician told me: “Too many engineers think statistics is a black box into which you can pour bad data and crank out good answers.” Apparently many climatologists think the same.

February 15, 2019 12:48 pm

I’m not sure why we average the daily highs and lows together in the first place. Don’t we just end up with less information rather than more? It completely obscures the temperature range experienced during any given period. I think if we simply averaged the highs over a given period and similarly averaged the lows, but kept them separated, we would get information that’s far more useful.

Clyde Spencer
Reply to  Hoyt Clagwell
February 15, 2019 2:31 pm

Hoyt
You said, “I think if we simply averaged the highs over a given period and similarly averaged the lows, but kept them separated, we would get information that’s far more useful.” Actually, I don’t see the advantage of averaging them. They carry more information as the raw, daily time-series. Averaging behaves as a low-pass filter and reduces the information content. The best we can do is say what the annual average was and compare it to the annual averages for previous years. The annual standard deviations might actually be more interesting because they would suggest whether the temperatures are getting more extreme or not.

The problem is that whether we average 30 highs and 30 lows and take the mid-range, followed by taking the mean of the 12 mid-range values, or take the average of 365 highs and 365 lows and take the mid-range value, we still end up with a non-parametric measure of annual temperatures that has been shown to NOT be statistically ‘efficient’ or robust. The first method allows us to assign a standard deviation to the monthly mid-range values, but with only 12 samples, it is of questionable statistical significance, let alone climatological utility.

I’m reminded of the story of Van Allen (of Van Allen Radiation Belt fame) who analyzed the behavior of roulette wheels in Las Vegas. He discovered an eccentricity in them that allowed him to make a small killing before he was asked to leave. I suspect that if he were working with data as poorly fitted to the task as our meteorological data is, he would never have been able to do what he did.

Reply to  Clyde Spencer
February 15, 2019 3:15 pm

Clyde, I think I was typing too soon. A daily time series for both highs and lows was what I was thinking too. Averaging seems to reduce things to their least useful representation. Like trying to determine what the average coin toss result is.

Reply to  Hoyt Clagwell
February 15, 2019 3:06 pm

“I think if we simply averaged the highs over a given period and similarly averaged the lows”
That is generally done. Then if you average those results, you get TAVG. GHCN supplies monthly TMAX, TMIN and TAVG.

February 15, 2019 12:48 pm

“First, the problem of aliasing cannot be undone after the fact. It is not possible to figure the numbers making up a sum from the sum itself. Second, aliasing potentially applies to signals other than the daily temperature cycle.”
I think my main priority here should be to respond to the min/max question. On the aliasing, it is true that it can’t be undone just from the signal data. But you can after the fact calculate and remove the effect of predictable components in the signal, and diurnal is a big one. I did that in my post. Of course, there is still the problem of how to predict the predictable. You need to get data about diurnal from somewhere. But it doesn’t have to be the period that you are sampling.

Yes aliasing does apply to other signals. But one thing I sought to emphasise is that in climate records, the daily sampling is followed by monthly averaging, which is a low pass filter operation. So it is analogous to simple heterodyning for AM radio. There you have an E/M cacophony from which you want to select a single channel. So you mix the signal with an oscillation close to the desired carrier frequency. That causes aliasing and brings the sidebands with the information you want down into the audio range. Then an audio low pass filter makes the cacophony go away and you have the signal you want. The rest was aliased too, but the results fell outside the audio range.

That is mostly what happens to that other data here. It is aliased, but the results are attenuated by the monthly average filter. The exception is the diurnal, harmonics of which produce a component at zero frequency. But you can calculate that.
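To make the zero-frequency point concrete, here is a minimal synthetic sketch (illustrative amplitudes and phases, not station data): with a fixed 2-per-day sampling time, the diurnal fundamental cancels in the monthly mean, while its even harmonic survives as a constant offset, which is the component that can be calculated and removed.

    import numpy as np

    # Synthetic temperature: a mean plus a diurnal cycle and its 2nd harmonic.
    def temp(t):                                    # t in days
        return (15.0
                + 5.0 * np.cos(2 * np.pi * t - 2.0)        # diurnal fundamental
                + 1.0 * np.cos(2 * np.pi * 2 * t + 0.5))   # 2nd diurnal harmonic

    days = 30
    t_fine = np.arange(0, days, 1 / 288)            # "5-minute" reference grid
    true_monthly = temp(t_fine).mean()

    phase = 0.3                                     # fixed local sampling time (days)
    t_2 = np.concatenate([np.arange(days) + phase,
                          np.arange(days) + phase + 0.5])   # 2 samples/day
    sampled_monthly = temp(t_2).mean()

    # The fundamental cancels between the two daily samples; the 2nd harmonic
    # aliases to a constant offset that monthly averaging cannot remove.
    print(true_monthly, sampled_monthly, sampled_monthly - true_monthly)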

“This provides a segue into a discussion about the “something otherness” of Max/Min records.”
The main point I tried to make there is that min/max isn’t necessarily close to the hourly integrated value, but it is still a useful index. For climate purposes, the measure you take is used as representative of the region. In Wellington, NZ, a hilly town, they moved the station from Thorndon by the sea to Kelburn on the hill in 1928. You have to adjust for the change (1 °C, we argued about that) but neither place is right or wrong as a measure. That is one reason for taking anomalies – then it really doesn’t matter.

Min/max generally has an offset from the integrated, which furthermore depends on the time of reset. It’s as if you were measuring up the hill (and maybe later down if you changed reset time). And again, the difference largely disappears when you subtract the normal for the location. You still get a measure of how things change.

“It is difficult to argue that a distortion from unpredictable convolution does not have an impact on the spectrum resembling aliasing”
I think the heterodyning analogy is relevant here. The jitter is a modulation of the diurnal. But it is a high frequency modulation, relative to the later monthly filtering.

William Ward
Reply to  Nick Stokes
February 15, 2019 2:17 pm

Hello Nick,

I appreciated our previous discussions. On this one here, I agree with your description of how aliasing can be used to demodulate an AM signal. However, I think it is critical to provide a distinction between aliasing to demodulate and aliasing of a baseband signal like temperature.

In the AM radio example, the program information, say a music recording, is Amplitude Modulated creating the sidebands you mention, spaced around the carrier frequency. When receiving an AM signal, a clever use of aliasing can down-convert (shift in frequency) that program down to baseband or a low-IF so it can be tuned and retrieved. In that example no energy is aliased into the original program information. The E/M cacophony you mention has strict channel spacing to allow for proper reception without aliasing. The aliasing we experience when measuring temperature incorrectly does mix higher frequency energy into false low frequencies that were not present in the original signal.

I suggest it would be good to retract that analogy. It cannot correctly be used to justify the aliasing we experience when measuring temperature with 2-samples/day. In the AM case the signal/program is unaltered by aliasing. With temperature it is altered by aliasing.

The low-pass filtering you mention to get to monthly averaging happens after the aliasing has already done its damage. If we sample properly and then average to monthly, the results are different.

Reply to  William Ward
February 15, 2019 3:03 pm

“In the AM case the signal/program is unaltered by aliasing.”
No, it is a frequency shift. Same with sampled temperature. In fact, sampling just multiplies the signal by the Dirac comb (periodic), just as in the radio mixer you seek to multiply the signal by the local oscillator frequency. The only real difference is that the Dirac comb contains all harmonics, which brings in a bit more noise. But not that much.

And the key thing is that the monthly averaging plays the same role as the audio low pass filter in radio, also after the mixing. It excludes the original and almost all the aliases.

William Ward
Reply to  Nick Stokes
February 15, 2019 4:49 pm

Nick, if you do an AM broadcast, the listener on the radio is not hearing aliasing from demodulation, whether it is done in the analog domain or digitally via “undersampling” (using aliasing intentionally to downconvert).

Here is what happens when you don’t alias (example of 12-samples/day on a signal bandlimited to 5-cycles/day):

https://imgur.com/iHRuFc7

The spectral images replicate at the sample rate (12-samples/day) and there is no overlap between the signal bandwidth and its images.

Here is what happens when you alias (example of 2-samples/day on a signal bandlimited to 5-cycles/day):

https://imgur.com/DmXCBOt

You can see the spectral images clobber the signal!

This is what happens when we measure temperature at 2-samples/day. Your low-pass filtering by averaging to monthly takes place after the damage shown in this figure is done. The spectral overlap doesn’t happen with AM demod via undersampling, due to the channel spacing required by the FCC.

I recommend you don’t overly complicate this with the Dirac comb – it is unnecessary to understand what is happening. All you need to do is look at the signal bandwidth and replications of it at the sample frequency.

Good sampling: https://imgur.com/L7Wc393

Bad sampling: https://imgur.com/hPgub33

K.I.S.S: Keep it simple Stokes.
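For completeness, here is a minimal numpy/FFT sketch of that replication-and-overlap picture (a single illustrative tone at 2.75 cycles/day, not temperature data): sampled fast enough, the spectrum shows the true frequency; at 2-samples/day the energy shows up at a false low frequency.

    import numpy as np

    days = 64
    f_sig = 2.75                     # cycles/day, above the 1 cycle/day Nyquist limit for fs = 2

    def x(t):
        return np.cos(2 * np.pi * f_sig * t)

    for fs in (12.0, 2.0):                        # adequate vs. inadequate sampling rate
        t = np.arange(0, days, 1 / fs)
        X = np.abs(np.fft.rfft(x(t)))
        freqs = np.fft.rfftfreq(t.size, d=1 / fs) # cycles/day
        print(fs, "samples/day -> spectral peak at", freqs[np.argmax(X)], "cycles/day")
    # 12 samples/day reports the true 2.75 cycles/day;
    #  2 samples/day reports a false peak at 0.75 cycles/day (the alias).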

Reply to  William Ward
February 16, 2019 8:02 am

William,
“This is what happens when we measure temperature at 2-samples/day. “
Your spectra are nothing like what happens. Here I have done an FFT of 5-minute data from Redding, CA for 2016. This plot shows magnitude. Frequency units are 1/month (30.5 days). I have marked with a red block the band of unit width around zero. This is the nominal range of data that passes the monthly average filter – of course a larger range passes with more attenuation. That is the data we want. I have marked with orange the range from 60 to 62 units. This is the corresponding range that would be aliased to the red range with 2/day sampling. As you see, it mainly contains a rather small spike, at twice diurnal, and nothing much else. The spike is what would alias to zero. It isn’t that bad anyway, and as I showed in my last post, you can calculate and adjust for it, since the diurnal is repeated.
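For anyone wanting to reproduce that kind of check, a rough Python/numpy sketch follows (this is not Nick’s actual code; the file name and layout are placeholder assumptions). It compares the spectral energy in the band that survives monthly averaging with the energy in the band around 61 cycles/month (i.e. 2 cycles/day) that 2-samples/day sampling would fold onto it.

    import numpy as np

    # Hypothetical input: one year of 5-minute temperatures, one value per line.
    temps = np.loadtxt("redding_5min_2016.txt")     # placeholder file name

    month = 30.5                                    # days, the frequency unit used above
    dt = 5.0 / (60 * 24)                            # sample spacing in days

    X = np.abs(np.fft.rfft(temps - temps.mean()))
    freqs = np.fft.rfftfreq(temps.size, d=dt) * month    # cycles per month

    # Band that survives monthly averaging (below 1 cycle/month) ...
    base = X[(freqs > 0) & (freqs < 1)]
    # ... and the band around 61 cycles/month (= 2 cycles/day) that would be
    # folded onto it by 2-samples/day sampling.
    folded = X[(freqs > 60) & (freqs < 62)]

    print("energy passing the monthly filter   :", np.sum(base ** 2))
    print("energy that 2/day sampling folds in :", np.sum(folded ** 2))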

William Ward
Reply to  William Ward
February 16, 2019 1:41 pm

Hi Nick,

Thanks for your plot. I’ll get to yours in a minute.

My plots in the previous post were for the purpose of illustrating the concept. This link shows a real example using the Cordova, AK station, with a 2-samples/day sample rate.

https://imgur.com/noC3WGu

I added dashed yellow lines to the graph. Between those yellow dashed lines is the signal information from 3-cycles/day down to the long-term trend signals, all the way to “Zero-frequency” offsets. Any signal in that region that is not blue (any green or red lines) is energy that has aliased into the signal through the sampling process. As you know, the magnitude of the aliased content is only part of the picture. The phase information is needed to understand how the aliased energy impacts the result. However, it is much easier to just look in the time domain. Compare the properly sampled mean (5-minute sample data) to the (Tmax+Tmin)/2 result. The 5-minute data is the correct (more correct) reference, and the error is the difference between it and the result from the max/min method. The FFT assumes that the 2-samples per day have no “jitter” – so the analysis is for the ideal case. The fact that max and min occur with what is equivalent to jitter means there is additional error not seen in the FFT graph. Again, we see the real result in the time domain much more easily.

What frequency range of energy do you consider to drive the trend? This is an important question. The “Zero Frequency” (mathematical zero) is essentially an infinitely long time duration – the equivalent of a DC offset in electrical terms. It never changes. It provides a constant offset temperature for the scale used (°C, K, etc). As you travel away from zero, without traversing much distance mathematically, we cover multi-million-year cycles, million-year cycles, hundred-thousand-year cycles, hundred-year cycles, decadal cycles, the yearly cycle and the monthly cycle.

The following is critically important. With 2-samples/day, the energy at 2-cycles/day lands on the Zero-Frequency. Any energy between 1-cycle/day and 2-cycles/day lands between the Zero-Frequency and 1-cycle/day (diurnal)! Any energy between 2-cycles/day and 3-cycles/day also aliases to between the Zero-Frequency and the diurnal! It lands on the negative side of the spectrum, but we know this works the same as if it lands on the positive side. Any “jitter” in the sampling means that this errant energy can land anywhere in that range, based upon the “jitter”!

So back to the question: what frequencies do you consider to be contributing to the trend? I suggest that it is a combination of all spectral energy slower than 1 month. So this entire range is clobbered by any energy between 1-cycle/day and 3-cycles/day.
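A tiny sketch of that frequency mapping (illustrative only, not from the paper under discussion): for a sampling rate fs, a component at frequency f appears after sampling at the folded frequency computed below, which reproduces the landings described above for 2-samples/day.

    def alias_freq(f, fs=2.0):
        # Frequency at which a component at f appears after sampling at rate fs;
        # everything folds into the range 0 .. fs/2.
        f = abs(f) % fs
        return fs - f if f > fs / 2 else f

    # Where energy between 1 and 3 cycles/day lands with 2 samples/day:
    for f in (1.0, 1.5, 2.0, 2.5, 3.0):
        print(f, "->", alias_freq(f), "cycles/day")
    # 1.0 -> 1.0, 1.5 -> 0.5, 2.0 -> 0.0, 2.5 -> 0.5, 3.0 -> 1.0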

The daily max/min values are corrupted with this aliased energy. You can’t filter it out later. You can argue that it is too small to matter, but then you have to concede that trends and record temperatures of similar magnitude are also too small to matter.

As for the adjustments you mention, it would be instructive if you could take USCRN station data, using only Tmax and Tmin, and adjust it to match the 5-minute sampled data. The only rule is that you can’t use the 5-minute data to get your adjustment, because that would be a round trip to nowhere (cheating). I’m very skeptical that you can achieve anything with this, but I’m willing to be shown it can be done. If you can, that would be a big breakthrough. One could attempt this with the overall record and recalculate all trends, but alas, we would never know if the adjustments were actually corrections.

William Ward
Reply to  William Ward
February 17, 2019 12:32 am

Hi Nick,

A few things I’m pondering… some repeat of my previous reply to you… but to expand the thoughts…

You have much better tools for FFT analysis and plotting than I have. I’m not sure how much time it takes you, or if you are even interested in trying this, but I’m curious to see two particular stations that show very different error in the time domain. I have previously provided links to these charts; I show approximately 12 years, with yearly averages and linear trends. Blackville SC has one of the larger trend errors I have seen in my limited study. Fallbrook CA has very little trend error but a large offset error (1.5°C). If I can add a third, it would be Montrose CO. It is one of the few stations I saw where, over 12 years, the error shifted from warming to cooling and back again. The offset was very variable. Most stations maintained their warming or cooling trend error (5-min as reference compared to max/min). Do you think anything can be learned by looking at the spectra of these 3 stations?

Also, what frequencies contribute to the trend, and can you explain why (as you see it)? I see a frequency of “0” as being the infinitely long cycle – or DC offset in EE terms. It is common to every measured value. Between 1-cycle/day and 0-cycles/day there is an infinite range of possible longer-cycle signals (weekly, monthly, yearly, decadal, multi-decadal, etc.). What range of these frequencies contributes meaningfully to the trend? For climate I would think it has to be the superposition of a range of frequencies. At what point – how slow must a signal become so that we consider it “0” or DC in our analysis? 100 years? 1000 years? 100k years?

The reason for this pondering… if energy is spread anywhere between 1-cycle/day and 3-cycles/day, then when aliased it can land on one of these longer-cycle signals. It can impact the offset or trend. FFTs bin energy at the integer values, but the actual energy can exist at non-integer frequencies, correct? If we were to use different units of frequency, uHz for example, could we get the tools to show us more correctly how the energy is spread? What are your thoughts on this?

It’s too late at night and I hope I don’t regret thinking out loud when I read this in the morning. Anyway – I hope this leads to something…

Thanks.

Reply to  William Ward
February 17, 2019 11:54 am

William,
I use the R fft() function. I’m impressed that it can do the 100,000 numbers in the year in a second or two on my PC. I could do the stations you mention – just a matter of acquiring the data and interpolating missing values (fft insists).

Here are some subsets. First the low freq here. I’ve subtracted the mean, so it starts at 0. The main spike is the annual cycle. The rest is the change on monthly scales, which is what we want to know.

Then there is the first diurnal harmonic here. The sidebands represent the annual cycle of change of diurnal (and harmonics of that). And here is the second harmonic of diurnal. Remember the vertical scale is expanded; the numbers are getting small.

Reply to  William Ward
February 17, 2019 11:58 am

William,
Sorry, I got those later links wrong. Here it is again
Then there is the first diurnal harmonic here. The sidebands represent the annual cycle of change of diurnal (and harmonics of that). And here is the second harmonic of diurnal. Remember the vertical scale is expanded; the numbers are getting small.

Reply to  William Ward
February 17, 2019 4:25 pm

William,
I’ve posted the corresponding fft parts for Blackville 2016 here. Processing is a bit fiddly, so I might not get the others done for a while. I need to work out a system.

William Ward
Reply to  William Ward
February 17, 2019 8:53 pm

Nick,

Thank you for the FFT plots. So far, I can’t see anything distinguishing that would explain some of the differences we see in the time domain. I have a lot of experience analyzing audio, listening through a mastering-quality system that allowed blind identification of audible differences between mastered versions. Yet there was no way to identify those differences visually using an Audio Precision analyzer.

Is there anything about the FFT algorithm, the units chosen for analysis, or the resolution of the analysis that is obscuring the distribution of energy around the integer values presented in your plots using units of 1 cycle/month? I would think energy between 1 year and 100 years would be significant to the trend we are experiencing, but the FFT is rather vague, with energy spread out at a very low level. Can you comment on this?

Reply to  William Ward
February 17, 2019 9:48 pm

William,
“I would think energy between 1 year and 100 years would be significant to the trend we are experiencing, but the FFT is rather vague with energy spread out at a very low level. Can you comment on this?”
It’s vague because it is based on 1 year of data. There are 4 relevant timescales in the fft
1. sampling – 5 min
2. diurnal
3. monthly – not physical (or in fft), but that is the scale of subsequent smoothing
4. Annual, seasonal cycle
Annual is one frequency increment. It can’t tell you below that. The sidebands to the diurnal peaks, spaced 1 unit, are the annual cycle modulating the diurnal.
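A one-line check of that resolution limit (numbers assumed for a nominal year of 5-minute samples, not taken from the actual dataset): the FFT’s frequency spacing is one cycle per record length, i.e. one cycle per year, so nothing slower than annual gets its own bin.

    import numpy as np

    n = 365 * 288                              # one year of 5-minute samples
    freqs = np.fft.rfftfreq(n, d=1 / 288.0)    # cycles/day
    print(freqs[1] * 365)                      # 1.0 cycle/year: the smallest nonzero frequency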

William Ward
Reply to  William Ward
February 17, 2019 11:21 pm

Nick,

You said: “It’s vague because it is based on 1 year of data.”

Ok, right – if we had 10 or 100 years of data then we could resolve longer trends in those ranges with the FFT. With 1 year of data I believe that energy (in the longer trends) shows up as 0 or DC – it appears to be constant because we don’t measure much change against those trends in a short period of time. Do you agree? Likewise, the FFT is not going to tell us the entire story about whether or not trends could be affected by the aliasing of 1, 2 or 3-cycle/day energy. I still think the time domain is better for that.

I sent you another reply about how we calculated the trends. There are 26 stations in my paper that we analyzed with that method. I’m curious if you have looked at that method and if you agree or disagree that we are showing valid trend error over the 12 year period studied. I have not yet been able to square your TOBS analysis with our trend analysis.

Thanks Nick.

Udar
Reply to  Nick Stokes
February 16, 2019 6:52 am

Nick,

Your analogy is wrong.
In downconverting for radio we are making sure that the wrong frequency components don’t get mixed with the right ones. The receivers are carefully designed to avoid that.
For example, the IF bandpass filter is equivalent to an anti-aliasing filter, and so is the low-pass filter used in direct-conversion receivers. No matter the design of the particular system, there is always a filter (or filters) somewhere in the system that works exactly like an anti-aliasing filter to avoid mixing unwanted frequencies with wanted ones.

This is not the case for temperature undersampling, and you should not use that analogy here.

Reply to  Udar
February 16, 2019 8:05 am

“IF bandpass filter is equivalent to antialiasing filter”
I specified simple heterodyne, not superhet. The bandpass filter is audio.

William Ward
Reply to  Udar
February 16, 2019 2:07 pm

Nick,

The practice of intentionally “undersampling” to downconvert an AM signal does not add any aliased energy into the original “program” signal. Said another way, the original signal that was modulated is recovered with no energy aliased into it.

The AM analogy should really be retracted as it doesn’t support your case.

Sampling at 2-samples/day will alias energy such that your monthly average cannot be obtained without this aliasing included. Your best case is to argue that it is too small to matter – but there is a high cost to that position.
