KEVIN KILTY

### Introduction

A guest blogger recently¹ made an analysis of the twice-per-day sampling of maximum and minimum temperature and its relationship to the Nyquist rate, in an attempt to refute some common thinking. This blogger concluded the following:

(1) Fussing about regular samples of a few per day is theoretical only. Max/Min temperature recording is not sampling of the sort envisaged by Nyquist because it is not periodic, and has a different sort of validity because we do not know at what time the samples were taken.

(2) Errors in under-sampling a temperature signal are an interaction of sub-daily periods with the diurnal temperature cycle.

(3) Max/Min sampling is something else.

The purpose of the present contribution is to show that the first two conclusions are misleading without further qualification, and that the third could use fleshing out to explain why Max/Min values are “something else”.

### 1. Admonitions about sampling abound

In the world of analog-to-digital conversion, admonitions to bandlimit signals before conversion are easy to find. For example, consider this verbatim quotation from the manual for a common microprocessor regarding the use of its analog-to-digital (A/D or ADC) peripheral. The italics are mine.

“…*Signal components higher than the Nyquist frequency (f_{ADC}/2) should not be present to avoid distortion from unpredictable signal convolution. The user is advised to remove high frequency components with a low-pass filter before applying the signals as inputs to the ADC.*”

*Date*: February 14, 2019.

¹ Nyquist, sampling, anomalies, and all that, Nick Stokes, January 25, 2019

### 2. Distortion from signal convolution

What does *distortion from unpredictable signal convolution* mean? Signal convolution is a mathematical operation. It describes how a linear system, like the sample-and-hold (S/H) capacitor of an A/D, attains a value from its input signal. For a specific instance, consider how a digital value would be obtained from an analog temperature sensor. The S/H circuit of an A/D accumulates charge from the temperature sensor input over a measurement interval, 0 → *t*, between successive A/D conversions.

v(t) = ∫₀ᵗ h(t − τ) s(τ) dτ    (1)

Equation 1 is a convolution integral. Distortion occurs when the signal (*s*(*t*)) contains rapid, short-lived changes in value which are incompatible with the rate at which the S/H circuit is sampled. This sampling rate is part of the response function, *h*(*t*). For example, the S/H circuit of a typical A/D has small capacitance and small input impedance, and thus has very rapid response to signals, or wide bandwidth if you prefer. It looks like an impulse function. The sampling rate, on the other hand, is typically far slower, perhaps every few seconds or minutes, depending on the ultimate use of the data. In this case *h*(*t*) is a series of impulse functions separated by the sampling interval. If *s*(*t*) is a slowly varying signal, the convolution produces a nearly periodic output. In the frequency domain, the Fourier transform of *h*(*t*), the transfer function (*H*(*ω*)), is also periodic, with replicas spaced at the sampling frequency; if the sample rate is too slow, below the Nyquist rate, copies of the signal spectrum (*S*(*ω*)) overlap and add to one another. This is aliasing, which the guest blogger covered in detail.
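The folding described above can be demonstrated numerically. In this sketch (a hypothetical example, not drawn from any station data) a 10-cycle/day sinusoid is sampled 12 times per day; the Nyquist limit is 6 cycles/day, so the component folds down to an apparent 2 cycles/day:

```python
import numpy as np

f_signal = 10.0                        # cycles per day (above Nyquist)
f_sample = 12.0                        # samples per day -> Nyquist = 6/day
t = np.arange(30 * int(f_sample)) / f_sample   # 30 days of sample instants

x = np.sin(2 * np.pi * f_signal * t)   # the sampled high-frequency signal

# Folding about the Nyquist frequency predicts an alias at |10 - 12| = 2.
f_alias = abs(f_signal - f_sample)

# The samples coincide exactly with a 2-cycle/day sinusoid (sign-flipped,
# because 10 = 12 - 2 folds with odd symmetry for a sine).
x_alias = np.sin(2 * np.pi * f_alias * t)
assert np.allclose(x, -x_alias)
```

Any spectral estimate made from these samples would attribute the energy to 2 cycles/day; nothing in the sampled record reveals the true 10-cycle/day origin.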

From what I have just described, several things should be apparent. First, the problem of aliasing cannot be undone after the fact. It is not possible to figure the numbers making up a sum from the sum itself. Second, aliasing potentially applies to signals other than the daily temperature cycle. The problem is one of interaction between the bandwidth of the A/D process and the rate of sampling. It occurs even if the A/D process consists of a person reading analog records, and recording by pencil. Brief transient signals, even if not cyclic, will enter the digital record so long as they are within the passband of the measurement apparatus. This is why good engineering seeks to match the bandwidth of a measuring system to the bandwidth of the signal. A sufficiently narrow bandwidth improves the signal to noise ratio (S/N), and prevents spurious, unpredictable distortion.

One other thing not made obvious in either my discussion or that of the guest blogger concerns the diurnal signal. While a diurnal signal is slow enough to be captured without aliasing by a twice-per-day measurement cycle, it would never be adequately defined by such a sample. One would be relatively ignorant of the phase and true amplitude of the diurnal cycle with twice-per-day sampling. For this reason most people sample at least two-and-one-half times the Nyquist rate to obtain usefully accurate phase and amplitude measurements of signals near the Nyquist rate.
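A minimal sketch of this point (the 8 °C amplitude, 10 °C mean, and 15:00 peak are assumed, purely illustrative values) shows how two fixed-clock samples per day understate the amplitude of a diurnal sinusoid unless they happen to land on its extremes:

```python
import numpy as np

# Assumed diurnal cycle: mean 10 C, amplitude 8 C, peak at 15:00.
amp, peak_hr = 8.0, 15.0
def temp(t_hr):
    return 10.0 + amp * np.cos(2 * np.pi * (t_hr - peak_hr) / 24.0)

# Two fixed-clock samples per day, 06:00 and 18:00: the apparent half-swing.
t2 = np.array([6.0, 18.0])
apparent_amp = (temp(t2).max() - temp(t2).min()) / 2.0   # about 5.66, not 8

# Ten-minute sampling recovers the true amplitude.
t_dense = np.arange(0.0, 24.0, 1.0 / 6.0)
dense_amp = (temp(t_dense).max() - temp(t_dense).min()) / 2.0

print(apparent_amp, dense_amp)
```

The two daily samples see an apparent amplitude of about 5.66 °C against the true 8 °C, and the shortfall depends entirely on where the clock times fall relative to the (unknown) phase of the cycle.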

### 3. An example drawn from real data

Figure 1. A portion of AWOS record.

As an example of distortion from unpredictable signal convolution, refer to Figure 1. This figure shows a portion of temperature history drawn from an AWOS station. Note that the hourly temperature records from 23:53 to 4:53 show temperatures sampled on schedule which vary from −29 °F to −36 °F, but the 6-hour records show a minimum temperature of −40 °F.

Obviously the A/D system responded to and recorded a brief duration of very cold air which was missed in the periodic record completely, but which will enter the Max/Min records as the Min of the day. One might well wonder what other noisy events have distorted the temperature record. The Max/Min temperature records here are distorted in a manner just like aliasing: a brief, high-frequency event has made its way into the slow, twice-per-day Max/Min record. The distortion is about a 2 °F difference between Max/Min and the mean of 24 hourly temperatures, a difference completely unanticipated by the relatively high sampling rate of once per hour, if one accepts the blogger’s analysis uncritically. Just as obviously, if such an event had occurred coincident with one of the hourly measurement schedules, it would have become a part of the 24-samples-per-day spectrum, but at a frequency not reflective of its true duration. So there are two issues here. The first is distortion from under-sampling; the second is that transient signals may not be represented at all in some samples yet are quite prevalent in others.
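A hypothetical numerical version of this situation (the times and temperatures are invented, loosely patterned on Figure 1) shows how a short excursion escapes the on-schedule hourly record entirely yet sets the daily Min, pulling the midrange away from the hourly mean:

```python
import numpy as np

minutes = np.arange(24 * 60)                 # one day at 1-minute resolution
temp = np.full(minutes.size, -32.0)          # invented steady -32 F background
temp[200:220] = -40.0                        # 20-minute excursion, 03:20-03:40

hourly = temp[53::60]                        # on-schedule observations at HH:53
t_min, t_max = temp.min(), temp.max()        # what a Max/Min register keeps

mean_of_hourly = hourly.mean()               # -32.0: the spike is never sampled
midrange = (t_min + t_max) / 2.0             # -36.0: the spike sets the Min

print(mean_of_hourly, midrange)
```

The hourly record is blind to the excursion, while the Max/Min register is dominated by it; the two summaries of the same day disagree by 4 °F.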

In summary, while the Max/Min records are not the sort of uniform-rate sampling that the Nyquist theorem envisions, they aren’t far from being such. They are like periodic measurements with bad clock jitter. It is difficult to argue that distortion from *unpredictable convolution* does not have an impact on the spectrum resembling aliasing. Certainly sampling at a rate commensurate with the brevity of events like that in Figure 1 would produce a more accurate daily “mean” than does the midpoint of the daily range; alternatively, one could use a filter to condition the signal ahead of the A/D circuit, just as the manual for the microprocessor suggests, and just as anti-aliasing via the Nyquist criterion, or improvement of S/N, would demand. Completely removing the impact of aliasing from digital records after the fact is impossible. The impact is not necessarily negligible, nor is it mainly an interaction with the diurnal cycle. This is not just a theoretical problem; especially considering that Max/Min temperatures are expected to detect even brief temperature excursions, there isn’t any way to mitigate the problem in the Max/Min records themselves. This provides a segue into a discussion of the “something otherness” of Max/Min records.

### 4. Nature of the Midrange

The midpoint of the daily range of temperature is a statistic. It is among a group known as *order* statistics, as it comes from data ordered from low to high value. It serves as a measure of central tendency of temperature measurements, a sort of average, but it is different from the more common mean, median, and mode statistics. To speak of the midpoint of range as a daily mean temperature is simply wrong.

If we think of air temperature as a random variable following some sort of probability distribution, possessing a mean along with a variance, then the midpoint of range may serve as an estimator of the mean so long as the distribution is symmetric (skewness and all higher odd moments are zero). It might also be an efficient or robust estimator if the distribution is confined between two hard limits, a form known as *platykurtic* for having little probability in the distribution tails. In such a case we could also estimate a monthly mean temperature using a midrange value from the minimum and maximum temperatures of the month, or even an annual mean using the highest and lowest temperatures for a year.
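A small simulation makes the efficiency claim concrete. This is a sketch under assumed distributions (uniform as the hard-limited platykurtic case, Laplace as a heavy-tailed stand-in); it compares the sampling spread of the midrange against that of the mean for a 31-day “month”:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 31, 5000                  # a 31-day "month", many repetitions

def spreads(draw):
    """Sampling std of the midrange and of the mean over many months."""
    mids = np.empty(trials)
    means = np.empty(trials)
    for i in range(trials):
        x = draw()
        mids[i] = (x.min() + x.max()) / 2.0
        means[i] = x.mean()
    return mids.std(), means.std()

# Platykurtic case: uniform between hard limits.
u_mid, u_mean = spreads(lambda: rng.uniform(-1.0, 1.0, n))

# Heavy-tailed case: Laplace, whose extremes wander widely.
l_mid, l_mean = spreads(lambda: rng.laplace(0.0, 1.0, n))

print(u_mid / u_mean, l_mid / l_mean)
```

For the hard-limited uniform case the midrange actually beats the mean, while for the heavy-tailed case its spread is several times larger; that is the sense in which it stops being an efficient estimator.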

In the case of the AWOS of Figure 1 the annual midpoint is some 20 °F below the mean of daily midpoints, and even a monthly midpoint is typically 5 °F below the mean of daily values. The midpoint is obviously not an efficient estimator at this station, although it could perhaps work well at tropical stations where the distribution of temperature is more nearly platykurtic.

The site from which the AWOS data in Figure 1 was taken is continental; and while this particular January had a minimum temperature of −40 °F, it is not unusual to observe days where the maximum January temperature rises into the mid 60s. The weather in January often consists of a sequence of warm days in advance of a front, with a sequence of cold days following. Thus the temperature distribution at this site is possibly multimodal with very broad tails and without symmetry. In this situation the midrange is not an efficient estimator. It is not robust either, because it depends greatly on extreme events. It is also not an unbiased estimator as the temperature probability distribution is probably not symmetric. It is, however, what we are stuck with when seeking long-term surface temperature records.

One final point seems worth making. Averaging many midpoint values together probably will produce a mean midpoint that behaves like a normally distributed quantity, since all elements to satisfy the central limit theorem seem present. However, people too often assume that averaging fixes all sorts of ills: that averaging will automatically reduce the variance of a statistic by the factor 1/√*n*. This is strictly so only when samples are unbiased, independent, and identically distributed. The subject of data independence is beyond the scope of this paper, but here I have made a case that the probability distributions of the maximum and minimum values are not necessarily the same as one another, and may vary from place to place and time to time. I think precision estimates for “mean surface temperature” derived from midpoint of range (Max/Min) are too optimistic.
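The 1/√n caveat can be demonstrated with a simulation. In this sketch (the AR(1) persistence of 0.9 is an assumed, illustrative value, not fitted to any station) day-to-day correlation leaves the average unbiased but inflates its spread well beyond σ/√n:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n, trials = 2.0, 365, 2000     # daily sigma, days per "year", years

# iid case: spread of the annual mean matches sigma/sqrt(n).
iid_means = rng.normal(0.0, sigma, (trials, n)).mean(axis=1)

# AR(1) persistence (rho assumed 0.9): same marginal variance, but the
# variance of the mean is inflated by roughly (1 + rho) / (1 - rho).
rho = 0.9
e = rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2), (trials, n))
x = np.zeros((trials, n))
for k in range(1, n):
    x[:, k] = rho * x[:, k - 1] + e[:, k]
corr_means = x.mean(axis=1)

print(iid_means.std(), sigma / np.sqrt(n), corr_means.std())
```

The correlated means scatter several times more widely than σ/√n predicts, so a naive 1/√n precision claim on such data would be far too optimistic.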

“In summary, while the Max/Min records are not the sort of uniform sampling rate that the Nyquist theorem envisions, they aren’t far from being such. They are like periodic measurements with a bad clock jitter.”

precious.

With respect to your AWOS.. post a link.

And say his name.

It’s Nick Stokes.

Eh? There is a link, and a name. Or did the author add these after posting?

I checked the edit log. The name and link were there from the get-go. It was just Steven Mosher playing out his role as drive-by hack again.

I still don’t understand his smug attitude. Is this just a character flaw or has the actor become stuck in a role?

There is no need for such arrogance, especially when being so painfully misguided

Arrogance is a learned behavioral trait that becomes a permanently ingrained personality flaw of the pseudo-intellectual babbling class.

It is how they talk to each other about the great unwashed who have yet to be indoctrinated. It is an off-putting attempt to gain an advantage by talking down to others. They confuse arrogance for wit, and in truth they don’t really like each other very much once the others turn their backs.

Bill, that is exactly what I was thinking! Thanks for putting it into print.

I used to be that way for a brief stint in my 20’s until i recognized how obnoxious and conceited it is.

Funny: Mosher pretends to be a real scientist who knows numbers and stuff, even though his entire background is in words and stuff.

http://www.populartechnology.net/2014/06/who-is-steven-mosher.html

So here, instead of agreeing/disagreeing with the numbers, and showing his work (as he demands, constantly, of others), he uses his Awesome English Skillz, “proofreads” a post, finds it lacking in something that is actually there, farts into the wind, then leaves.

Not exactly bringing the A game there…

I didn’t want this to look like an attack on Nick Stokes, himself, who I have regard for. I just wanted to fill in things I thought were vague, and correct what I saw as mistakes in his thinking.

Once again, Mosh only sees what his paycheck requires him to see.

The final sentence of the article:

“I think precision estimates for “mean surface temperature” derived from midpoint of range (Max/Min) are too optimistic.”

I don’t know if Mosh just stops reading when he sees something he can use to support his paycheck, or if he really isn’t able to understand these papers.

He only understands what his paycheck lets him understand, anything else he just sneers at.

The name and link were there the whole time. with respect to your drive-by – learn to read before posting.

Mosher

What percentage of all surface “data”

are actually not “sampled” at all —

they are numbers wild guessed

by government bureaucrats,

and calling them “infilled” data

does not change that fact.

Do you know the percentage ?

Do you even care ?

If you don’t care, then why not ?

I expected the usual silence from

Steven al-ways clue-less Mosher,

so I was not disappointed !

“Thus the temperature distribution at this site is possibly multimodal with very broad tails and without symmetry. In this situation the midrange is not an efficient estimator. It is not robust either, because it depends greatly on extreme events. It is also not an unbiased estimator as the temperature probability distribution is probably not symmetric. It is, however, what we are stuck with when seeking long-term surface temperature records.”

So what can be said of using such ‘averages’ for homogenizing temperatures over very wide areas?

I believe the technical term for what may be said regarding this is the ‘square root of Sweet Fanny Adams’.

Sadly the whole of climate science – and it is prevalent on the ‘denier’ side, as well as endemic on the ‘warmist’ side – is pervaded by people using tools and techniques whose applicability they do not understand, beyond their (and the tools’) spheres of competence…

What we have really is a flimsy structure of linear equations supported by savage extrapolation of inadequate and often false data, under constant revision, that purports to represent a complex non-linear system for which the analysis is incomputable, and whose starting point cannot be established by empirical data anyway. In the end it can’t even be forced to fit the clumsy and inadequate data we do have. Frankly, those who think they can see meaningful patterns in it might as well engage in tasseography…

There are only two things that years of climate science have unequivocally revealed, about the climate, and they are firstly that whatever makes it change, CO2 is a bit player, as the correlation between CO2 and temperature change of the sort we allegedly can measure, is almost undetectable, and secondly that we don’t have any accurate or robust data sets for global temperature anyway.

Other things that it has revealed are the problems of science itself in a post truth world. What, after all, is a ‘fact’ ? If nobody hears or sees the tree fall has it in fact fallen? (Schrödinger etc).

Has the climate ‘got warmer’? By how much? What does it mean to say that? How do we know that it has? How reliable is that ‘knowledge’? Is some ‘knowledge’ more reliable than other ‘knowledge’? Is there any objective truth that is not already irrevocably relative to some predetermined assumption? I.e. is there such a thing as an objective irrevocable inductive truth?

[ I.e. why when faced with a spiral grooved horn, some blood on the path and equine feces, do I assume that someone has dropped a narwhal horn, a fox has killed a pigeon and a pony has defecated there rather than assuming the forces of darkness killed a unicorn].

I think there are answers to these questions, but they will not, I fear, please either side in this debate.

Since both are redolent of the stench of sloppy one-dimensional thinking.

Chiefly important, ” There are only two things that years of climate science have unequivocally revealed, about the climate, and they are firstly that whatever makes it change, CO2 is a bit player, as the correlation between CO2 and temperature change of the sort we allegedly can measure, is almost undetectable, and secondly that we don’t have any accurate or robust data sets for global temperature anyway.”

Good comment.

Oh, I don’t know, Leo, we are able to measure global temperatures to the tenth of a degree (doesn’t matter if it is C or F). I see this published all the time.

And better yet we can actually measure temperatures of vast areas of the Arctic and Antarctic with just a couple of thermometers to high precision. How impressive is that?

And let’s include knowing exactly what the temperature of the Pacific Ocean 1581 meters deep, 512 km east of Easter Island is to the tenth of a degree C!!

I mean with an internet connection, a super computer and a few thousand lines of code, we can do pretty much anything.

And of course we also know the temperature of the Pacific Ocean at 1581 meters deep, 512 km east of Easter Island to a tenth of a degree- – – 50 years ago, so we can “prove” a warming trend.

“I think there are answers to these questions, but they will not, I fear, please either side in this debate.

Since both are redolent of the stench of sloppy one-dimensional thinking.”

Ouch!

I am not sure why you are concerned with ‘pleasing’ one side or the other. If there are answers to your questions, please share, and damn the torpedoes. I, for one, wish to be enlightened, not ‘pleased’.

In the meantime, Mr. Smith, please pardon the stench of our ‘sloppy, one-dimensional thinking’.

Pardon my ignorance and off topic. Is anyone measuring and monitoring global heat content of the atmosphere? This seems the better parameter to follow than temperature when looking for a GHG signal.

In a word, no.

They could do but they won’t.

Due to the presence of water vapor, the enthalpy of various volumes of air varies considerably. The heat content should be reported in kilojoules per kilogram. Take air in a 100% humidity bayou in Louisiana at 75 °F: it has twice the heat content of a similar volume of close-to-zero-humidity air in Arizona at 100 °F.

Averaging the intensive variable ‘air temperature’ is a physical nonsense. Claiming that infilling air temperatures makes any sense is a demonstration of ignorance or malfeasance.

“Averaging the intensive variable ‘air temperature’ is a physical nonsense.”

Well said!

The contents of this article are quite complex, but they do point to potential difficulty with temperature records, particularly if they are not continuously sampled and the collected data then subjected to the correct processing (which might be called averaging, but this opens a whole can of worms). Strictly speaking this is not the alias problem addressed by Nyquist, which makes sampled frequencies above half the sampling frequency appear as low-frequency data, but the simple question of “what is the average temperature?”

Measurements at a normal weather station are sampled at a convenient high rate, but taking peak low and high readings is clearly not the right way to get a daily temperature record. Even averaging these numbers (or some other simple data reduction) will not give the same result as correct sampling of the temperature with the data low-pass filtered before sampling. However, the data from the A/D converter can be digitally filtered to give a true correctly sampled result, but this raw data is not normally available.

The difference between averaged min and max numbers and correctly sampled data may not be large, but when one is looking for fractional-degree changes it may well be important. As usual, climate change data is not the same as simple weather data, but it is often assumed to be the same. Note that satellite temperatures will suffer from a sampling problem as the record is once per orbit at each point, and again we do not know how this is processed into the “temperature” readings.

I would like to see a simple, cleverly illustrated explanation of how the temperature is measured and averaged. Having lived in seven climatic zones the differences between the max and min, as well as when these occur, as well as fluctuations have all been different. Even working out an average for the seven areas where I have lived is a major headache. How anyone can be so confident of the average temperature of our whole world, of the average increase, of the relationship of increases and decreases at various times in different areas and how this impacts on the world average, baffles me.

Of course, valid statistical methodology is key to measurement in detail. But on broad, long-term semi-millennial and geologic time-scales, climate patterns are sufficiently crude-and-gruff to distinguish 102-kiloyear Pleistocene glaciations (“Ice Ages”) from median 12,250-year interstadial remissions such as the Holocene Interglacial Epoch which ended 12,250+3,500-14,400 = AD 1350 with a 500-year Little Ice Age (LIA) through c. AD 1850/1890.

In this regard, aside from correcting egregiously skewed official data, recent literature makes two main points:

First: NASA’s recently developed Thermosphere Climate Index (TCI) “depicts how much heat nitric oxide (NO) molecules are dumping into space. During Solar Maximum, TCI is high (Hot); during Solar Minimum, it’s low (Cold). Right now, Earth’s TCI is … 10 times lower than during more active phases of the solar cycle,” as NASA compilers note.

“If current trends continue, Earth’s overall, non-seasonal temperature could set an 80-year record for cold,” says Martin Mlynczak of NASA’s Langley Research Center. “… (pending a 70-year Grand Solar Minimum), a prolonged and very severe chill-phase may begin in a matter of months.” [How does this guy still have a job?]

Second: Australian researcher Robert Holmes’ peer reviewed Molar Mass Version of the Ideal Gas Law (pub. December 2017) definitively refutes any possible CO2 connection to climate variations: Where Temperature T = PM/Rp, any planet’s near-surface global Temperature T equates to its Atmospheric Pressure P times Mean Molar Mass M over its Gas Constant R times Atmospheric Density p.

Accordingly, any individual planet’s global atmospheric surface temperature (GAST) is proportional to PM/p, converted to an equation per its Gas Constant reciprocal = 1/R. Applying Holmes’ relation to all planets in Earth’s solar system, zero error-margins attest that there is no empirical or mathematical basis for any “forced” carbon-accumulation factor (CO2) affecting Planet Earth.

As the current 140-year “amplitude compression” rebound from 1890 terminates in 2030 amidst a 70+ year Grand Solar Minimum similar to that of 1645 – 1715, measurements’ “noise levels” will certainly reflect Earth’s ongoing reversion to continental glaciations covering 70% of habitable regions with ice sheets 2.5 miles thick [see New York City’s Central Park striations]. If statistical armamentaria fail to register this self-evident trend from c. AD 2100 and beyond, so much the worse for self-deluded researchers.

I see a definite problem with this formulation. While temperature is *related* to those parameters, without an external heat source (i.e. a star), all those theoretical (and actual) planets would very quickly approach the 4 K of space. There has to be a term for the heating of the atmosphere by external radiation or the whole thing collapses.

OweninGa, your 4K of space

is https://www.google.com/search?client=ms-android-samsung&ei=28RnXNHZFa2FrwTih7rgDA&q=+space+min+temperature&oq=+space+min+temperature&gs_l=mobile-gws-wiz-serp.

Every planet IN THE UNIVERSE maintains its own VERY SPECIAL base temperature by

The air column above the ground floor generates heat by its own weight / pressure.


OweninGa,

the pressure / heat problem again:

The air column above the ground floor generates heat by its own weight / pressure.

ever worked with air compression operated machines?

you go in Bermuda shorts and Hawaii shirts into the machine hall.

And need lots of beverages during working hours.

My fault :

OweninGa, your 4K of space

is https://www.google.com/search?client=ms-android-samsung&ei=28RnXNHZFa2FrwTih7rgDA&q=+space+min+temperature&oq=+space+min+temperature&gs_l=mobile-gws-wiz-serp.

Every planet IN THE UNIVERSE maintains its own VERY SPECIAL base temperature by

UNIVERSE BASE TEMPERATURE +

The air column above the ground floor generated heat by the planets own weight / pressure

air column above the ground floor.

One analysis is here. It looks at Hadcrut and shows that it is simply awful.

https://judithcurry.com/2011/10/18/does-the-aliasing-beast-feed-the-uncertainty-monster/

Using a digital filter post-sampling will not “give a true correctly sampled result”. The information has already been lost.

Fractional degree changes, yes: this has been the point that I have tried to emphasize on several occasions in my peon comments. All this highly technical mathematical tooling seems ludicrous with regard to the seemingly tiny fractional differences that it produces, in general and in relation to the precision of instrumental measurements.

You said the magic words: “identically distributed”. When considering a signal to be measured you need to consider whether your tools are able to produce the necessary conditions for analysis. For identically distributed data, each data point in the sample needs to have sufficiently small uncertainty compared to the variation of the nominal values (nominal being the 4 in, say, 4 ± 0.5 units). Otherwise you cannot determine the distribution to sufficient resolution to meet the conditions of the Central Limit Theorem, which underpins the Standard Error of the Mean.

The majority of textbook examples you see of this use populations with discrete sample elements, not individual measurements with their own intrinsic uncertainty. And if you read the scientific papers by people such as the Met Office, they take the same approach considering uncertainties in the data. They have to make assumptions about the sample measurement distribution. A wet finger in the air basically.

Bottom line: the derived temperature anomaly data is a hypothetical data set, since identical distributions cannot be established given the intrinsic tolerances of the measurement apparatus. The original measurement apparatuses (I’ll stick to English plurals) were never designed to give this level of uncertainty. From the get-go climate science was about creating supposition from data with little possibility of it ever being definitive.

And yet results from exercises are taken as real results applicable to the real world.

Make no mistake climate science advocacy is very little to do with logic and fact, or ethics for that matter. Climate scientists who push their results as being indicative of Nature have no skin in the game or accountability.

As I have said before, if they think their methods are sufficient then they should spend a month or two eating food and drinking water deemed safe for consumption under the same standards. Standards where the certification equipment has orders of magnitude more uncertainty than the variation of the impurities they are trying to minimise. And where a “safe” value is obtained by averaging many noisy measurements to somehow produce lower uncertainty.

I’m pretty sure physical reality would teach them some humility. They may even lose a few pounds.

I wonder: why not record WHEN the temperature changes!

Let’s say we record the time whenever the temperature changes by 1 °C. This way it would be easy to follow what really happens.

Of course there will be a BIG DATA to deal with but …

Just an idea.

?

The temperature is always changing. 24 hours a day.

Temperature changes constantly, from moment to moment. Historically, it was a manual process (a person would read the thermometer and jot down the results a handful of times a day) before it was automated. Today, in theory, modern equipment could make a near-continuous recording of temperature (to the tune of however fast the computer is that is doing the recording), but you’d quickly run out of storage space at the sheer volume of data, making such continuous recording impractical for little to no gain. Somewhere between those two extremes is a happy medium. I leave it to others to figure out where that is.

Kevin,

Thank you for the very interesting post. I would like to think about it more and then discuss a few things with you. Nick Stokes’ essay that you refer to was a response to the essay I presented. I would like to calibrate with you – can you tell me if you read my post as well? It is located here:

https://wattsupwiththat.com/2019/01/14/a-condensed-version-of-a-paper-entitled-violating-nyquist-another-source-of-significant-error-in-the-instrumental-temperature-record/

If you have not, and you would like to read it, then I recommend the full version of the paper located here:

https://wattsupwiththat.com/wp-content/uploads/2019/01/Violating-Nyquist-Instrumental-Record-20190101-1Full.pdf

The full version covers some basics about sampling theory, which you clearly do not need – but there are some comments and points made that might be lost if you just read the short version published on WUWT. The full paper version is a superset of the WUWT post. Because I cover a few points in the full version not covered in the WUWT post I have a few differences in the conclusions as well. For example, in the full version I detail what frequencies in the signal are critical for aliasing and where they land in the sampled result. (For 2-samples/day, the spectral content at 1 and 3-cycles/day aliases and corrupts the daily-signal and spectral content near 2-cycles/day aliases and corrupts the long-term signal.)

In my paper I stated that the max and min samples, with their irregular periodic timing, are related to clock jitter, practically speaking – although the size of the jitter is quite large compared to what you would see from an ADC clock. This was a point of much discussion between Nick and me. We also debated whether or not Tmax and Tmin were technically samples. My position was/is that Nyquist provides us with a transform for signals to go between the analog and digital domains. The goal is to make sure that the integrity of the equivalency of domains is maintained. In the real world the signal we start with is in the analog domain. If we end up with discrete values related to that analog signal then we have samples, and they must comply with Nyquist if the math operations on those samples are to have any (accurate) relevance to the original analog signal.

In the discussion there was also some misunderstanding that needed to be cleared up about just what the Nyquist frequency means: was it what nature demanded or what the engineer designed to? There were discussions about what was signal and what was noise – did that higher-frequency content matter? My position as an engineer is that the break frequency of the anti-aliasing filter needs to be decided – perhaps by the climate scientist. Then the sample rate has to be selected to correspond to Nyquist based upon the filter. Another key point was that sampling faster is good to ease the filter requirements and lessen aliasing – and practically speaking, the rates required for an air temperature signal are glacial; all good commercial ADCs can do that with no additional cost. The costs of bandwidth and data storage are also relatively low, so if we sample adequately fast then accurate data can be obtained, and after sampling any DSP operation desired can be done on that data. Sample properly and the secrets of the signal are yours to play with. Alias and you have error that cannot be corrected post-sampling. No undo button available – despite many creative claims to the contrary.

Anyway, if you wouldn’t mind letting me know if you have read my paper (and which version), it will help me to communicate with you most efficiently. I’m pleased that this subject is getting more attention.

Thank you,

William

I’d be happy to cooperate on this topic. Beyond issues of sampling, I was hoping to raise some awareness about why Max/Min temperatures, by design, might not be well suited to the purpose of “mean surface temperature” calculated to the hundredth of a degree. Let me go read the papers you have referenced, which I am not familiar with. I’m not sure how to make contact.

Thanks for the reply Kevin – I appreciate it. Also, thanks for taking the time to read my paper. Let me know what you think.

I have some thoughts about your Figure 1 and the “missing” -40 reading.

You said: “Obviously the A/D system responded to and recorded a brief duration of very cold air which has been missed in the periodic record completely, but which will enter the Max/Min records as Min of the day.”

I agree with most of what you said in your paper but if I’m understanding you correctly I don’t think I agree with your assessment on this point. At this particular station, samples are taken far more frequently than at 1-hour intervals. Consulting the ASOS guides (thanks to Clyde and Kip for supplying them recently) we see that samples occur every 10 seconds; these are averaged every 1 minute, and then the 1-minute data is further averaged to 5 minutes. (A strange practice – but this is what is done.) So from this data we achieve the ability to get the max and min values for a given period. The data is also presented in an “hourly sample package”, and I would not expect any of those hourly samples to contain the max/min values – except by luck. I don’t see this as a problem. I see the hourly packaging as the problem – well, maybe “problem” is too strong a description – hourly packaging seems to be an unnecessary practice. We have more data – just publish all of it and let the user of the data process it appropriately. I don’t think we have a loss of data in your example – just a presentation of a subset that doesn’t include the max/min. What do you think?

For any properly designed system, whatever the chosen Nyquist rate is, assuming the anti-aliasing filter is implemented properly, we can use DSP to retrieve the actual “inter-sample peaks”. It is a common operation performed when mastering audio recordings as the levels are being set for the final medium (CD, iTunes, Spotify, etc.). A device called a “look-ahead peak limiter” is used on the samples in the digital domain. Through proper filtering and upsampling, the “true-peak” values can be discovered and the gains used in the limiting process can be set according to the actual true-peak instead of the sample peak values. For a sinusoid, the actual true-peak can be 3dB higher in level than the samples show. For a square wave the true-peak can be 6dB or more above the samples! This can, of course, be observed by converting the samples properly back into the analog domain. However, the problem in audio is that the DAC (Digital to Analog Converter) has a maximum limit to the analog voltage it can generate. If the input samples are max value for the DAC but the actual true-peak is +3 or +6dB higher than the DAC’s limit then the DAC “clips” and the result is an awful “pop” or “crackle” sound in the audio. The goal when mastering is to set levels with enough margin to handle the inter-sample peaks not visible from the samples – so that the DAC is never forced to clip.
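The inter-sample peak recovery described above can be sketched in a few lines. Assuming scipy is available, band-limited upsampling recovers the roughly 3 dB of hidden peak in a worst-case sinusoid (the signal here is a contrived illustration, not audio data):

```python
import numpy as np
from scipy.signal import resample  # FFT-based band-limited resampling

# A sinusoid at fs/4 whose samples all land at +/-0.7071 (-3 dB):
# the sample peak understates the true analog peak by 3 dB.
n = np.arange(64)
x = np.sin(2 * np.pi * n / 4 + np.pi / 4)
print(x.max())   # 0.7071... (the sample peak)

# Upsample 8x by band-limited interpolation; the inter-sample
# "true peak" of ~1.0 is recovered from the very same samples.
y = resample(x, len(x) * 8)
print(y.max())   # ~1.0 (the true peak)
```

This is the same operation a look-ahead true-peak limiter performs before setting gains: proper filtering and upsampling reveal the peaks that sit between the original samples.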

In my paper (partly to keep its size under control) I do not mention inter-sample peaks or true-peak. In my analysis I simply select the highest value sample from the 5-minute samples and call this the max, because that is what NOAA does. A more accurate second-pass analysis would actually show that the real “mean” error between 5-minute samples and max/min samples is even greater than I presented. The proper method would be to do DSP on the samples to integrate upsampled data (equivalent to analog integration of the reconstructed signal) and compare this to the retrieved inter-sample true-peaks. I suspect that analysis would yield even more error than I showed in my paper. This is because the full integrated mean would not change much, as the energy in that content is small, but the max and min values could swing by several °C, which would more strongly influence the midrange value between those two numbers.

What do you think?

William, good to see your response. Kevin, thanks for a most interesting post and your reply to William. I’ve learned much from both of your posts.

I particularly enjoyed your example of the six-hour min-max being different from the hourly. It made the issues clear.

Onwards,

w.

I missed out a point, which is the low-pass filter cutoff frequency; it should obviously match the day length – in other words something close to one cycle per 24 hours. This will produce a true power-accumulated temperature reading, to be measured at the same time each day. We do not want short-period change readings for the climate data, and this filter will reduce all the HF noise variations to essentially zero, so the signal can be measured with great accuracy. Sampling noise can be reduced by averaging many samples taken over a time interval significantly less than a day, making the system capable of reproducing very accurate change data, if not absolute data, given the instrument’s inherent accuracy.

An analog filter with that low a cutoff frequency would be pretty hard to build – it’s simply not practical. It can be done in the digital domain by downsampling. The signal is first run through a digital low-pass filter and then resampled at a lower rate. For example, to reduce the number of data points by half, you apply a digital filter to the original data to remove all frequency components above 1/4 of the original sampling rate. You would then decimate (resample) the data by dropping every other sample.
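The filter-then-decimate recipe can be illustrated with a quick sketch (frequencies chosen purely for illustration; scipy’s `decimate` applies a low-pass filter before downsampling, which is exactly the point being made):

```python
import numpy as np
from scipy.signal import decimate

fs = 100.0                       # original rate, Hz
t = np.arange(1000) / fs
# A 3 Hz "signal" plus a 40 Hz component that exceeds the new
# Nyquist frequency (25 Hz) after downsampling by 2.
x = np.sin(2 * np.pi * 3 * t) + np.sin(2 * np.pi * 40 * t)

# Naive decimation: just drop every other sample. The 40 Hz
# component aliases to 10 Hz and corrupts the result.
naive = x[::2]

# Proper decimation: low-pass filter first, then downsample.
good = decimate(x, 2)            # filters, then keeps every 2nd sample

ref = np.sin(2 * np.pi * 3 * np.arange(500) / 50.0)  # 3 Hz at new rate
mid = slice(50, 450)             # ignore filter edge effects
print(np.abs(naive[mid] - ref[mid]).max())  # ~1: aliased energy present
print(np.abs(good[mid] - ref[mid]).max())   # small: alias suppressed
```

The naive path shows exactly the problem described two comments down: dropping samples without filtering first folds the high-frequency content into the retained band.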

Greg F

One approach is to design a collection system with a specific thermal inertia to dampen high-frequency impulses. Actually, Stevenson Screens already do that, but they may not be the optimal low-pass filtering for climatology.

Greg F

Decimating the sampled data set exacerbates the aliasing problem by making the effective sampling rate a fraction of the original. If the initial sampling rate meets the Nyquist criterion for the highest frequency component(s), thereby preventing aliasing and assuring faithful reproduction of the signal – then, and only then – digital low-pass filters can be applied to suppress the high-frequency component(s).

Perhaps I didn’t explain it well enough as you appear to think I didn’t LP filter the raw data before decimating it.

analog signal –> sampled data –> digital filter (low pass) –> decimate

I thought my example was simple and clear enough.

All temperature sensors have some thermal inertia. A/D converters and processing are sufficiently cheap that adding thermal inertia to the sensor (to effectively low-pass filter the signal) is likely not the most economical solution.

David Stone,

I suggest that we should be careful to assess what is signal and what is noise. How do we know that higher frequency information doesn’t tell us important things about climate? There is energy in those frequencies and capturing them properly allows us to analyze it. I recommend erring on the side of designing the system to capture as wide a bandwidth as possible. If properly captured then all options are at our disposal. We can filter away as much of it as we desire or as much as the science supports. Likewise we can use as much as desired or as much as the science supports. If we don’t capture it then the range of options is reduced. It is difficult to make a case that there are any benefits (economic or technical) to capturing less data up front.

But we do agree that the system design must have integrity: the anti-aliasing filter and the sample rate must comply with the chosen Nyquist frequency. Standardized front-end circuits should also be used, with specifications for response times, drift, clock jitter, offsets, power supply accuracy, etc. And with that, of course, the siting of the instrument. I’m not advocating any specific formulation – just stating that these are important parameters that should be standardized so that we know each station is common to those items. The design should allow a station to be placed anywhere around the world and not fail to capture according to the design specifications.

Others have said correctly that there is a lot of hourly data out there and we should use it. I support that, along with an effort to understand how the way it was captured could contribute to inaccuracies. That data is likely a big improvement over the max/min data and method. There may or may not be much difference between that and a system engineered to even better standards. At this point I don’t know, but if any new designs are undertaken, then they should be undertaken with best engineering practices.

David Stone, note that satellites suffer from sampling issues as well: they sample the same spot on (or over) the Earth once every 16 days, approximately twice per month.

You can read more here:

https://atrain.nasa.gov/publications/Aqua.pdf

Look at page one, ‘repeat cycle’.

You can see the list of other satellites within the A-train here:

https://www.nasa.gov/content/a-train

Obviously all of the instruments on board each satellite will pass over a given spot only about once per fortnight…

It is all very complex, clever but the data is very ‘smeary’.

And of course there is that pervading idea from some climate scientists that they ‘know’ the average temperature of the world to hundredths of a degree centigrade!

“It describes how a linear system…” – and here lies a problem in climastrology: many times they use linear-systems methodology that is not applicable to the non-linear system they study. Ex falso, quodlibet (from a falsehood, anything follows).

+100

Jim

Theoretically, the equations (“models”) of physics precisely and exactly represent only what we _believe_ is Reality.

In practice, observations tend to corroborate these models, or not. We are happy if our observations are approximately in agreement with the models, more or less.

For example, Shannon’s sampling theorem states that a band-limited signal may be perfectly reconstructed from its samples if the Nyquist limit is satisfied – in the same sense that a line may be perfectly reconstructed from two distinct points.

But we all know that bias and variance dominate our observations, which explains why draftsmen prefer to use at least three points to determine a line.

And using MinMax temps is a reasonable “sawtooth” basis for mean temp, if that is the only data you have.

Nick was replying to William’s article.

In William’s article is “Figure 1: NOAA USCRN Data for Cordova, AK Nov 11, 2017”. He does a numerical analysis and shows an error of about 0.2C comparing “5 minute sample mean vs. (Tmax+Tmin)/2”.

The point that (Tmax+Tmin)/2 produces an error is adequately demonstrated by William’s numerical analysis. The question is about how often you get a temperature profile like the one shown in Fig. 1. In my neck of the woods, within 25 miles of Lake Ontario, in the winter, anomalous temperature profiles are very common. In fact, the winter temperature profile looks nothing at all like a nice tame sine wave.

So, why does it matter whether the temperature profile looks like a sine wave? The simplest way to think about the Nyquist rate is to ask what waveform could be reconstructed from a data set that does not violate the Nyquist criterion. The answer is that you can reconstruct any waveform whose frequency content lies entirely below half the sampling rate.

What happens when you don’t have a sine wave? The answer is that the waveform contains higher frequencies, and those frequencies can extend all the way to infinity. Thus, you have to set an acceptable error and choose your sample rate based on that.

The daily temperature profile is nothing like a sine wave (where I live) and does not repeat. Nyquist is a red herring. William’s numerical analysis is sufficiently compelling to make the point that (Tmax+Tmin)/2 produces an error.
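As a hedged illustration of this point, here is a synthetic asymmetric daily profile (a made-up waveform, not station data) whose true daily mean is exactly zero by construction, while (Tmax+Tmin)/2 is off by about two degrees:

```python
import numpy as np

# A hypothetical asymmetric daily "temperature" profile: a fundamental
# plus a phase-shifted second harmonic. Pure sinusoids over an integer
# number of cycles have zero mean, so the true daily mean is 0.
t = np.arange(100000) / 100000.0            # one "day", fine grid
T = 10 * np.sin(2 * np.pi * t) + 3 * np.sin(4 * np.pi * t + 1.0)

true_mean = T.mean()                        # ~0 by construction
midrange = (T.max() + T.min()) / 2          # the (Tmax+Tmin)/2 estimate
print(true_mean)   # ~0
print(midrange)    # ~ -2: a bias of roughly two degrees
```

The phase-shifted harmonic skews the waveform so the peak and trough are no longer symmetric about the mean, which is exactly why the midrange is a biased estimator for asymmetric profiles.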

Oops.

Should be.

CommieBob

You said, “The Nyquist rate applies to those and their frequencies go all the way to infinity.” That is why it is necessary to pre-filter the analog signal before sampling. We can’t have a discretized sample with an infinite number of sinusoids.

Exactly so. Of course, when you do the LPF, you lose information and that produces an error.

Thank you commieBob.

That convolution discussion is very confusing.

I’m sure that what the author meant to say is true. But when he says, “In this case h(t) is a series of impulse functions separated by the sampling rate,” it sounds as though he wants us to convolve a signal consisting of a sequence of mutually time-offset impulses with the signal being sampled.

I haven’t spoken with experts on this stuff for a while, so I’m probably rusty. But what I think they said is that in this case the convolution occurs in the frequency domain, not in the time domain.

“… convolution occurs in the frequency domain, not in the time domain.” Actually, convolution is strictly a mathematical operation and attaches no specific physical interpretation to the integration variable.

Or perhaps you are referring to the so-called ‘convolution theorem’ in Fourier analysis, which states that a convolution of two functions in one domain is equal to the product of their Fourier transforms in the other domain.

That is indeed what I’m referring to. But if, as the author seems to say, h(t) is a sequence of periodically occurring impulses, then sampling is the product of h(t) and s(t), not their convolution. So this situation is unlike the typical one, in which h(t) is a system’s impulse response and convolution therefore occurs in the time domain, with multiplication occurring in the frequency domain. In this situation the frequency domain is where the convolution occurs.

As Jim Masterson pointed out below, convolving any function with the unit impulse is mathematically identical to the function itself.

But such ideal unit impulses do not exist in Nature. What we really have is a noisy, short interval in which a signal is present.

As I pointed out above the sampling theorem guarantees, in theory, perfect reconstruction of band-limited sampled signals, if the Nyquist limit is obeyed. In practice it means you can reconstruct a sampled signal with arbitrarily small error.

Any time function can be represented as a summation or integral of unit impulses as follows:

f(t) = ∫ f(τ) δ(t − τ) dτ  (integral from −∞ to +∞)

The function δ(t − τ) is the unit impulse. Of course, you must be dealing with a linear system where the principle of superposition holds.

Jim

Yes, the convolution of a function with the unit impulse is the function itself. But I don’t see how that makes sampling equivalent to convolving with a sequence of impulses

http://nms.csail.mit.edu/spinal/shannonpaper.pdf

Equation 7 is the working version of the reconstruction part of the sampling theorem.

Proof is based on constructing Fourier series coefficients.

Yes, yes, yes. All of Shannon’s papers are sitting in my summer home’s basement, so, although I can’t say I carry around all of his teachings in my head, you needn’t recite elementary signal-processing results. Doing so doesn’t address Mr. Kilty’s apparent belief that the output of a sampling operation is the result of convolving the signal to be sampled with a function h(t) that consists of a sequence of unit impulses.

I appreciate your trying to help, but you seem unable to grasp what the issue is.

The function above is not a convolution integral. It’s an identity. The identity can be used to solve convolution integrals. A convolution integral involves two signals as follows:

f1(t) * f2(t) = ∫ f1(τ) f2(t − τ) dτ  (integral from −∞ to +∞)

Essentially you are taking two signals and multiplying them together. The second signal is flipped end-for-end and multiplied with the first one–one impulse pair at a time. You start at minus infinity for both signals and go all the way to plus infinity. (Some convolution integrals start at zero.)

This is useful for solving the response to a network with a specific input. The network is reduced to a transfer function–f1 above. The input signal is f2. The convolution is the response of the network to the input signal. Using Laplace transforms, convolution is simply multiplying the transforms together. You then convert the Laplace transforms to their time domain equivalent. It takes some effort, but it’s a lot easier than doing everything in the time domain.
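Both statements above – that convolving with the unit impulse is an identity, and that convolving with a network’s impulse response gives the network’s output – can be checked in the discrete domain with numpy (a toy sequence, for illustration only):

```python
import numpy as np

# Convolving any sequence with the unit impulse returns the sequence
# itself -- the identity described above.
x = np.array([1.0, 2.0, 3.0, 4.0])
impulse = np.array([1.0])
print(np.convolve(x, impulse))   # [1. 2. 3. 4.]

# Convolving with a network's impulse response gives the network's
# output; here a 2-tap moving average acts as a crude low-pass filter.
h = np.array([0.5, 0.5])
print(np.convolve(x, h))         # [0.5 1.5 2.5 3.5 2. ]
```

The second result is the "flip, slide, and multiply" operation described above, carried out numerically.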

Jim

Again, I know what convolution is. I have for over half a century. The issue isn’t how convolution works. I know how it works.

The issue is Mr. Kilty’s apparent belief that sampling is equivalent to convolving with an h(t) (i.e., your f_2) that consists of a sequence of impulses that recur at the sampling rate.

I hope and trust that he doesn’t really believe that, but that’s how his post reads.

If you have something to say that’s relevant to that issue, I’m happy to discuss it. But I see no value in responding to further comments that merely recite rudimentary signal theory.

>>

But I see no value in responding to further comments that merely recite rudimentary signal theory.

<<

I don’t mean to insult you with rudimentary theories. But convolution requires linear systems. Weather, and by extension climate, are not linear systems. Therefore, convolution can’t be used for these systems. And as I and many others have said repeatedly, temperature is an intensive thermodynamic property. Intensive thermodynamic properties can’t be averaged to obtain a physically meaningful result. (However, you can average any series of numbers–it just may have no physical significance.)

After further consideration–your initial concern is a valid point.

Jim

Jim

Joe – I don’t get it either. Is it clumsy writing or clumsy understanding (or both)?

Just in the first paragraph of the top post:

“. . . . . . . . . . This sampling rate is part of the response function, h(t). For example the S/H circuit of a typical A/D has small capacitance and small input impedance, and thus has very rapid response to signals, or wide bandwidth if you prefer. It looks like an impulse function. . . . . .”

No – the Sample/Hold response is a rectangle in time. If that is h(t), you convolve a weighted (weighted by the signal samples) delta-train with the rectangle to get the S/H output (to hold still) prior to A/D conversion. But this is NOT what he says h(t) is a few lines lower!

“. . . . . . . . . . In this case h(t) is a series of impulse functions separated by the sampling rate. . . . . . . .”

As suggested, h(t) is a rectangle of width 1/fs. A “series of impulse functions separated by the sampling rate” (a frequency) would be a Dirac delta comb in frequency. That is the function with which the original spectrum is convolved (in frequency) to get the (periodic) spectrum of the sampled signal. But he says h is a function of time h(t)!

“. . . . . . . . . . If s(t) is a slowly varying signal, the convolution produces a nearly periodic output. . . . . . . . “

What convolution? Equation (1)? What domain? He must mean the periodic sampled spectrum. Why “nearly periodic?” Is it the sinc roll-off of the rectangle due to the S/H?

“. . . . . . . . . . In the frequency domain, the Fourier transform of h(t), the transfer function (H(ω)), also is periodic, but its periods are closely spaced. . . . . . . .”

Why does he say “also is periodic?” What is ORIGINALLY periodic in this statement? In what sense are the periods “closely spaced”? It depends on fs.

The rest of the paragraph is correct.

Bernie

“Doing so doesn’t address Mr. Kilty’s apparent belief that the output of a sampling operation is the result of convolving the signal to be sampled”

It isn’t a convolution. Sampling multiplies the signal by the Dirac comb. This becomes convolution in the frequency domain, also with a Dirac comb (one comb transforms into another). As convolving with δ just regenerates the function, convolving with a comb generates an equally spaced set of copies of the spectrum, which then overlap. That overlapping is another way to see aliasing.

Sampling multiplies the signal by the Dirac comb. This becomes convolution in the frequency domain, also with a Dirac comb

Indeed, but only for strictly periodic sampling in the time domain. Otherwise, there’s no Nyquist frequency defined and there can be no periodic spectral folding in the frequency domain, a.k.a aliasing.

Sadly, entire threads have been wasted here on egregious misconceptions.

1sky1,

You said: “Indeed, but only for strictly periodic sampling in the time domain. Otherwise, there’s no Nyquist frequency defined and there can be no periodic spectral folding in the frequency domain, a.k.a aliasing.”

Can you give us a definition of “strictly periodic”? In that, can you explain how sampling works with semiconductor ADCs which operate with jitter? I assume you agree that not 1 clock pulse has ever triggered on planet Earth in the history of clock pulses that didn’t have some quantity of jitter. So how do you resolve your statement? What are the limits of jitter allowed for Nyquist to be defined? It sounds like you are saying the value is exactly 0 in every possible unit of time. If zero is the answer then how do you explain the function of an ADC in the real world? What does an imperfect clock mean for the operation? Does aliasing (spectral folding) get eliminated in reality? What then is the cause of imperfections of sampling when the clock frequency is < 2BW?

Responding to William at Feb 17, 2019 at 7:02 pm:

The suggestion that timing discrepancies due to the substantial ignorance of the actual times of Tmax and Tmin are analogous to the tiny “jitter” in familiar sampling situations (such as digital audio) is a stretch, and then some.

Might I suggest that you consider the practice of “oversampling” (OS) that is pretty much universal in digital audio. This (with “noise shaping” (NS)) allows one to trade off expensive additional amplitude resolution (more bits) for inexpensive additional divisions in time (smaller sampling intervals). [An OSNS CD player (one-bit D/A) can be purchased for less than the cost of even a single 16-bit D/A chip.] Note that digital audio uses something like a 44.1 kHz sampling rate (very low frequency as compared to things like radio signals). We are limited, for practical purposes, in amplitude resolution, but easily work at higher and higher data rates.

The timing jitter and amplitude quantization errors (round-off noise) are very similar effects, handled as random noise. This can be seen by considering that, for a bandlimited signal, any sample might be taken slightly early or slightly late, perhaps with no error after quantization, and at least is unlikely to be much more than the LSB. That is, given an error (noise), the choice of assigning it to timing jitter, or to limited amplitudes, is your choice. For the most part, you CAN arguably assume that the sampling grid is PERFECT and the signal slightly more noisy.
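The equivalence between small timing jitter and additive noise can be sketched numerically: to first order the sample error is the signal’s slope times the timing error, so its RMS value is predictable from the jitter level alone (the signal, rate, and jitter magnitude here are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
f = 1.0          # signal frequency, Hz
fs = 100.0       # nominal sampling rate, Hz
sigma = 1e-3     # RMS timing jitter, seconds (small vs. signal period)

n = np.arange(100000)
t_ideal = n / fs
jitter = rng.normal(0.0, sigma, n.size)

# Error made by sampling at jittered instants instead of ideal ones.
err = np.sin(2 * np.pi * f * (t_ideal + jitter)) - np.sin(2 * np.pi * f * t_ideal)
measured = np.sqrt(np.mean(err ** 2))

# First-order prediction: RMS error = RMS slope x RMS jitter.
# Slope of sin(2*pi*f*t) has RMS value 2*pi*f / sqrt(2).
predicted = 2 * np.pi * f * sigma / np.sqrt(2)
print(measured, predicted)   # the two agree closely
```

Because this error is indistinguishable from amplitude noise of the same RMS level, one can (for small jitter) treat the sampling grid as perfect and the signal as slightly noisier, which is the point being made above.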

The point is that we have an IMMENSE amount of “wiggle room” with respect to precise timing of samples. Even a millisecond of error in weather data recording would be unusual. In contrast, the unknown times of Tmax and Tmin are hours.

Jitter seems a very poor analogy here.

Bernie

Hello Bernie,

Consider taking 5-minute samples and discarding all but the max and min. For this discussion we retain the timing of those samples.

Jitter is when a clock pulse deviates from its ideal time by some amount ∆t. Assuming the clock pulse happens inside of its intended period, what are the limits of ∆t before it is no longer jitter and becomes something else? And what is the something else?

Please be as succinct as possible because I think this should be answerable with brevity.

I’m open to being convinced otherwise. I know the amount of ∆t is large relative to the period as compared to modern ADC applications, but I could not find a limit so I went with it as an analogy. It has garnered a lot of attention. I’m not sure if it is or isn’t critical to anything I have said. So my intention at this point is to have some fun to see what we can learn on this.

My thinking so far is that the noise/error just scales with what I’m calling jitter. It’s larger than we normally encounter, but nothing else in the theory breaks down.

What do you think?

Replying to William Ward at February 17, 2019 at 11:11 pm

An explanatory model, or theory, usefully applies (or breaks down) according to overall circumstances, not to some threshold.

Jitter is small (perhaps 1% of a sampling interval), usually adequately modeled as random noise. For a substantial fraction of the sampling interval or greater, we need your “something-else”.

Jitter is when you show up for a party 5 minutes early or 5 minutes late (different watch settings, traffic, expected indifference). “Something-else” is when you show up the wrong evening!

Taking just two numbers (Tmax and Tmin), when much more and better data (with actual times included) is possible, is definitely a something-else.

Bernie

Hello Bernie,

I hope you are well.

You said: “An explanatory model, or theory, usefully applies (or breaks down) according to overall circumstances, not to some threshold.”

Ok, you are on record here, as you were in our previous discussions, with your opinion about the applicability of jitter. But you offer no empirical evidence, math, rationale, nor cite any source to refute it.

I’m open to being convinced otherwise, but so far my rationale has stood up to the critique. I will continue to use it until someone can show me how it breaks down.

I’m still not sure that it is worth arguing about. 2 samples/day, whether poorly timed or not, don’t provide an accurate daily mean. Most people seem to think the trends are the more important issue. I have shown good evidence of trend error that results as well. This trend error is small to my standards of measurement, but then again so are the claimed warming trends small by my standards. The fact that they (trend error and trends) are of similar magnitude is what is most important in my assessment. No one yet has shown that the trends Paramenter and I have presented are not correct. There have been a lot of claims to that effect, and alternate forms of analysis that contradict it, but no findings of error in our analysis. And that is where we are with it.

The decision to measure max and min daily was probably an intuitive one – not one based upon science or signal analysis. It is a fortunate thing that the spectrum of the temperature signal does allow for sampling at 2 samples/day without completely destructive effects. If there were more energy at 1, 2, and 3 cycles/day, then the result would be different. It turns out to work ok – but not so well that we can claim records to 0.1C or 0.01C or trends of 0.1C/decade.

For those with even a modicum of comprehension of DSP math, the rationale is perfectly clear: there’s a categorical difference between periodic, clock-DRIVEN sampling of the ordinates of a continuous signal and clock-INDEPENDENT recording of phenomenological features (e.g. zero-upcrossings, peaks, troughs) of the signal wave-form. The former may contain some random clock-jitter in practice (e.g., imprecise timing of thermometer readings at WMO synoptic times), whereas the latter is entirely signal-dependent (although some features may be missed in periodic sampling).

There’s simply no way that the highly asymmetric diurnal waveform that produces daily Min/Max readings at very irregular times, clustered near dawn and mid-afternoon, can reasonably be attributed to any clock-jitter of regular twice-daily sampling. Jitter smears, but does not change the average time between samples. Ward starts with a highly atypical diurnal wave in Figure 1 of his original posting and proceeds to insist against all analytic reason that kiwi-fruit are apples.

Thanks 1sky1 – you said: “Jitter smears, but does not change the average time between samples. . . . . . . “ Good point .

On the “In search of the standard day” thread (Feb 16, 2019) I had a related observation (Feb 16 at 6:13 pm) while illustrating the problem of fitting a sine wave to min-max readings that were not 12 hours apart! The illustration is here:

http://electronotes.netfirms.com/StandDay.jpg

Below at 5:41 pm I just minutes ago posted an interesting result of sampling a one-day cycle at two samples a day where aliasing is “undone” by well-known FFT interpolation.

Bernie

Jitter is defined simply as the deviation of an edge (sample) from where it should be.

Nothing more and nothing less.

Total Jitter can be decomposed into bounded (deterministic jitter) and unbounded (random jitter). Bounded jitter can be decomposed into bounded correlated and bounded uncorrelated jitter. Bounded correlated jitter can be decomposed into duty cycle distortion (DCD) and intersymbol interference (ISI). Bounded uncorrelated jitter can be decomposed into periodic jitter and “other” jitter. Other jitter – meaning anything else not described that explains a sample deviating from its correct time.

DCD fits well with our temperature signal max/min issue. Feel free to call it whatever you prefer – it will not change the results. We have sample values that deviate from where they need to be for Nyquist. There is no other mathematical basis for working with digital signals except Nyquist. There is no Phenomenological Sampling Theorem. If you are working with discrete values from an analog signal, then they have to comply with Nyquist.

It does not matter if you get the sample values through an ADC or through a max/min thermometer or a crystal ball. It does not matter what provides the timing to get the digital samples. *It doesn’t matter if we obtain or keep the timing information.*

Here is the reason. Nyquist requires perfect correspondence between the analog and digital domains. We can demonstrate this correspondence by reconstructing back to the analog domain. Nyquist insists upon the perfect timing and perfect values for reconstruction. Nyquist provides the timing by forcing it to the rate inferred by the samples. We provide the values of the samples obtained. Jitter in the conversion in either direction deviates from the “strict periodicity” required for perfect reconstruction. This is seen as quantization error when we compare the sampled signal to the original signal at the perfect sampling times.

If you want to find the actual analog signal you are working with digitally then you simply run the sample values through a DAC at the sample rate inferred by your samples.

Examples: Assume typical air temperature signal and ADC with anti-aliasing filter compatible with 288-samples/day.

1) We sample at 288-samples/day (5-minute samples) with no perceptible jitter. If we run the samples back through a DAC set at 288-samples/day we get the original signal.

2) We sample at 2-samples/day, clocked with no perceptible jitter. We must run the samples back through the DAC at 2-samples/day. The reconstructed signal will differ from the original as a function of the aliasing.

3) We sample at 2-samples/day, clocked with DCD (jitter). The samples happen to correspond to the timing and values of Tmax and Tmin. To reconstruct, Nyquist requires we run this back through the DAC with a clock rate of 2-samples/day. The reconstructed signal will differ from the original as a function of the aliasing *and* as a function of the quantization error resulting from measuring the analog signal at the wrong time compared to what Nyquist required. The reconstruction places the samples at the time location they should have come from originally if the sampling was “strictly periodic.” However, the sample values used in the reconstruction are not what they should have been for that time.

This is an egregious display of illogic developed by blind fixation upon ADC hardware operation. With Min/Max thermometry we are NOT “working with digital signals,” but with phenomenological features of the CONTINUOUS temperature signal. It’s a straightforward matter of waveform analysis (q.v.), no different than registering the elevation of crests and troughs of waves above prevailing sea level or the directed zero-crossings. The number of such features observed per unit time has nothing to do with the discrete sampling frequency and everything to do with the original signal.

If the tortured rationale that claims this to be a DSP problem were ever submitted to any IEEE-refereed publication, it would elicit only head-shaking and laughter.

William Ward at February 19, 2019 at 10:49 pm said:

“. . . . . . . . . . DCD fits well with our temperature signal max/min issue. . . . . . . . . “

It would seem logical, but you don’t know the duty-cycle nor is it constant, so what is gained as opposed to just saying you don’t know the two times? You just postulate one of the times and postulate the wait till the second time. Right?

“ . . . . . . . . . . There is no other mathematical basis for working with digital signals except Nyquist. There is no Phenomenological Sampling Theorem. If you are working with discrete values from an analog signal, then they have to comply with Nyquist. . . . . . . . . . . .”

Of course there are – alternative signal models. For example, I curve-fitted a sinusoid of known frequency to non-uniform min-max. Four equations in four unknowns.

http://electronotes.netfirms.com/StandDay.jpg

“. . . . . . . . . . 3) We sample at 2-samples/day, clocked with DCD (jitter). The samples happen to correspond to the timing and values of Tmax and Tmin. . . . . . . . . . “

You have to describe this (happen to correspond) better. Is the temperature accommodating the physical clocking or the clocking accommodating the physical temperature max-min? How would you expect this to happen? Thanks.

Bernie

Bernie,

You said: “You have to describe this (happen to correspond) better. …”

I think you misunderstand my point. In the example (#3), I’m showing that the max and min values are not special. The ADC samples can hypothetically land on the max and min values, via some clocking jitter. Or they can land on other values similarly far from the ideal “strictly periodic” sample time. All that matters is the time deviation from the ideal, which results in quantization error for that sample.

Nyquist requires clocking to be strictly periodic when we convert from analog and when we convert to digital. Tmax and Tmin are periodic (2-samples/day) but not strictly periodic. But no sample is strictly periodic in reality. Every sample has some timing deviation from the ideal.

What happens when sampling analog at times that deviate from the ideal? (This is jitter by definition). We measure a value that is correct for the moment of sampling, but not correct for when Nyquist is expecting the sample to be taken. If we reconstruct, the DAC uses the correct sample-rate and sample-time (because Nyquist requires it) but with the wrong value. This is quantization error when we view any individual sample.

See this image for an illustration: https://imgur.com/KSgcDxm

In this example, the clock is supposed to trigger a sample at point #1, which corresponds to x=π/3. The correct value of the sample at x=π/3 is 0.866. But what if the clock pulse arrives early? Regardless of the cause, what if it arrives at point #2, which happens to correspond to π/4? The ADC samples and reads the correct value for x=π/4, which is 0.707, but not the correct value for what Nyquist expects (0.866) We have quantization error of 0.866-0.707 for this sample, due to the jitter. If reconstruction is performed, we get error compared to the original signal. The DAC will sample at x=π/3 but with the value from x=π/4. So reconstruction uses 0.707 instead of 0.866. Again, quantization error for that sample.
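The arithmetic in this example is easy to check:

```python
import math

ideal = math.sin(math.pi / 3)   # value expected at the ideal clock tick (x = pi/3)
early = math.sin(math.pi / 4)   # value actually captured by the early tick (x = pi/4)
error = ideal - early           # per-sample error attributed to the jitter

print(round(ideal, 3), round(early, 3), round(error, 3))   # 0.866 0.707 0.159
```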

If we just sampled temperature twice a day (every 12-hours), we would have aliasing due to higher frequency components between 1 and 3-cycles/day, but no jitter related quantization error. If we move the sample times (perhaps such that they line up with where max and min occur – just to make the point), then this increases jitter and now we add jitter related quantization error.

Whether we know the times of these samples doesn’t change the fact that the reconstruction takes place where they were supposed to have happened. Digital audio is a good example. There is no sample timing information included with the sample values – just the overall sample rate required.

If you think some of your methods can be used to reconstruct the signal using max and min, then we have USCRN data you can use. The timing is available for these samples. You also have the 5-minute samples to use as your comparison. To test your results you can simply invert one of the signals and sum them. The closer to a null, the closer the match.

The problem of unknown times corresponding to Tmax and Tmin can be approached by guessing likely times, perhaps 7AM for a min and 4 PM for a max. Assuming these times will do, we still have non-uniform sampling – the times are not 12 hours apart. If they were equally spaced, VERY standard recovery (interpolation) would be achieved with sinc functions. Instead, different interpolation functions (due to Ron Bracewell) are necessary, but work in VERY much the same way the sincs do:

http://electronotes.netfirms.com/BunchedSamples.jpg

After trying a number of approaches with the FFT, I succeeded instead with the continuous-time case as shown here:

http://electronotes.netfirms.com/NonUniform.jpg

The top panel shows one day with Tmin=50 at 7AM and Tmax=80 at 4PM. The interpolated (black) curve goes exactly through both samples. In order to get a better idea as to what an ensemble of days might look like, this day is placed between two other days in the bottom panel. Again, the interpolation is exact. Note that all other days are Tmax=0, Tmin=0 by default (so avoid the ends!).

The daily sinusoidal-like interpolated curve is evident, but the min and max of this black curve are not Tmin or Tmax, but the actual timing of these suggests better guesses for a second run (etc.) of an iterative program.

-Bernie

Moderator: I see I made one mistake by saying “variance reduced in a statistic by the factor 1/(sqrt n)”. It should read (1/n). Also, why does the font switch here and there?

This is just one problem with the temperature databases.

1) Precision – For an example look at the BEST data. They take data from 1880 onward and somehow get precision out to one one-hundredth of a degree. In other words, when they average 50 and 51, they keep the answer of 50.5. This is adding precision that is not available in the real world. Any college professor in chemistry or physics would eat a student’s lunch for not using the rules for significant digits when doing this. If recorded temperatures are in integer values, then every subsequent mathematical operation needs to end up with integer values!

2) The Central Limit Theorem and the uncertainty of the mean have very specific criteria when using them to increase the accuracy of measurements. The measurements must be random, normally distributed, independent, and OF THE SAME THING. You simply cannot take measurements of different things, i.e. temperature at different times, average them, and say you can increase the accuracy and precision because you can divide the errors by sqrt(N). They are different things, and the measurement of one simply cannot affect the accuracy of the other.

3) Temperature is a continuous function. It is not a discrete function with only certain allowed values. Consequently, one must have sufficient sampling in order to accurately recreate the continuous function. As the author describes, what we have now is not something that accurately describes what the actual temperature function does.

Jim Gorman
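The 1/sqrt(N) reduction at issue in point (2) applies to repeated measurements of a single quantity. A quick simulation (the true value 20.0 and noise level 0.5 are arbitrary) shows the mechanics for that case; whether daily temperatures count as measurements “of the same thing” is the disputed premise, which the simulation does not settle:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 0.5, 100              # per-reading noise and readings per trial

# 10,000 trials, each averaging n noisy readings of the same true value (20.0).
means = rng.normal(20.0, sigma, size=(10_000, n)).mean(axis=1)

print(means.std())               # close to sigma / sqrt(n) = 0.05
```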

+1

Excellent points.

Thank you Jim. I agree with Clyde, and will raise him one. +2 for your comments.

Jim, ++++10000. From an industrial chemist.

Macha

Its people like you who cause ‘Thumb Up’ inflation! 🙂

The best way to resolve the discussion is with a simulation.

You can create a made-up temperature history spanning a 100-year period, knowing the continuous function of each day as if you measured once every 5 seconds. Against this set of data you can apply whatever sampling methods you choose and evaluate the usefulness and reliability of each.

Hello Steve O,

I did some of that in my paper:

https://wattsupwiththat.com/2019/01/14/a-condensed-version-of-a-paper-entitled-violating-nyquist-another-source-of-significant-error-in-the-instrumental-temperature-record/

Using USCRN data I was able to do it for up to 12 years using 5-minute sample data and compare it to the max/min data method. I presented the trend errors in a table at the end.

Here are some charts not presented in the paper, showing yearly offset and long term linear trend errors. Note, these graphs were never intended for publication so some labeling might be below my normal standards.

https://imgur.com/xA4hGSZ

https://imgur.com/cqCCzC1

https://imgur.com/IC7239t

https://imgur.com/SaGIgKL

What do you think?

Leo Smith sums things up pretty well IMO:

“What we have really is a flimsy structure of linear equations supported by savage extrapolation of inadequate and often false data, under constant revision, that purports to represent a complex non linear system for which the analysis is incomputable, and whose starting point cannot be established by empirical data anyway, that in the end can’t even be forced to fit the clumsy and inadequate data we do have”

The time aspect is missing from this analysis.

Even though any specific station’s day value can (and will) be way off, it is hard to see how this can create any lasting bias over longer time periods. The error will be random and stay within boundaries and hence not affect the overall long-term trend. Moreover, any possible bias should cancel out between multiple stations.

There is a much bigger problem how multiple station data records are aggregated to faithfully represent a larger area especially when locations and area coverage constantly changes. There you have the real “sampling error”.

MrZ

You said, “The error will be random and stay within boundaries and hence not affect the overall long term trend.”

Not strictly random. Most of the Tmins will be at night and most of the Tmaxes will be in the daytime. That is, over a 24-hour day, the bulk of the lows will be during the nominal 12-hour night and the bulk of the highs will be when the sun is shining.

Hi Clyde,

True, I can not object to that but how could that fact affect any average bias over time?

Are you saying this in the context of sampling theory or of TOBS adjustments? I am asking because the mercury MIN/MAX is an almost perfect sample for those two points (the article discusses how relevant those are) but very much depends on WHEN you read and reset.

Your error is your assumption that the error is truly random.

That has not been shown to be the case.

Indeed there is evidence that it is not the case.

OK MarkW,

If so, please take me through how that creates a consistent and growing bias across stations and time. My main point was simply that it does NOT affect the overall trend.

The next step, i.e. station aggregation, DOES affect those trends.

It may or may not create a trend. That’s the problem, the data is so bad that there is no way to tell for sure.

The reality is that it is absurd to claim that we can measure the temperature of the planet to within 0.01 °C today. It is several orders of magnitude more absurd to claim we can do it with the records from 1850.

Hi Mark!

Agree 0.001 precision is a joke.

The main error, however, does not originate from sampling; the aggregation of stations is the main source of error.

MrZ,

I provide you 4 links here (also provided elsewhere in other replies, but you seem to have not seen them). Using quality data from USCRN, I plot ~12 years of data using 5-minute samples. Yearly averages are provided along with the corresponding linear trend. From these samples, the daily max and min values are obtained. This is essentially the same as we would get with a well calibrated max/min thermometer. Yearly averages and trends are plotted for the max/min values. You can see the differences for yourself. Note: TOBS plays no role in this as the max and min values are determined by bounds of midnight to 11:55PM in a local calendar day.

https://imgur.com/xA4hGSZ

https://imgur.com/cqCCzC1

https://imgur.com/IC7239t

https://imgur.com/SaGIgKL

Hi William!

Thanks for this. You are right, it’s hard to check all the links when you join the thread afterwards.

Look at the trends in your graphs. They basically state what I said. Any shift in trend is coincidental because the first or last years differ; expand by a few years and the trend difference is gone.

MrZ (MrS),

You said: “Look at the trends in your graphs. They basically state what I said. Any shift in trend is coincidental because the first or last years differ; expand by a few years and the trend difference is gone.”

My reply: I don’t agree with your assessment. You would need to find a way to demonstrate that.

William

“Note: TOBS plays no role in this as the max and min values are determined by bounds of midnight to 11:55PM in a local calendar day.”

TOBS plays a role. There is no special exemption for those arbitrary times. Midnight is a human convention, not anything physical. The bias created by double counting happens at any reading time, and causes an offset.

I see you have Boulder, CO there. Again in this post I showed this plot of annually smoothed Boulder data, from 2009 to 2011. It shows that every reset time creates an offset; I don’t have midnight, but 11 pm has an offset of about 0.2°C, with min/max being lower than integrated. That pretty much agrees with your plot.

Since it is a near constant offset, you wouldn’t expect it to have much effect on trend, and your plots show that. The small changes seen are statistically quite insignificant, and so entirely to be expected when you change the method of calculation.

Hello Nick,

You said: “TOBS plays a role. There is no special exemption for those arbitrary times. …”

I would like to better understand each other on this to see what we can learn (or at least what I can learn). I read the post you recommended and to the extent I fully digested it I have some thoughts and questions. First, let me detail what I did. For the record, Paramenter did some of the work on trends and this data was presented in my paper.

** All 5-minute data for 12 years was used, without intermediate averaging, and a linear trend was calculated.

** For the max/min method (MMM) [I’m tired of writing (Tmax+Tmin)/2! 🙂 ] we took NOAA’s stated monthly avg and fed 12 years of monthly avg into the calculation to determine a linear trend.

** The alternate method (less precise) that was used was: for the 5-minute data, each year was integrated and yearly mean values were used to determine the trend. For the MMM, 12 monthly averages provided by NOAA were averaged to a yearly data point and these were used to calculate a linear trend.

With the alternate method, we don’t see a consistent delta between the yearly datapoints and there is a trend difference for most stations between the methods.

Why are we seeing different results through our different methods (ours and yours)? I thought it was important to you to use monthly averages, but in your post on TOBS you seem to be using daily deltas. I must be misunderstanding something because, for Boulder CO, when I look at the day to day mean error it varies daily by quite a bit:

https://imgur.com/QyfAonp

NOAA uses midnight to 11:55 PM and we have data sampled automatically, so this is why I said there are no TOBS issues. But you are saying (I think) that if we (or NOAA) change their definition of a day then the results change. Yes? Also, I understand you are defining a notional reset time and looking back to find the max and min for the 24 hours prior, correct? Am I correct that this doesn’t present the exact same set of problems we get with a max/min thermometer? With a max/min thermometer, if reset when cooling and a cold front comes through, then the next day we may actually read the previous day’s max – but this can’t happen when using the NOAA samples, correct?

Also, to be clear, if we integrate 5-minute samples then there really is no possible TOBS, correct? This was also factoring into my statement.

Over the next day or so, I will try to do the following to see what results: from the 5-minute samples select 2-samples spaced 12 hours apart. Repeat this for a number of different starting times. Ex: 12-12, 3-3, 6-6, 9-9. It would be very similar to what you did except the point would be to capture samples periodically vs. max and min.

I’m curious about what this will show – have some ideas but will let the results guide my analysis.

Thanks for engaging Nick. Good stuff.
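The subsampling experiment William describes above (pairs of samples 12 hours apart at different start times) can be sketched against a synthetic signal. The harmonic content below is an assumption for illustration, not USCRN data:

```python
import numpy as np

# Hypothetical signal, t in days: diurnal term + a 2-cycles/day harmonic.
def temp(t):
    return 15 + 10 * np.sin(2 * np.pi * t) + 3 * np.sin(4 * np.pi * t + 1)

true_mean = 15.0
for start_hr in (0, 3, 6, 9):                      # the 12-12, 3-3, 6-6, 9-9 pairs
    t0 = start_hr / 24
    two_sample_mean = (temp(t0) + temp(t0 + 0.5)) / 2
    print(start_hr, round(two_sample_mean - true_mean, 3))
```

For samples 12 hours apart the diurnal term cancels exactly, so the bias in the two-sample mean is entirely the aliased second harmonic, 3·sin(4π·t0 + 1), and it shifts with the start time, which is the phase-dependence of the aliasing.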

The problem arises because the temperatures, especially up to the not too distant past, are recorded in integer values. That means the error is +/- 0.5 degrees. You cannot reduce the error contained in each reading because you are measuring different things each time you make a measurement. That means each reading stands alone, with that error. Averaging will not reduce it, i.e. the average will have that error also.

What does that mean? So the most accuracy you can claim is +/- 0.5 degrees regardless of how you apply statistical methods. So, you tell me how come climate scientists and the media run around whining about 0.001 degree changes. That is simply noise and they won’t admit it. Too many programmers and mathematicians who have no idea about the real world. Is it any wonder we have no reliable, physical experiments validating any of the claims?

Hi Jim!

Try a long series of 1 decimal measurements and average them.

Then try the same series rounded without the decimal. You will be surprised. The longer the series, the closer they are.

I do agree 0.001 precision is ridiculous though.

I have done this. Look up the problem with rounding numbers using the traditional method. You will find a bias. Why? Because there are 9 chances to have a number that you use to round. Four of them, 1, 2, 3, 4 go down. Five of them go up, 5, 6, 7, 8, 9. You get an automatic bias toward higher numbers. This is a very well known problem. Most computer languages use this when you utilize a standard function.

The digit zero is a special case. The only way to get it is to have all even integer numbers or numbers with 0 as the decimal value.
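The half-up bias described above is easy to demonstrate. One caveat: several languages default to round-half-to-even (“banker’s rounding”) rather than half-up; Python’s built-in round() is one of them, and it removes the bias:

```python
import math

vals = [k / 10 for k in range(100)]             # 0.0, 0.1, ..., 9.9; true mean = 4.95

half_up = [math.floor(v + 0.5) for v in vals]   # traditional round-half-up
bankers = [round(v) for v in vals]              # Python: round-half-to-even

print(sum(vals) / 100, sum(half_up) / 100, sum(bankers) / 100)
```

Half-up rounding pushes every x.5 tie upward and biases the mean from 4.95 to 5.00; round-half-to-even sends half of the ties down, leaving the mean at 4.95.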

When you tried it, did the result get closer or deviate more as the number of records grew?

“Averaging will not reduce it, i.e. the average will have that error also.”

Just not true, as demonstrated with real temperature data here. But you can do it yourself. Just take any set of data; get some monthly averages. Round it to 1° or 2° or whatever. Take the averages again. The averages will agree with the unrounded versions to a lot better than 1°.

You’re missing the point. The averages are not the point, the errors are! If the original values are integer values, you must round averages to integer values. Your average simply can not contain more precision than the original measurements. That means your average, at best has an error range of +/- 0.5 degrees. It does not matter how many data points that have an error of +/- 0.5 degrees you average together, the end result will not have gained any less error!

Jim,

Please try the simple example above! You appear unconvinced until then…

“Your average simply can not contain more precision than the original measurements”

Much dogmatism, as often expressed here, but no authority quoted. And as I said, it just isn’t true, as is easily demonstrated.

In the link, I took daily readings for Melbourne, given to 1 decimal. So you’d say the monthly averages could only be accurate to 0.1°C. I calculated the 13 monthly averages (Mar-Mar):

22.72 19.24 17.13 14.43 13.29 13.85 17.26 24.33 22.73 27.45 25.98 25.1 24.86

Then I rounded the daily data to 1°C. So, you’d say, the average can only be accurate to 1°C. Then I average that:

22.77 19.27 17.13 14.37 13.29 13.84 17.33 24.35 22.67 27.48 26 25.17 24.84

The differences were:

0.05 0.03 0 -0.06 0 -0.01 0.08 0.03 -0.06 0.03 0.02 0.08 -0.02

That’s a lot better than ±0.5°C accuracy. It’s very close to the theoretical, which is ±0.05.

Sorry, those were Melbourne daily max temperatures
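Nick's experiment is easy to replicate in miniature with synthetic data (the values below are random draws, not the Melbourne record):

```python
import numpy as np

rng = np.random.default_rng(1)
daily = 20 + 5 * rng.standard_normal(30)   # one synthetic month of daily maxima
daily = np.round(daily, 1)                 # "recorded to 0.1 degC"

coarse = np.round(daily)                   # re-rounded to whole degrees

diff = coarse.mean() - daily.mean()
print(daily.mean(), coarse.mean(), diff)
```

Each coarse reading is off by up to 0.5 °C, but the rounding errors are roughly uniform on (−0.5, 0.5), so the monthly means typically differ by only a few hundredths (about 0.29/√30 ≈ 0.05 °C on average), which is the order of the differences Nick reports.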

He is not missing the point. You are…

Nick,

You are not doing it correctly. As many already said, your underlying accuracy is not what you assume in your tests.

What you need to do is to generate random error of +/- 0.5 deg, add it to your accurate data, then round it out to integer, and then average it. Can you show that this will result in no error?

MrZ –> If you’ll notice I said integer values, not numbers that have decimal precision already.

Udar,

“What you need to do is to generate random error of +/- 0.5 deg, add it to your accurate data, then round it out to integer, and then average it.”

I did almost that here, and with the whole data set. Noise was normally distributed, ±1.0°C, applied to monthly averages. It made almost no difference to the global average. That did not include rounding, but there is no reason to expect that random noise and rounding would have an effect together when they separately have none.

Okay Nick, I can see your usual flimflam.

All of the differences you calculated need to have the +/- 0.5 degC error added.

So really all we have are 1-degree ranges. Basically useless for determining changes of 0.1 deg C.

And that’s a real value.

The real values are magnitude +/- uncertainty.

Nick, I’ll use the example in the link you gave.

If you take March’s data, read to 0.1 degC (+/- 0.05 degC uncertainty), the average is 24.86 +/- 0.05 degC, using the 0.1 degC as reference

If you round the data (using a standard spreadsheet round) and average that you get 24.90 +/- 0.05 degC

If you add an offset of 0.3 degC, simulating an offset (either fixed or some slow varying one) and round, the average is 25.26 +/- 0.05 degC.

Okay. But now consider the differences between the offset and non-offset cases in terms of the system’s stated uncertainty for the rounded data (1 degree, so +/- 0.5 degC)

25.3 +/- 0.5 degC,

where the “real” value in this case would be 24.9. Even though we see there is a drift, which in actuality would be caught by characterisation and calibration, there is still no significant difference, primarily as the system is designed to allow for a certain drift, having a larger uncertainty.

However if you assume your averaging creates a lower uncertainty then you may believe the 25.3 degC because you assume that the errors are random. You take the system to somehow have lower uncertainty.

This is the point. The uncertainty in a measurement system includes variations from normal distributions. It relates to reliability and insensitivity of results to small variations and drifts below the stated uncertainty.

You, however, think differently and seem to believe that the Law of Large Numbers is a catch-all. You are making the mistake of reading too much into the data and only considering the magnitudes, not the magnitudes + error.

Nick Stokes –> “Much dogmatism, as often expressed here, but no authority quoted. And as I said, it just isn’t true, as is easily demonstrated. ”

You want authority, here are three sources.

http://www.occonline.occ.cccd.edu/online/ahellman/Sig%20Fig,%20%20Rounding%20&%20Scientific%20Notation%20%20WS.pdf

*• Use significant figures to deal with uncertainty in numbers and calculations.*

You’ll notice the word “uncertainty”, it is important in measurements.

https://engineering.purdue.edu/~asm215/topics/calcrule.html

*Any properly recorded measurement can be said to have a maximum uncertainty or error of plus or minus one-half its last digit. Significant figures give an indication of the precision attained when taking measurements.*

Again, please note the reference to uncertainty.

https://physics.nist.gov/cuu/pdf/sp811.pdf

*B.7.2 Rounding converted numerical values of quantities*

*The use of the factors given in Secs. B.8 and B.9 to convert values of quantities was demonstrated in Sec. B.3. In most cases the product of the unconverted numerical value and the factor will be a numerical value with a number of digits that exceeds the number of significant digits (see Sec. 7.9) of the unconverted numerical value. **Proper conversion procedure requires rounding this converted numerical value to the number of significant digits that is consistent with the maximum possible rounding error of the unconverted numerical value.***

*Example: To express the value l = 36 ft in meters, use the factor 3.048 E−01 from Sec. B.8 or Sec. B.9 and write l = 36 ft × 0.3048 m/ft = 10.9728 m = 11.0 m.*

*The final result, l = 11.0 m, is based on the following reasoning: The numerical value “36” has two significant digits, and thus a relative maximum possible rounding error (abbreviated RE in this Guide for simplicity) of ± 0.5/36 = ± 1.4 %, because it could have resulted from rounding the number 35.5, 36.5, or any number between 35.5 and 36.5. To be consistent with this RE, the converted numerical value “10.9728” is rounded to 11.0 or three significant digits because the number 11.0 has an RE of ± 0.05/11.0 = ± 0.45 %. Although this ± 0.45 % RE is one-third of the ± 1.4 % RE of the unconverted numerical value “36,” if the converted numerical value “10.9728” had been rounded to 11 or two significant digits, information contained in the unconverted numerical value “36” would have been lost. This is because the RE of the numerical value “11” is ± 0.5/11 = ± 4.5 %, which is three times the ± 1.4 % RE of the unconverted numerical value “36.” This example therefore shows that when selecting the number of digits to retain in the numerical value of a converted quantity, one must often choose between discarding information or providing unwarranted information. Consideration of the end use of the converted value can often help one decide which choice to make.*

(bold by me)
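The NIST conversion example can be checked mechanically. One generic way to round to a given number of significant digits is printf-style %g formatting (a sketch, not NIST's own procedure):

```python
def round_sig(x, n):
    """Round x to n significant digits via %g formatting."""
    return float(f"{x:.{n}g}")

metres = 36 * 0.3048                   # the raw converted value, ~10.9728 m
print(metres, round_sig(metres, 3))    # rounds to 11.0 at three significant digits
```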

Please note, the temptation to make some snide comments almost overcame me, but I’ll be nice since you are trying. Chemists, physicists, machinists, and surveyors all have training in the physical world and understand that any measurement is inaccurate for any number of reasons. Using significant digits has been refined over the years both to help eliminate errors and to ensure that measurements are repeatable over future experiments or operations.

One simply can not add additional digits of precision by using averaging. By doing so you are, in essence, adding precision to measurements that was not there to start with and you are misleading others into thinking you used better measurement devices than you really had.

Nick Stokes –> Let’s use some numbers to illustrate what can happen. The temps I am using are made up, but similar to what you used.

Day 1 22.7 19.2 17.1 14.4 13.2 all +/- 0.05

Day 2 21.3 22.2 16.9 13.2 15.7 all +/- 0.05

Please note the following numbers include the uncertainty and are taken out to 4 significant digits

Max temps by adding 0.05

Day 1 22.75 19.25 17.15 14.45 13.25

Day 2 21.35 22.25 16.95 13.25 15.75

Min temps by subtracting 0.05

Day 1 22.65 19.15 17.05 14.35 13.15

Day 2 21.25 22.15 16.85 13.15 15.65

Average of Max Temps

22.05 20.75 17.05 13.85 14.50

Average of Min Temps

21.95 20.65 16.95 13.75 14.40

Average of Recorded with proper significant digits

22.0 20.7 17.0 13.8 14.4 all +/- 0.05

This all looks normal except for the last one. It does not lie between the averages of the error bounds; it lies exactly at the lower boundary. This does occur, and is a well-known artifact of rounding. I have done this with a lot of temps and some will match the upper figure and some will match the lower. It doesn’t occur often, but it does show why it is necessary to deal with uncertainty in a proper and consistent manner. The NIST document I referenced above discusses this in terms of percentage error.

Let me finish up my example. As you can see, most of the time the average (to the correct number of significant digits) will appear to be within the range of error, i.e. +/- 0.05 degrees. However, the range of error is not reduced. This is because you don’t know for sure what the real value to the one one-hundredth place actually was. Any value you choose between -0.05 and +0.05 is as possible as any other value between these numbers. This is where measurement uncertainty arises. As a consequence, it propagates throughout any calculations you make. The end result is that you cannot claim that a temperature of 0.01 is different from 0.09. The probability that either value (or any other value in the range) is the true one is the same.
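Jim's worst-case bookkeeping amounts to interval arithmetic: carry each reading as a [low, high] pair and average the bounds. The worst-case half-width of the mean stays at ±0.05 and does not shrink with N; whether the worst-case or the statistical model is the right one is exactly the dispute above:

```python
day1 = [22.7, 19.2, 17.1, 14.4, 13.2]       # Day 1 readings, each +/- 0.05
e = 0.05

lo = sum(v - e for v in day1) / len(day1)   # average of the minimum temps
hi = sum(v + e for v in day1) / len(day1)   # average of the maximum temps
mid = sum(day1) / len(day1)                 # average of the recorded values

print(lo, mid, hi, (hi - lo) / 2)           # half-width of the mean is still 0.05
```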

Jim and Mickey

I get the feeling that mathematicians are so used to dealing with exact numbers that they get careless when dealing with real-world measurements that inherently have uncertainty associated with them.

Clyde –> +1.049

Clyde –> Actually, I have been there. When deep into coding and trying to build a program to do something it is easy to forget you are dealing with measurements. Truncating, rounding, multiplying, dividing, etc., i.e. building the software becomes your focus and it’s easy to lose sight of what the actual end result is. When you declare a variable to be a certain size, you can forget to check the number of decimal places in each calculation, just like using a calculator and writing down all the digits of a number multiplied by pi.

Jim,

“You want authority, here are three sources.”

If you want to cite authority, it isn’t enough to give a link. You need to quote what they say that supports your claim, which was:

“Your average simply can not contain more precision than the original measurements.”

In fact, none of your links says anything at all about averages. The nearest is the Purdue doc, which says that a sum cannot be more precise than its vaguest component. True. But an average is not a sum; it is a sum divided by N. And that divides the error by N.

“Max temps by adding 0.05”

That is calculating the uncertainty by assuming that every deviation is the same, and in the same direction. That is exceedingly improbable. The thing about averaging is that they go both ways, and cancel to a large extent. If they all go one way, that is not an expression of uncertainty. It is an error in calibration. You can’t attach probabilities to that. And in fact, for this purpose it doesn’t matter. Anomalies subtract the historic mean, which includes the same predictable error.

That is exactly what I am saying above, but you cannot reach 0.001.

How are you doing Nick, your feedback is really valuable to me.

Nick and MrZ

You are assuming that the measurement of the data contains an intrinsic (zero) error signal and that averaging (which adds pseudo resolution) will improve accuracy. Also you are comparing a more accurate measurement (0.1 degC) to less (1 degC) which is itself just digitised (0.1 degC) data.

You are missing the point metrologically. Does the intrinsic measurement have sufficient accuracy to allow the sample distribution to be determined to a lower uncertainty?

No.

If I have 100 measurements stated to have an uncertainty of 1 degree C then this is the baseline uncertainty. All uncertainties will be greater. The ONLY way to improve it is to do what Rutherford has said: Build a better experiment.

You are performing a hypothetical analysis.

To further explain. You have an average of decimal points as 22.72. By rights it should be 22.72 +/- 0.05. If this was taken with a stated uncertainty of +/- 0.5 it would be 22.7 +/- 0.5. But you didn’t. You took the second as 22.77. Where does the 0.07 come from?

You “invented” it.

You created a better reading by assuming the underlying reading captured an intrinsic value. This is what the CLT does when you have i.i.d. It removes the individual samples and maps them on to sets of distributions.

In reality, one reading is 22.72 +/- 0.05 deg C as a minimum (real uncertainty would be higher as the errors can chain)

The other reading if the original system was reading to 1 degree is 22.7 +/- 0.5 degC as a minimum. One reading system has higher uncertainty. The first could be used to calibrate the second.

“You are assuming that the measurement of the data contains an intrinsic (zero) error signal and that averaging (which adds pseudo resolution) will improve accuracy.”

I’m not assuming it. I’m showing it.

“You took the second as 22.77. Where does the 0.07 come from?”

It’s just the result when I averaged the rounded (to 1C) data.
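Nick’s claim here is easy to check numerically. A minimal Python sketch (synthetic readings, not real station data — the range and count are invented for illustration):

```python
import random

random.seed(42)

# Hypothetical high-resolution "true" readings, uniform over 20-30 C.
true_vals = [20 + 10 * random.random() for _ in range(10000)]
true_mean = sum(true_vals) / len(true_vals)

# Round each reading to 1 C resolution, then average the rounded values.
rounded_mean = sum(round(v) for v in true_vals) / len(true_vals)

# The individual rounding errors (up to 0.5 C each) largely cancel:
print(abs(true_mean - rounded_mean))
```

The residual difference shrinks roughly as 1/sqrt(N) — the cancellation being argued about. It does not address MrZ’s separate point about calibration (systematic) error, which averaging cannot remove.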

Nick

“I’m not assuming it. I’m showing it.”

The data only shows it because you took a higher resolution value, artificially made it lower resolution (higher uncertainty), and then showed that the average of the lower resolution matches the higher resolution. So what?

And by resolution here I mean it in terms of the uncertainty of a real measurement.

In a real system with higher uncertainty, especially one with levels an order above the variation you are trying to measure, you know much less if anything about the shape of the sample distribution with respect to the variation you are trying to show. Most measurement systems skew and have bimodal and other strange and unique shapes. The only way you catch it is by recharacterisation and frequent recalibration. Even then you apply more tolerance.

A simple analogy is taking a picture with a 4K camera and a 256-bit CCD, seeing something in the centre of the image (actually a cat) but only having the 4K show you that it’s a cat. You assume the blob on the 256-bit CCD is a cat and then continue to use this system to track the “cat’s” movements. You make proclamations about cat movements and recommend legislation to curtail them.

Only after recalibration and characterisation with the 4K again do you find that after a few measurements the system was following any small animal. You never checked because you ran off with the theory.

Bottom line: You cannot magically extract higher resolution/lower uncertainty from a high uncertainty series of measurements just by assuming that the underlying data is distributed a certain way. You have to show it.

It’s the basis of philosophy of the scientific method. You build the tools for the job, otherwise all you have is hypothetical.

As Jim stated, the values are less important than the range of error (uncertainties)

“In a real system”

This is a real system, and has the real distribution of temperatures that we are talking about.

Much ado about nothing?

Once the data has been homogenized, the sampling problem is largely irrelevant because you have removed the independence of the data points and the central limit theorem no longer applies.

Each time you homogenize already homogenized data the independence of the data points further degrades, destroying the value of the data for statistical analysis.

Simply put, rewriting temperature history to try and “fix” the data is a fool’s errand and a crime against data, because you can no longer trust any statistical results.

In data sciences you NEVER “correct” the underlying data as that is your single version of the truth. Yes it contains errors, but these errors are still part of the truth. “Fixing” the data obscures the truth. It does not establish truth.

The place to make your “corrections” is in the summary data. AND if you have not messed around trying to fix the raw data the “correction” is relatively trivial, because the CLT guarantees convergence.

In other words, if you don’t screw around changing the original data, the CLT provides you pretty good assurances that you can calculate a representative mean and variance from the raw data using relatively trivial random sampling, without any need for error correction.

Screw around “fixing” the raw data and you have no assurance that your mean and variance is representative.

Looking at the article I see a few things some I agree with some not.

1. The sampling method he describes is basically Pulse Code Modulation (PCM). Any periodic component faster than half the sampling rate will be aliased and folded into the resultant output, and very short-term changes that occur between samples are lost. This is demonstrated by the low of -40 being missed in the hourly sampling. All of this is understandable.
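The missed-low point can be illustrated with a toy Python example (a synthetic day with an invented short cold spike; the diurnal shape and timings are arbitrary):

```python
import math

# A synthetic day: smooth diurnal cycle plus an invented cold spike
# lasting ~10 minutes around 03:05-03:15.
def temp(minute):
    t = -10 - 15 * math.cos(2 * math.pi * minute / 1440)
    if 185 <= minute <= 195:
        t -= 20                          # the brief spike
    return t

continuous_min = min(temp(m) for m in range(1440))     # 1-minute grid
hourly_min = min(temp(m) for m in range(0, 1440, 60))  # top-of-hour samples

# The hourly record never sees the ~-40 spike between its samples.
print(continuous_min, hourly_min)
```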

2. The Min and Max results are nothing more than signal processing performed on the continuous temperature dataset. Because it is a processed signal, the only Nyquist requirement that must be met is for the sampling rate to be fast enough to detect the high and low peaks. After that, Nyquist does not apply. By doing this signal processing the values have lost all of the intervening data, and we no longer have a pure sampled system.

3. The midpoint between min and max has very little analytical value, since it carries no information about the amount of time spent at a particular temperature. However, since historically we have characterized the atmosphere in no other way, it is difficult to change to something else at this point; but new methodology should be applied to make these measurements more accurate from now on. There is no reason to keep reporting temps this way any more, except for comparison.

4. To me the real issue is that in this sampled system much data is lost through antiquated signal processing methods, poor siting leads to significant noise and bias, and there is a perceived need for a single number to measure temperature regardless of the meaning of the data behind it. To think we want to change the world on this measurement methodology……

Hi RPercifield,

Regarding your #2: A minor but very important clarification to make sure the concepts are well understood. We don’t need to sample such that we actually catch the exact peaks. If we sample according to Nyquist, then we capture all of the information available about the signal. Using DSP algorithms the exact peak values can be determined from the Nyquist compliant samples, even if those peak values do not directly avail themselves in a particular sample.
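A sketch of the peak-recovery claim, using Whittaker-Shannon (sinc) interpolation on a synthetic tone sampled well above its Nyquist rate. The tone, phase, and rates here are invented purely for illustration:

```python
import math

fs = 10.0                                # sample rate, Hz
n = 200                                  # 20 seconds of samples
# A 1 Hz tone with a phase chosen so that no sample lands on a peak.
samples = [math.sin(2 * math.pi * 1.0 * (k / fs) + 0.1) for k in range(n)]

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    """Whittaker-Shannon interpolation from the Nyquist-compliant samples."""
    return sum(s * sinc(t * fs - k) for k, s in enumerate(samples))

# Search a fine grid over one cycle, well away from the record edges.
peak = max(reconstruct(5 + 0.001 * i) for i in range(1000))

# The largest raw sample misses the true peak (1.0); the DSP
# reconstruction recovers it to within truncation error.
print(max(samples), peak)
```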

Furthermore, Nyquist never stops applying, if your goal is to operate digitally on the original analog signal. If you violate Nyquist, for example by just discarding all samples except your max and min values then any operation on those max and min values no longer (accurately) relates to the original signal.

The beauty of Nyquist is that it gives us guard rails – we can stay on the road while enjoying the benefits of the digital domain.

I agree with your #3 and #4.

I’ll address #1 with Kevin in a separate reply – as it relates to the missing -40 value being lost.

Applies to any function, not just time. Essentially you are describing a moving average filter of width 1, a tautology.

Oops misplaced reply to Masters comment February 15, 2019 at 9:14 am

>>

. . . Masters comment . . . .

<<

The name is Masterson–but since we’re talking about signals, I restricted it to time functions. (My text on signal analysis does the same.) Convolution has many uses in applied mathematics, but for electrical engineers, we usually restrict its use to signal processing–time domain, frequency domain, real, complex, Fourier transforms, Laplace transforms, etc.

Jim

Kevin Kilty-

Thank you for a most interesting post. The recent discussions here at WUWT on the use of Tmin and Tmax have been very enlightening.

Two things stand out to me: (1) the daily temperature distribution is almost never symmetrical, and (2) the distribution does not have the same form day-to-day. Therefore, how to use daily Tmin and Tmax for temperature trends needs to be very carefully thought out. Which it hasn’t been.

When I started my career in applied research, a wise statistician told me: “Too many engineers think statistics is a black box into which you can pour bad data and crank out good answers.” Apparently many climatologists think the same.

I’m not sure why we average the daily highs and lows together in the first place. Don’t we just end up with less information rather than more? It completely obscures the temperature range experienced during any given period. I think if we simply averaged the highs over a given period and similarly averaged the lows, but kept them separated, we would get information that’s far more useful.

Hoyt

You said, “I think if we simply averaged the highs over a given period and similarly averaged the lows, but kept them separated, we would get information that’s far more useful.” Actually, I don’t see the advantage of averaging them. They carry more information as the raw, daily time-series. Averaging behaves as a low-pass filter and reduces the information content. The best we can say is what the annual average was and compare it to the annual averages for previous years. The annual standard deviations might actually be more interesting because they would suggest if the temperatures are getting more extreme or not.

The problem is that whether we average 30 highs and 30 lows and take the mid-range, followed by taking the mean of the 12 mid-range values, or take the average of 365 highs and 365 lows and take the mid-range value, we still end up with a non-parametric measure of annual temperatures that has been shown to NOT be statistically ‘efficient’ or robust. The first method allows us to assign a standard deviation to the monthly mid-range values, but with only 12 samples it is of questionable statistical significance, let alone climatological utility.
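The inefficiency of the mid-range as a location estimator is easy to demonstrate on synthetic data (standard-normal deviates here purely for illustration; daily temperatures are of course not normal):

```python
import random, statistics

random.seed(0)

# Sampling variability of two centre estimators over many synthetic
# 30-value "months" of standard-normal data (illustration only).
means, midranges = [], []
for _ in range(2000):
    xs = [random.gauss(0, 1) for _ in range(30)]
    means.append(sum(xs) / len(xs))
    midranges.append((max(xs) + min(xs)) / 2)

# The mid-range scatters far more than the mean: it is not an
# "efficient" estimator for this kind of distribution.
print(statistics.stdev(means), statistics.stdev(midranges))
```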

I’m reminded of the story of Van Allen (of Van Allen Radiation Belt fame) who analyzed the behavior of roulette wheels in Las Vegas. He discovered an eccentricity in them that allowed him to make a small killing before he was asked to leave. I suspect that if he had been working with data as poorly fit for the task as is our meteorological data, he would never have been able to do what he did.

Clyde, I think I was typing too soon. A daily time series for both highs and lows was what I was thinking too. Averaging seems to reduce things to their least useful representation. Like trying to determine what the average coin toss result is.

“I think if we simply averaged the highs over a given period and similarly averaged the lows”

That is generally done. Then if you average those results, you get TAVG. GHCN supplies monthly TMAX, TMIN and TAVG.

“First, the problem of aliasing cannot be undone after the fact. It is not possible to figure the numbers making up a sum from the sum itself. Second, aliasing potentially applies to signals other than the daily temperature cycle.”

I think my main priority here should be to respond to the min/max question. On the aliasing, it is true that it can’t be undone just from the signal data. But you can, after the fact, calculate and remove the effect of predictable components in the signal, and the diurnal is a big one. I did that in my post. Of course, there is still the problem of how to predict the predictable: you need to get data about the diurnal from somewhere. But it doesn’t have to be from the period that you are sampling.

Yes aliasing does apply to other signals. But one thing I sought to emphasise is that in climate records, the daily sampling is followed by monthly averaging, which is a low pass filter operation. So it is analogous to simple heterodyning for AM radio. There you have an E/M cacophony from which you want to select a single channel. So you mix the signal with an oscillation close to the desired carrier frequency. That causes aliasing and brings the sidebands with the information you want down into the audio range. Then an audio low pass filter makes the cacophony go away and you have the signal you want. The rest was aliased too, but the results fell outside the audio range.

That is mostly what happens to that other data here. It is aliased, but the results are attenuated by the month average filter. The exception is the diurnal, harmonics of which produce a component at zero frequency. But you can calculate that.

“This provides a segue into a discussion about the “something otherness” of Max/Min records.”

The main point I tried to make there is that min/max isn’t necessarily close to the hourly integrated mean, but is still a useful index. For climate purposes, the measure you take is used as representative of the region. In Wellington, NZ, a hilly town, they moved the station from Thorndon by the sea to Kelburn on the hill in 1928. You have to adjust for the change (1 °C, we argued about that), but neither place is right or wrong as a measure. That is one reason for taking anomalies: then it really doesn’t matter.

Min/max generally has an offset from the integrated, which furthermore depends on the time of reset. It’s as if you were measuring up the hill (and maybe later down if you changed reset time). And again, the difference largely disappears when you subtract the normal for the location. You still get a measure of how things change.
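The persistent offset Nick describes can be sketched with an invented asymmetric daily profile (the shape and all numbers are arbitrary, chosen only to be skewed):

```python
import math

# One synthetic day with a skewed profile: rapid rise, slow decline
# (an invented shape, not a fitted model of real temperatures).
def temp(minute):
    u = minute / 1440
    return 10 + 8 * math.sin(math.pi * u ** 0.6)

vals = [temp(m) for m in range(1440)]           # 1-minute resolution
integrated_mean = sum(vals) / len(vals)
midrange = (max(vals) + min(vals)) / 2          # the (Tmax+Tmin)/2 measure

# The midrange sits persistently below the integrated mean for this
# shape; subtracting a long-term normal removes such a constant offset.
print(integrated_mean, midrange)
```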

“It is difficult to argue that a distortion from unpredictable convolution does not have an impact on the spectrum resembling aliasing”

I think the heterodyning analogy is relevant here. The jitter is a modulation of the diurnal. But it is a high frequency modulation, relative to the later monthly filtering.

Hello Nick,

I appreciated our previous discussions. On this one here, I agree with your description of how aliasing can be used to demodulate an AM signal. However, I think it is critical to provide a distinction between aliasing to demodulate and aliasing of a baseband signal like temperature.

In the AM radio example, the program information, say a music recording, is Amplitude Modulated creating the sidebands you mention, spaced around the carrier frequency. When receiving an AM signal, a clever use of aliasing can down-convert (shift in frequency) that program down to baseband or a low-IF so it can be tuned and retrieved. In that example no energy is aliased into the original program information. The E/M cacophony you mention has strict channel spacing to allow for proper reception without aliasing. The aliasing we experience when measuring temperature incorrectly does mix higher frequency energy into false low frequencies that were not present in the original signal.

I suggest it would be good to retract that analogy. It cannot correctly be used to justify the aliasing we experience when measuring temperature with 2-samples/day. In the AM case the signal/program is unaltered by aliasing. With temperature it is altered by aliasing.

The low pass filtering you mention to get to monthly averaging happens after the aliasing already has done its damage. If we sample properly and then average to monthly the results are different.

“In the AM case the signal/program is unaltered by aliasing.”

No, it is a frequency shift. Same with sampled temperature. In fact, sampling just multiplies the signal by the Dirac comb (periodic), just as in the radio mixer you seek to multiply the signal by the local oscillator frequency. The only real difference is that the Dirac comb contains all harmonics, which brings in a bit more noise. But not that much.

And the key thing is that the monthly averaging plays the same role as the audio low pass filter in radio, also after the mixing. It excludes the original and almost all the aliases.
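The “frequency shift” point is concrete: after sampling, an aliased tone is literally indistinguishable from a low-frequency one. A small Python check on synthetic tones (the frequencies are the ones discussed in this thread):

```python
import math

fs = 2                                   # samples per day
t = [k / fs for k in range(20)]          # ten days of sample times

# A 5-cycle/day tone and a 1-cycle/day tone, sampled at 2/day:
tone5 = [math.cos(2 * math.pi * 5 * tk) for tk in t]
tone1 = [math.cos(2 * math.pi * 1 * tk) for tk in t]

# The sample sequences are identical: 5 cyc/day has been shifted
# ("aliased") onto 1 cyc/day and the two cannot be told apart.
print(max(abs(a - b) for a, b in zip(tone5, tone1)))
```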

Nick if you do an AM broadcast the listener on the radio is not hearing aliasing from demodulation, no matter whether it is done in the analog domain or digitally via “undersampling” (using aliasing intentionally to downconvert).

Here is what happens when you don’t alias (example of 12-samples/day on signal bandlimited to 5-cycles/day):

https://imgur.com/iHRuFc7

The spectral images replicate at the sample rate (12-samples/day) and there is no overlap between the signal bandwidth and its images.

Here is what happens when you alias (example of 2-samples/day on a signal bandlimited to 5-cycles/day):

https://imgur.com/DmXCBOt

You can see the spectral images clobber the signal!

This is what happens when we measure temperature at 2-samples/day. Your low pass filters by averaging to monthly take place after the damage is done in this figure. The spectral overlap doesn’t happen with AM demod via undersampling due to the channel spacing required by the FCC.

I recommend you don’t overly complicate this with the Dirac comb – it is unnecessary to understand what is happening. All you need to do is look at the signal bandwidth and replications of it at the sample frequency.

Good sampling: https://imgur.com/L7Wc393

Bad sampling: https://imgur.com/hPgub33

K.I.S.S: Keep it simple Stokes.

William,

“This is what happens when we measure temperature at 2-samples/day.”

Your spectra are nothing like what happens. Here I have done an FFT of 5-minute data from Redding, CA for 2016. This plot shows magnitude. Frequency units are 1/month (30.5 days). I have marked with a red block the band of unit width around zero. This is the nominal range of data that passes the monthly average filter; of course a larger range passes with more attenuation. That is the data we want. I have marked with orange the range from 60 to 62 units. This is the corresponding range that would be aliased into the red range with 2/day sampling. As you see, it mainly contains a rather small spike, at twice diurnal, and nothing much else. The spike is what would alias to zero. It isn’t that bad anyway, and as I showed in my last post, you can calculate and adjust for it, since the diurnal is repeated.

Hi Nick,

Thanks for your plot. I’ll get to yours in a minute.

My plots in the previous post were for the purpose of illustrating the concept. This link shows a real example using Cordova AK station, with 2-samples/day sample rate.

https://imgur.com/noC3WGu

I added dashed yellow lines to the graph. Between those yellow dashed lines is the signal information between 3-cycles/day and long term trend signals all the way to “Zero-frequency” offsets. Any signal in that region that is not blue (any green or red lines) is energy that has aliased into the signal through the sampling process. As you know, the magnitude of the aliased content is only part of the picture. The phase information is needed to understand how the aliased energy impacts the result. However, it is much easier to just look in the time domain. Compare the properly sampled mean (5-minute samples data) to the (Tmax+Tmin)/2 result. The 5-minute data is the correct (more correct) reference and the error is the difference between it and the result from max/min method. The FFT assumes that the 2-samples per day have no “jitter” – so the analysis is for the ideal. The fact that max and min occur with what is equivalent to jitter means there is additional error not seen in the FFT graph. Again, we see the real result in the time domain, much more easily.

What frequency range of energy do you consider to drive the trend? This is an important question. The “Zero Frequency” (mathematical zero) is essentially infinitely long time duration – this is the equivalent of a DC offset in electrical terms. It never changes. It provides a constant offset temperature for the scale used (°C, K, etc). As you travel away from zero, without traversing much distance mathematically, we cover multiple-million-year cycles, million-year cycles, hundred-thousand-year cycles, hundred-year cycles, decade cycles, the yearly cycle and the monthly cycle. The following is critically important. With 2-samples/day, the energy at 2-cycles/day lands on the Zero-Frequency. Any energy between 1-cycle/day and 2-cycles/day lands between the Zero-Frequency and 1-cycle/day (the diurnal)! Any energy between 2-cycles/day and 3-cycles/day also aliases to between the Zero-Frequency and the diurnal! It lands on the negative side of the spectrum, but we know this works the same as if it lands on the positive side. Any “jitter” in the sampling means that this errant energy can land anywhere in that range, based upon the “jitter”! So back to the question: what frequencies do you consider to be contributing to the trend? I suggest that it is a combination of all spectral energy slower than 1 month. So this entire range is clobbered by any energy between 1-cycle/day and 3-cycles/day.
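The folding arithmetic in this paragraph can be written down directly (a small helper using the standard fold-into-[0, fs/2] rule):

```python
def folded_frequency(f_signal, f_sample):
    """Frequency at which a component of f_signal appears after sampling
    at f_sample, folded into the range 0 .. f_sample/2 (cycles/day here)."""
    f = f_signal % f_sample              # remove whole multiples of fs
    return f_sample - f if f > f_sample / 2 else f

fs = 2.0                                  # 2 samples/day
print(folded_frequency(2.0, fs))          # 2 cyc/day lands on zero frequency
print(folded_frequency(1.5, fs))          # lands between DC and the diurnal
print(folded_frequency(2.5, fs))          # also folds in below the diurnal
```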

The daily max min values are corrupted with this aliased energy. You can’t filter it out later. You can argue that it is too small to matter but then you have to give up the ground that the trends and record temperatures are also equally too small to matter.

As for the adjustments you mention, it would be instructive if you can take USCRN station data, using only Tmax and Tmin and adjust it to match the 5-minute sampled data. The only rule is that you can’t use the 5-minute data to get your adjustment because that would be a round trip to nowhere (cheating). I’m very skeptical that you can achieve anything with this but I’m willing to be shown it can be done. If you can that would be a big breakthrough. One could attempt this with the overall record and recalculate all trends, but alas, we would never know if the adjustments were actually corrections.

Hi Nick,

A few things I’m pondering… some repeat of my previous reply to you… but to expand the thoughts…

You have much better tools for FFT analysis and plotting than I have. I’m not sure how much time it takes you or if you are even interested to try this, but I’m curious to see 2 particular stations that show much different error in the time domain. I have previously provided links to these charts and I show approximately 12 years, with yearly averages and linear trends. Blackville SC has one of the larger trend errors I have seen in my limited study. Fallbrook CA has very little trend error but a large offset error (1.5°C). If I can add a 3rd in there it would be Montrose CO. It is one of the few stations I saw where over 12 years the error shifted from warming to cooling and back again. Offset was very variable. Most stations maintained their warming or cooling trend error (5-min as reference compared to max/min). Do you think anything can be learned by looking at the spectra of these 3 stations?

Also, what frequencies contribute to the trend and can you explain why (as you see it)? I see a frequency of “0” as being the infinitely long cycle – or DC offset in EE terms. It is common to every measured value. Between 1-cycle/day and 0-cycles/day there is an infinite range of possible longer cycle signals (weekly, monthly, yearly, decadal, multi-decadal, etc.). What range of these frequencies contribute meaningfully to the trend? For climate I would think it has to be the superposition of sum of a range of frequencies. At what point – how slow must a signal become so that we consider it “0” or DC in our analysis? 100 years? 1000 years? 100k years?

The reason for this pondering… if energy is spread anywhere between 1-cycle/day and 3-cycles/day, then when aliased it can land on one of these longer cycle signals. It can impact the offset or trend. FFTs bin energy at the integer values, but the actual energy can exist at non-integer frequencies, correct? If we were to use different units of frequency, uHz for example, could we get the tools to show us how the energy is spread more correctly? What are your thoughts on this?

It’s too late at night and I hope I don’t regret thinking out loud when I read this in the morning. Anyway, I hope this leads to something…

Thanks.

William,

I use the R fft() function. I’m impressed that it can do the 100,000 numbers in the year in a second or two on my PC. I could do the stations you mention – just a matter of acquiring the data and interpolating missing values (fft insists).

Here are some subsets. First the low freq here. I’ve subtracted the mean, so it starts at 0. The main spike is the annual cycle. The rest is the change on monthly scales, which is what we want to know.

Then there is the first diurnal harmonic here. The sidebands represent the annual cycle of change of diurnal (and harmonics of that). And here is the second harmonic of diurnal. Remember the vertical scale is expanded; the numbers are getting small.

William,

Sorry, I got those later links wrong. Here it is again

Then there is the first diurnal harmonic here. The sidebands represent the annual cycle of change of diurnal (and harmonics of that). And here is the second harmonic of diurnal. Remember the vertical scale is expanded; the numbers are getting small.

William,

I’ve posted the corresponding fft parts for Blackville 2016 here. Processing is a bit fiddly, so I might not get the others done for a while. I need to work out a system.

Nick,

Thank you for the FFT plots. So far, I can’t see anything distinguishing that would explain some of the differences we see in the time domain. I have much experience analyzing audio, listening through a mastering-quality system that allowed for blind identification of audible differences in mastered audio versions. Yet there was no way to identify this visually using an Audio Precision analyzer.

Is there anything about the FFT algorithm, or the units or resolution chosen for analysis, that is obscuring the distribution of energy around the integer values presented in your plots using 1 cycle/month? I would think energy between 1 year and 100 years would be significant to the trend we are experiencing, but the FFT is rather vague with energy spread out at a very low level. Can you comment on this?

William,

“I would think energy between 1 year and 100 years would be significant to the trend we are experiencing, but the FFT is rather vague with energy spread out at a very low level. Can you comment on this?”

It’s vague because it is based on 1 year of data. There are 4 relevant timescales in the fft:

1. sampling – 5 min

2. diurnal

3. monthly – not physical (or in fft), but that is the scale of subsequent smoothing

4. Annual, seasonal cycle

Annual is one frequency increment. It can’t tell you below that. The sidebands to the diurnal peaks, spaced 1 unit, are the annual cycle modulating the diurnal.

Nick,

You said: “It’s vague because it is based on 1 year of data.”

Ok, right – if we had 10 or 100 years of data then we could resolve longer trends in those ranges with the FFT. With 1 year of data I believe that energy ( in the longer trends) shows up as 0 or DC – it appears to be constant because we don’t measure much change against those trends in a short period of time. Do you agree? Likewise, the FFT is not going to tell us the entire story about whether or not trends could be affected by the aliasing of 1, 2 or 3-cycle/day energy. I still think the time domain is better for that.

I sent you another reply about how we calculated the trends. There are 26 stations in my paper that we analyzed with that method. I’m curious if you have looked at that method and if you agree or disagree that we are showing valid trend error over the 12 year period studied. I have not yet been able to square your TOBS analysis with our trend analysis.

Thanks Nick.

Nick,

Your analogy is wrong.

In downconverting for radio we are making sure that wrong frequency components don’t get mixed with right ones. The receivers are carefully designed to avoid that.

For example, the IF bandpass filter is equivalent to an antialiasing filter, and so is the low-pass filter used in direct-conversion receivers. No matter the design of a particular system, there is always a filter (or filters) somewhere that works exactly like an anti-aliasing filter to avoid mixing unwanted frequencies with wanted ones.

This is not the case for temperature undersampling, and you should not use that analogy here.

“IF bandpass filter is equivalent to antialiasing filter”

I specified a simple heterodyne, not a superhet. The bandpass filter is audio.

Nick,

The practice of intentionally “undersampling” to downconvert an AM signal does not add any aliased energy into the original “program” signal. Said another way, the original signal that was modulated is recovered with no energy aliased into it.

The AM analogy should really be retracted as it doesn’t support your case.

Sampling at 2-samples/day will alias energy such that your monthly average can not be obtained without this aliasing included. Your best case is to argue that it is too small to matter – but there is a high cost to that position.

Hey Kevin,

Thanks for your post – another interesting piece around consequences of aliasing in the temperature record.

A guest blogger recently made an analysis of the twice per day sampling of maximum and minimum temperature and its relationship to Nyquist rate, in an attempt to refute some common thinking.

Nick was responding to the earlier article by Mr Ward, who argued that the way historical temperatures were captured, namely daily min/max, introduces errors (differences between the daily midrange value and the true arithmetic mean). Those errors due to aliasing are not accounted for in subsequent analyses based on averages derived from daily midrange values. Nick recognized the problem and made a good attempt at showing that we can still preserve a reasonably good daily mean by reducing the number of samples per day and applying later adjustments, knowing the regularity of daily oscillations. Unfortunately, his adjustments do not apply to most of the historical record, where all we have is daily min/max. Nick’s method requires (1) periodic two samples per day (not min/max) and (2) a highly sampled reference signal against which we can validate the adjustments. For most of the historical record we don’t have (1), and for many places we don’t have (2) either. So it is not a magic wand generating valid information out of nothing; it’s basically a way of saying that if we have good data we can correct bad data.

In summary, while the Max/Min records are not the sort of uniform sampling rate that the Nyquist theorem envisions, they aren’t far from being such. They are like periodic measurements with a bad clock jitter.

Under previous posts on this subject there were ferocious, almost religious arguments about whether Nyquist, or indeed even the very concept of ‘sampling’, applies to daily recordings of min/max.

The articles on the Nyquist problem by William Ward were misguided. The current article by Kevin Kilty does not help much to clarify the confusion sown by Ward. His example of S/H does not apply to temperature measurement. The original problem was whether Min and Max daily measurements can be used to estimate long-term averages or the gradient of averages. Yes, they can. The issue is what the error is and how to estimate it. This can be done numerically on real data. Once you do it, you realize there is no point wasting time on the pseudo-theoretical analyses by Ward, Stokes and Kilty.

unka

What reason(s) do I have to reject Ward and Kilty and accept your assertions? You make a claim without providing an explanation or even examples. Do you have any publications that you can direct us to that support your assertions?

Much of the current thread seems concerned with matters that are far removed from issues truly pertinent to actual in situ measurements of temperature by Max/Min thermometers. There simply is no A/D conversion of the type specified by the convolution of Eq. 1 and no attendant signal distortion. The extreme values are simply picked off a continuous signal of a mechanical instrument whose time-constant is typically on the order of tens of seconds. The highly irregular times of occurrence of extrema are not recorded nor are they particularly relevant to the (limited) utility of the data. They differ categorically from the digital samples obtained by multiplying the original signal by a Dirac comb (q.v.), which is the prerequisite for any possibility of frequency aliasing. But so does the very purpose of the data, which is not to record a signal consisting of a huge, asymmetric diurnal cycle (with harmonic line structure) and a much weaker random component, but to suppress the former in order to reveal the climatic variations of the latter.

There’s little presented here that truly clarifies or advances that practical purpose. That the mid-range value is persistently different from the true signal mean is well-understood. What is needed is a scientifically based reconciliation of historical data with far-more-revealing modern records. Instead we’re offered a smorgasbord of practical irrelevancies and analytic misconceptions based on superficial parsing of technical terminology.

If this were a drawing, we would be discussing pen pressure rather than the picture. Sure, the pen pressure produces different grades of black at certain points, but the overall picture remains the same.

What makes a picture, pressure or pattern? Pattern is the trend, pressure is a daily measurement.

“If this were a drawing, we would be discussing pen pressure rather than the picture. Sure, the pen pressure produces different grades of black at certain points, but the overall picture remains the same.”

If all you need is a sketch that somehow resembles the original, that’s fine. But if you need precise scientific analysis, using a rough sketch may not be a great idea. Here are trends for Goodwell, OK, 2007-2017. The green line is the trend computed from monthly averages of daily midrange values. The red line is the actual reference trend computed from highly sampled records (every 5 min). The dotted line represents the bias where the monthly trend deviates from the slope of the reference trend. Very little matches: endpoints differ, slopes differ.

Congrats, there is definitely a bias during those years. Try the averages across a group of stations in this area, or the same station over a longer period. The trend is gone, I can guarantee it 100%. Try it.

“Try the averages across a group of stations in this area, or the same station over a longer period. The trend is gone, I can guarantee it 100%. Try it.”

We’ve got high-quality records for no longer than 12-14 years. But you can synthesize a signal that closely resembles a highly sampled signal over a longer period, say 150 years, de-trend it, then apply the min/max approach and compare the trend-free reference signal with the averaged min/max. Over many runs a spurious trend appears.
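A minimal sketch of the suggested experiment; every parameter here (signal shape, noise level, record length, sample rate) is purely illustrative, not the commenter’s actual synthesis. Generate a de-trended, densely sampled synthetic series, then compare the trend of the true daily means with the trend of the daily midrange values:

```python
# Sketch: does (Tmin+Tmax)/2 show a spurious trend on a trend-free signal?
import numpy as np

rng = np.random.default_rng(0)
samples_per_day, n_days = 288, 365 * 10            # 5-minute sampling, 10 "years"
t = np.arange(samples_per_day * n_days) / samples_per_day   # time in days

diurnal = 5.0 * np.sin(2 * np.pi * t)              # idealized daily cycle, deg C
weather = np.cumsum(rng.normal(0, 0.05, t.size))   # slow random weather component
weather -= np.polyval(np.polyfit(t, weather, 1), t)  # de-trend: reference slope is 0
signal = 15.0 + diurnal + weather

daily = signal.reshape(n_days, samples_per_day)
true_mean = daily.mean(axis=1)                     # reference daily average
midrange = (daily.min(axis=1) + daily.max(axis=1)) / 2

days = np.arange(n_days)
slope_true = np.polyfit(days, true_mean, 1)[0] * 365   # deg C per year
slope_mid = np.polyfit(days, midrange, 1)[0] * 365
print(f"reference trend {slope_true:+.4f} C/yr, midrange trend {slope_mid:+.4f} C/yr")
```

Running this over many seeds shows how much the midrange trend wanders around the (zero) reference trend for a given noise model.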

Try it.

I will!

The best source I know of is USCRN. Is that OK with you? This thread is getting old, but I’ll share what I find when we meet next. OK?

That’s what you get by mismatching the dates of the 5-minute-resolution diurnal cycles and the daily mid-range values right from the start; see: https://malleusmaleficarumblog.files.wordpress.com/2019/01/undersampling.png

The characteristic mid-range value should be the average of the morning trough and the afternoon crest of the SAME day.

1sky1, until you acknowledge that any representation of an analog signal with discrete values is a sample, governed by Nyquist, your knowledge and analysis will be limited. The “purpose” you mention (understand climatic variations) will also be limited by the error resulting from not complying with Nyquist. You can certainly make a case that the error is insignificant – and perhaps you could do that by defining what is and is not significant. But you can’t access (accurately – to engineering precision) long term trends when the extrema you are using are tainted by aliasing.

Your desire for scientifically based reconciliation of historical data with modern data/methods will only yield disappointment. Once aliased there is no way to remove the error. Even Nick’s noble ideas are going to be cut short by the lack of good data to use for the correction – and furthermore you will never know if your “correction” is more correct – you will just know it is different. There is no advancement to be had. The data we have is inferior due to the methods used. The only advancement would be to statistically determine the range of possible error and increase the stated uncertainty in the data and subsequent calculations.

If I understand all this correctly, it would seem the data to test Mr. Kilty’s criticism of Mr. Stokes’ article is unavailable, due to the fact that most temp sampling is done twice a day? If I do understand this correctly, is there another, non-theoretical way to test Mr. Kilty’s theory?

Kevin,

Thank you for this added clarity.

Some of us who are used to working with numbers dismissed the early historical weather data as unfit for purpose when the purpose is to estimate global warming.

At some stage technology was advanced enough to start to estimate global warming. The date for this depends on error analysis lowering the limits to be acceptable for the purpose.

How would you recommend that both sampling and error analysis now be performed for daily captures, for the result to be likely fit for purpose? At what date was adequate performance achieved? (I put it about year 2000 for Australian data).

Geoff

What is the Nyquist frequency for temperature? I expect there is no maximum unless you apply an arbitrary bandpass filter.

Otherwise, what prevents temperature from changing on timescales of less than a second when the sun pops out from behind a cloud?

Q: What prevents temperature from changing on timescales of less than a second when the sun pops out from behind a cloud?

A: Thermal inertia

How much thermal inertia does air have?

Answer: Very little.

The oceans which comprise over 70% of the surface of the planet have a significant impact on the air temperature. So the thermal inertia of the water in the oceans is very high. Ever hear of “El Nino?”

Ferdberple,

Nyquist tells us what the relationship must be between what goes into the ADC and the clock frequency that runs the ADC. The filter is designed to control what goes into the ADC. In the real world there are no bandlimited signals. We attempt to limit them with filters, but imperfectly; the results can still have very low error, however. Someone with knowledge of climate needs to decide what is “signal” and what is “noise”. Is that sun popping out a part of climate (signal) or just noise? Climate scientists care. Engineers may or may not care, but they will make sure to design a system that works properly for whatever the climate scientists decide. The engineer always recommends taking in more data, because you can discard it later; you can’t go back and get it later if you decided to throw it away up front.

There is not really a Nyquist frequency for atmospheric temperature, per se. But analysis from the previous discussion suggests that 24 samples/day does pretty well, while 288 samples/day does better and covers more scenarios and corner cases. The question is: how much error can you tolerate? The Nyquist rate gets set from this.
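A rough numeric check of the samples-per-day question, using an assumed diurnal shape (asymmetric sine plus smoothed noise) rather than real data; all amplitudes and seeds are illustrative:

```python
# Compare the true daily mean with subsampled means and the midrange.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(24 * 60) / 60.0                       # one day at minute resolution, hours
diurnal = 15 + 8 * np.sin(2 * np.pi * (t - 9) / 24) \
             + 2 * np.sin(4 * np.pi * t / 24 + 1)   # asymmetric daily cycle, deg C
noise = np.convolve(rng.normal(0, 1.0, t.size),
                    np.ones(30) / 30, mode="same")  # smoothed "weather" wiggles
temp = diurnal + noise

true_mean = temp.mean()
mean_24 = temp[::60].mean()       # 24 samples/day (hourly)
mean_288 = temp[::5].mean()       # 288 samples/day (5-minute)
midrange = (temp.min() + temp.max()) / 2

print(f"true {true_mean:.3f}  24/day {mean_24:.3f}  "
      f"288/day {mean_288:.3f}  midrange {midrange:.3f}")
```

The interesting comparison is how far each estimate sits from the true mean as the noise model and asymmetry are varied.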

William

I remember when I was in the Army and assigned to the Photographic Interpretation Research and Development lab. I went to Greenland for a month. The lab’s photographer advised me to take lots of film and take lots of pictures because the cost of film and processing, although expensive by usual standards, was trivial compared to the costs of travel and housing while there. And, I might never get a second chance — which I haven’t!

Clyde – good story! Good advice from the photographer.

If you ever do go back, bring some ice with you. I hear they are running low.

Kevin, I think most of us already believed that averaging Tmin and Tmax did not accurately represent the average temperature, which itself is used to infer an energy imbalance. The previous posts on the subject and my limited explorations demonstrated this. The question I have is about the recent “high quality” data based on five-minute averages. Do these satisfy the requirements for calculating a meaningful average temperature for a particular location?

Loren

While energy is of interest, the alarmist claims of future cataclysm are based on the effects of temperature. That is, ocean temperatures killing coral and fish, land temperatures causing extinction of plants and animals that are altitude bound, and supposed declines in crop production. Therefore, it really is important to be able to accurately and precisely characterize temperatures, and to be able to predict both high and low temperature changes, not just the temperature between two extremes. After all, there are an infinite number of temperature pairs that can provide the same mid-range value. Relying on any kind of ‘average’ usually results in a loss of information. While climatologists are focused on averages and trends, there might well be things of interest happening with the variance of the highs and lows.

Eq. (7) in Shannon’s paper is indeed the key to understanding what Kevin wrote, but it is not as elementary as you think. I’m not sure Kevin himself understands it completely.

Yes, convolving unit impulses over an arbitrary signal generates that signal, in a trivial way which is not useful because it only returns the sample points. If you convolve continuously, only then does it return the entire range. Not useful because we want to reconstruct the entire function from evenly spaced discrete samples.

You will recognize eq (7) as the convolution of the sampled function with a _normalized sinc_ pulse sin(πx)/πx, which results in the perfect reconstruction of the entire original band-limited signal.

How is this possible?

Shannon did not explain the motivation for the sinc pulse in his paper, but it is related to the fact that the Fourier transform of the sinc function is the rectangle function. If we multiply the spectrum of the band-limited signal by a rectangle with height 1 over the support of the spectrum and zero elsewhere, it is clear that the result is the same band-limited signal. So, applying the convolution theorem, we know that we can obtain the same result in the time domain by convolving the sample points with the normalized sinc pulse, which has a value of 1 at t=0 and zero at every other sample instant, i.e. a _real_ unit pulse.

The resulting continuous function is exactly the original signal, not an approximation.

I have met very few engineers who correctly understand this theorem. Most insist it only returns an approximate reconstruction of the original signal. But it is perfect, in exactly the same sense that two points determine a line. Two samples per cycle are likewise sufficient to reconstruct the highest frequency in a band-limited signal, provided the Nyquist limit is obeyed.
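The eq. (7) reconstruction can be sketched numerically. A minimal demo, assuming a toy two-tone signal (1.3 Hz and 3.7 Hz, sampled at 10 Hz, both below the 5 Hz Nyquist limit); `np.sinc` is the normalized sinc sin(πx)/(πx). The sum is truncated to 101 samples, so a small edge error remains; the infinite sum is exact, as described above:

```python
# Whittaker-Shannon reconstruction from samples above the Nyquist rate.
import numpy as np

fs = 10.0                           # sample rate (Hz); signal is band-limited below fs/2
n = np.arange(-50, 51)              # finite window of sample indices (truncated sum)
x_n = np.sin(2*np.pi*1.3*n/fs) + 0.5*np.cos(2*np.pi*3.7*n/fs)

t = np.linspace(-2.0, 2.0, 401)     # reconstruction instants, seconds
# x(t) = sum_n x[n] * sinc(fs*t - n)
x_rec = np.array([np.dot(x_n, np.sinc(fs*ti - n)) for ti in t])

x_true = np.sin(2*np.pi*1.3*t) + 0.5*np.cos(2*np.pi*3.7*t)
err = np.max(np.abs(x_rec - x_true))
print("max error over [-2, 2] s:", err)   # small; shrinks as the sample window grows
```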

Again my reply to Joe Born February 15, 2019 at 1:36 pm is misplaced. I’m pretty sure I pressed the correct reply button. (But several hours passed while I was composing)

There’s been much talk of time here but not of space.

I’m not sure what the overall goal is, but when you start averaging all the different stations together it smacks of wanting to know the total energy (or potential) in the boundary layer so you can track how it changes over time. If that’s wrong well, then forget the rest of this comment.

If I wanted to get the total energy I would sample the boundary layer temperature everywhere at the same time and then average that. That means sampling some stations during day and some at night, others at dawn or dusk, etc. depending on where they are at the time of sampling.

Taking min/max over the whole day causes things that move over time like warm or cold air masses to all get all blurred into readings in many different stations. Plus vertical mixing in the atmosphere throws another wrench into things. My conclusion is that the min/max data is not all that useful for tracking total energy/potential changes in the atmosphere.

Seems like the min/max data would be more useful to monitor local climate — not for tracking over large regions. For example, is it getting warmer in Greenland where there’s lots of ice (and very few thermometers)?

Just my two cents…and likely worth about what you paid for it.

Observer

I think that if one is looking for a daily, global average, to be used for calculating an annual global average, it does make sense to synchronize all readings to be done simultaneously. With high temporal resolution sampling, that could be done in post processing without any special need for coordination around the world. Although, one problem that I see is that, because the land masses are not uniformly distributed, one might want to do that twice daily, for when the minimum land area is experiencing heating and the second time when the maximum land area is experiencing heating.

Kevin,

The use of this old historic max/min temperature data is wrong for a number of reasons.

For example, Tmax happens at a time when the balance of several competing effects, some negative and some positive over time, reaches a maximum as detected by the particular thermometer design.

The timing of Tmax, and inevitably some of its value, is thus governed by physical effects that have little to no relevance to whether the globe is warming or not.

E.g., the maximum might occur on an overcast day at an uncharacteristic time because there happened to be a break in the cloud long enough to capture a short period of elevated temperature. Maybe this happened just as some rain had ceased to fall, so a variable evaporative cooling effect was present at the particular moment the sun shone. Neither sunshine nor rainfall is supposed to be a primary determinant of the energy assessed through its temperature proxy. There is no place for raising temperatures like this to the power of 4 when dealing with S-B math. (There are more interfering exogenous variables, but I hope I have made the point with rain and sunshine.)

In Australia, it was not until about year 2000 that thermometry and data acquisition became good enough to measure temperatures as relevant to global energy.

Therefore, discussion of Nyquist frequencies is largely academic, of little practical effect but tremendously interesting. Thank you for your essay. Geoff.

“Time rushes by at a gallop, and we rush along with it.”

Wilhelm Busch.

carpe diem.

Let’s look at this Tmin and Tmax a bit more. The assumption that simply averaging these gives “the temperature” is entirely false. Say we have a Tmin which is pretty much constant at 0C and a short excursion to a Tmax of 100C. Is the temperature really 50C? Of course not. Tmin and Tmax are perhaps useful for weather, but have nothing useful to contribute if averaged. The correct outcome needs two things: first, a time estimate of each temperature, taken with even sampling at a period short enough not to “miss” the peak values given the rate of change of the temperature; and second, a time-weighted average of the temperature values. This is an approximation to the low-pass characteristic I suggested earlier, but capable of reasonable digital implementation. So, for my example, 24 hours of data with 900 samples of 0C and 100 samples of 100C gives a heating value of 10C, not the “average” value of 50C.
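The worked numbers above, as code (equal sample spacing assumed, so the time-weighted mean reduces to the plain arithmetic mean of the samples):

```python
# 900 equally spaced readings of 0 C and 100 readings of 100 C over one day.
temps = [0.0] * 900 + [100.0] * 100

midrange = (min(temps) + max(temps)) / 2     # (0 + 100) / 2 = 50.0
time_weighted = sum(temps) / len(temps)      # 10000 / 1000 = 10.0

print(midrange, time_weighted)   # 50.0 10.0
```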

All the discussion of convolution above is not really attacking the data problem: simple max and minimum values actually say almost nothing about climate or even perceived temperature. Weather forecasts tend to quote the maximum or minimum temperature depending on whether they think it is cold or hot for the time of year. You choose clothes to suit, but what has that to do with climate?

Realistically all the data used by climate scientists is very unsatisfactory, and is processed in ways which are invalid. However if it gives the result they want, who is going to notice? I was “tutored” on the satellite data above, and you will see that duplicating samples every 18 days, is not going to give much that is accurate. It must contain huge variation due to weather, wind, time of day and season. If it is accurate to 0.01 degrees how can I filter out all the other variables? I cannot, so I must “adjust” the data in invalid ways to get any result at all. Once I start to add in area averaging as well one has very little idea of the error band, except that it is unknown and probably large.

The most interesting feature of climate science is the difficulty of obtaining the original raw data!

One of my pet peeves. Invalid processing. Taking daily temperatures recorded in integer values, averaging them and adding a digit of precision so you now have temps accurate to one tenth of a degree. Then averaging monthly values to annual ones and adding another digit of precision again, out to one one-hundredth of a degree. Or converting from degrees F to some other unit by multiplying by an improper fraction and again adding extra precision. The same exact thing is done to anomalies, basically because they are using a base that has false precision.

I just can’t believe that scientists in other more robust physical specialties haven’t roundly criticized the data practices being used by climate scientists. I would have flunked most of my lab classes if I had done similar data torturing.

Jim

This is a point I made to Nick above after reading his post from a few years back that tried to use the Law of Large Numbers as a catch-all. Pure theoretical fantasy, really.

What Nick fails to understand is that the person making the apparatus will plan the uncertainty in the instrument based on various factors, including reliability and maintenance. What Nick and others seem to think is that when readings can be averaged, the extra decimals mean something. He doesn’t see that the apparatus is often designed to be “insensitive” to small variations.

I took his March example and showed that if you assume the 1 degree system and add an offset of 0.3 degC, you can have drifts or offsets in the data (against a well-known ideal) and it won’t matter as long as it is within the uncertainty of the apparatus. The average is 25.3 +/- 0.5 degC, whereas the ideal (obtained with a more accurate system) reads 24.9 +/- 0.05 degC. If all you had were the system with higher uncertainty (which applies to the temperature measurement systems we have), then you wouldn’t know, or WOULDN’T CARE, about small changes like this.

Because the use of the apparatus (the tools) is related to the variation it is trying to measure. This is also what Kevin is talking about in his essay about Nyquist. You need to develop the tools to acquire data at uncertainties related to the variation you are seeing.

But you don’t assume that just because you can numerically calculate higher precision, that it means anything.

Jim

A voice of sanity in the darkness!

I was amazed how the smartest guy in this thread always stays so calm and answers with facts and examples (as he always does). Now I get it.

When you know and understand something deeply and others don’t, it is really fun to read their comments and straw-man arguments. Here we have spent millions of characters describing a tree in the middle of a forest. What does the forest look like? No idea.

Who the smartest in this thread is, is your guess. (He knows for sure, and he is still smiling.)

Kevin — Thanks for a remarkable and thought provoking essay. And thanks to everyone else for an equally remarkable assortment of comments — most of them well worth reading.

Just an idea: why not integrate a curve with the vertical axis in kelvins and the horizontal axis in time over 24 hours at one-minute intervals, and then compare the areas under the curves on a year-to-year basis? What, if anything, would this tell us?

Jim, Micky, Nick,

I’d like to believe this discussion should be settled empirically. What about recipe below?

1. Take some temperature series or generate an artificial one, to one decimal place. That represents the measurements.

2. Each measurement from step 1 has uncertainty +/- 0.05.

3. To each measurement add a random value drawn from the interval [-0.05, 0.05]. The sum represents the true, precise temperature.

4. Average the measurements and the true temperatures.

5. Compare the results from step 4.

Does it sound sensible?
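A sketch of the recipe, run in the reverse but equivalent direction: generate a “true” series first (the shape and noise level are arbitrary assumptions), round it to one decimal to get the “measurements”, and compare the two averages.

```python
# Five-step recipe: does rounding to 0.1 C shift the average?
import numpy as np

rng = np.random.default_rng(42)

# Step 1: a one-decimal series stands in for the measurements.
true_temps = 20 + 5 * np.sin(np.linspace(0, 20, 365)) + rng.normal(0, 2, 365)
measured = np.round(true_temps, 1)

# Steps 2-3: measured - true is then (approximately) a uniform draw
# from [-0.05, 0.05], i.e. the stated +/- 0.05 uncertainty.
# Steps 4-5: average both series and compare.
diff = measured.mean() - true_temps.mean()
print(f"mean(measured) - mean(true) = {diff:+.5f}")
```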

OK, I did it for daily max at Melbourne, using monthly means of daily max to 1 decimal (GHCND) for 2012 and adding a normally distributed error with sd 0.05. The error, to 2 decimals, was imperceptible. Here were the max as measured:

27.39 26.96 23.65 21.93 16.96 14.44 14.90 15.24 18.56 20.75 23.30 25.70

and, after adding noise

27.39 26.96 23.65 21.93 16.96 14.44 14.90 15.24 18.56 20.75 23.30 25.70

I had to multiply the differences by 100 to show at 2 dp. Then they were:

-0.02 0.17 -0.13 -0.03 -0.05 -0.09 0.01 -0.04 -0.01 -0.03 0.09 0.05

Even adding noise sd 0.5 barely made a difference. The differences multiplied by 100 were

0.63 -0.57 -0.01 -0.44 0.77 0.08 0.46 0.74 1.52 0.85 1.62 0.52

The effect is proportional.

No

You didn’t read what I wrote. If you add random samples, they will cancel out with averaging.

You are assuming, as many theorists do, that the errors are random.

But what about discontinuous drift, which is common? Or slow nonlinear skews? And what about small variations that are within the apparatus uncertainty? You just leave them there.

The assumption being made is that the magnitude value obtained from a measurement contains intrinsic information about the quantity being measured. The reality is that it also includes the limitations of, and decisions made by, the person maintaining the equipment. And there is no way of deconvolving them to achieve higher precision. That is why the real value is magnitude plus error. It is up to the user to decide the context of this.

“You didn’t read what I wrote.”

What you wrote was:

“All of the differences you calculated need to have the +/- 0.5 degC error added.”

And that is just wrong, as these many examples show. When you quote an uncertainty, it is an error for each reading that could go either way. And there is no basis for adding it to the average. As you occasionally concede, it is reduced (a lot) by averaging.

What you keep coming back to is a claim that the instruments might be out by a fixed, repeated amount. That isn’t uncertainty; it is just wrongly calibrated instruments. It has different consequences and a different remedy. And it isn’t something to which you can sensibly assign a probability.

Nick

What you fail to realise is that instruments can intentionally be designed to drift without compromising their overall use.

By your method you have no way of telling drift from a real value.

If an apparatus is well maintained and regularly calibrated, this is less of an issue. But if it is left alone for years, as is often the case, or if the whole system phase space is not fully characterised, you have no way of telling what the fluctuations in the magnitude of the readings are. And it may be designed precisely for this use.

You keep thinking that the magnitude of the readings can be used to elicit greater precision than what the system is designed for.

Micky, I’ve had similar discussions with Nick, but to no avail. In Nick’s world, all instruments are perfectly accurate and have infinite resolution. Nick-world error is always random and always averages away.

Nick either has no concept of systematic error or perhaps just refuses to acknowledge its existence.

Even highly accurate platinum resistance thermometer air temperature sensors suffer from systematic measurement error unless the shield is aspirated. The measurement error arises from uncompensated solar irradiation, including reflected heat, and slow wind speed.

Only the new CRN network uses aspirated shields. Only they provide temperatures reliable to about ±0.1 C (assuming everything remains in repair).

All prior air temperature measurements, and still today measurements worldwide outside the US CRN and perhaps parts of Europe, are not good to better than ±0.3 C even at their best, under ideal conditions of calibration, repair, and siting.

A better global uncertainty estimate is ±0.5 C, and in some places and times, probably rises to ±1-2 C.

Systematic error violates the assumptions of the Central Limit Theorem and does not average away. In proper science, it gets reported as a root-mean-square uncertainty in any mean.

But consensus climate science is not proper science.

And in Nick-world, physical science does not exist at all. Every Nick-world process follows the statistical ideal. It makes life there very easy.

I think one issue is that the application of uncertainty in pure theory is very different from real-world practice. Jim touched on this.

When determining systematic (irreducible) uncertainty, or just error, the decision is made to follow a convention that depends on the use of the measuring system.

For an average of measured data points, you either root-mean-square the individual systematic errors, or you assume the average belongs to the set of measurements so that only the systematic error applies. Often it’s a mix, and it is subjective, because in the real world error determination is an art and often conservative.

As I have said on multiple occasions you design the tools for the job. Hence the reliability of the data is related to its use.

Nick –>

Read the documents I provided. The uncertainty arises in the next, unstated digit. If the temp is recorded as an integer, then the error component is +/- 0.5. If it is recorded to the tenth, then the error is +/- 0.05. It is uncertain because you have no way of knowing what its value is.

When you then average numbers, for example 29 and 32, you don’t know if you are averaging 29-0.5 and 32+0.5, or perhaps 29-0.0 and 32+0.3. That is why it is called uncertainty and why it must carry through. You simply can’t say (29+32)/2 = 30.5 and keep the tenths digit. At best, and depending on rounding, you could say 30 +/- 0.5 or 31 +/- 0.5.

Either way, you can’t then average in another reading and add another digit of precision, for example 30.53. If you could do that, you could end up with an average containing 30, 60, who knows how many digits to the right of the decimal place. Heck, you could end up with a repeating decimal out to infinity!

I still don’t think you are getting the point. When you did your data run you added in evenly distributed random noise that was artificially meant to cancel out. And it did.

With instruments, you don’t *know* what the “noise” from each instrument actually is. You know what the error “band” is, i.e. +/- some value. But you don’t know where in the band each instrument actually is. You can’t just assume that the errors have a perfect gaussian distribution in the error band and will all cancel out when you average all the values together.

Think of it this way. I am manufacturing thermocouples on a semiconductor substrate for a digital thermometer. As my equipment ages, are those thermocouples going to have the same error band or wider bands? Will the error bands have higher positive biases or negative biases? Or will they be equal? As the temperature changes, will the substrate expand or contract, and what will that do to the tolerance of the thermocouple? What will that do to the error band of the thermocouple?

If you don’t know all of this for each individual station then any kind of averaging of readings becomes a crap shoot for trying to cancel out errors using significant digits beyond the intrinsic capability of the instrument. In essence the error band of your average reading remains the error band of the instruments used.

“If you add random samples then they will cancel out with averaging.”

Indeed. Software randomizers usually generate random samples from a strictly normal distribution. When I instead used a randomizer that draws samples from the Rayleigh distribution, the error in the averages was, as expected, significantly larger.

So, is the condition sine qua non for the ‘improving by averaging’ technique to work that the errors need to be strictly ‘white, Gaussian’? Would any deviation from that, such as systematic errors, drifts, biases, etc., make ‘improving by averaging’ questionable?

That’s an interesting example.

An even simpler one is a slowly varying drift of +/- 0.3 degC over a few years that, when using the 1 degC system, produces a variation in the average magnitude.

But NOT any significant change in the measurement (magnitude with error).

You may think it’s interesting. It may be a real signal. But you would not be able to tell the difference unless you used a more accurate and higher precision system to check.

If you didn’t have this then attributing the drift to a real signal would be false.

In fact you probably wouldn’t care because you would not be using a 1degC system to determine sub degree changes.

“In fact you probably wouldn’t care because you would not be using a 1degC system to determine sub degree changes.”

That is one of the key points in such discussions, I reckon. Many historical temperature records were rounded to the nearest degree C/F, as far as I’m aware. Still, climate science wizards claim that tiny variations in the mean are a true representation of reality.

“You may think it’s interesting. It may be a real signal. But you would not be able to tell the difference unless you used a more accurate and higher precision system to check.”

If I remember correctly, Jim said he’s working on a paper that highlights issues around that. Looking forward to seeing it, hopefully supported by real-life examples.

“So, is the condition sine qua non for the ‘improving by averaging’ technique to work that the errors need to be strictly ‘white, Gaussian’?”

No, it isn’t. In fact, that is the point of the central limit theorem. It says not only that the distribution of the mean will tend to Gaussian even if the samples aren’t, but that the sd of that normal mean will be what you would get by combining those of the samples as if they were normal (IOW, it doesn’t matter).

Huh? What is the “distribution of the mean”? And if the errors are not Gaussian in distribution, then they provide a bias to any calculation based on them, since the positive errors cannot cancel the negative errors.

“What is the ‘distribution of the mean’?”

A sum of random variables is a random variable, and has a distribution. If you divide by N, that is just a scaling; it still has a distribution. And it is what the Central Limit Theorem says tends to normal.

“since the positive errors cannot cancel the negative errors”

Positive errors always cancel negative ones, and this has nothing to do with being Gaussian.
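The central-limit claim above can be illustrated with a quick sketch (sample size and trial count are arbitrary): means of samples drawn from a strongly skewed, decidedly non-Gaussian distribution still concentrate around the true mean with standard deviation σ/√N and much-reduced skewness.

```python
# Distribution of the mean for a skewed (exponential) parent distribution.
import numpy as np

rng = np.random.default_rng(7)
n, trials = 100, 20000
samples = rng.exponential(scale=1.0, size=(trials, n))   # mean 1, sd 1, skewness 2
means = samples.mean(axis=1)

skew = ((means - means.mean())**3).mean() / means.std()**3
print("mean of the means:", round(means.mean(), 3))   # ~ 1.0
print("sd of the means:  ", round(means.std(), 3))    # ~ 1/sqrt(100) = 0.1
print("skewness of means:", round(skew, 3))           # ~ 0.2, vs. 2.0 for one sample
```

Note this says nothing about systematic offsets: adding a fixed bias to every sample shifts every mean by the same amount, which is the other side of the argument in this thread.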

Probability doesn’t always work in the real world. If I give you a coin and ask you to flip it one hundred times can you tell me how many heads and how many tails you will get?

You can tell me the probability of each combination coming up but you can’t tell me for any single run of 100 flips exactly what combination you will get. That’s what the error band represents in a physical measurement from a large number of measurement devices. You can *assume* that the error distribution from all those devices will take on a gaussian distribution and that the average from all of them will be the most likely outcome but you can’t *know* that! What you *can* know is that the actual value of the average will lie somewhere in the error band representative of the totality of the instruments.
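The coin-flip point can be made concrete with a small simulation (run counts are arbitrary): no single run of 100 flips has a predictable head count, even though the spread of outcomes is predictable.

```python
# 1000 independent runs of 100 fair coin flips each.
import random

random.seed(3)
runs = [sum(random.random() < 0.5 for _ in range(100)) for _ in range(1000)]

print("spread of head counts:", min(runs), "to", max(runs))
print("average head count:", sum(runs) / len(runs))   # near 50, yet no run is guaranteed 50
```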

The central limit theorem only works when the samples themselves are defined with no error bands associated with them; for instance, samples taken from a population distribution. The population distribution may be skewed, e.g. based on income, so that you have a large number of individuals on the low-income side of the curve and few on the high side: a distribution far from Gaussian. If you take enough sample collections from the distribution and calculate their averages, you will find a Gaussian distribution of those averages. But those samples have no error band associated with them! Their income is their income! The minute you create uncertainty about what each individual’s income is, you automatically create uncertainty about what the average is for each sample. It then follows that the distribution for all of the samples put together is uncertain. Any graph of the probability curve should be drawn with an unsharpened carpenter’s pencil, not a fine-point No. 2 pencil.

And this is what the “average temperature of the Earth” should have: a huge error band associated with it! And that error band should be at least as large as the error band of the least reliable instrument used to calculate the average!

“And it is what the Central Limit Theorem says tends to normal.”

I should have also pointed out in my comment that with a normal distribution you *still* don't KNOW, for any specific run, exactly what result you will get. You have a probability curve. But the mean of that curve is not *the* answer, not in the physical world. It's not the most accurate; it's only the most probable. That's why people bet on horse races: the most probable horse doesn't always win in the physical world. And when that probability distribution curve has to be drawn with an unsharpened carpenter's pencil, you can't even tell what the value of the mean at the center actually *is*. That's why systematic error bands carry through in any measurement of the physical world. Not every measurement device gives perfectly repeatable measurements, i.e. the same absolute error, time after time over years of operation. Wasps can build nests in vent holes, mold can build up around critical components, and even quantum effects can have impacts on semiconductor elements.

Mathematicians and programmers can assume perfect precision out to any number of significant digits, can assume probability distributions always resolve to one single number every single time (i.e. the same horse always wins), and can assume that measurement devices have no systematic error bands, but engineers who live in the real world will all tell you differently. So will most physical scientists. I often wonder why climate scientists won't admit the same.

You can do that, but you won't prove anything, because the hundredths digit is uncertain. The distribution might be random, but it might not be. That is the reason the last unknown digit of precision is always stated as the full range of possible numbers. You don't know the “true value”, so it must be stated as a range.

Another issue with error is the use of standard deviation. Many fields require reporting the standard deviation in order to give an idea of the range of possible values.

I did this for a Topeka, Kansas station in August 2018. Here is what I got when mapping daily averages (rounded to two decimal places).

mean – 79.13

min – 70

max – 89

standard deviation – 5.01

standard error – 0.9

3 sigma – 15.03

79.13 + 15.03 = 94.16

79.13 – 15.03 = 64.10

Funny how the 3-sigma (~99.7%) range extends outside the total range of recorded values. The standard error is also very large. Makes one wonder how accurate the monthly mean is when used in averaging.
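The arithmetic above is easy to reproduce. The sketch below uses made-up daily values as stand-ins (they are NOT the actual Topeka record, which is not given in the thread); the formulas are the standard ones.

```python
import numpy as np

# Hypothetical daily mean temperatures for a 31-day month (deg F).
# Synthetic illustration only -- not the Topeka, KS August 2018 data.
daily = np.array([72, 75, 78, 80, 83, 85, 88, 84, 81, 79,
                  76, 74, 77, 80, 82, 86, 89, 85, 82, 78,
                  75, 73, 76, 79, 81, 84, 87, 83, 80, 77, 74], float)

mean = daily.mean()
sd = daily.std(ddof=1)            # sample standard deviation
se = sd / np.sqrt(len(daily))     # standard error of the monthly mean
three_sigma = 3 * sd              # ~99.7% spread of the daily values

print(f"mean={mean:.2f}  sd={sd:.2f}  se={se:.2f}  3-sigma={three_sigma:.2f}")
```

One distinction worth keeping in mind when reading the numbers above: the standard deviation describes the spread of the individual daily values, while the standard error (sd/√n) describes the sampling uncertainty of the monthly mean; the two answer different questions.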

In the (now 4?) recent threads on the subject of two measurements a day (Tmax and Tmin without associated times), a central question is: Is it even possible to reconstruct an underlying continuous-time signal from just two samples? That is, assuming we are talking about a continuous-time signal that is suitably bandlimited. Any temperature curve driven by the earth’s rotation would seem to be limited to a constant plus a day-length sinusoid (assumption of daily periodicity, just a fundamental, at least for a start). The sampling theorem would seem to suggest that we need a number of samples GREATER THAN 2: perhaps 2.1 or 2.5 or 4. This is the common wisdom, and IT IS WRONG.

With the use of simple FFT interpolation techniques,

http://electronotes.netfirms.com/AN398.pdf

we CAN get perfect recovery with just two samples/cycle. Here N=2 (even) so we generally do have to assume we have energy at k=1 (half the sampling frequency of 2). (If we didn’t, we would have only a constant, so it would not be interesting.) In this case (see app note) we split that “center term” (k=1 for N=2) in two.

The figure here (http://electronotes.netfirms.com/TwoSamps.jpg) shows the 2 samples/day case. The top panel shows a raised (by an additive 2) cosine sampled twice in one cycle (red stems at +3 and at +1). Here we overplotted the supposed cosine in blue, but this known origin is not input to the program – we intend to get this from the invented samples. The second panel is the FFT of the first, for k=0 and k=1. Because this is a length-(N=2) DFT, X(0) is the sum (4) and X(1) the difference (2), purely real.

Now to zero-pad. We suppose that we will learn where the DFT “thinks the two samples came from” if we had perhaps 16 samples rather than just 2. So we interpolate by zero-padding the CENTER of the FFT out to length 16. If N were odd, we would simply put zeros at the center. But N=2 (even), so we split X(1) in two and place 1’s at k=1 and k=15, with 13 zeros in between, to form the FFT of the interpolated signal (panel 3 – follow the green lines). This is REVERSING THE ALIASING. Then taking the inverse FFT (and multiplying by 8) we get back 16 samples of the cosine (last panel). The whole exercise is the four lines of MATLAB/Octave code:

x = [3,1]                                  % two samples of the raised cosine

X = fft(x)                                 % X = [4, 2]

XI = [X(1), X(2)/2, zeros(1,13), X(2)/2]   % split X(2) between k=1 and k=15

xi = 8*ifft(XI)                            % 16 samples of 2 + cos(pi*n/8)
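For readers without MATLAB/Octave, the same four-line exercise can be reproduced in Python with NumPy (a direct translation of the snippet above, with the recovered samples checked against the raised cosine):

```python
import numpy as np

x = np.array([3.0, 1.0])                 # two samples of 2 + cos(theta)
X = np.fft.fft(x)                        # X = [4, 2]

# Zero-pad the center of the spectrum out to length 16, splitting the
# k=1 "center term" (N even) between bins k=1 and k=15.
XI = np.concatenate(([X[0]], [X[1] / 2], np.zeros(13), [X[1] / 2]))

xi = 8 * np.fft.ifft(XI).real            # 16 interpolated samples

# The result is the raised cosine 2 + cos(pi*n/8), recovered exactly.
n = np.arange(16)
print(np.allclose(xi, 2 + np.cos(np.pi * n / 8)))   # True
```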

There was earlier talk about “Signals 101” and missing the first week. This sort of thing is what you learn with true experience. It is important to understand when simultaneously worrying about aliasing, bandlimiting, and minimal samples/cycle. It’s neat too!

Bernie

Hi Bernie,

That is neat. Thanks.

You said: “The sampling theorem would seem to suggest that we need a number of samples GREATER THAN 2: perhaps 2.1 or 2.5 or 4. This is the common wisdom, and IT IS WRONG.”

The theorem states fs >= 2B, so the fs = 2B case is covered, correct? But what about this scenario?

Sample 1sin(x), where the sample clock falls at exactly x = nπ, where n = 0, 1, 2, 3, …

This is one of the reasons it is usually written fs > 2B.
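The scenario is easy to see numerically: sampling sin(x) at exactly x = nπ lands every sample on a zero crossing, so the sampled sequence is indistinguishable from an all-zero signal (a minimal sketch):

```python
import numpy as np

n = np.arange(10)
samples = np.sin(np.pi * n)   # sample sin(x) at x = 0, pi, 2*pi, ...

# Every sample is (numerically) zero: at fs = 2B with this phase the
# sinusoid vanishes entirely, which is why the theorem is usually
# stated with the strict inequality fs > 2B.
print(np.allclose(samples, 0.0))   # True
```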

Every practical application exceeds 2x for a number of good reasons.

William –

My FFT-interpolation demo is merely a more recent answer to the oldie-but-goodie philosophical issue of a sinusoid sampled exactly twice per cycle, which you have reintroduced here.

The older rebuttal? Well, a sinusoid of constant frequency, constant amplitude, and constant phase (no FM, AM, PM, pauses) has only one bit of INFORMATION – it’s either there or it isn’t. Further, its (one-sided) bandwidth is zero (B=0). So any sampling rate greater than zero is sufficient to assure that “no information is lost”. Even one sample. The probability that this sample is EXACTLY at a zero-crossing is zero.

Bernie

Bernie,

One thing a lot of us share in common is a curious mind and the love of learning things – and knowing things. But we have to keep clear whether we are at the chalkboard or in the real world. The points you brought up recently are fun and interesting, but not applicable in the real world for any practical purpose.

You said: “The probability that this sample is EXACTLY at a zero-crossing is zero.”

Yes. But you were discussing theory. The probability of finding a real world application where we need to sample a signal-generator-grade sine wave at exactly 2x its frequency is zero.

The topic is really how sampling affects the accuracy and therefore value of our temperature record. We are dealing with a real world application. What is the point of bringing up academic-only special cases that have no application to the subject? None of those special cases can help us go back and improve the record, and none of them would be useful if we were to try to improve the way we capture data.

William Ward at February 20, 2019 at 10:13 am said in part

“. . . . . . . . We are dealing with a real world application. What is the point of bringing up academic-only special cases that have no application to the subject? . . . . . . . .”

Well – I guess it’s the same point YOU intended in bringing up the SAME special case:

William Ward at February 19, 2019 at 11:18 pm

“. . . . . . . . . . But what about this scenario? Sample 1sin(x), where the sample clock falls at exactly x = nπ, where n = 0, 1, 2, 3, … . . . . . . “

So William employs this sort of academic parlor game in formulating an original “gotcha” but objects to someone else, using standard basic DSP theory, following up to a FULL understanding of the simplest of examples. Toy examples still have to follow theory – no sleight of hand is used. There are no “academic-only special cases”. Instead only Nature reminding us (sotto voce) that She can be subtle.

At the same time, he misuses DSP notions of sampling (values, but no corresponding times given), aliasing (what spectral overlaps occur), and jitter (the change of average time between samples that 1sky1 pointed out).

Bernie

Bernie, I’m sorry for offending you. I hope you will accept my apology.

I addressed your technical points in another recent reply.