Correlation, filtering, systems and degrees of freedom

Guest post by Richard Saumarez

Correlations are often used to relate changes in climate variables in order to establish a potential relationship between them. These variables may have been processed in some way before they are correlated, possibly by filtering. There has been some debate, which may have generated more heat than light, about the validity of combining these processes and whether they interact to make the conclusions drawn from them invalid. The object of this post is to explain the processes of correlating and filtering signals and the relationship between them, and to show that the results are predictable provided one takes care.

The importance of the Fourier Transform/Series.

Fourier analysis is of central importance in filtering and correlation because it allows efficient computation and gives theoretical insight into how these processes are related. The Fourier Transform is an analytical operation that expresses a function defined between the limits of minus and plus infinity in terms of (complex) frequency, and it is a continuous function of frequency. However, the Fourier Transform of a real-world signal, which is sampled over a specific length of time – a record – is not calculable. It can be approximated by a Fourier series, normally calculated through the discrete Fourier Transform algorithm, in which the signal is represented as the sum of a series of sine/cosine waves whose frequencies are exact multiples of the fundamental frequency (= 1/record length). Although this may seem to be splitting hairs, the differences between the Fourier Transform and the series are important. Fortunately, many of the relationships for continuous signals, for example a voltage waveform, are applicable to signals that are sampled in time, which is the way a signal is represented in a computer.

The essential idea of the Fourier transform is that it takes a signal that is dependent on time, t, and represents it in a different domain, that of complex frequency, w. An operation performed on the time domain signal has an equivalent operation in the frequency domain. It is often simpler, far more efficient, and more informative to take a signal, convert it into the frequency domain, perform an operation there, and then convert the result back into the time domain.

Some of these relationships are shown in figure 1.


Figure 1. The relationship between the input and output of a system in the time and frequency domains and their correlation functions. These are related through their (discrete) Fourier Transforms.

If a signal, x(t), passes through a linear system, typically a filter, the output, y(t), can be calculated from the input and the impulse response of the filter, h(t), which is, mathematically, its response to an infinite-amplitude spike that lasts for an infinitesimal time. The process by which this is calculated is called a convolution, often represented by “*”, so that:

y(t)=x(t)*h(t)

Looking at figure 1, this is shown in the blue upper panel. The symbol t, representing time, has a suffix, k, indicating that this is a signal sampled at t0, t1, t2, … Arrows represent the Discrete Fourier Transform (DFT), which converts the signal from the time to the frequency domain, and the inverse transform (DFT⁻¹), which converts it from the frequency to the time domain. In the frequency domain, convolution is equivalent to multiplication. We can take a signal and transform it from x(tk) to X(wn). If we know the structure of the filter, we can calculate the DFT, H(wn), of its impulse response. Writing F for the forward transform and F⁻¹ for the inverse transform, we can obtain the filter output from the following relationships:

X(w)=F[x(t)]

H(w)=F[h(t)]

Y(w)=X(w)H(w)

y(t)=F⁻¹[Y(w)]

What we are doing is taking each frequency component of the input signal and modifying it by the frequency response of the filter to get the output at that frequency. The importance of the relationships shown above is that we can convert the frequency response of a filter, which is how filters are specified, into its effect on a stretch of a time domain signal, which is usually what we are interested in. These deceptively simple relationships allow the effect of a system on a signal to be calculated interchangeably in the time and frequency domains.
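As a concrete illustration, here is a minimal numerical sketch of these relationships (not taken from the original post: the random input, sampling interval and first-order filter are arbitrary choices for demonstration):

```python
import numpy as np

N, dt = 1024, 1.0                       # record length and sampling interval (assumed)
t = np.arange(N) * dt
x = np.random.randn(N)                  # an arbitrary input signal x(t_k)

tau = 10.0                              # time constant of an illustrative first-order filter
h = np.exp(-t / tau)
h /= h.sum()                            # impulse response h(t_k), normalised to unit DC gain

X = np.fft.fft(x)                       # X(w) = F[x(t)]
H = np.fft.fft(h)                       # H(w) = F[h(t)]
Y = X * H                               # Y(w) = X(w)H(w): convolution becomes multiplication
y = np.fft.ifft(Y).real                 # y(t) = F^-1[Y(w)]

# Up to circular wrap-around, this matches direct convolution: np.convolve(x, h)[:N]
```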

Looking at the lower panel in Figure 1, there is a relationship between the (discrete) Fourier Transform and the correlation functions of the inputs and outputs. The autocorrelation function, which is a signal correlated with itself, is obtained by multiplying the transform by a modified form of itself, the complex conjugate, written as X*(w) (see below), which gives the signal power spectrum, and then taking the inverse transform. The cross-correlation function is obtained by multiplying the complex conjugate of the input by the DFT of the output to obtain the cross-power spectrum, Gxy(w), and taking the inverse transform, i.e.:

Gxy(w)=X*(w)Y(w)=X*(w)X(w)H(w)=Gxx(w)H(w)

Rxy(t)=F⁻¹[Gxy(w)]

Therefore there is an intimate relationship between time domain signals representing the input and output of a system and the correlation functions of those signals. They are related through their (discrete) Fourier Transforms.
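Continuing the same sketch, the correlation relationships in the lower panel of Figure 1 follow in a few lines (again illustrative, reusing the arrays X and Y from the snippet above):

```python
Gxx = X * np.conj(X)                    # power spectrum |X(w)|^2 (zero phase)
Gxy = np.conj(X) * Y                    # cross-power spectrum Gxy(w) = X*(w)Y(w)

rxx = np.fft.ifft(Gxx).real             # autocorrelation function of x
rxy = np.fft.ifft(Gxy).real             # cross-correlation between x and y
# These are circular correlations; zero-pad both records to 2N for linear ones.
```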

We now have to look in greater detail at the DFT, what we mean by a frequency component and what a cross-correlation function represents.

Figure 2 shows the reconstruction of a waveform, shown in the bottom trace in bold, by adding increasingly higher frequency components of its Fourier series. The black trace is the sum of the harmonics up to that point and the red trace is the cosine wave at each harmonic. It is clear that as the harmonics are summed, the sum approaches the true waveform, and when all the harmonics are used, the reconstruction is exact.

Up to this point, I have rather simplistically represented a Fourier component as a cosine wave. If you compare harmonics 8 and 24 in figure 2, the peak of every third oscillation of harmonic 24 coincides with a peak of harmonic 8. In less contrived signals this does not generally occur.


Figure 2. A waveform, shown in bold, is constructed by summing its Fourier components, shown in red. The black traces show the sum of the Fourier components up to each harmonic.
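A sketch of this kind of partial-sum reconstruction (the square wave is an arbitrary stand-in for the waveform in the figure):

```python
import numpy as np

N = 256
x = np.where(np.arange(N) < N // 2, 1.0, -1.0)   # an illustrative square wave
C = np.fft.rfft(x)                               # complex coefficients for harmonics 0..N/2

def partial_sum(n_harmonics):
    """Reconstruct x using only harmonics 0..n_harmonics."""
    Ck = C.copy()
    Ck[n_harmonics + 1:] = 0.0                   # discard the higher harmonics
    return np.fft.irfft(Ck, n=N)

# partial_sum(8) is a rounded approximation; partial_sum(len(C) - 1)
# reproduces x exactly, to rounding error.
```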

The Importance of Phase

Each Fourier component has two values at each frequency. Sine and cosine waves are generated by the rotation of a point about an origin (Figure 3): the projection of the point onto the y axis traces out a sine wave, and its projection onto the x axis traces out a cosine wave. When the Fourier coefficients are calculated, the contributions of both a sine and a cosine wave to the signal at that frequency are determined. This gives two values, the amplitude of the cosine wave and its phase. The red point on the circle is at an arbitrary position, so its projection becomes a cosine wave that is shifted by a phase angle, usually written as φ. Therefore the Fourier component at each frequency has two components, amplitude and phase, and can be regarded as a vector.

Earlier, I glibly said that convolution is performed by multiplying the transform of the input by the transform of the impulse response (this is true, since they are complex). This is equivalent to multiplying their amplitudes and adding their phases. In correlation, rather than multiplying X(w) and Y(w), we use X(−w), the transform represented in negative frequency: the complex conjugate. This is equivalent to multiplying their amplitudes and subtracting their phases. Understanding the phase relationships between signals is essential in correlation[i].


Figure 3. The Fourier component is calculated as a sine and a cosine coefficient, which may be converted to an amplitude, A, and a phase angle, φ. The DFT decomposes the time domain signal into amplitude and phase at each frequency component. The complex conjugate is shown in blue.

Signal shape is critically determined by phase. Figure 4 shows two signals, an impulse and a square wave, shown in black. I have taken their DFTs, randomised the phases while keeping the amplitudes the same, and then reconstructed the signals, shown in red.


Figure 4. The effect of phase manipulation. The black traces are the raw signals, and the red traces are the signals with a randomised phase spectrum but an identical amplitude spectrum.

This demonstrates that phase has a very important role in determining signal shape. There is a classical demonstration, which space doesn’t allow here, of taking broadband noise and either imposing the phase spectrum of a deterministic signal while keeping the amplitude spectrum of the noise unaltered, or doing the reverse: imposing the amplitude spectrum of the deterministic signal and keeping the phase of the noise unaltered. The modified spectrum is then inverse-transformed into the time domain. The phase-manipulated signal has a high correlation with the deterministic signal, while the amplitude-manipulated signal has a random correlation with it, underlining the importance of phase in determining signal shape.
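A minimal sketch of this experiment (the pulse shape and random seed are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512
x = np.zeros(N)
x[N // 4:N // 2] = 1.0                           # an illustrative square pulse

X = np.fft.rfft(x)
amplitude = np.abs(X)
random_phase = rng.uniform(-np.pi, np.pi, size=X.size)
random_phase[0] = 0.0                            # keep the DC term real

x_scrambled = np.fft.irfft(amplitude * np.exp(1j * random_phase), n=N)
# x_scrambled has the same amplitude spectrum as x but a completely different
# shape: most of what we perceive as "shape" lives in the phase spectrum.
```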


Figure 5. The phase spectra of delayed impulses.

A very important concept is that phase represents delay in a signal. A pure time delay is a linear change of phase with frequency, as shown in figure 5. The amplitude of the signal is unaltered, but in the case of a delay there is increasingly negative phase with frequency. However, any system that changes the shape of the input signal as it passes through, as is usually the case, will not have a linear phase spectrum. This is a particularly important concept in relation to correlation.
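This is easy to verify numerically. In the sketch below (record length and delay are arbitrary choices), delaying an impulse by d samples produces a phase spectrum that falls linearly with frequency, with slope set by the delay:

```python
import numpy as np

N, d = 64, 5
impulse = np.zeros(N)
impulse[0] = 1.0
delayed = np.roll(impulse, d)                    # the same impulse, d samples later

phase = np.unwrap(np.angle(np.fft.rfft(delayed)))
k = np.arange(phase.size)
assert np.allclose(phase, -2 * np.pi * k * d / N)   # linear phase: slope -2*pi*d/N per bin
```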

We are now in a position to understand the cross-correlation function. Looking at the formula for correlation shown in figure 1, the CCF is:

rxy(k) = Σ x(tn)·y(tn+k) (summed over n = 0 … N−1),

which can be normalised to give the correlation coefficient ρxy(k) = Σ (x(tn) − x̄)(y(tn+k) − ȳ) / (N·σx·σy), where N is the number of points in the record, x̄ and ȳ are the means, and σx, σy the standard deviations.

This rather formidable looking equation is actually quite straightforward. If k is zero, this is simply the standard formula for calculating the correlation coefficient, and x is simply correlated with y. If k is one, the y signal is shifted by one sample and the process is repeated, and we repeat this for a wide range of k. The function rxy is therefore the correlation between two signals at different amounts of shift, k, and this tells one something about the relationship between the input and output.
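Written out directly rather than via the DFT, the calculation looks like this (a deliberately slow but explicit sketch; the mean-removed, normalised form shown is one common convention):

```python
import numpy as np

def ccf(x, y, max_lag):
    """Circular cross-correlation coefficient of x and y at shifts k = 0..max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    N = len(x)
    # np.roll(y, -k)[n] = y[n + k], so each term is x(t_n) * y(t_n+k)
    return np.array([np.dot(x, np.roll(y, -k)) / N for k in range(max_lag + 1)])

# ccf(x, y, 0)[0] is the ordinary correlation coefficient between x and y.
```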

We have a signal x(t) which has been passed through a physical system with specific characteristics, which results in an output y(t), and we are trying to deduce the characteristics of the system, h(t). Since, from Figure 1, the DFT of the output, Y(w), is the product of the DFT of the input, X(w), and that of the impulse response, H(w), could we not simply divide the DFT of the output by the DFT of the input to get the response? In principle we can, provided the data are exact:

H(w)=Y(w)/X(w)

However, most real-world measurements contain noise, which is added to the inputs and outputs, or, even worse, other deterministic signals, and this renders the process error prone. The results of such a calculation are shown below (figure 6), illustrating the problem:


Figure 6. Left: Input (black) and output (red) signals for a system. Right: the calculated impulse response with 2.5% full scale amplitude noise added to the input and output (black) compared with the true impulse response (red). The low pass filtered response is shown in green.
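A sketch of this naive deconvolution (the first-order system, noise level and record length are illustrative assumptions, chosen to roughly match the figure):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024
t = np.arange(N)
h_true = np.exp(-t / 20.0)                       # an illustrative first-order impulse response
x = rng.standard_normal(N)                       # input
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_true)).real   # noise-free output

# Add 2.5%-of-full-scale noise to both records, as in the figure:
xn = x + 0.025 * np.abs(x).max() * rng.standard_normal(N)
yn = y + 0.025 * np.abs(y).max() * rng.standard_normal(N)

h_est = np.fft.ifft(np.fft.fft(yn) / np.fft.fft(xn)).real
# Wherever X(w) is small the division amplifies the noise, so h_est is a very
# ragged estimate of h_true unless it is smoothed afterwards.
```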

This calculation illustrates another very important concept: linear physical systems store and dissipate energy. For example, a first order system, which could be the voltage output of a resistor/capacitor network or the displacement of a mechanical spring/damper system, absorbs energy transients and then releases the energy slowly, resulting in the negative exponential impulse response shown in figure 6. The variables that fully define a first order system are its gain and its time constant. The phase spectrum of the impulse response is distinctly non-linear. Attempting to measure another variable, for example delay, which implies a linear phase response, is doomed to failure because it doesn’t really mean anything. For example, if one is looking at the relationship between CO2 and temperature, this is likely to be a complex process that is not defined by delay alone, and therefore the response of the system should be identified rather than a physically meaningless variable.

Noise and Correlation

Correlation techniques are used to reduce the effects of noise in a signal. They depend on the noise being independent of, and uncorrelated with, the underlying signal. As explained above, correlation is performed by shifting a signal in time, multiplying it by itself (auto-correlation) or by another signal (cross-correlation), summing the result, and repeating this at every possible value of shift.


Figure 7. Broadband noise (black) with the autocorrelation function (red) superimposed.

In broadband noise, each point is, by definition, uncorrelated with its neighbours. Therefore, in the auto-correlation function, when there is no shift there will be perfect correlation between the signal and its unshifted self. For all other values of shift the correlation is, ideally, zero, as shown in figure 7.

The autocorrelation function of a cosine wave is obtained in the same manner. When it is unshifted, there will be a perfect match and the correlation will be 1. When shifted by ¼ of its period the correlation will be zero; it will be −1 when shifted by ½ a period and zero again when shifted by ¾ of a period.

[Figure: the autocorrelation function of a cosine wave]

The ACF of a cosine wave is a cosine wave of the same frequency whose amplitude is proportional to the square of the amplitude of the original wave. However, if there is noise in the signal, the value of the correlation will be reduced.

Figure 8 shows the ACF of broadband noise with two sine waves embedded in it. This demonstrates the recovery of two deterministic signals that have serial correlation and are not correlated with the noise. This is a basis for spectral identification in the presence of noise.
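A sketch of the same idea (the two frequencies and the noise amplitude are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4096
t = np.arange(N)
deterministic = np.sin(2 * np.pi * 0.01 * t) + np.sin(2 * np.pi * 0.037 * t)
x = deterministic + 2.0 * rng.standard_normal(N)  # two sinusoids buried in noise

X = np.fft.fft(x)
acf = np.fft.ifft(X * np.conj(X)).real / N        # ACF via the power spectrum
# Away from zero lag the noise contribution collapses towards zero, leaving
# approximately the sum of two cosines at the original frequencies.
```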


Figure 8. The ACF of a signal containing two sine waves of different frequencies embedded in noise. The ACF (red) is the sum of two cosine waves with the same frequencies.

A very important feature of the ACF is that it destroys phase information. Referring to Figure 1, the DFT of the ACF is X(w) (or Y(w)) multiplied by its complex conjugate, which has the same amplitude and negative phase. Thus, when they are multiplied together, the amplitudes are squared and the phases add to zero. This is the “power spectrum”, and the ACF is its inverse DFT. Therefore the ACF is composed entirely of cosine waves and is symmetrical about a shift of zero.

However, the cross-correlation function, which is the inverse DFT of the cross-power spectrum, contains phase. By multiplying the DFT of the output by the complex conjugate of the input in the frequency domain, one is extracting the phase difference, and hence the delay, at each frequency between the input and the output, and the cross-correlation function reflects this relationship. If the power spectrum, i.e. the DFT of rxx(t), is Gxx(w) and the cross-power spectrum, the DFT of rxy(t), is Gxy(w), then:

H(w)=Gxy(w)/Gxx(w)

Figure 9 shows the impulse response calculated from the same data as in figure 6; the error is very much reduced, because correlation is a procedure that separates the signal from the noise.


Figure 9. Estimate of the impulse response using the data in figure 6 via cross-correlation (black) and pre-filtering the data (green).
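In code, the cross-spectral estimate might look like this (a sketch reusing the noisy records xn and yn from the deconvolution example above):

```python
import numpy as np

Xn, Yn = np.fft.fft(xn), np.fft.fft(yn)
Gxx = (Xn * np.conj(Xn)).real                    # power spectrum of the input
Gxy = np.conj(Xn) * Yn                           # cross-power spectrum
h_est = np.fft.ifft(Gxy / Gxx).real
# With a single raw record this reduces algebraically to Yn/Xn; in practice
# the spectra are smoothed or averaged before dividing (as described next),
# which is where the noise rejection comes from.
```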

These calculations are based on a single record of the input and output. When available, one uses multiple records and calculates the estimated response E[H(w)] from the averaged power spectra:

E[H(w)] = <Gxy(w)> / <Gxx(w)>

where <x> denotes the average of x. This leads to a better estimate of the impulse response. Averaging is possible because correlation changes the independent variable from time to the relative shift between the signals, so aligning the records.
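A sketch of this averaging, forming the averages by segmenting one long pair of records (the segment count M is an arbitrary choice; see the next section for the trade-off involved):

```python
import numpy as np

def averaged_response(x, y, M):
    """Estimate the impulse response from M equal-length segments of x and y."""
    L = len(x) // M
    Gxx = np.zeros(L)
    Gxy = np.zeros(L, dtype=complex)
    for m in range(M):
        Xs = np.fft.fft(x[m * L:(m + 1) * L])
        Ys = np.fft.fft(y[m * L:(m + 1) * L])
        Gxx += (Xs * np.conj(Xs)).real           # accumulates <Gxx(w)>
        Gxy += np.conj(Xs) * Ys                  # accumulates <Gxy(w)>
    return np.fft.ifft(Gxy / Gxx).real

# e.g. averaged_response(xn, yn, 8): markedly less noisy than the single-record
# estimate, at the price of losing frequencies below 1/(segment length).
```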

One simple check, assuming that one has enough individual records, is to calculate the coherence spectrum. This is effectively the correlation between the input and output at each frequency component in the spectrum. If it is significantly less than 1.0, it is likely that there is another input that hasn’t been represented, or that the system is non-linear.
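For reference, the (magnitude-squared) coherence referred to here is conventionally defined from the averaged spectra as:

γ²(w) = |<Gxy(w)>|² / (<Gxx(w)> <Gyy(w)>)

which lies between 0 (no linear relationship at that frequency) and 1 (a perfect, noise-free linear relationship).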

One of the major problems in applying signal processing methods to climate data is that there is only one, relatively short, record and therefore averaging cannot be applied to improve estimates.

Improving resolution by record segmentation and filtering.

One can improve estimates of the response if one has a model of what the signal represents. If one is dealing with a short term process, in other words the output varies quickly and transiently in response to a short term change in input, one can estimate the length of record required to capture that response and hence the frequency range of interest. This enables one to segment the record into shorter sections. Each segment has the same sampling frequency, so the highest frequency is preserved. By shortening each record we have thrown away low frequencies, because the lowest frequency is 1/(record length). However, we have created more records containing the high frequencies, which can be averaged to obtain a better estimate of the response.

The other strategy is filtering. This, again, involves assumptions about the nature of the signal. Figure 10 shows the same data as in figures 7 & 8 after low-pass filtering. The ACF of the noise is no longer an impulse but is spread out about t=0. However, the ACF of the deterministic signal is recovered with higher accuracy.

This can be done here because the signal in question occupies a very small, low frequency bandwidth and is not affected by the filtering (figure 11). The effects of the filter are easily calculable. If it has a frequency response A(w), the input and output spectra become X(w)A(w) and Y(w)A(w). The cross-power spectrum is therefore simply:

Gxy(w) = A*(w)X*(w)·A(w)Y(w) = |A(w)|²X*(w)Y(w)


Figure 10. The ACFs using the same data as in Figures 7 & 8. Note that the ACF of the noise is no longer an impulse at t=0 and that there has been a considerable improvement in the ACF of the two sine waves, as they now represent a higher fraction of the total power in the signal.

|A(w)|² is the DFT of the autocorrelation function of the filter; it has no phase shift and will not affect the phase of the cross-power spectrum, provided the same filter is used on the input and output. This is because the phase reversal of the complex conjugate of the filter applied to one signal cancels the phase of the filter applied to the other, so the timing relationships between the input and output are not affected. Provided the spectrum of the ACF lies in the pass band of the filter, it will be preserved. In figure 9, the estimated impulse responses are shown using filtered (green) and non-filtered data. If one wishes to characterise the response by assuming it is a first order system (which this is), one can fit an exponential to the data, so obtaining its gain and time constant. The filtered result gives marginally better estimates, but one has to design the filter rather carefully, appreciate that filtering modifies the impulse response, and correct the results for this.
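This cancellation is easy to check numerically (a sketch reusing x, y and h_true from the deconvolution example; treating h_true as the filter is an arbitrary choice):

```python
import numpy as np

A = np.fft.fft(h_true)                           # the filter's frequency response A(w)
Gxy_raw = np.conj(np.fft.fft(x)) * np.fft.fft(y)
Gxy_filt = np.conj(np.fft.fft(x) * A) * (np.fft.fft(y) * A)   # filter both records
assert np.allclose(Gxy_filt, np.abs(A) ** 2 * Gxy_raw)
# The cross-power spectrum keeps exactly the same phase; only its amplitude
# is weighted by |A(w)|^2, as in the equation above.
```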

Thus, it is possible to filter signals and perform correlations, provided that the frequency band being removed does not overlap the system response too much, as illustrated in figure 11, and one is careful.


Figure 11. The signal spectrum is composed of the true signal and noise spectra. A good filter response (green) will attenuate (grey) some of the noise but preserve the true signal, while a bad filter will modify the true signal spectrum and hence the ACF.

In practice, however, there is likely to be an overlap between the noise and signal spectra. If the system response is filtered, the correlation functions will be filtered too, becoming widened and oscillatory. In this case the results won’t mean much, and almost certainly will not be what you think they are! There are more advanced statistical methods for determining which parts of the spectra contain deterministic signal but, in the case of climate data, the short length of the modern record and the poor quality of the historical record make this very difficult.

Degrees of Freedom.

Suppose we have two signals and we want to determine whether they have different means. They both have a normal distribution and the same variance. Can we test the difference in means by performing a “Student’s t” test? This will almost certainly be wrong, because most simple statistical tests assume that each observation is independent. In figure 7, the ACF is an impulse and nominally zero elsewhere, showing that each point is independent of the others. If the signal is filtered, the points are no longer independent, because we have convolved the signal with the impulse response of the filter, as shown in figure 10. Looking at figure 1, the time domain convolution is given by:

y(tk) = Σ x(tn)·h(tk−n) (summed over n)

This is similar to the correlation formula, except that the impulse response is reversed in time. It shows that the output at any point is a weighted sum of the inputs that preceded it, which are therefore no longer independent. Therefore, in applying statistical tests to signal data, one has to measure the dependence of each sample on the others by using the autocorrelation of the signal to calculate the number of independent samples, or “degrees of freedom”.
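A common approximate correction, sketched below, divides the nominal sample count by a factor built from the summed autocorrelation coefficients; the truncation lag is an assumption that has to suit the data:

```python
import numpy as np

def effective_n(x, max_lag=None):
    """Approximate effective number of independent samples:
    N_eff = N / (1 + 2 * sum of autocorrelation coefficients over positive lags)."""
    x = x - x.mean()
    N = len(x)
    acf = np.correlate(x, x, mode="full")[N - 1:] / np.dot(x, x)
    if max_lag is None:
        max_lag = N // 10                        # a pragmatic truncation (assumption)
    return N / (1.0 + 2.0 * acf[1:max_lag + 1].sum())

# For white noise, effective_n(x) is close to len(x); after low-pass filtering
# it can be far smaller, which is why a naive Student's t test overstates
# the significance of a difference in means.
```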

Conclusion.

Correlation and filtering are highly interdependent through a set of mathematical relationships. The application of these principles is often limited by signal quality, and the “art” of signal processing is to get the best understanding of a physical system in the light of these constraints. The examples shown here are very simple and give well defined results, but real-world signal processing may be messier, require much more statistical characterisation, and give results that are limited statistically by inadequate data.

One always has to ask what the goal of processing a signal is, and whether it makes any sense physically. For example, as discussed earlier, cross-correlation is often used interchangeably with “delay”, and this is only meaningful if one believes that the system in question has a phase response that is linear with frequency. If one is estimating something that is not meaningful, additional signal processing will not help.

Rather, if one has a model of the system, one can calculate the parameters of the model and, having done so, one should look carefully at whether the model accounts for the measured signals. Ideally, this should be tested with fresh data if available; alternatively, one can segment the raw data, use one half to create the model, and test it with the remaining half.

Modern scripting languages such as “R” allow one to perform many signal processing calculations very easily. The skill in using these programs lies not in applying them blindly to data but in deciding how to use them appropriately. Speaking from bitter experience, it is very easy to make mistakes in signal processing, and it is difficult to recognise them. These mistakes fall into three categories: programming errors, procedural errors in handling the signals, and not understanding the theory as well as one should. While modern scripting languages are robust and may largely eliminate straight programming errors, they most certainly do not protect one from making the others!


[i] I have used the electrical engineering convention in this post, i.e.: the basis function of the Fourier Transform is a negative complex exponential.

Comments
Great Greyhounds
April 9, 2013 3:55 am

A first for me! I understood the entire article! I had the chance to work in a field where FFTs, DFTs, and ACFs were an everyday occurrence. The biggest point in the article is in the conclusion, where he states: “Speaking from bitter experience, it is very easy to make mistakes in signal processing, and it is difficult to recognise them.”
How true! And I’m not impressed with some of the mathematics errors being made by the ‘Climate Clergy’ as of late…

Ian H
April 9, 2013 4:01 am

Have Climate Audit and WUWT swapped places today? I’m more used to seeing this kind of heavy statistical analysis on Climate Audit. Meanwhile Climate Audit has a cartoon by Josh.

richardscourtney
April 9, 2013 4:02 am

Richard Saumarez:
You have provided a fine article in that it provides a good overview of the subject and it contains much important detail. However, it reads as though it is the text for a tutorial lecture intended to introduce the subject.
The introductory paragraph of your article only states what you intend “the object” of your article to be.
I write to respectfully suggest that you amend your first paragraph to convert it into an introductory abstract stating the main points which your article explains (perhaps as bullet points). I make this suggestion because I suspect few people who do not know its contents will read much of your article in its present form: they will be constantly thinking “So what?” before deciding to not bother reading further.
Richard

April 9, 2013 4:20 am

Jean Baptiste Joseph Fourier (1768 – 1830) was a brilliant Mathematician and Physicist who did not live to see the formulation of the Laws of Thermodynamics. His series expansions are used in calculators to derive the standard Trigonometry values. Fourier is misquoted as the creator of the greenhouse hypothesis. “The Most Misquoted and Most Misunderstood Scientific Papers in the Public Domain” by Timothy Casey, posted at http://geologist-1011.mobi has the explanation of the original Fourier statement as well as the Tyndall mistake of opacity and absorption. The Casey site also has the original translation from French for Fourier and the complete published work by Tyndall on the absorption experiments. The AGW hypothesis suffers from a lack of transparency, compounded by stupidity and coupled with dishonesty. These defects cannot be filtered out.

Useless grad student
April 9, 2013 4:37 am

When you calculate the magnitude and phase from the real and imaginary components at each frequency, does the reconstructed function look like the following?
sum(Mag*cos(frequency*time+phase)) from frequency 0 to n.

Mike McMillan
April 9, 2013 4:48 am

We have a straightforward CO2 signal since 1958, and spotty CO2 data points prior. Can we come up with correlation numbers, 0 to 1, with the various temperature records since then? Pretending that the records we have from GISS and Hadley are real, of course.

peter azlac
April 9, 2013 5:05 am

Do you think “Phil” will find this easier than Excel?

April 9, 2013 5:14 am

In order to use the Fourier transform for time series analysis, should one not first determine whether or not the time series data is stationary, at least in the weak sense
http://en.wikipedia.org/wiki/Stationary_process
so as to determine whether or not FT analysis is applicable?

rbabcock
April 9, 2013 5:25 am

Thank you Richard for a great explanation in understandable terms. It reminded me why I went into building things and let someone else do the math behind it.

Andor
April 9, 2013 5:33 am

I agree fully with the (k)= *1 but certainly not with the DFT amplitude! It has to correlate directly and if proportional to the mean frequency high-point. The relevance here is surely practical but to multiply the ACF spectrum. Now…processing the impulse spectrum assumption to zero which is in this case not possible but if, ACF=filtered points to a much wider mathematical spectrum.
Characterize the following: Composition of attenuation noise and the statistical noise, will not give cross-over to auto correlation! How can it be??The characters does not multiply. Even divide the sum of the 2 points directly showing a mean point of amplitude, it won’t sum-up? I would rather go for the Effix-drp method and will slowly trend it to fgg-trp analogy for this instance? Take Z=(*1) over the Fx1.3 -g2.4 where g=gravity and not velocity, it will surely put more pressure on the correlated values of the synoptical chart proportional to force(f-d*1)

April 9, 2013 6:04 am

A correlation coincidence, or maybe not.
http://www.vukcevic.talktalk.net/Peru-Ice.htm
North and the South hemisphere marching in step, warmth in the north with ice in the south.

Stephen Richards
April 9, 2013 6:14 am

Richard
Good try. I hated fourier analysis at uni and I hate it now.

Editor
April 9, 2013 6:39 am

Great post, Richard… Brought back Dif Eq nightmares!
The same principles that apply to electrical engineering and seismic signal processing are all too frequently ignored in climate signal processing.

banjo
April 9, 2013 6:53 am

Wanna hear something funny?
Just watching cnn weather here in the uk.
Rising co2 causes greater turbulence and is a danger to flights.
I quote “Better hang on tight!”
Now stop laughing! Get up off the floor and dry your eyes, you chaps have work to do :)

Paul Westhaver
April 9, 2013 7:14 am

Fantastic!!! Move over greenie biologists and let the EEs do the modelling. Great article.

DirkH
April 9, 2013 7:16 am

Joseph A Olson says:
April 9, 2013 at 4:20 am
“Jean Baptiste Joseph Fourier (1768 – 1830) was a brilliant Mathematician and Physicist who did not live to see the formulation of the Laws of Thermodynamics. His series expansions are used in calculators to derive the standard Trigonometry values. Fourier is misquoted as the creator of the greenhouse hypothesis. “The Most Misquoted and Most Misunderstood Scientific Papers in the Public Domain” by Timothy Casey”
That’s just so beautiful! Arrhenius misquotes Fourier! And to this day warmists claim Fourier claimed a greenhouse effect through CO2 – which would have amazed me as during his day the battle between atomists and non-atomists was still raging.
(Even in the 1890’s it still was:
“During the 1890s Boltzmann attempted to formulate a compromise position which would allow both atomists and anti-atomists to do physics without arguing over atoms.”
http://en.wikipedia.org/wiki/Boltzmann
)

William C Rostron
April 9, 2013 7:19 am

Articles like this are one reason I read WUWT daily. I was trained in electrical engineering, and currently do power plant simulation for a living. When one has fought through trying to tune a high order interdependent thermal-hydraulic model for both accuracy and stability, one can appreciate the practical implication of these mathematical laws.
One of the major problems with sampled data systems is the ability to know that samples are truly random with respect to the underlying data. Periodic sampling will always give rise to the problem of aliasing. Once aliasing occurs, the resulting information is forever corrupted, unless perhaps, and only in a very narrow sense, essential characteristics of the original data are somehow already known. For climate data this is problematic since periodicities exist both daily and seasonally. An obvious example of the aliasing problem is trying to tease out accurate numbers from periodic samples taken only twice a day, as most of the early temperature records were done. Only random sampling prevents aliasing and preserves some characteristics of the underlying data. Only random sampling can ensure that statistical analyses are valid. But how can we know that the sampling is truly random?
It is interesting to me that many essential characteristics of periodic data can be teased out of random sampled data sets (tamino link). The question in my mind is: can anyone have much assurance that climate data has been randomized sufficiently, or sampled sufficiently, to have confidence that the underlying data is valid to the claimed accuracies? I cut my teeth in metrology before doing controls engineering, then doing math modeling. I have a hard time believing that temperatures over the world were known to tenths of a degree even a hundred years ago, much less before that. I’ve personal experience with measurement bias and test equipment accuracy, and the effects on process measurement. It is not a trivial exercise to get temperature uncertainties to less than one degree C in a modern industrial environment, where everything about the problem that can be known is known. How much greater uncertainty exists for proxies that were developed over hundreds or thousands of years where no instruments or calibrations exist at all?
That’s not to say that there isn’t any useful information there. You use what you can use. We have a data archival system here that stores data in a compressed format that only preserves accuracy at specific pseudo-random sample points. The idea is to preserve useful underlying data without using up terabytes of disc space. One of the things that happens with this is, the archived data is useful for some kinds of analysis but completely useless for other kinds of analysis. The trend of data over weeks, months, and years is useful, because the frequency of the archive data is fast relative to these periods. The data is useless for trying to analyse transient events that we know occurred in the past, but have lost the original records. Because the compression criteria destroy the short term timing information, no valid analysis can be determined for periods shorter than, say, one hour. Things that we know have to be correlated due to design can’t be proven in the archived data. No amount of sophisticated analytical techniques can fix this. The data is corrupted in some essential way, and the corruption can’t be characterized, so transient analysis always fails. This is generally true with any data corruption. I’ve seen this talked about on this blog and elsewhere, but I don’t know that it is sufficiently addressed in many climate studies.
-BillR

April 9, 2013 7:27 am

BTW….Fourier did not live to see the Periodic Table of Elements, the Electromagnetic Spectrum or Theory of Atoms. For an excellent history on EMR see, http://principia-scientific.org/publications/History-of-Radiation.pdf by Dr Matthias Kleespies. Fourier assumed that all energy within the Earth’s crust was residual heat of origin, which he calculated having the remaining capacity to melt a one meter square, three meter high block of ice per year. One glance at an active volcano would disprove this assumption. To extrapolate, from a single observation of a glass covered box, that this great scientist, with no knowledge of atoms, molecules, radiation or nuclear fission….was the FATHER OF GREENHOUSE GAS HYPOTHESIS….is shameless slander.

FrankK
April 9, 2013 7:29 am

A highly mathematical and interesting article and I commend the author.
I often do peer review of earth science reports and often pull up authors for using “length of time”.
I know it has now become part of the colloquial language, but why use it in a scientific or engineering report?
Time does not have length. They are two different physical units – Time [T] and Length [L]. Why not use “time period” or “duration of time”?
Cheers.

John Blake
April 9, 2013 7:38 am

As noted by statistical cognoscenti, Global Climate Models (GCMs) universally assume that simplistic linear rather than non-linear/complex relationships govern atmospheric/oceanic circulation patterns, especially when discussing CO2. Fourier’s math/statistical ingenuities aside, this root error invalidates all conventional conclusions whatsoever– regardless of computer power, input data, weighted variables, results are neither right nor wrong but meaningless, sheer bumpf.
Modelers’ sovereign neglect of basic physics –hot air rises, water flows downhill– merely aggravates their systemic failure to assess real-world phenomena by valid technical means. Given most sad-sack politicians’ abusive neglect of critical thought despite consequences affecting tens of million human souls, it’s hard to say who is more culpable– the pseudo-scientists who push this nonsense, or the obtuse jacks-in-office who use AGW Catastrophism as a club to beat down opposition, blithely sabotaging global energy economies in the process.

OldWeirdHarold
April 9, 2013 7:47 am

I’m waiting with bated breath for the next article which will tie this in to the hockey schtick business.

April 9, 2013 8:01 am

I think the most important & interesting part of this post is figure 6 & this text:
” if one is looking at the relationship between CO2 and temperature, this is likely to be a complex process that is not defined by delay alone and therefore the response of the system should be identified rather than a physically meaningless variable.”
This statement implies that a convolutional model for climate forcing might be applicable.
That is an absolutely profound statement as it is a completely different way of looking at climate.
The convolutional model for climate is a model I have contemplated investigating (coming at it from a geophysical data perspective & the signal processing we do on seismic data) & I have wondered if anyone in the climate community (or weather forecasting community) has ever attempted. Using the convolutional model defined in the paper :
y(t)=x(t)*h(t)
Temperature could be looked at as the system output y(t). CO2 forcing (or any other forcing of choice) could be viewed as the filter x(t), and underlying it there is some sort of underlying climatic impulse response h(t).
Now this is where it gets interesting in that we have a measurement for y(t) – our temperature time series – and x(t) – our CO2 time series – and what we are interested in is h(t) – the climatic impulse response function. Now we need to assume stationarity & linearity (and those assumptions ultimately would need to be tested) but in theory, using the convolutional model, we can calculate h(t). This is very similar to what we do in seismic inversion, although there we are solving for x(t), the earth filter (we know our time series output, which is a convolution of a known impulse response h(t) (known because we generate it) and the earth filter x(t)).
As I haven’t attempted this, I have no idea what h(t) might look like, but it opens up a huge set of questions for investigation. Would h(t) vary in different place around the earth (local time series vs a global average time series)? What would be the physical significance of h(t) to the underlying physics of the atmosphere? Would this fundamentally change how we view the climate? Could climate be represented as the sum of a series of forcings & impulse responses? Could you isolate the effects of various forcings in this manner (treating other forcings as noise while isolating the forcing of interest) ? Would h(t) be the same for different forcings or would it be different? This could be applied not just to temperature as the output function y(t) but could be also applied to any other measurable time series, such as humidity, pressure, heights , thicknesses, etc. What would the h(t) functions of these other output time series look like ? What would be the physical meaning & relationships between these various h(t) functions ? There are tons of different y(t) functions to investigate (think about all the climate datasets out there that are expressed as a function of time – atmospheric temp data, oceanic temp data, humidity data, pressure data, global time series, sub-regional time series, local time series, etc etc) and lots of different potential forcings ( CO2, solar variability (in various forms), ENSO, PDO, AMO, etc etc). In theory, one could look for h(t) functions between every combination of response & forcing. How would these h(t) vary? Would there be systematic spatial variation? What would be the physical significance of these variations with the underlying atmospheric physics?
The most important question is : Could we use these various h(t) functions to develop models to predict future climate & maybe even weather ? IF the answer yes, then climate science might be turned on end by this approach.
This is a short list of questions that could be investigated using a convolutional model for climate (and potentially weather) forecasting. It is daunting, which is why I have never dived into it – it is a full-time research job for someone (and I already have a full-time job).
I hope this post inspires someone with the time, skills & interest to dive into this research.
As far as I know, no one has ever approached climate research from this perspective ( if anyone knows of research taking this approach, please post for all to see & a summary of the results). This is such a fundamentally different way of looking at climate that the results have the chance to be revolutionary & transformational, especially if h(t) can be used in a predictive manner. Of course, as it goes with science, this line of research may completely bomb out just as easily (a failed experiment, if you will) , but no one will know until someone does the work. I wish it could be me, but I just don’t have time.
Again, I hope this inspires someone with the skills & time to dive into this line of research.

April 9, 2013 8:07 am

A quick follow up comment on my comment :
” Now we need to assume stationarity & linearity (and those assumptions ultimately would need to be tested)”.
We always assume that climate & weather is non-linear but perhaps it just appears chaotic / non-linear to us because there is a series of forcings x(t) & associated impulse responses h(t) that are all interwoven into the total climatic response but in fact each individual x(t)*h(t) system is actually linear & it’s just that h(t) is a complicated function so that the system just appears chaotic to us.
… just more food for thought & additional research needed associated with the convolutional climatic model.

James at 48
April 9, 2013 8:33 am

RE: “The same principles that apply to electrical engineering and seismic signal processing are all too frequently ignored in climate signal processing.”
Yes! (As a side note, I knew all those upper division and graduate EE and Calculus courses would come in handy some day!)

David in Texas
April 9, 2013 8:37 am

Richard,
“A sine and cosine waves are generated by a rotation of a point about an origin (Figure 3). If it starts on the y axis, the projection of the point is a sine wave and its projection on the x axis is a cosine wave.”
Thank you for this article. I find it most enlightening, but I’m confused. Above is what you “wrote” and below is what I “heard”.
“If it starts on the y axis” > Then I imagine a point in the diagram on the little circle at x = 0, y = -1 (or minimum), because the angle measurement seems to be starting from the down position.
“the projection of the point [on to the y-axis] is a sine wave” > here I imagine the point will be projected again to x = 0 (zero, because it is the starting point) and to y = -1 (or minimum, because that appears to be what the projection would look like to me and what appears in the black trace in the figure), but sin(0) is not -1.
“its projection on the x axis is a cosine wave.” > the projection I believe would be x = 0, y = 0 (rotating the x and y axes), but cosine(0) is not 0.
What am I doing wrong?
Thanks again,
David

DesertYote
April 9, 2013 8:38 am

Very nice review, but it makes me miss my last job. What I am doing now is rather boring. The most complicated math I use now is some basic graph theory 🙁
BTW, I have been getting interested in wavelet analysis.

Greg Goodman
April 9, 2013 8:52 am

Excellently written article, clear and concise.

Greg Goodman
April 9, 2013 8:58 am

Jeff L ” Now we need to assume stationarity & linearity (and those assumptions ultimately would need to be tested) but in theory, using the convolutional model, we can calculate h(t) .”
Temperature will not be stationary, however, dT/dt probably will be. That is the first step I usually take when looking at temperature time series.

April 9, 2013 9:04 am

Thanks, Richard.
Your excellent article transported me back to electronics engineering school.
This is one way to look at climate data and clearly see that we know almost nothing yet.
I hope more knowledge will bring back sanity to the climate debate.

April 9, 2013 9:05 am

I second the comments of richardscourtney and OldWeirdHarold.
My reaction upon seeing the post was the same as that of, I suspect, a not insignificant minority: the post seems unlikely to contain anything I don’t already know (well, knew at one time), and, since its introduction fails to identify a specific question before the house to which the post would apply its basic signal processing, reading it does not appear worth my time. So I haven’t read it.
This is not to say that it may not be a valuable post for many here; it’s just to let you know how it plays to one segment of your audience.

Greg Goodman
April 9, 2013 9:15 am

vukcevic says:
A correlation coincidence, or maybe not.
http://www.vukcevic.talktalk.net/Peru-Ice.htm
North and the South hemisphere marching in step, warmth in the north with ice in the south.
===
How about ANTARCTIC melting period (inverse) correlating with ARCTIC Oscillation index?
http://climategrog.wordpress.com/?attachment_id=208
Ironically, it appears to correlate somewhat better than the Arctic melting period does. That was quite a surprise.
This is still work in progress, more details once I’ve checked it more thoroughly.
Any suggestions on how to get a numerical measurement of correlation between daily AO and once per year melting periods?

April 9, 2013 9:38 am

Excellent article. I have always wondered why FFT analysis has not been applied to the GW data. My only guess is that they want the scam and not the truth.
It is amazing what an FFT can do. In 1977 I was a nuclear engineer (with a second degree in applied mathematics) responsible for starting up a nuclear power plant. I had spent weeks trying to find the cause of a problem with the flow of one of the pumps. One of the engineers was using the HP 5420A Digital Signal Analyzer (I believe this was the very first FFT analyzer of its kind) on the vibration transducer in an effort to see if there were bearing problems. My math degree kicked in and a light-bulb went off in my head. I borrowed the HP 5420 and hooked it up to the flow transducer and pressure transducer. The signal was coupled through a capacitor so that only the AC signal and not the DC voltage was going to the HP. Within 15 – 20 minutes I had determined that one of the volutes on the pump was causing minor, but enough, cavitation to create the difference in flow from this pump. I later used this to tune the feed-water control system to such a degree that we were able to get about 2% more power out of the plant. Two percent of 1000 megawatts is 20 megawatts more power! I went on to solve several other problems with the analyzer. The HP allowed me to solve problems in a few hours instead of days or weeks. The HP analyzer became my best friend; I used it more than my new-fangled pocket calculator. Many of the new “intelligent” flow, level, and temperature control systems have a bare-bones FFT analyzer in their electronics today. If the AGW crowd really wants to know the truth they will start analyzing the data correctly, not massaging it to fill their pocket books.

cd
April 9, 2013 9:57 am

Richard
Firstly, whoa!
That’s a lot to take in. Would it not have been easier just to refer to the Wiener–Khinchin theorem? It would have cut down your article by half. Perhaps I’ve missed the point (or am completely off track) but it did seem like hard going until I got to the best bits.
Secondly, do you not think you are taking a sledgehammer to crack a nut? Surely simple regression analysis will do for most bivariate issues. I agree that the t-stat may be a little too primitive, but it could be vetted with other tests such as the Durbin–Watson stat and simple cross-correlation. Don’t get me wrong, it seems really good to look at every scenario, but everyone seems to be data processing nuts in this field.
Thirdly, while I’m not questioning anything you say (you’re obviously the expert), you fleetingly mention phase shift as a result of certain types of filtering. In one of the examples above you show what looks like a Butterworth filter; this, as you allude to, will introduce a phase shift, and surely this will affect any subsequent analysis. Perhaps a more elaborate discussion of this, with results from a single study, would make for another good article.

cd
April 9, 2013 10:09 am

Joe Born
Well bully for you. If the article is below you then no need to comment unless you want a brownie point for being so knowledgeable.
Yes, the article is long winded and probably a referral to some good online articles would’ve saved a lot of screen space, but he at least has put something out there and does attempt, I admit unsuccessfully, to communicate some commonly applied techniques in a less technical fashion. In short, he hasn’t condescended to the reader… unlike you, I hasten to add.
I think a good follow up would be to show why such approaches are better than just using more common approaches or we even need worry about such things in this (or any) field.
You don’t have to please all the people all the time; that’s why it’s good for websites to host all kinds of articles. Just thought you might want to know what other posters think.

otsar
April 9, 2013 10:12 am

This is a good introductory article on data processing and should be a must read and comprehend for scientists interpreting data.
Someone who has the time and energy should write an article on the acquisition of the data: the data from the point of the measurement – Pt RTD, TC, pH probe etc – to the point where it is stored on disk in a useful format for data processing.
This is one of the areas I work in. Very often I run across situations where the investigator is not purely measuring what they believe they are measuring.
The article should address the problems of the actual measurement process: linear, non-linear sensors, S&H aperture time, sample interval choice, A/D conversion problems, driver problems, operating system problems (not real time vs real time), choice of floating point or binary, etc, etc.
The way I see it, this is the point where the starting materials for the sausages are made.

April 9, 2013 10:14 am

Great post Richard!
Do you mind if I keep a copy for reference; not for publicizing?

Reed Coray
April 9, 2013 10:20 am

I want to second (third, fourth, ….) Richard’s comment to the effect that access to a good “tool box” of processing algorithms (e.g., the R suite of tools) isn’t sufficient to employ properly used those tools. My example is the man who is an expert in the use of every tool known to man. He is put in a room where any tool ever made is available to him and he knows how to properly use every tool. He is asked to build a car. His first question will likely be: “What’s a car?” I, too, have experience with the improper use of signal processing tools–especially digital signal processing tools.
I would like to add that for signal processing that can be implemented to a high degree of fidelity by convolving the signal with a finite duration impulse response (e.g., lowpass, bandpass, differentiation, Hilbert Transform, etc.), the Discrete Fourier Transform (DFT) can be used (and often is used because it is computationally efficient) to filter a uniformly sampled time waveform. Although the magnitude of the DFT of a finite duration segment of an input signal provides a measure of the spectral content of that segment of the signal and thus provides a way to visualize the filtering process in the frequency domain, the spectral properties of the DFT are secondary to the filtering process. The DFT (and other transform procedures whose outputs give little if any information about the frequency content of the signal) can be used to efficiently implement finite impulse response convolution. The DFT is one of several digital signal processing algorithms having the property that the circular convolution of two equal length sampled waveforms is, within a scale factor, equal to the inverse DFT of the product of the DFTs of the two waveforms. The fact that the DFT of a finite duration segment of a waveform provides information about the spectral content of the waveform is a side benefit of the process.

JM VanWinkle
April 9, 2013 10:21 am

William C Rostron, thank you for raising the central point of physical sanity of “world temperature,” measured to a tenth of a degree C. Only someone who has lived their life in books would think that has real world meaning. One might just as well ask how many feathers are there on an average bird 100, 1000, 10,000 years ago, calculated to the nearest feather.

Reed Coray
April 9, 2013 10:22 am

Ooops. In my immediately foregoing comment, replace employ properly used with properly employ.

RCSaumarez
April 9, 2013 11:04 am

Thanks for the comments.
This post was in response to an argument about filtering data and then cross correlating them – is it valid?
1) @ Joe Born and others. I agree that if you are familiar with DSP, this is a very simple exposition. My goal was to show that filtering and correlation are related, and this may not be realised by some readers. If done incorrectly, correlation will be affected by filtering, and this has been the subject of a number of posts and disputes – Eschenbach, McIntyre, Briggs et al.
2) . Thanks, but I would need to write a textbook!
3) @cd. The purpose of the last section was to explain, in simple terms, that autocorrelation affects statistical degrees of freedom and so needs to be taken into account when testing hypotheses statistically. As regards filtering, if one is using a physical filter with a phase shift on the input and output, the point of cross-correlation is that it eliminates the phase response of the filter, provided the filters have the same characteristics (and generally one should use Bessel filters, which have linear phase characteristics). If one uses a digital zero-phase filter for both, you won’t screw up the phase response at least.
4) @Useless grad student. Yes, you are basically correct. One has to be careful in defining the sign of phase, otherwise the signal comes out backwards! Also, the magnitude in most DFTs needs care because it represents HALF the spectrum. If you remember that cos(wt)=(exp(jwt)+exp(-jwt))/2, so that the spectrum is represented in both positive and negative frequencies (with the sine, imaginary, component negative), you will calculate amplitude/2 if you use a real transform.
5) @David in Texas. A point rotating in a unit radius circle will trace out cos(wt) on the x axis and sin(wt) on the y axis, i.e. they will be 90 degrees out of phase. I had some difficulty in drawing figure 3, and the solution to your problem depends on when you consider time to be zero. If I start measuring a cosine wave at t=0, the phase will be zero, but if I start 1/4 of a period later, the phase will be 90 degrees. Phase is related to delay, and in a given signal this depends on when you started and finished recording it. The important point in signals is not the absolute phase but the way phase changes with frequency – the phase spectrum.
A more general point is this. The type of analysis I have described relates to LINEAR systems, which obey: 1) the output is proportional to the input, 2) the output of the sum of two different input functions is equal to the sum of the outputs of the responses to the individual functions (superposition), and 3) the process does not vary with time (stationarity).
In the brutal real world, as opposed to technological signal processing, systems are rarely truly linear or stationary, and outputs may depend on the product of the inputs; signal processing in these systems is much more difficult (tiger country!). If you start with a basic proposition of, say, the oceans absorb heat and release it, they may act as a simple mixer and one can treat their behaviour through linear methods. In practice, mixing between levels, currents etc (I’m no expert) may make this invalid. However, for small fluctuations over short periods they may be sufficiently linear for this not to matter – but this depends on the problem that you are trying to model and why you are doing so.
The importance of this idea is that signal processing is not a "dark art" but a tool to gain insight into the behaviour of physical systems, especially when one has noise or other signals mixed in. Therefore one has to start with an idea of what one is trying to establish rather than using a battery of techniques blindly (and incorrectly).
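To make the zero-phase point in (3) concrete, here is a minimal sketch (assuming Python with NumPy/SciPy; the signals, delay, filter order and cut-off are all invented for illustration). Filtering both channels with the same forward-backward, zero-phase filter leaves the cross-correlation peak at the true lag:

import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 100.0                                            # sample rate, Hz
x = rng.standard_normal(6000)                         # broadband input
y = np.roll(x, 25) + 0.5 * rng.standard_normal(6000)  # x delayed by 25 samples, plus noise

b, a = signal.butter(4, 5 / (fs / 2))                 # 4th-order low-pass at 5 Hz
xf = signal.filtfilt(b, a, x)                         # zero-phase: filter forwards, then backwards
yf = signal.filtfilt(b, a, y)                         # the same filter on both channels

cc = signal.correlate(yf, xf)                         # cross-correlation of the filtered signals
lags = signal.correlation_lags(yf.size, xf.size)
print("lag at cross-correlation peak:", lags[np.argmax(cc)])  # ~25 samples, unchanged by filtering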

Bart
April 9, 2013 11:10 am

Jeff L says:
April 9, 2013 at 8:01 am
“Now we need to assume stationarity & linearity (and those assumptions ultimately would need to be tested) but in theory, using the convolutional model, we can calculate h(t) “
Actually, a valid model over the past 55 years is not stationary, but it has stationary increments. The relationship is essentially
dCO2/dt = k*(T – To)
dCO2/dt = rate of change of CO2
k = coupling factor in ppm/degC/unit-of-time
T = global temperature anomaly
To = equilibrium level of temperature
This highly representative model shows that temperature is necessarily driving CO2, and not the reverse. The arrow of causality is necessarily from temperature to CO2, as it would be absurd to argue the converse, that temperatures are being driven by the rate of change of CO2 – once the CO2 reached a plateau, no matter what its level, the temperature would return to the equilibrium value of To.
A cross correlation done between the rate of change of CO2 and temperature will yield a flat spectrum and essentially an impulse for the impulse response.
I know it is hard to believe, but the climate modelers have got even the most basic relationship backwards and wrong.
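For readers who want to see what Bart's claim amounts to, here is a minimal sketch (assuming Python with NumPy; every parameter value is invented, and it illustrates only the internal logic of the stated model, not the real CO2 or temperature data). If the model held exactly, the frequency response estimated between T and dCO2/dt would be a constant, i.e. flat:

import numpy as np

rng = np.random.default_rng(1)
n = 600                                        # say, 600 months of synthetic data
T = 0.3 * np.sin(2 * np.pi * np.arange(n) / 120) + 0.1 * rng.standard_normal(n)
k, To = 2.0, -0.1                              # made-up coupling factor and equilibrium anomaly
dCO2 = k * (T - To)                            # the model's rate of change of CO2
CO2 = 280.0 + np.cumsum(dCO2)                  # integrate to get the concentration itself

# frequency response estimate between T (input) and dCO2/dt (output)
H = np.fft.rfft(dCO2 - dCO2.mean()) / np.fft.rfft(T - T.mean())
print(np.allclose(H, k))                       # True: the response is the constant k at every bin

Whether the measured series actually satisfy this relation is, of course, exactly what is argued over in the rest of the thread.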

Bart
April 9, 2013 11:16 am

RCSaumarez says:
April 9, 2013 at 11:04 am
“Therefore one has to start with an idea of what one is trying to establish rather than using a battery of techniques blindly (and incorrectly).”
So true. The climate community should have started by employing identification techniques, such as PSD estimation, to identify how things were actually working, then fit their models to the observations. Instead, they have attempted to work up models, and then fit the data to them. Doyle’s famous detective warned against such methodology over a century ago:
“It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” – Sherlock Holmes

Bart
April 9, 2013 11:18 am

Bart says:
April 9, 2013 at 11:10 am
On
dCO2/dt = k*(T – To)
This type of relationship is to be expected in a continuous transport system where CO2 is constantly entering and exiting the surface system, and the differential rate at which it does so is modulated by surface temperature.
It’s a done deal. I’m just waiting for others to recognize it.

April 9, 2013 11:27 am

Greg Goodman says:
April 9, 2013 at 9:15 am
…..
Hi Greg
It makes sense (the CET temperatures correlate with the AO in winter months, the Antarctic's summer). For correlation I use Excel, but it requires the same 'scale resolution' for both variables. As a test I suggest using the average of the monthly AO data (for the ice-melting months) tabulated as annual data. Hope someone else may know of a better method.

commieBob
April 9, 2013 12:31 pm

There is a serious caveat when it comes to applying this stuff. It works only for linear time-invariant systems. If we understand a system well enough, we can work it so our analysis is LTI (or close enough). In the case of the climate, that probably isn’t the case.

Greg Goodman
April 9, 2013 12:34 pm

Bart says: “A cross correlation done between the rate of change of CO2 and temperature will yield a flat spectrum and essentially an impulse for the impulse response.”
It will, or it could do?
Call me skeptical, but I'd like to see at least a simple demonstration that this is the case. I have no reason to think you are wrong, but that is not enough for me to accept such a bold statement without question.

Greg Goodman
April 9, 2013 12:36 pm

Bob: ” In the case of the climate, that probably isn’t the case.”
Aw, com’on Bob. Climate is CO2 plus stochastic variation. I know cos Tamino told me so.

Bart
April 9, 2013 12:49 pm

commieBob says:
April 9, 2013 at 12:31 pm
“In the case of the climate, that probably isn’t the case.”
Actually, it probably is the case. The inapplicability of these methods is more the exception than the rule. And, as I demonstrate above, this system is clearly amenable to such an approach.

Die Zauberflotist
April 9, 2013 1:47 pm

OK, you lost me after “Correlations”, but I love the way you fill out an x-y axis, so I trust you. What’s your opinion of the 5 second rule? And, is it safe to use a meat-tenderizing hammer on a ganglion cyst?

george e. smith
April 9, 2013 2:05 pm

Well I had a longer post, but somehow M$ UNI-Virus blew it away.
While I understand Richard Courtney and others' suggestions, my preference, Richard (author), is that you are best writing in the manner that is comfortable to you. Your various audiences just have to adapt to you; not vice versa.
A nice presentation, and a good refresher, after too many years. I’ll save it for reference.
Now for an encore (in your spare time): the single biggest absence from climate science is the theory of sampled data systems.
All your nice transform stuff presupposes that you have actual real data, and not just noise, which is about ALL there is in climate "data". So a primer on the Nyquist sampling theorem would be a good mate to this current dissertation.
Thanks for taking the time; it was well worth it.
George E. Smith

April 9, 2013 2:14 pm

RCSaumarez: “I agree that if you are familiar with DSP, this is a very simple exposition.”
Of course it is to DSP types (if my assumptions based on the first diagram and first few paragraphs are correct). But that certainly does not mean that your post has no value to much of this site's audience. My experience suggests that, even to many of this site's brighter habitues, it would indeed have value. So you are to be commended for the effort that you have expended.
My criticism–which I intended to be constructive–was not that your post is simple. My purpose simply was to help you help your readers. My guess is that, of this site's regulars, the subset who are familiar with digital signal processing may be larger than you imagine. (For example, my own exposure to it arose from the practice of law, not from engineering.) So, if there is indeed something in your post that you think even DSP types would find of interest, it would be helpful to say so up front. That way such readers can know whether they would be right to do what I did: allocate their time to other things.
Be that as it may, I join you in your belief that certain of the climate discussions we’ve seen would have benefited significantly from DSP-theory results. And, again, you are to be commended for your effort.

Editor
April 9, 2013 2:51 pm

My thanks to Richard for a very understandable explanation of the Fourier transform. As a self-taught mathematician, such expository work is very valuable to me.
Next, FrankK says:
April 9, 2013 at 7:29 am

A highly mathematical and interesting article and I commend the author.
I often do peer review of earth science reports and often pull up authors for using “length of time”.
I know it has now become part of the colloquial language, but why use it in a scientific or engineering report?
Time does not have length. They are two different physical units – Time [T] and Length [L]. Why not use "time period" or "duration of time"?
Cheers.

Man, you grammar nazis must like being frustrated. “Length of time” is a good, understandable, and perfectly valid English expression. Yes, it does not make literal sense … so what? It makes perfect sense and is totally without ambiguity, which is why it is used in English. When someone says it you know exactly and precisely what they mean.
So I don’t care if you beat your head against that wall for a hundred years … we’ll still be saying “length of time”, just like we have been for centuries.
But good luck with your project … how about you leave off correcting meaningless mistakes until you’ve succeeded with that one? To give you a sense of the size of the task you’ve set yourself, consider that there are TWENTY THREE MILLION separate pages on the web that use that phrase, so you’d best get going …
Of course, once you finish that, you’ll have to go fight the grammar criminals that talk about a long span of time. Span, of course, originally didn’t have anything to do with time, so it’s exactly as illogical as a length of time … and despite that, it’s been used to mean a length of time since the 1500’s … see, that’s how English changes, Frank.
I know you and many others would like to keep English the same forever, and to force it into a logical straitjacket. But here's the ugly truth about not only English but many other languages:
LANGUAGES ARE NOT LOGICAL, NEVER WERE, AND NEVER WILL BE
So it doesn’t matter how long your futile attempts last, Frank … you’re not going to succeed.
Finally, you should understand that such pedantry as you peddle will rarely be appreciated by authors. See, we choose our words very carefully to frame exactly the thought we’re trying to explain. And many authors, like myself, don’t give a damn whether our language is logical.
We may want it to be effective, arresting, quotable, strong, bathetic, sad, or any one of a number of things, and while we are doing that to the best of our abilities, having some jumped-up joker come along to tell us a phrase isn’t logical is … well … unappreciated at best, and much worse at worst.
You like that? “Worse at worst”? It’s a kind of truism, but then it expresses my meaning exactly.
And that is all I care about, that my words express and convey my meaning as clearly as possible. And in that quest, I have no interest at all in whether one of the more common phrases in the English language is logical or not.
w.
PS—Did you notice my use of another evil phrase above, “how long your futile attempts last”? I suppose once you’ve exterminated “length of time”, you’ll have to start in on things like “How long did the concert drag on?” and “Will you be gone long?” … because as you correctly point out, time doesn’t have length, it has duration.
I suspect a phrase like “Don’t stay too long” is used, not because it is logical, but because it is economical. We don’t say “Don’t stay for an over-extended duration of time” as you might recommend, which means the same thing and is indeed logical, because it is clumsy.
And language needs to be efficient at times, for practical reasons. Generally, if two phrases mean the same thing, the shorter one will win out, whether it is logical or not.
Short version? Don’t bother fighting to make language logical … it isn’t, and good authors and critics just live with that as long as the meaning is clear.

Martin A
April 9, 2013 2:57 pm

FrankK says:
April 9, 2013 at 7:29 am
Time does not have length.

It will be a long time before you convince me of that.
But it only took a short time to type this comment.

Adam
April 9, 2013 3:18 pm

This is a great article! Thanks.

April 9, 2013 3:43 pm

Of time and duration ….
O gentlemen, the time of life is short;
To spend that shortness basely were too long

William Shakespeare
Good for Shakespeare, good for most of us.

Bart
April 9, 2013 3:51 pm

Greg Goodman says:
April 9, 2013 at 12:34 pm
“It will or it could do ?”
It will. The equation
dCO2/dt = k*(T – To)
is equivalent to the statement. I actually did such an analysis to see if any other features might pop up. But, except for normal, to-be-expected variation from data with random errors, this is the essential feature. You can see the correlation in the plots with your naked eyes. It is really trivially true. And yet, incredibly, the debate rages on, and the scientific illiterates of the AGW movement press forward.

Svend Ferdinandsen
April 9, 2013 3:59 pm

Very good overview of the basics of filtering and correlation; you could add the problems of undersampling, e.g. spurious responses.
I wonder if you or others could make a simple article about the matrix operations that are used from time to time. Like SVD or Principal Components.
I have played a little with SVD via some net application, and it can certainly extract a signal, but it can for sure also show some signal even where no signal exists, or it can show a completely different signal. I am not able to look through the finer details, but I have a natural scepticism when these methods are used instead of plain averages.

FrankK
April 9, 2013 4:51 pm

Willis Eschenbach says:
April 9, 2013 at 2:51 pm
My thanks to Richard for a very understandable explanation of the Fourier transform. As a self-taught mathematician, such expository work is very valuable to me.
Man, you grammar nazis must like being frustrated. “Length of time” is a good, understandable, and perfectly valid English expression. Yes, it does not make literal sense … so what? It makes perfect sense and is totally without ambiguity, which is why it is used in English. When someone says it you know exactly and precisely what they mean. etc etc etc
—————————————————————————————————————
Goodness me Willy you do get very excited and aggro sometimes. I just disagree with you. OK.

KevinK
April 9, 2013 5:48 pm

Mr Saumarez,
Thank you for a very nice summary of frequency analysis and correlation. Well written and as short as can reasonably be expected for a complicated topic.
You wrote (regarding the applicability of these analysis techniques to this problem, climate modeling):
“3) the process does not vary with time (stationarity)..”
This leaves me a little concerned. It should be remembered that all of the material properties of the materials acting in the climatic system (thermal conductivity, thermal capacity, thermal diffusivity, density, refractive index, etc.) vary with temperature (and pressure for some of them). Since temperature varies with time (the Sun still rises and sets), all of these parameters vary with time.
So I suspect that stationarity is not to be assumed for this analytical problem.
While these analytical techniques have some applicability, the results must be viewed very carefully, especially when extrapolating into future decades.
I maintain that modeling the climate of this complex system with a view towards making forward-looking "projections" is actually an intractable problem. There are far too many unknowns, and of course the error bars must of necessity widen with each subsequent time interval (i.e., if the projection for tomorrow's energy content is +/- 1%, the projection for the day after must necessarily be 101% + 1% or 99% - 1%, i.e. +/- 2.01%); thus weather forecasts are generally only good for a few days.
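In code, that compounding looks like this (a two-line sketch, assuming a Python interpreter; the +/- 1% per step is the illustrative figure above):

for days in (1, 2, 7, 30):
    print(days, "days out:", f"{(1.01 ** days - 1) * 100:.2f}", "percent upper error bound")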
In aerospace engineering we have identified those problems that are "intractable" from an analytic perspective. For example, when a satellite is launched, that pesky rocket shakes holy H—L out of it. It will likely never again vibrate that hard on orbit, but customers prefer that the satellite survive the necessary launch sequence. While it is conceivable that an analytic approach could possibly predict that all the bolts will stay in place, from a pragmatic view nobody would believe such a complex model (incorporating friction, stiction, bending, surface properties, proper assembly techniques, etc.). So instead, a qualification model (qual model, or QM) is built. This represents the final design down to the last detail (same structure, same bolt torques, same assembly sequence, etc.). And then we shake that model even harder than the final Flight Model (FM). If it survives (90%+ do), we build an exact copy for flight. If not, we figure out why, and then rinse and repeat.
I maintain that climate modeling is an "intractable" analytic problem and we should never expect the "predictions" to be worth much, if anything at all. Funny that almost two decades of empirical evidence (what the climate is really doing, i.e. NOT MUCH) seem to align with my belief.
Cheers, Kevin (MSEE 1980, with DSP experience)

April 9, 2013 6:12 pm

KevinK
You need to fix the incorrect use of bring/take before you tackle "length of time." Worse yet, supposedly college-educated public school (and even college) English teachers are using and/or allowing the incorrect use of bring/take, thus another generation is learning the incorrect use.

Bart
April 9, 2013 6:18 pm

KevinK says:
April 9, 2013 at 5:48 pm
“So I suspect that stationarity is not to be assumed for this analytical problem.”
Statistical stationarity does not mean the process does not vary with time, it means the joint probability distribution does not vary with time. Moreover, there is a relaxed qualification for applying these methods, that of “wide sense stationarity”, which means that the correlations between the state variables do not vary with time. And, even when a process is not strictly stationary, it is often stationary in its increments (e.g., Brownian motion). These concepts are widely applicable to natural phenomena.
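The distinction is easy to see numerically. A minimal sketch (assuming Python with NumPy; synthetic data): a random walk is not stationary, since its variance grows with time, but its increments have constant variance:

import numpy as np

rng = np.random.default_rng(2)
steps = rng.standard_normal((2000, 500))       # 2000 realisations of 500 unit-variance steps
walks = np.cumsum(steps, axis=1)               # Brownian-motion-like paths

print(walks[:, 10].var(), walks[:, 400].var()) # variance grows with time: roughly 11 versus 401
increments = np.diff(walks, axis=1)
print(increments[:, 10].var(), increments[:, 400].var())  # both roughly 1: increments stationary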

KevinK
April 9, 2013 6:35 pm

usurbrain;
I searched my posting for the following terms; “bring”, “take” and “length of time”. Sorry, but those terms do not appear in my posting. Perhaps you meant to respond to a different poster ?
Thanks, Kevin

KevinK
April 9, 2013 6:37 pm

Bart;
I maintain that climate modeling is an “intractable” analytic problem and we should never expect the “predictions” to be worth much, if anything at all.
Cheers, Kevin

April 9, 2013 7:33 pm

To relate the Laplace transform to the Fourier, which may bridge the atmospheric transfer function with the stochastic data observed … on a mechanistic level, I recommend knowing the Radon transform: http://frontcom.ing.uniroma1.it/~gaetano/texware/Radon.MI.pdf

April 9, 2013 9:03 pm

My brain hurts, Brian.

April 9, 2013 11:31 pm

I’m with you FrankK at 7:29 on that
http://wattsupwiththat.com/2013/04/09/correlation-filtering-systems-and-degrees-of-freedom/#comment-1270240
I’d use “elapsed time”, or just “duration”, ‘time’ being implicit in “duration”. However, “length of time” is in common usage so you will never eradicate it.

April 9, 2013 11:38 pm

If the first 3 dimensions can have a length, then why can’t the fourth, time ? It’s been a long time since I’ve spent such a short time considering the length of time. It’s inescapable.

Greg Goodman
April 9, 2013 11:56 pm

Bart says: “A cross correlation done between the rate of change of CO2 and temperature will yield a flat spectrum and essentially an impulse for the impulse response.”
Greg “It will or it could do ?”
“It will. The equation
dCO2/dt = k*(T – To)
is equivalent to the statement.”
Saying the same thing twice does not make it any truer. So "will yield a flat spectrum" was a totally speculative remark that you cannot back up. Disappointing. I thought you had something interesting to show.
” I actually did such an analysis to see if any other features might pop up. But, except for normal, to-be-expected variation from data with random errors, this is the essential feature. You can see the correlation in the plots with your naked eyes. It is really trivially true. And yet, incredibly, the debate rages on,”
Well, if the best you can do is come up with some plot on WTF.org using a crappy 12-month running mean that looks to show some rough similarity, I'm not surprised "the debate rages on".
Is that the sum total of your "analysis"? I thought you meant you'd actually done something.
That is about as weak as the weakest "climate science". With efforts like that on both sides, it is hardly surprising that "the debate rages on".

Greg Goodman
April 10, 2013 12:18 am

Bart says: “Statistical stationarity does not mean the process does not vary with time, it means the joint probability distribution does not vary with time.”
It also requires that the mean remain constant over time. There is no point in doing an FFT on a ramp like the 20th century temperature record, or anything dependent on it. The rate of change does not ramp up.
This is just one basic error that Grant Foster, aka Tamino, makes in his attempt to "school" me. He challenges my graph of rate of change by doing an FFT on the actual ice area, which is plunging downwards for a good part of the record. LOL
When I gently asked whether he had done a test of stationarity, like the Dickey-Fuller test, he avoided answering and replied that I was "just showing off", then refused to answer any more on the subject he had chosen to make two full articles out of. Seems like school's out early this year then, Grant?
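For anyone who wants to try such a test, here is a minimal sketch (assuming Python with statsmodels; the data are synthetic, a ramp plus a random walk, not the ice series):

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
n = 500
series = 0.01 * np.arange(n) + np.cumsum(0.1 * rng.standard_normal(n))  # ramp plus random walk

print("raw series p-value:      ", adfuller(series)[1])           # large: cannot reject non-stationarity
print("first difference p-value:", adfuller(np.diff(series))[1])  # near zero: the difference looks stationary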
One thing I learnt at school is that teachers don’t know half of what they claim to teach. They just get away with it because most children have not worked that out yet.
An expression that sums this up nicely: Those who can't do, teach.
The follow-up is: Those who can't teach, teach in university.

Greg Goodman
April 10, 2013 12:21 am

Dear thisisgettingtiresome , this is getting tiresome. Not to mention OT !

Greg Goodman
April 10, 2013 12:28 am

KevinK: ” Since temperature varies with time (the Sun still rises and sets) all of these parameters vary with time. ”
If something did not "vary with time" we would not be trying to work out its FFT!
If you have a sufficiently long sample in relation to the period, it can be regarded as "stationary". However, if you have 30 or 35 years of satellite data on a system with a circa 60-year pseudo-periodic change, it is not.
Oops, “Dr Foster” needs to go back to school.

peterg
April 10, 2013 2:12 am

I believe control engineers in things like rocketry tended to make use of the Kalman filter for transfer function parameter and state variable estimation.
This climate change controversy suggests the application of Bayes' theory, where different camps, such as the skeptics and the AGWists, could propose differing a priori assumptions and then, given the data, see how those assumptions develop. At least people might understand each other's positions better.

cd
April 10, 2013 2:33 am

Greg
Can I just add that perhaps what Dr Foster is alluding to is that there may be structural drift; while the DFT can deal with this, it should be removed, as it will drown out the "harmonics". Obviously this will be expressed in the autocorrelation as well.
The other point to note is that the signal itself may not be stationary. That is, the frequency of the signal may increase or decrease along the chronology. Hence, the DFT approach will not be much use. You'll need to use something more sophisticated such as a wavelet transform. I have much less experience of these, so I don't know how you might use them for doing the sort of stuff discussed above – perhaps as some type of mask prior to running other routine analysis.

RCSaumarez
April 10, 2013 2:52 am

@Bart.
As far as I can see from your differential equation, it is an integrator: i.e., the CO2 will be proportional to the integral of the temperature excursion. The Laplace transform of an integrator is 1/s, and therefore one would not expect to see a flat cross-correlation function if you are correct.
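A quick numerical check of the 1/s point (a sketch, assuming Python with SciPy): the discrete-time integrator's magnitude response is 1/(2*sin(w/2)), which falls off roughly as 1/w at low frequency and is anything but flat:

import numpy as np
from scipy.signal import freqz

w = np.linspace(0.01, np.pi, 512)          # frequencies in rad/sample, avoiding w = 0
_, h = freqz([1.0], [1.0, -1.0], worN=w)   # H(z) = 1/(1 - z^-1), a discrete-time integrator
print(np.allclose(np.abs(h), 1 / (2 * np.sin(w / 2))))  # True: |H| = 1/(2 sin(w/2))
print(np.abs(h[0]), np.abs(h[-1]))         # ~100 at w = 0.01 versus 0.5 at the Nyquist frequency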

Greg Goodman
April 10, 2013 4:12 am

cd: “Can I just add, and perhaps what Dr Foster is alluding to this,…”
No, it was he who introduced the DFT (of the straight time series) in order to "school" me. When asked whether he had done any test of stationarity, he went and hid behind a wall.
He’s a sham.

Greg Goodman
April 10, 2013 4:16 am

cd : “The other point to note is that the signal itself may not be stationary. ”
Indeed, and that is why I'd investigated d/dt (the subject was Arctic ice extent). The std ADF test indicated that the rate of change could be regarded as stationary and that the time series could not.
I guess that may have _something_ to do with Grant Foster going to hide behind his control of what gets seen/deleted on his "Open Mind" pseudo-science fiasco.

cd
April 10, 2013 6:05 am

Greg
Sorry, I’m sort of joining the conversation late.
I think when looking at time series I was always of the opinion that you remove drift with a spline then work on the residuals. I can see this might be a problem where the dependency is defined in the trend you’re removing (e.g. CO2 vs Temperature).
As for determining whether the signal is stationary – excuse my ignorance, but I'm not sure what std ADF is – a wavelet transform, or less efficiently a moving-window DFT (or even better the moving-window cosine transform of the autocorrelation – reduced clutter in the spectra), would show if there is possible drift in the signal. Anyway, if you have found that, you could apply an attenuation to both signals – I can see Richard's head in his hands.
But still, and as I said in my original comment, I don't think that all this data processing is getting us anywhere. In this field statistical analysis takes priority. You need only look at how statistical solutions are proposed for experimental problems. Surely we need to start at the observations and check whether they are reliable. For example, take a look at the global temperature record. You're talking about a number of sources of error:
Experimental error:
1) Land use changes
2) Instrumental failure
3) Instrumental changes
Processing error:
4) Choice of interpolation method for gridding
5) Choice of projection system for gridding
6) Summary statistic
And all for a global measure that changes on the order of a few tenths of a degree. Then you add all this processing and analysis. This is nuts. We should be trying to get a handle on the principal sources of error, principally 1-3 and then 4-6 (as listed above).

Tom in Indy
April 10, 2013 6:21 am

Synonym for Duration is Length. 🙂 Therefore, “duration of time” and “length of time” are both acceptable phrases, scientific journal or otherwise.

cd
April 10, 2013 6:49 am

Ah but Tom, domains have different metrics. And in the time domain, length has no meaning. Now let that be an end to it, otherwise the pedant police will close WUWT down ;-).

Editor
April 10, 2013 8:01 am

FrankK says:
April 9, 2013 at 4:51 pm (Edit)

Willis Eschenbach says:
April 9, 2013 at 2:51 pm

My thanks to Richard for a very understandable explanation of the Fourier transform. As a self-taught mathematician, such expository work is very valuable to me.
Man, you grammar nazis must like being frustrated. “Length of time” is a good, understandable, and perfectly valid English expression. Yes, it does not make literal sense … so what? It makes perfect sense and is totally without ambiguity, which is why it is used in English. When someone says it you know exactly and precisely what they mean. etc etc etc

—————————————————————————————————————
Goodness me Willy you do get very excited and aggro sometimes. I just disagree with you. OK.

How typical of a grammar nazi … you don’t attack the message, you simply deny it and then claim the messenger is unbalanced … and you mis-spell my name.
w.

Bart
April 10, 2013 9:47 am

KevinK says:
April 9, 2013 at 6:37 pm
I maintain it is not. I can tell you exactly how things are going to unfold in the next several decades.
The global temperature metric is going to continue to trend upward at about 0.6 degC/century, as it has for the past century. At the same time, it is going to exhibit a superimposed cycle of about 60-70 years fundamental period which, in the near term of roughly 2 more decades, is going to cause a gradual decline. If you want to know what is going to happen, take the data in the first plot at roughly 1945, raise it up, and splice it onto the record now.
It is all perfectly natural, and following a long term pattern which was in place long before CO2 in the atmosphere grew significantly in the last century.
For CO2, the rate of change is going to continue to track the temperature anomaly, and so the accumulation will continue to decelerate for the next two decades, which should produce a marked deviation from the anthropogenic CO2 emissions, which will continue accelerating.
And then, perhaps at last when these portents become obvious and undeniable, we can rid ourselves of this Chicken Little silliness once and for all.
Greg Goodman says:
April 9, 2013 at 11:56 pm
“So “will yield a flat spectrum” was a totally speculative remark that you can not back up. “
What??? That’s cracked. It’s completely backed up. It is a flat spectrum. It is trivially true. You can apprehend it at a glance. Look at dCO2/dt and T – they are affinely similar. Ergo, flat spectrum. Fini. You are arguing with me over whether 2 + 2 = 4, when I have plainly laid out two sticks, and another two sticks, and shown you that together they are four.
Greg Goodman says:
April 10, 2013 at 12:18 am
“It also requires that the mean remains constant over time.”
Such processes often have stationary increments, and those can be analyzed using Fourier methods. This is fairly standard in the field.
thisisgettingtiresome says:
April 9, 2013 at 11:38 pm
Indeed, just multiply it by the speed of light, and you’ve got a length. Physicists do this as a matter of course to simplify the notation when dealing with relativistic effects.
peterg says:
April 10, 2013 at 2:12 am
Yes. The whole debate could be settled by people who were familiar with these methods.
RCSaumarez says:
April 10, 2013 at 2:52 am
“Transform of an integrator is 1/s and therefore one would not expect to see a flat cross-correlation function if you are correct.”
Of course not. That is why I specifically advocated using the derivative of CO2 in your correlation analysis.

Gary Pearse
April 10, 2013 9:52 am

FrankK says:
April 9, 2013 at 4:51 pm (Edit)
Frank, the use of “length of time” has brought you static for being nitpicky. To make things worse, you are also wrong about length not being a term for quantity of time. It is a perfectly good idiom used for centuries in one form or another. Would you for example reject the usage – “He spoke at great length on the topic.” “It took a very long time.” “Long ago and far away.” “At length, the fruit ripened.” “He lengthened his stay by two weeks”. And there are other idiomatic uses such as: “He went to great lengths to solve the problem”. In this instance we are measuring how much “effort” was made, or how “hard” one worked on the problem.
I heard this kind of stuff from one of my less-gifted professors, probably as you did – drop it for your own sake.

Bart
April 10, 2013 10:13 am

Bart says:
April 10, 2013 at 9:47 am
“Such processes often have stationary increments, and those can be analyzed using Fourier methods. This is fairly standard in the field.”
Let me further elucidate. We are not looking for series with a stationary individual representation. We are looking for an input/output relationship which is Lebesgue integrable. It is not necessary that the series we are correlating represent stationary processes in order to apply the Fourier analysis; only, as stated, that the I/O relationship be L2. When we seek the impulse response of the system with temperature as input and the derivative of CO2 as output, the response is L2 – it is all-pass, flat spectrum, with an impulse response of a scaled impulse.

Greg Goodman
April 10, 2013 10:20 am

Greg Goodman says:
“So “will yield a flat spectrum” was a totally speculative remark that you can not back up. “
Bart says: “What??? That’s cracked. It’s completely backed up. It is a flat spectrum. It is trivially true. You can apprehend it at a glance. Look at dCO2/dt and T – they are affinely similar. Ergo, flat spectrum. Fini. You are arguing with me over whether 2 + 2 = 4”
So, it's cracked, trivially true, affinely similar, flat, Fini, etc.
Yet, obvious as you seem to think it is, you are unable to back up your claim, which you say is as simple as 2 + 2 = 4, with a plot of this supposedly "flat cross-correlation function".
In very rough terms, I think d/dt(CO2) is as was shown by Allan MacRae; you _may_ be correct (roughly). Now let's see it.
No more waffle. You made several categorical statements, even when challenged; now please back them up if you have something specific, or admit you were speaking a bit loosely and don't have anything specific to back up your rather over-assertive comments.
I repeat that I suspect you may be largely, roughly, vaguely correct. I would like to have proof of what you say (and I don't count your thrice-posted and crappy WFT.org running mean).
You spoke with certainty, several times, about a "flat cross-correlation function"; let's see one.
My guess from the data is that it is far from “flat” and also that you have not produced one so far.
I suspect that you don’t know how but would be delighted to be proved wrong on that point.

Greg Goodman
April 10, 2013 10:25 am

Bart says “… then the response is L2 – it is all-pass, flat spectrum, …”
can you demonstrate that rather than just asserting it?

Bart
April 10, 2013 10:58 am

Greg Goodman says:
April 10, 2013 at 10:25 am
“can you demonstrate that rather than just asserting it?”
I am not just asserting it. It is an equivalence relationship. The series are affinely related if and only if the frequency response is constant in magnitude and phase across frequency.

Bart
April 10, 2013 11:02 am

Bart says:
April 10, 2013 at 10:58 am
Greg Goodman says:
April 10, 2013 at 10:25 am
Allow me to preempt further discussion by simply laying out how the argument will progress from here.
Greg: Ah, so I see you can’t show it.
Bart: I did show it. It is an equivalence relationship. The series are affinely related if and only if the frequency response is constant in magnitude and phase across frequency.
Repeat ad infinitum.

RCSaumarez
April 10, 2013 11:21 am

@ bart.
The differential equation you present is:
dCO2/dt = k*(T – To)
on integration, we get:
CO2 = Integral[ k*(T – To) ] dt + constant.
Hence, if you treat the system as a CO2 producer/absorber driven by temperature, the transfer function is 1/s, which is of course a hyperbola in amplitude with a 1/(-j) = j, i.e. pi/2, phase shift.
If you feel the need to differentiate the CO2 signal numerically, you are on shifting sands. Differentiation is a filter with frequency response:
D[exp(-jwt)] = -jw.exp(-jwt), i.e. a linear increase in magnitude with frequency and a -pi/2 phase shift. If you apply this filter to an integrator you will end up with balderdash.
If you cannot see that a linear function of one variable being equal to the derivative of a second variable is equivalent to the second variable being equal to the integral of the first, it would seem that you have some way to go. I suggest you read the last paragraph of my post and reflect.
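To put numbers on how far the discrete approximation departs from a true derivative, here is a minimal sketch (assuming Python with NumPy; T = 1 for simplicity) comparing the ideal differentiator's magnitude |w| with the first-difference filter's |2*sin(w/2)|:

import numpy as np

w = np.linspace(0.01, np.pi, 200)          # up to the Nyquist frequency, with T = 1
ideal = w                                  # magnitude of the ideal differentiator, |jw|
first_diff = 2 * np.sin(w / 2)             # first difference, half-sample phase advance removed

print(np.abs(ideal - first_diff)[w < 0.5].max())  # ~0.005: a good derivative at low frequency
print(ideal[-1], first_diff[-1])                  # ~3.14 versus 2.0 at Nyquist: they diverge

Both halves of this sub-thread are visible here: well below the Nyquist frequency the first difference is a respectable derivative; near Nyquist it is not.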

Bart
April 10, 2013 12:23 pm

RCSaumarez says:
April 10, 2013 at 11:21 am
“If you apply this filter to an integrator you will end up with balderdash.”
It is getting very difficult to restrain myself. Do they teach calculus anymore in schools?
If you apply a differentiator to the output of an integrator, you get a unity gain, all pass, zero phase filter. The one is the inverse of the other. That is why, in some quarters – particularly British – the integral is referred to as the "anti-derivative".
The response of an integrator is 1/s. The response of a differentiator is “s”. Now, follow this calculation closely, because it is oh so hard: (1/s) * s = 1.
Differentiation amplifies at high frequency, which is why noise would kill an ideal, continuous time differentiator. But, the response of a discrete time differentiator is 2*j*sin(w/2), and only goes up to the Nyquist frequency, and so the noise response is bounded. Filtering by, e.g., annual averaging also reduces the amplification of the noise, but the derivative is still well represented within the passband of the averaging filter.
"If you cannot see that a linear function of one variable being equal to the derivative of a second variable is equivalent to the second variable being equal to the integral of the first, it would seem that you have some way to go."
What does that have to do with anything? Fine, do your correlation on CO2 versus the integral of the temperature series as input. It doesn’t matter. The relationship is the same all-pass transfer function.
Sheesh.

Greg Goodman
April 10, 2013 12:27 pm

RCS: "If you feel the need to differentiate the CO2 signal numerically, you are on shifting sands. Differentiation is a filter with frequency response:
D[exp(-jwt)] = -jw.exp(-jwt), i.e. a linear increase in magnitude with frequency and a -pi/2 phase shift.
If you apply this filter to an integrator you will end up with balderdash."
What Bart refers to as "increments", otherwise known as "first differences", is also the simple numerical approximation of d/dt, or rate of change; it is frequently recommended in econometrics as a means of rendering a time series stationary, precisely for this kind of analysis.
This process is indeed a crude high-pass filter, and this is the reason that it can render a non-stationary process roughly stationary.
Now, I'm not saying what is done in econometrics is a reference for what is valid signal processing, but are you saying that such a process will necessarily produce "balderdash"?
If I followed your article correctly, auto(cross)-correlation functions remove phase anyway, so deriving the power spectrum will still be valid, though it may be necessary to take account of the attenuation.
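By way of a partial answer, here is a minimal sketch of the compensation (assuming Python with NumPy; toy data, a sinusoid in white noise): difference the series, estimate the power spectrum, then divide by the differencing filter's power response |2*sin(w/2)|^2. The DC bin is lost, which is the price of differencing:

import numpy as np

rng = np.random.default_rng(5)
n = 8192
x = np.sin(2 * np.pi * 0.05 * np.arange(n)) + 0.5 * rng.standard_normal(n)

d = np.diff(x)                              # first difference: a crude high-pass filter
w = 2 * np.pi * np.fft.rfftfreq(d.size)     # frequencies in rad/sample
Pd = np.abs(np.fft.rfft(d)) ** 2            # power spectrum of the differenced series
Px = Pd[1:] / (2 * np.sin(w[1:] / 2)) ** 2  # undo the differencer's attenuation (DC bin dropped)

print(w[1:][np.argmax(Px)] / (2 * np.pi))   # ~0.05 cycles/sample: the spectral peak is recovered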

Bart
April 10, 2013 12:36 pm

In case of confusion, the response of a discrete time differentiator is 2*j*sin(w*T/2)/T, where T is the sampling period. I am used to working with systems where this is all normalized.

Lance Wallace
April 10, 2013 1:00 pm

RCSaumarez:
Did the bad and good filters get switched in Fig. 11?

Greg Goodman
April 10, 2013 1:00 pm

Bart: I did show it. It is an equivalence relationship. The series are affinely related if and only if the frequency response is constant in magnitude and phase across frequency.
So you have stated a theoretical relationship. So if you can now prove that (T-To) and d/dt(CO2) are "affinely related", we can expect a flat cross-correlation, the two being equivalent. So far you have not shown that either is true for the data in question, other than a lash-up plot on WTF.org.
I think it must be clear to all by this stage that you do not have more than that, which is a shame. Like I said, I would like to have proof of such a relationship, without which "the debate rages on".

Greg Goodman
April 10, 2013 1:08 pm

Bart, could you just clarify whether you are the same Bart that did the rather impressive spectral analysis on SSN that was reproduced on Tallbloke's talkshop?

Bart
April 10, 2013 2:21 pm

Greg Goodman says:
April 10, 2013 at 1:00 pm
“So you have stated a theoretical relationship”
Theoretical, perhaps, but not with the connotation of being conjectural. To the degree that the axioms of mathematics hold true, it is a definite relationship.
"So if you can now prove that (T-To) and d/dt(CO2) are "affinely related", we can expect a flat cross-correlation, the two being equivalent. So far you have not shown that either is true for the data in question, other than a lash-up plot on WTF.org."
The WFT plots show it quite plainly. It matches in every significant detail, with just minor deviations due to measurement error and leakage through the non-causal averaging (WFT automatically adjusts the data for phase lag) of CO2 measurements (we have to introduce a zero at the annual cycle, or it tends to overwhelm, and the 12-month average does that). The dips and curves occur in the same location and at the same magnitude, and the longer term trend is the same. It's about as good a match as I've ever seen in test data in the lab.
You can demand some R^2 figure or other, but such measures are generally useful for teasing out relationships in borderline cases. In such a case as this, where the relationship is blatantly obvious, it amounts to gilding the lily. I am loath to divert even a minute of time from other tasks to "prove" the blatantly obvious, and would prefer to leave that to others so inclined. It is your choice whether to accept it or not. But, it really is rather blatantly obvious.
Greg Goodman says:
April 10, 2013 at 1:08 pm
Likely enough though, no offense to Roger, I would not generally want to be associated with some of the fringe stuff which he likes to post to stimulate ideas and conversation.

RCSaumarez
April 10, 2013 2:45 pm

@Bart
Thanks for your comments.
If I understand you correctly, which some might find difficult, you have differentiated a low-pass filtered signal. In which case you have attenuated the deterministic components and amplified the noise. In which case you will end up with a practically random signal.
Thank you, I learnt a certain amount of signal processing theory during my PhD.
@ Greg Goodman. I am well aware of the difference between the numerical approximation by "first difference" and the true derivative.
I note that you appear to be incapable of integrating a first-order, separable differential equation and appreciating its significance in the frequency domain.
Again, I suggest that you read the last paragraph of my post and reflect.

Greg Goodman
April 10, 2013 2:50 pm

Bart: “But, it really is rather blatantly obvious.”
OK, so eye-balling the graph is all you've got. We could have got there a lot quicker, but at least it's clear now. Thanks.
“Likely enough though …”
I was asking whether you were the author of this rather good analysis.
http://tallbloke.wordpress.com/2011/07/31/bart-modeling-the-historical-sunspot-record-from-planetary-periods/
You didn't seem to realise what I was referring to in your reply, so I guess that's a no. There are plenty of Barts around; I was just wondering, though I was having some difficulty reconciling your posts here with the degree of expertise shown in that article.

RCSaumarez
April 10, 2013 2:51 pm

@ Bart,
Having looked at your last comment in detail, you are talking complete rubbish. You appear to be unable to do elementary calculus, and your derivation of the frequency response of a differentiator is just plain wrong.
Frankly, I have seen your idiotic, uneducated posts on other threads and, as many others would wish, a period of silence from you would be desirable while you reflect on your lack of mathematical skill.

Greg Goodman
April 10, 2013 3:18 pm

RCSaumarez: "I am well aware of the difference between the numerical approximation by "first difference" and the true derivative."
I was not suggesting you weren’t. I appreciated your well informed and well written article. In view of your apparent expertise in this area I was asking a question.
To do the kind of frequency analysis being discussed it is necessary, AFAIK, to ensure the data is at least approximately stationary. There are several common techniques: taking first differences, "detrending" with a linear or other low-frequency function, or windowing. All have their shortcomings.
You mention the phase shift introduced by differentiation; however, it would seem from your article that autocorrelation removes phase.
Am I correct therefore in thinking that a valid power density spectrum can be derived by using differencing to make the data stationary, compensating, if necessary, for the attenuation of lower frequencies?

Bart
April 10, 2013 3:54 pm

RCSaumarez says:
April 10, 2013 at 2:45 pm
” In which case you will end up with a practically random signal. “
How do you get there? It isn’t random at all. The SNR is high. Look at the plot.
Do you even know what SNR is?
I GOT my PhD in this stuff, and you need schooling.
Greg Goodman says:
April 10, 2013 at 2:50 pm
“…so I guess that’s a no.”
You mean this plot? And, we are talking about this relationship? Wow, that Bart guy and I have a lot in common.
RCSaumarez says:
April 10, 2013 at 2:51 pm
God, how I had smug ignorance. Dude, you have no clue.

Bart
April 10, 2013 4:02 pm

RCSaumarez says:
April 10, 2013 at 2:51 pm
God, how I hate smug ignorance.
No, it was not Freudian. This is outrageous.
"You appear to be unable to do elementary calculus, and your derivation of the frequency response of a differentiator is just plain wrong."
What the hell do you THINK the derivative operator is for a Laplace transform???
Here's a clue: d/dt(exp(s*t)) = s*exp(s*t).
What kind of an idiot makes a statement like yours with no understanding whatsoever???
Why don’t you at least look it up in a few links before spouting off so stupidly?

Greg Goodman
April 10, 2013 4:14 pm

“You mean this plot? And, we are talking about this relationship? Wow, that Bart guy and I have a lot in common.”
You flatter yourself.

Bart
April 10, 2013 4:15 pm

RCSaumarez says:
April 10, 2013 at 2:51 pm
Maybe you are confused about the discrete time differentiation operator, D(z) = (1 – z^-1)/T. This is what the WFT site is doing when it "differentiates".
I doubt you have a clue what the Z-transform is. I will walk you through it.
With z = exp(s*T), get
D(exp(s*T)) = exp(-s*T/2)*(exp(s*T/2) – exp(-s*T/2))/T
Evaluated at s = j*w, get
D(exp(j*w*T)) = 2*j*exp(-j*w*T/2)*sin(w*T/2)/T
In order to line up the data in time, you have to advance the phase (shift the time series forward 1/2 step). Thus, the response for the phase-advanced discrete time derivative (which is what the WFT site implements) is
D(exp(j*w*T))*exp(j*w*T/2) = 2*j*sin(w*T/2)/T
THERE. Do you understand NOW???
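For anyone following along, the algebra is easy to verify numerically; a minimal sketch, assuming Python with SciPy and T = 1:

import numpy as np
from scipy.signal import freqz

w, h = freqz([1.0, -1.0], [1.0], worN=512)  # frequency response of the difference filter 1 - z^-1
h_advanced = h * np.exp(1j * w / 2)         # advance the phase by half a sample
print(np.allclose(h_advanced, 2j * np.sin(w / 2)))  # True: equals 2*j*sin(w*T/2)/T with T = 1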

Bart
April 10, 2013 5:14 pm

Greg Goodman says:
April 10, 2013 at 2:50 pm
“OK, so eye-balling the graph is all you’ve got.”
You say this as if it were a bad thing. Quite the contrary, when you can look at data and see an obvious relationship, that is a very good thing.
Whatever method of extracting some numerical figure of merit you might come up with, it will necessarily reflect that obvious relationship. Indeed, if it did not, you would know from that sanity check that you had done your analysis wrong.
Everything I have stated is so elementary and obvious, it just leaves me nonplussed to step into this bizarro, surreal world and be assaulted with arguments such as I have sustained on this thread. In my world, among my peers, I am accounted an authority. In this forum, I am the proverbial one-eyed man in the land of the blind who, contrary to the usual saying, is no king, but a lunatic spouting off that he can "see" things, whatever that means.
Well, anyway, we've gone about as far as we can go. Good luck in your future endeavors, Messrs. Goodman and Saumarez.
You’re going to need it.

Bart
April 10, 2013 7:13 pm

Greg Goodman says:
April 10, 2013 at 2:50 pm
“OK, so eye-balling the graph is all you’ve got.”
One last thing. As I have pointed out on numerous occasions above:
dCO2/dt = k*(T – To)

This type of relationship is to be expected in a continuous transport system where CO2 is constantly entering and exiting the surface system, and the differential rate at which it does so is modulated by surface temperature.

The gas is continuously upwelling from ocean circulation, volcanism, and biological processes. It is continually downwelling from ocean circulation, mineral weathering, and biological processes. And, it is doing so at some rate. Any differential in rate between what is coming in and what is going out is going to result in either an increase or a decrease in the surface system.
The rates at which it comes in, and goes out, are strongly affected by temperatures, and not generally in the same measure.
So, you have a temperature-modulated rate of increase or decrease. That is essentially what the equation says: the rate of change is proportional to the temperature anomaly relative to a particular baseline where equilibrium attains, i.e.,
dCO2/dt = k*(T – To)
And, lo and behold, we look at the data, and that is exactly what we see.
Come on, guys. This is unbelievably straightforward. How can you not see it?

Auto Phil
April 11, 2013 3:59 am

Bart is absolutely correct. Seasonal temperatures activate CO2 and anthropogenic inputs of CO2 fill in the overall secular trend.

RCSaumarez
April 11, 2013 4:18 am

No Bart, I don’t understand, or rather I do.
f=sin(wt), let u=wt,
df/dt =d(sin(u))/dt . du/dt= w .cos(wt).
This is an increase in amplitude of w and a phase shift of pi/2.
As regards my future endeavors, there is quite a large body of opinion that I have been reasonably successful over the last 20 years.

RCSaumarez
April 11, 2013 6:12 am

Sorry, that should read:
df/dt = d(sin(u))/du . du/dt = w cos(wt)

Bart
April 11, 2013 9:06 am

RCSaumarez says:
April 11, 2013 at 4:18 am
“AS regards my future endeavors, there is quite a large body of opinion that I have been reasonably successful over the last 20 years.”
As have I. In, I might add, THIS particular field. Prudence, if not courtesy, should dictate that you take that into consideration before jumping to conclusions.
You seem to be confused about continuous-time differentiation versus discrete-time first differencing and dividing by the sample time.
Do you think the WFT site is performing continuous-time differentiation on its data? How, precisely, would it accomplish this? How are you going to get amplification beyond the Nyquist frequency for discrete data?
Auto Phil says:
April 11, 2013 at 3:59 am
“Seasonal temperatures activate CO2 and anthropogenic inputs of CO2 fill in the overall secular trend.”
The trend is fully accounted for by the temperature. Anthropogenic inputs are superfluous. They are unnecessary for the reconstruction of CO2 levels.

Bart
April 11, 2013 9:20 am

Look, guys, this match is signal, not noise. It is not random. These are two entirely different time series, and yet they lie right on top of each other. The odds of that happening randomly are effectively zero.
Both the trend, and the variation, match as perfectly as likely possible with inherent measurement errors. There is no room to add in additional forcing from anthropogenic inputs without subtracting out the long term temperature trend. And, if you do that, the variation will no longer match. Occam’s Razor: temperature is driving essentially the whole shooting match, and anthropogenic inputs have, at most, a minor role.

Bart
April 11, 2013 9:25 am

RCSaumarez says:
April 11, 2013 at 4:18 am
f=sin(wt), let u=wt,
df/dt =d(sin(u))/dt . du/dt= w .cos(wt).

Let f = sin(w*T). The phase advanced discrete time differentiation is
df/dt = ( f(t+T/2) – f(t-T/2) ) / T
= (sin(w*(t+T/2)) – sin(w*(t-T/2))) / T
= cos(w*t) * ( 2 * sin(w*T/2) / T)

Bart
April 11, 2013 9:26 am

Let f = sin(w*t) in the above.

Bart
April 11, 2013 2:29 pm

RCSaumarez says:
April 10, 2013 at 2:45 pm
"If I understand you correctly, which some might find difficult, you have differentiated a low-pass filtered signal. In which case you have attenuated the deterministic components and amplified the noise. In which case you will end up with a practically random signal."
Looking back up the thread, I realized from this statement that you totally misapprehended what is going on. By low-pass filtering, one has not "attenuated the deterministic components and amplified the noise." One has attenuated the noise and passed through the deterministic components. Deterministic components are generally low frequency, and noise is generally high frequency.

RCSaumarez
April 11, 2013 2:31 pm

Bart,
I’m beginning to see where you are coming from.
The problem with phase-advanced differentiation is that it is not a true derivative. It is a band-pass filter with zeros at the fundamental and at the Nyquist frequencies and a pi/2 phase shift across the frequency range. In the lower frequency range it is an approximate derivative in terms of amplitude, except that it does not have the phase characteristics of a derivative.
That's OK unless you want to treat the result as the driving function of a differential equation. In that case you are substituting something in place of a derivative when it is definitely not one.
In your differential equation:
dCO2/dt = k*(T – To)
what is the significance of k? Written like this I would have assumed that k is a constant. Do you intend it to be an arbitrary (complex) function? In that case, this equation has a very different complexion.
If k is a constant, why not simply integrate temperature and correlate it with CO2?
Looking at the plot of dCO2/dt and temperature in the link, there is certainly a relationship between them. However, it is also clear that there is a variable lead/lag between them. Given the solubility of CO2, might one not expect there to be a non-linear relation between them? In which case, any simple linear processing is not useful.
On a wider point, I really can’t see the point you are trying to make.

RCSaumarez
April 11, 2013 2:55 pm

I'm sorry, I made a mistake typing: the centre-difference derivative DOES have the phase characteristics of a differentiator, i.e. a pi/2 phase shift, but it only approximates the amplitude in the lower frequency range. This was a slip of the keyboard. Apologies.

Auto Phil
April 11, 2013 2:57 pm

We should applaud Bart for finding a new chemistry whereby a small increase in temperature will effectively drive all the CO2 out of the ocean. That is such a neat trick.

RCSaumarez
April 11, 2013 2:59 pm

Bart. If you really want to characterise the relationship, and since the CO2 is a trend, have you considered a state-variable approach, i.e. Kalman filters?

richardscourtney
April 11, 2013 3:30 pm

Friends:
Richard Saumarez provided a fine article which gives a good overview of Fourier analyses. So, I have been following your discussion with interest, in genuine hope of learning more about the principles of time series analysis.
I write to respectfully suggest that you are going off on a tangent which is not helpful to those – including me – observing this thread with a view to learning. And I am writing in hope that the discussion will get back 'on track'.
Before explaining my point, I need to make something clear. Bart knows I do not accept the indications of his analysis, but that is NOT why I am asking for the thread to stop being side-tracked onto debate of his analysis. Indeed, I do not like snark about Bart inventing a new chemistry: if it is right then his analysis indicates biological changes probably in the ocean surface layer.
Detailed consideration of a case would be a useful expansion of the article by Richard Saumarez, but Bart’s analysis is far too limited a case to provide such expansion.
If you want to use investigations of temperature and CO2 time series as a case study then it would be useful to onlookers – such as me – to compare various ways of conducting such analyses, and not the merits of only one.
However, temperature and CO2 may not be the optimum such case study for illustration of the principles outlined in the article by Richard Saumarez. Only Richard Saumarez can decide that.
I hope everybody will understand I am writing this with sincerity and my only intent is to assist the value – now and as an archive – of this thread.
Richard

RCSaumarez
April 11, 2013 4:58 pm

@ Richard S Courtney
Thanks, I felt that the whole drift of the conversation was going off the rails. What started with a rather simple-minded question on whether it is kosher to filter a signal and then form a correlation has expanded well beyond its remit into something that is far more tricky.
I am not sure that signal processing for its own sake is useful here. There are (at least ) two issues.
The first is whether temperature drives CO2 (or its derivative) or vice versa. Is the system linear, or at least linearisable? Does the basic physical chemistry and physics suggest that it is? Should we, as you suggest, consider that there is a biological compartment that absorbs CO2 rapidly and releases it much more slowly, if at all? Therefore, without a model, I am not sure that DSP would be helpful. I'm no expert in this field – I do biomedical engineering.
The second issue is the length and quality of the data. Although modern data is better than the historical record, one is dealing with a short segment with a lot of variability. Normally, one forms ensembles (i.e. averages, in English) to improve estimates. Looking at the temperature and CO2 data, there is clearly some relationship, but there are also considerable leads and lags between them, and any estimate of these would have large error limits – I'm guessing, but I suspect that one wouldn't get an answer that was helpful. In my post I remarked that one of the major problems in climate-based signal processing is that we have only one, relatively short record, which precludes the normal approach where one tries to get tons of data and divide it into many separate records whose length captures the effect one is looking for, so that one can do sensible statistics. (Rather counter-intuitively, having a long record does not increase the accuracy of estimates. It simply enables one to get a single estimate over a wider frequency range. The key is being able to average data.) There are undoubtedly technical issues in how you handle the signals, but these are details – although it is important that they are done correctly.
I would say that signal processing is a tool that allows one to test hypotheses about the structure of a system, rather than a rod and line for going on a fishing expedition. Once one has a hypothesis based on sensible assumptions, one can make statements about the quality of the data required to test it. My feeling is that the data isn’t good enough and that any simple model won’t account for the relationship.

Bart
April 11, 2013 5:00 pm

Auto Phil says:
April 11, 2013 at 2:57 pm
“We should applaud Bart for finding a new chemistry whereby a small increase in temperature will effectively drive all the CO2 out of the ocean.”
There is A LOT of CO2 in the ocean. A small differential rate between upwelling and downwelling, integrated over time, makes a huge difference to the surface system.
RCSaumarez says:
April 11, 2013 at 2:31 pm
“…what is the significance of k?”
It may be taken to be constant over a given interval of time. It has held up pretty well as a constant since 1958, when precise CO2 measurements began. But, overall, it should probably vary over time, possibly not even particularly smoothly.
For example, in the circulation of the oceans, there may be different CO2 concentrations along the flow in the “pipeline”. When a relatively (to overall surface waters) CO2 rich volume starts surfacing, it would start outgassing and accumulating in the atmosphere. The constant “k” would increase, and the effective equilibrium temperature To would shift downward, as temperatures would have to cool significantly to allow equilibrium between inflow and outflow to be reestablished.
But, all these dynamics would be slow, and within the space of some years, the parameters could be considered relatively constant. The simple model
dCO2/dt = k*(T – To)
is then seen as a linearization of a more complicated, nonlinear and time varying process.
The units of “k” would be ppm/degC/unit-of-time, and it is simply the amount by which the rate of accumulation in the atmosphere changes per degree of temperature change.
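To illustrate how one would estimate k and To (a sketch of my own, with synthetic series standing in for the real ones; with real data you would substitute, e.g., monthly CO2 and a temperature anomaly), the linearized model reduces to a regression of the rate of change on temperature:

import numpy as np

rng = np.random.default_rng(1)
n, dt = 600, 1.0 / 12.0        # 50 "years" of monthly samples
k_true, To_true = 2.0, -0.1    # ppm/degC/year and degC, invented values

# A toy temperature anomaly: slow trend plus noise.
T = 0.01 * np.arange(n) * dt + 0.2 * rng.standard_normal(n)

# Generate CO2 from the model dCO2/dt = k*(T - To).
CO2 = 280.0 + np.cumsum(k_true * (T - To_true) * dt)

# Least-squares estimate: rate = k*T - k*To.
rate = np.diff(CO2) / dt                        # ppm/year
A = np.column_stack([T[1:], np.ones(n - 1)])    # regressors [T, 1]
(k_hat, c_hat), *_ = np.linalg.lstsq(A, rate, rcond=None)
print("k =", k_hat, " To =", -c_hat / k_hat)    # recovers 2.0 and -0.1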
“If k is a constant, why not simply integrate temperature and correlate it with CO2?”
You could do that but, as an integrator attenuates high frequencies, you would miss a lot of detail – and the detail is the discriminator which tells us that temperature really is what is driving atmospheric CO2, and not human inputs.
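The attenuation is easy to quantify (a sketch; the frequencies are arbitrary):

import numpy as np

# An ideal integrator has gain |1/(j*2*pi*f)|, falling as 1/frequency:
# a 10-year wiggle survives integration 10x better than a 1-year wiggle.
f = np.array([1.0, 0.1])            # cycles per year: fast vs slow
gain = 1.0 / (2.0 * np.pi * f)
print("integrator gain at 1 cy/yr and 0.1 cy/yr:", gain)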
If you look at the anthropogenic rate of input, you will see it does not really correlate with the rate of change of atmospheric CO2. The total accumulation of anthropogenic inputs is slightly quadratic, and so it also looks affinely similar to the measured CO2 concentration. But it is only a superficial resemblance, as its rate of change does not correlate with the measured rate of change the way the temperature does.
The superficial resemblance of accumulated anthropogenic inputs to measured concentration is really not remarkable or compelling. Both are slightly quadratic series, and you can always make two slightly quadratic series look affinely similar simply by doing a linear regression of the one against the other to find the affine parameters which produce the best fit. It is when you get into the fine detail of the rate of change that you recognize that the resemblance is superficial.
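The affine point is easy to demonstrate (made-up series; the numbers mean nothing physically): two near-quadratic series with completely unrelated fine structure regress onto one another almost perfectly, while their rates of change share nothing:

import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 50.0, 600)
a = 0.02 * t**2 + 0.5 * t + rng.standard_normal(600)  # near-quadratic series 1
b = 0.03 * t**2 + 0.1 * t + rng.standard_normal(600)  # near-quadratic series 2

# Affine fit of b against a: slope and offset chosen by least squares.
slope, offset = np.polyfit(a, b, 1)
r_levels = np.corrcoef(b, slope * a + offset)[0, 1]

# The fine detail lives in the rates of change, which here are dominated
# by the independent noise in each series.
r_rates = np.corrcoef(np.diff(a), np.diff(b))[0, 1]
print("levels correlate:", round(r_levels, 3))  # close to 1
print("rates correlate: ", round(r_rates, 3))   # close to 0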
“However, it is also clear that there is a variable lead/lag between them. “
Keep in mind that the WFT site is doing the numerical differentiation and averaging, and advancing the resulting time series to match up in phase. So, it is an anti-causal filter. Therefore, events in the CO2 series can appear to have started before the time at which they actually do.
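A sketch of what such a filter does to timing (my own construction; I do not know WFT’s exact implementation, so treat the 12-point centred difference and smoother as assumptions):

import numpy as np

m = 12
x = np.zeros(240)
x[120:] = np.arange(120, dtype=float)   # a ramp that begins at t = 120

# Centred m-step difference, aligned to the original time axis: the
# value at time t uses samples m/2 steps into the future.
rate = np.zeros(240)
rate[m // 2 : -(m // 2)] = (x[m:] - x[:-m]) / m

# Centred m-point smoothing keeps the alignment but reaches a further
# m/2 steps into the future.
smooth = np.convolve(rate, np.ones(m) / m, mode="same")

print("ramp starts at t = 120")
print("filtered rate first nonzero at t =", int(np.argmax(smooth > 1e-12)))

The filtered rate "turns on" roughly ten samples before the event that caused it, which is exactly the apparent lead described above.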
Moreover, it is important to keep in mind that the temperature measure is a global spatial average, and whatever events are actually causing the CO2 rise may not be in evidence in that average until after the CO2 measurement has been taken. Some people have noted that there appears to be an especially good fit with Southern Hemispheric data, which suggests this may be largely an oceanic phenomenon.
“On a wider point, I really can’t see the point you are trying to make.”
If you do your correlation analysis between the rate of change of CO2 and the temperature, you will find that the dynamics are a good fit to the dCO2/dt = k*(T – To) model, and you may discover finer structure which could lead to better understanding of how this comes about. I don’t do this stuff for a living, and do not have time to chase down everything. Moreover, I am sure I am not nearly as familiar with the particular system as you are, and you can most likely come up with connections of which I am not aware which would help explain how this all comes about.
RCSaumarez says:
April 11, 2013 at 2:59 pm
“If you really want to characterise the relationship, and since the CO2 is a trend, have you considered a state variable approach, i.e.: Kalman Filters?”
Absolutely. But, I do not have the time. I wish someone would do it.
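For anyone who does pick this up, a minimal sketch of the state-variable idea (entirely my own construction: To is assumed known and k is treated as a slowly drifting random walk):

import numpy as np

def kalman_track_k(rate, T, To, q=1e-4, r=0.05, k0=1.0, p0=1.0):
    """Scalar Kalman filter for rate[t] = (T[t] - To)*k[t] + noise,
    where the state k[t] follows a random walk with variance q per step."""
    k, p = k0, p0
    estimates = np.empty(len(rate))
    for i in range(len(rate)):
        p = p + q                      # predict: random-walk state
        h = T[i] - To                  # time-varying measurement gain
        s = h * p * h + r              # innovation variance
        g = p * h / s                  # Kalman gain
        k = k + g * (rate[i] - h * k)  # correct with the innovation
        p = (1.0 - g * h) * p
        estimates[i] = k
    return estimates

# Synthetic sanity check: a constant true k = 2 should be recovered.
rng = np.random.default_rng(3)
T = 0.3 + 0.5 * np.sin(np.linspace(0.0, 20.0, 500))
rate = 2.0 * (T - (-0.1)) + 0.2 * rng.standard_normal(500)
print("final k estimate:", kalman_track_k(rate, T, To=-0.1)[-1])  # about 2.0

A fuller treatment would carry CO2 and To in the state as well, but even this scalar version would show whether k wanders over the record.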
richardscourtney says:
April 11, 2013 at 3:30 pm
My goal was not to sidetrack the conversation. Quite the contrary, I believe the CO2/temperature relationship is an excellent case study for the methods Richard has outlined, and that is why I brought it up.

Bart
April 11, 2013 5:04 pm

RCSaumarez says:
April 11, 2013 at 4:58 pm
“The first is whether temperature drives CO2 (or its derivative) or vice versa.”
If the rate of change of CO2 drives temperature, then CO2 can accumulate to a very high concentration but, once it stops accumulating, the temperature will drop back to the equilibrium level.
That does not make any sense. Thus, we are compelled to conclude the converse.

Bart
April 11, 2013 5:14 pm

richardscourtney says:
April 11, 2013 at 3:30 pm
“Bart knows I do not accept the indications of his analysis…”
I still have no idea how you can imagine that this kind of correlation between two completely independently measured quantities can arise by chance, and not be a real, shared characteristic.

Bart
April 11, 2013 7:22 pm

Bart says:
April 11, 2013 at 5:04 pm
“…once it stops accumulating, the temperature will drop back to the equilibrium level.”
Even more absurdly, you could ramp the CO2 up by an insignificant amount in a very short time, and temperatures would spike upward.
No, the rate of CO2 accumulation is not driving temperature. Temperature is driving the rate of change of CO2. Allow me to remind everyone of the obvious dynamic. From a previous post:

The gas is continuously upwelling from ocean circulation, volcanism, and biological processes. It is continually drawn down via ocean circulation, mineral weathering, and biological processes. And it is doing so at some rate. Any differential between the rate at which it comes in and the rate at which it goes out will result in either an increase or a decrease in the atmosphere.
The rates at which it comes in, and goes out, are strongly affected by temperatures, and not generally in the same measure.
So, you have a temperature-modulated rate of increase or decrease. That is essentially what the equation says: the rate of change is proportional to the temperature anomaly relative to a particular baseline at which equilibrium holds, i.e.,
dCO2/dt = k*(T – To)
And, lo and behold, we look at the data, and that is exactly what we see.

BTW, I have what I believe to be solid reasons to suspect that a fuller description of this system is something along the lines of
dCO2/dt = (CO2eq – CO2)/tau + f*H
dCO2eq/dt = k*(T – To)
where CO2eq is the equilibrium level of CO2 to which the system is attracted based on current conditions, tau is a time constant associated with sequestration, H is the rate of human inputs, and f is the fraction of human inputs which is not rapidly absorbed into the oceans and remains airborne. If tau is “short”, then H will be rapidly sequestered, and CO2 will track CO2eq.
The quantities “f” and “tau” could be made operator-theoretic to account for more complex dynamics, but I think starting with an assumption of constant values and attempting to estimate them from the data would be a good starting point.
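As a starting point, here is how one might step that pair of equations forward (a sketch; every parameter value below is invented for illustration, not estimated from data):

import numpy as np

def simulate(T, H, k=2.0, To=-0.1, tau=5.0, f=0.1, dt=1.0 / 12.0, co2_0=315.0):
    """Euler integration of dCO2/dt = (CO2eq - CO2)/tau + f*H and
    dCO2eq/dt = k*(T - To)."""
    co2, co2eq = co2_0, co2_0
    out = np.empty(len(T))
    for i in range(len(T)):
        dco2 = (co2eq - co2) / tau + f * H[i]
        dco2eq = k * (T[i] - To)
        co2 += dco2 * dt
        co2eq += dco2eq * dt
        out[i] = co2
    return out

n = 600                                              # 50 "years", monthly
t = np.arange(n) / 12.0
T = 0.01 * t + 0.1 * np.sin(2.0 * np.pi * t / 3.0)   # toy temperature anomaly
H = np.linspace(2.0, 9.0, n)                         # toy human input, ppm/year
print("CO2 after 50 years:", simulate(T, H)[-1])

With tau short relative to the record, CO2 tracks CO2eq closely; fitting such a simulation to the real series would be the obvious way to attempt estimates of f and tau.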

Auto Phil
April 12, 2013 1:27 am

Bart is right.
His term H is the human forcing function for CO2, and it swamps the temperature-activated portion.

Bart
April 12, 2013 5:37 am

Auto Phil says:
April 12, 2013 at 1:27 am
Absurd. H does not correlate with the rate of change of CO2. It is necessarily strongly attenuated by the time constant of sequestration.

Matt in Houston
April 12, 2013 12:24 pm

I stumbled in here a bit late, but wanted to give kudos to the author and say this has been one of the more enjoyable discussions I have read through in a while. My engineering mathematics are a bit rusty (13 years out), but the article and commentary brought back painful and happy memories of my school days. The commentary has been excellent, and I agree with many here that our dimwitted leftist greeny “science” (Mr. Mann et al.) friends could do with a good set of lessons on signal analysis from some angry EEs. Although I confidently suspect they couldn’t care less whether they have accurate and valid analyses; they are more interested in the fame they have garnered from the “work”, and don’t care about the end to which they are working western civilization.
PS – Willis, I was ROFLMAO at your grammar-nazi commentary; that was just awesome. I am still smiling now.

Auto Phil
April 13, 2013 2:51 pm

Bart is right. His term H is a forcing function which, when convolved with the impulse response of sequestration, gives the residual CO2 concentration. Bart nicely tied up the loose ends – thanks.