Guest Post by Willis Eschenbach
Looking at a recent article over at Tallbloke’s Talkshop, I realized I’d never done a periodogram looking for possible cycles in the entire Central England Temperature (CET) series. I’d looked at part of it, but not all of it. The CET is one of the longest continuous temperature series, with monthly data starting in 1659. At the Talkshop, people are discussing the ~24 year cycle in the CET data, and trying (unsuccessfully but entertainingly in my opinion) to relate various features of Figure 1 to the ~22-year Hale solar magnetic cycle, or a 5:8 ratio with two times the length of the year on Jupiter, or half the length of the Jupiter-Saturn synodic cycle, or the ~11 year sunspot cycle. They link various peaks to most every possible cycle imaginable, except perhaps my momma’s motor-cycle … here’s their graphic:
Figure 1. Graphic republished at Tallbloke’s Talkshop, originally from the Cycles Research Institute.
First off, I have to say that their technique of removing a peak and voila, “finding” another peak is mondo sketchy on my planet. But setting that aside, I decided to investigate their claims. Let’s start at the natural starting point—by looking at the CET data itself.
Figure 2 shows the monthly CET data as absolute temperatures. Note that in the early years of the record, temperatures were only recorded to the nearest whole degree. Provided that the rounding is symmetrical, this should not affect the results.
Figure 2. Central England Temperature (CET). Red line shows a trend in the form of a loess smooth of the data. Black horizontal line shows the long-term mean temperature.
Over the 350-year period covered by the data, the average temperature (red line) has gone up and down about a degree … and at present, central England is within a couple tenths of a degree of the long-term mean, which also happens to be the temperature when the record started … but I digress.
Figure 3 shows my periodogram of the CET data shown in Figure 2. My graphic is linear in period rather than linear in frequency as is their graphic shown in Figure 1.
Figure 3. Periodogram of the full CET record, for all periods from 10 months to 100 years. Color and size both show the p-value. Black dots show the cycles with p-values less than 0.05, which in this case is only the annual cycle (p=0.03). P-values are all adjusted for autocorrelation. The yellow line shows one-third the length of the ~350 year dataset. I consider this a practical limit for cycle detection. P-values for all but the one-year cycle are calculated after removal of the one-year cycle.
I show the periodogram in this manner to highlight once again the amazing stability of the climate system. One advantage of the slow Fourier transform I use is that the answers are in the same units as the input data (in this case °C). So we can see directly that the average annual peak-to-peak swing in the Central England temperature is about 13°C (23°F).
And we can also see directly that other than the 13°C annual swing, there is no other cycle of any length that swings even a half a degree. Not one.
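To make that concrete, here's the idea in skeleton form; the real, commented code is in the zip file linked at the end of the post. For each trial period, fit a sine and a cosine by ordinary linear regression, and report the peak-to-peak amplitude of the fitted wave in the same units as the data (here, cet stands in for the monthly temperature vector):

# skeleton of one line of the periodogram: fit a sine/cosine pair at a
# trial period by linear regression, return the peak-to-peak amplitude
cycle_amplitude <- function(temp, period) {    # period in months
  t   <- seq_along(temp)
  fit <- lm(temp ~ sin(2 * pi * t / period) + cos(2 * pi * t / period))
  a   <- coef(fit)[2]                          # sine coefficient
  b   <- coef(fit)[3]                          # cosine coefficient
  unname(2 * sqrt(a^2 + b^2))                  # swing, in the units of 'temp'
}
# e.g. cycle_amplitude(cet, 12) comes out around 13 (deg C) for the CET data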
So that is the first thing to keep in mind regarding the dispute over the existence of purported regular cycles in temperature. No matter what cycle you might think is important in the temperature record, whether it is twenty years long or sixty years or whatever, the amplitude of the cycle is very small, tenths of a degree. No matter if you’re talking about purported effects from the sunspot cycle, the Hale solar magnetism cycle, the synodic cycle of Saturn-Jupiter, the barycentric cycle of the sun, or any other planetasmagorica, they all share one characteristic. If they’re doing anything at all to the temperature, they’re not doing much. Bear in mind that without a couple hundred years of records and sophisticated math we couldn’t even show and wouldn’t even know such tiny cycles exist.
Moving on, often folks don’t like to be reminded of how tiny the temperature cycles actually are. So of course, the one-year cycle is usually not shown in a periodogram … too depressing. Figure 4 is the usual view, which shows the same data, except starting at 2 years.
Figure 4. Closeup of the same data as in Figure 3. As in Figure 3, the statistical significance calculations were done after removal of the 1-year cycle. Unlike the previous figure, in this and succeeding figures the black dots show all cycles that are significant at a relaxed p-value threshold, 0.10 instead of 0.05. This is because even after removing the annual signal, not one of these cycles is significant at a p-value of 0.05.
Now, the first thing I noticed in Figure 4 is that we see the exact same largest cycles in the periodogram that Tallbloke’s source identified in their Figure 1. I calculate those cycle lengths as 23 years 8 months, and 15 years 2 months. They say 23 years 10 months and 15 years 2 months. So our figures agree to within expectations, always a first step in moving forwards.
So … since we agree about the cycle lengths, are they right to try to find larger significance in the obvious, clear, large, and well-defined 24-year cycle? Can we use that 24-year cycle for forecasting? Is that 24-year cycle reflective of some underlying cyclical physical process?
Well, the first thing I do to answer that question is to split the data in two, an early half and a late half, and compare the analyses of the two halves. I call it the bozo test: it’s the simplest of all possible tests, requiring no further data and no special equipment. Figures 5a-b below show the periodograms of the early and late halves of the CET data.
Figure 5a-b. Upper graph shows the first half of the CET data and the lower graph shows the second half.
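In code, the bozo test is about as simple as it sounds. A skeleton in R, reusing the cycle_amplitude() helper sketched above (cet again standing in for the monthly series):

# the bozo test: run the identical periodogram on each half of the data
periods <- seq(24, 1200, by = 2)       # trial periods from 2 to 100 years, in months
half    <- floor(length(cet) / 2)
early   <- sapply(periods, function(p) cycle_amplitude(cet[1:half], p))
late    <- sapply(periods, function(p) cycle_amplitude(cet[(half + 1):length(cet)], p))
plot(periods / 12, early, type = "l", xlab = "Period (years)",
     ylab = "Peak-to-peak amplitude (deg C)")
lines(periods / 12, late, col = "red") # a real cycle should show up in both halves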
I’m sure you can see the problem. Each half of the data is a hundred and seventy-five years long. The ~24-year cycle exists quite strongly in the first half of the data. It has a swing of over six-tenths of a degree on average over that time, the largest seen in these CET analyses.
But then, in the second half of the data, the 24-year cycle is gone. Pouf.
Well, to be precise, the 24-year peak still exists in the second half … but it’s much smaller than it was in the first half. In the first half, it was the largest peak. In the second half, it’s like the twelfth largest peak or something …
And on the other hand, the ~15-year cycle wasn’t statistically significant at a p-value less than 0.10 in the first half of the data, and it was exactly 15 years long. But in the second half, it has lengthened almost a full year to nearly 16 years, and it’s the second largest cycle … and in the second half, the largest cycle is 37 months.
Thirty-seven months? Who knew? Although I’m sure there are folks who will jump up and say it’s obviously 2/23rds of the rate of rotation of the nodes on the lunar excrescences or the like …
To me, this problem over-rides any and all attempts to correlate temperatures to planetary, lunar, or tidal cycles.
My conclusion? Looking for putative cycles in the temperature record is a waste of time, because the cycles appear and disappear on all time scales. I mean, if you can’t trust a 24-year cycle that lasts for one hundred seventy-five years, then just who can you trust?
w.
De Costumbre: If you object to something I wrote, please quote my words exactly. It avoids tons of misunderstandings.
Data and Code: I’ve actually cleaned up my R code and commented it and I think it’s turnkey. All of the code and data is in a 175k zip file called CET Periodograms.
Statistics: For the math inclined, I’ve used the method of Quenouille to account for autocorrelation in the calculation of the statistical significance of the amplitude of the various cycles. The method of Quenouille provides an “effective n” (n_eff), a reduced count of the number of datapoints to use in the various calculations of significance.
To use the effective n (n_eff) to determine if the amplitude of a given cycle is significant, I first need to calculate the t-statistic. This is the amplitude of the cycle divided by the error in the amplitude. However, that error in amplitude is proportional to

1 / sqrt(n)

where n is the number of data points. As a result, using our effective N, the error in the amplitude is

error_eff = error * sqrt(n / n_eff)

where n_eff is the “effective N”.
From that, we can calculate the t-statistic, which is simply the amplitude of the cycle divided by the new error.
Finally, we use that new error to calculate the p-value, which is
p-value = t-distribution(t-statistic , degrees_freedom1 = 1 , degrees_freedom2 = n_eff)
At least that’s how I do it … but then I was born yesterday, plus I’ve never taken a statistics course in my life. Any corrections gladly considered.
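In skeleton form, and assuming the common lag-1 version of Quenouille’s correction (my actual code is in the zip file), the whole calculation looks something like this, where fit and amplitude come from a sine/cosine regression like the one sketched earlier:

# skeleton of the significance test, assuming the lag-1 Quenouille correction
res    <- fit$residuals                      # residuals after fitting the cycle
r1     <- acf(res, plot = FALSE)$acf[2]      # lag-1 autocorrelation
n_eff  <- length(res) * (1 - r1) / (1 + r1)  # Quenouille's "effective n"
se_amp <- sd(res) * sqrt(2 / n_eff)          # error in the fitted amplitude
t_stat <- amplitude / se_amp
p_val  <- pf(t_stat^2, df1 = 1, df2 = n_eff, lower.tail = FALSE)
# t^2 against F(1, n_eff) is equivalent to a two-sided t-test with n_eff
# degrees of freedom, i.e. the two-degrees-of-freedom call shown above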
@vukcevic
I am not doubting that your curve adequately describes what is happening now, but it does not foresee the change that will happen in 2015 or 2016. If we follow your curve we would end up with zero solar polar field strength…. This is the part of the puzzle that you miss: at some point, it has to cycle back!! The whole idea behind this part of creation is, again, just like Willis’s paper on tropical storms, to keep temperature on earth within certain boundaries.
Willis Eschenbach says:
May 9, 2014 at 9:55 am
I just tried that, I took the periodograms for June only full data, early, and late. It shows little difference from the periodograms in the head post—strong ~24-year cycle in the first half, and no 24-year cycle at all in the second half. Go figure.
Hi Willis
Agree about 24 years, but ~22 years looks OK to me.
http://www.vukcevic.talktalk.net/CET-June-spec.htm
The two data sections I selected are related to the historical sunspot records.
For next project: perhaps you could look at ENSO
I am somewhat puzzled now as to why all my comments are put in moderation?
HenryP says:
May 9, 2014 at 12:43 am
I’d take that bet, but I suspect it was literary exaggeration … in any case, hang on, let me run the analysis … … …

OK, you’d have lost the bet. I just looked at that very thing, CET max temperatures. I can’t compare it directly to my analysis of the mean temps, because the CET max temps dataset only starts in 1879.
In any case, the results are quite similar to the latter half of the mean dataset, in that they lack the dominant 24-year cycle seen in the full CET mean periodogram shown in Fig. 4, or in the early CET mean data shown in Figure 5a.
Regards,
w.
PS—The Hale-Nicholson cycle is the magnetic cycle of the sun, which varies in sync with the sunspots. Since the sunspot cycle varies so widely, if you are looking for traces of it in the climate data, you’d do better to use something like a cross-correlation function.
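For instance, with hypothetical aligned annual series ssn_annual and cet_annual:

# lagged correlation of annual sunspot number against annual CET
ccf(ssn_annual, cet_annual, lag.max = 30,
    main = "Cross-correlation: sunspot number vs CET")
# shows the correlation at every lead and lag, rather than assuming a fixed period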
Steven Mosher says:
May 9, 2014 at 9:36 am
Why should you see any cycles in climate data? What cycles should you see, and where EXACTLY do you expect to see them?
In Steven Mosher’s data:
Richard A. Muller, Judith Curry: Decadal Variations in the Global Atmospheric Land Temperatures
Fig. 7: A strong peak is observed in the AMO at 0.110 ± 0.005 cycles/year, corresponding to a period of 9.1 ± 0.4 years, at the 98.3% confidence level. The maximum peak in the PDO occurs at a similar frequency, 0.111 ± 0.006, although with a confidence level of 94%. (Page 10)
Hi Steven ,
That graph looks a bit of a ‘unicorn’, 98.3% confidence (?!); as you say, GO figure
These appearing and disappearing cycles remind me of the motion of one of those toy chaotic pendulums. As you watch the swinging pendulum arm, it occasionally appears to settle into regular periodic motion, but this only lasts for a few cycles and then it’s off to another, completely different cycle. But we already know that weather and climate are chaotic, right?
I always enjoy your insightful posts, Willis.
” But we already know that weather and climate are chaotic, right?”
Err, no, we don’t know that. We know that some of the underlying physical systems can be characterized by functions that are chaotic, but we don’t know that climate (long-term stats), or “weather”, whatever that is, is in fact chaotic.
“It’s chaotic” is an unexamined belief that most skeptics have.
Demonstrating it conclusively is hard; proving it is even harder. But the claim serves a purpose, so people make it.
REPLY: Now you are arguing that we don’t know if weather is/is not chaotic? Are you smoking something? If Ed Lorenz was around he’d kick you in the butt. I may very well do it for him next time I see you. – Anthony
PaulH says: May 9, 2014 at 6:14 am
It’s like looking for patterns in clouds. If you try hard enough you are bound to find a horse, a boat, or anything else you want to see.
While I don’t claim to speak for Willis, I think that that is what he has been ‘getting at’ for some time. It is very easy to fool oneself and a thorough (open to criticism) statistical analysis is one of the few tools we have to distinguish ‘cloud gazing’ from reality.
One of the problems I see currently is that we have an awful lot of data, fast machines, sophisticated modelling tools and we tend to employ them toward ‘cloud gazing’. We want to find a ‘sheep’ and so we do.
We can now count, in real time, the number of Ants entering and leaving a nest. We have nothing to compare it to though. If the count falls over the next ten years, is this significant? Has it always been that way or is this something new? We can use Satellites to gauge cloud cover but we have nothing to compare it to in the pre-satellite era.
We can fool ourselves concerning the stock market, Climate, Ant numbers and any number of ‘data rich’ environments. But I certainly wouldn’t invest my ‘pension pot’ with a company that claimed to have identified ‘patterns’ in the stock market that a simple ‘Willis test’ proved were nonsense.
Cyclomania is often Astrology, not Astrophysics.
Science becomes an ‘all you can eat’ buffet. Ants up, Ants down, Ants left/right and it’s all traced back to ‘politics’.
I fully expect some ‘scientist’ will publish a paper suggesting that the down cycle he identified in the number of Ants is significantly correlated to stock market moves – thereby ‘proving’ that human commercial activity is killing Ants. What’s the betting that he is an ardent socialist?
All you can eat buffet. Link Ant numbers to Jupiter/Sun/Moon/stock market/US car production… Cyclomania is all things to all men. Only when Willis casts his eye upon a ‘paper’ do we all start to question what we ‘know to be true’.
steveta_uk says:
May 9, 2014 at 1:39 am
Thanks, steve. Seems possible. Let me think about it.
w.
@Willis
thanks for that analysis, I do appreciate it. I agree that I have lost my bet!!!
I had anticipated/hoped that the weather effect (GH effect due to clouds) would be less in maxima
However, you must admit that vukcevic’s analysis is undeniably very clear:
http://www.vukcevic.talktalk.net/CET-June-spec.htm
even if it is only for one month of the year
(which happens to be the month when they have the best (unclouded) weather)
btw
Do I notice something at 44 years there?
That would not surprise me…..
You have to try and understand this graph, or you will miss it altogether
http://ice-period.com/wp-content/uploads/2013/03/sun2013.png
Steven Mosher says:
May 9, 2014 at 12:49 pm
(Yeah I removed the weather part, want no part of that controversy)
Actually it’s nice to hear someone say that. Usually I’m glibly told the opposite without any decent proof, that we know climate is not chaotic, with just dubious analogies as justification that may or may not be applicable. If we actually knew the energy budget with confidence that’d be one thing. Fine, if energy is accumulating in the system there’s going to be warming sooner or later. If you buy that climate sensitivity can’t be zero, then there’s going to be at least a little warming over time, OK. Beyond that? I don’t see how we know one way or the other right now. But I’m always looking for the opportunity to learn something I don’t know here, so feel free to tell me where and why I’m wrong.
Willis says
PS—The Hale-Nicholson cycle is the magnetic cycle of the sun, which varies in sync with the sunspots. Since the sunspot cycle varies so widely, if you are looking for traces of that in the climate data, you’d do better to look for that with something like a cross-correlation function
henry says
as far as I am concerned, having looked at ssn, I don’t believe in it. I can see a linear trend in ssn that seems to be going up, as time progresses. The observation of ssn is subjective due to
a) people’s eyesight
b) magnification improved over time
c) “corrections” applied over time, to correct for a) and b)
better to leave ssn out of everything
is my policy
anyway
I think there are problems with the “Slow Fourier Transform”.
One is a question of nomenclature. A Fourier Transform is an analytical concept and is a function that is continuous in frequency. One has to know the signal over its entire extent, between minus and plus infinity. As is often stated in DSP 101, ‘one cannot take the Fourier Transform of a real signal – it does not exist’.
A Discrete Fourier Transform, more properly a complex exponential discrete Fourier series, is limited over the sampling period and its harmonics are multiples of 1/period and extend as an infinite series. In a DFT the other frequencies are not zero – they do NOT EXIST.
My understanding of a slow Fourier Transform is that it evaluates the Fourier Integral for each term naively, rather than using the recursive nature of the FFT. They should yield identical results.
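That equivalence is easy to check numerically; a quick sketch in R:

# a naive term-by-term evaluation matches the FFT at the harmonic frequencies
set.seed(1)
x <- rnorm(256)
n <- length(x)
k <- 5                                   # pick one harmonic
naive <- sum(x * exp(-2i * pi * k * (0:(n - 1)) / n))
naive - fft(x)[k + 1]                    # zero to rounding error (R's fft is 1-indexed)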
What is being done here is not a DFT in the normal accepted sense.
This may seem an abstract point but it is fundamental to the concept presented here and important in interpreting the results. If you do not get the theory of signal processing correct, you will not understand the results.
The results of the “Slow Fourier Transform” are, I presume, continuous in frequency because the frequency specified may be an irrational number, although they are computed here for discrete frequencies. In other words, data exists at frequencies that are not harmonics of the fundamental frequency, which in a true DFT do not exist.
Therefore what is being calculated and how does it relate to the Fourier transform?
In order to take a Fourier Transform of a function, it has to be a bounded integral; in other words, the integral must converge to a value with infinite limits of integration. A sine wave, for example, does not have an FT because the integral is not bounded. However, the FT can be approximated by multiplying the sine wave by a window that limits the sine wave to a specific time interval. The resulting spectrum is that of a rectangular pulse centred about the fundamental frequency of the sine wave, i.e. a sin(x)/x function.
What is being done here approximates a numerical evaluation of the Fourier Transform, as opposed to the series, of the signal multiplied by a time window. In the case of a sine wave, this produces lobes around the harmonic of the signal, as shown clearly in the example above. The magnitude of these lobes depends critically on the number of samples in the time frame, but the essential point is that by this type of processing, one is introducing artefacts into the “spectrum”. In a complicated signal, the various components will overlap unpredictably and force correlation between components where none in fact exists.
This influences the statistical methods used to determine whether a particular harmonic is greater than would be expected from a random signal. In the case of a DFT, this is well understood and the power of each component is a Chi-squared distribution with 2 degrees of freedom (for a Gaussian signal). The distribution for the “Slow Fourier Transform” strikes me as an extremely difficult problem, and it does not appear to have been properly analysed. It is certainly not a matter of simple filtering as you suggest.
Most signal processing methods stem from formal, analytical theory. This one does not seem to have any theory. There are many methods of spectral estimation that are not based on the DFT including, for example, maximum entropy methods and principal component analysis of the autocorrelation matrix. These are mathematically tractable, are widely used and are the results of a great deal of research. For example, see Monson Hayes: “Statistical Digital Signal Processing and Modeling”, Wiley 1996, chapter 8, “Spectrum Estimation”.
Steven Mosher says:
“So return to the fundamental question. Why should you see any cycles in climate data? What cycles should you see, where EXACTLY do you expect to see them? What physical theory tells you to expect this?”
Solar grand minimums on average every 10 solar cycles, of variable length and with a drift from the mean periodicity by +/- 1-2 solar cycles. E.g. Dalton was cycles 5&6, and the next minimum was cycles 12-14, and the current one is from SC24. The 20th century, apart from SC14, largely missed out on a grand minimum, while the 19th century had two.
I will be doing an article on why and where they occur, and where the maximum of each cycle occurs.
Hey, I’ve thought of a name for people who obsessively study climate data looking for regular cycles. I was thinking we could call them ‘Cyclists’?
Just wanted to add that I did a spectral analysis on CET a couple of days ago and found the same 24yr cycle. Since it wasn’t what I was looking for (i.e. a ~60yr cycle) I called it quits. Then I saw this post and decided to replicate the “bozo test”. I just want to report that Willis’s results were replicated using the R “spectrum” function.
BTW… Peeking at the code, it looks like Willis is fitting a sine wave using linear regression. Kewl!
All the best, AJ
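For anyone who wants to repeat AJ’s check, a minimal version using R’s built-in spectrum() function might look like this (assuming the monthly CET values are in a numeric vector cet):

# split-half spectra with the standard R toolchain
half <- floor(length(cet) / 2)
spectrum(cet[1:half], spans = 3)                  # early half: look for the ~24-yr peak
spectrum(cet[(half + 1):length(cet)], spans = 3)  # late half: the peak disappears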
Duster May 9, 2014 at 10:08 am
You provide an example of what Willis warns against, namely seeing patterns that are not there. What you have noticed is a consequence of taking values tabulated in degF and converting them to degC.
How they were measured is another matter entirely: back then there were no standardised scales nor accurate thermometers, nor did anyone bother about consistent time-of-day for measurements, siting problems, etc. So it is ironic that a fuss is made about these factors now, while the accuracy of the pre-~1750 CET measurements is taken as read.
Anthony, when you find the proof that rain, hail, wind, temperature, hurricanes, lightning (you know, the weather) is chaotic, put that proof up here. Lorenz had ideas. Appealing to authority is not proof. Practice skepticism. Question everything, as Einstein wrote.
Just show the proof. Start with a definition of weather.
Ha, that will be fun
REPLY: OK smartass. Weather is the state of the atmosphere at a given place and time that is defined by temperature, dryness, solar insolation, wind, pressure, gravity, and Coriolis force from Earth’s rotation. All of these determine what sort of weather condition will be present at that place and time. A chaotic system is a system which exhibits dynamics over time that are highly sensitive to the initial conditions. After a period of time, the chaotic nature of the system makes any linear extrapolation from the initial starting condition vary in unpredictable ways, making forecasting with linear methods useful only in the time nearest the starting conditions. This isn’t an idea, it’s reality. That’s why weather forecasting breaks down after a few days.
I don’t have to prove weather is chaotic; it’s already accepted that it is. Lorenz’s last interview:
“Last Interview with Professor Edward Lorenz? – Revisiting the ‘limits of predictability’ – Impact on the Operational and Modeling Communities?”, Bulletin of the American Meteorological Society, 2013, e-View, doi:10.1175/BAMS-D-13-00096.1. Full PDF: http://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-13-00096.1
You however have to prove that weather isn’t chaotic.
Starting with weather conditions today, tell me:
1. When we’ll have a thunderstorm over Sacramento, CA that will produce at least 1″ of rain.
2. Or if that’s too small for you, tell me the next date a Cat3 or greater hurricane will hit the USA East Coast and where it will hit.
Show your work, including data and code, and if you turn out to be correct, I’ll believe your assertion that weather is not chaotic.
-Anthony
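For illustration, that kind of sensitivity to initial conditions can be demonstrated with even the simplest nonlinear map; a quick sketch in R:

# two trajectories of the chaotic logistic map, starting almost identically
x <- 0.4
y <- 0.4 + 1e-7
for (i in 1:50) {
  x <- 3.9 * x * (1 - x)
  y <- 3.9 * y * (1 - y)
}
c(x, y)   # after 50 steps the tiny initial difference has grown to order one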
May I add my two pence? Can we say weather is unpredictable and changeable? It’s the same meaning really. There are patterns, but occasionally mother nature does something not observed often, like snow in summer. But central England is only one part of England; it is along the Pennines, a beautiful area, rather undulating. But go up to the far North of Scotland and you have a land of the midnight sun. There are ice core records; look at the Russians. There was a news article yesterday in the Daily Telegraph about retrieving an ice core from Antarctica, to provide the IPCC with some more bullets. Go for it Willis.
Ryan says:
May 9, 2014 at 10:06 am
Oh, I’m liking that plan, many thanks … someone upthread asked why I don’t do wavelets. Two reasons. First, I can’t figure the errors and the p-values and such of the wavelets. Second, I like methods that I understand from the bottom up. Oh, I understand wavelets in theory. But not the nuts and bolts.
Give me some time, your plan sounds good. The problem, of course, is the requirement that we need 3x the data length to find a cycle. For a 24-year cycle, a 75-year window works. If you want to find a 40-year cycle, you need a 120-year window.
Let me see what I can do … weekend coming up.
w.
AJ says:
May 9, 2014 at 4:23 pm
AJ, always good to hear from you. I’m glad to know you’ve replicated my results …
I was quite proud when I dreamed that one up. Before that I was optimizing a sine wave, a very slow process. Instead, I just created a sine wave and a cosine wave, and used linear regression to give the optimum results using the two waves as the independent variable and the data as the dependent variable. Then I could take the peak-to-peak amplitude of the resulting fitted sine wave.
Indeed, life is good,
w.
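In outline, the before-and-after Willis describes might look like this (a sketch with hypothetical names: y is the data, t the time index, p the trial period; not his posted code):

# the slow way: nonlinear optimization of amplitude and phase
slow <- optim(c(A = 1, phi = 0),
              function(par) sum((y - par[1] * sin(2 * pi * t / p + par[2]))^2))

# the fast way: sine and cosine regressors, solved in one linear fit
fast <- lm(y ~ sin(2 * pi * t / p) + cos(2 * pi * t / p))
2 * sqrt(sum(coef(fast)[-1]^2))   # peak-to-peak amplitude of the fitted wave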
Willis,
Nice article, nice touch to split the data & compare.
I’m wondering if the ‘raw’ CET that you use is the best metric to go hunting for cycles.
There are some known cycles like diurnal that have been averaged out of the data beforehand, then you take more out by autocorrelation correction, but there are still more.
Colleagues are looking in detail at representative long-term Australian sites and finding a reverse correlation between rainfall and Tmax (but not Tmin; you have Tmean from CET). Shorthand, “Water Cools”.
It is possible to ‘adjust’ the Tmax and hence the Tmean by a stats correction for rainfall. I suggest that if you did this, it would make cycle detection more stark, as sketched below. (How I dislike suggesting an adjustment, but you have already stripped 1-year cycles so what the heck?)
The rainfall effect on Tmax is quite strong, varies in size of effect from site to site and is usually stats significant where studied so far, not many sites yet, most 100 years or more duration. So it is not easy for CET where sites are averaged together and where I’m not too sure of the extent of rainfall coverage.
Then one has to think if there are more, similar variables perturbing Tmean. Cloud coverage is an obvious one and I’ve noted your past work on what a difference a cloud 10 minutes late in forming can make. It depends on whether you are looking for all possible classes of cycles in an overarching data set, or whether there is more to be found by stripping out as many known cycles/perturbations as possible, then analysing the corrected set. It’s your choice. (I’d do it myself if I could but I’m fighting debilitating illness presently). Geoff.
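A sketch of what such a rainfall correction might look like in R, with hypothetical aligned series tmax and rain:

# regress Tmax on rainfall and keep the residuals as the "adjusted" series
fit      <- lm(tmax ~ rain)
tmax_adj <- residuals(fit) + mean(tmax)  # rainfall-adjusted Tmax, mean restored
# hunting for cycles in tmax_adj should be less contaminated by "water cools"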
Not too original, I am afraid. I use the multiple linear regression technique on sine/cosine pairs in the CSALT model of global warming, described here:
http://contextearth.com/context_salt_model/
This is used to predict CO2 AGW to a gnat’s eyelash:
http://imageshack.com/a/img842/2517/r2f8.gif
WUWT’ers would be advised to pay close attention to this kind of thermodynamic modeling, as it lays waste to skeptical arguments.
Bushbunny says
‘There are ice core records, look at the Russians. There is a news article yesterday in the Daily Telegraph, about retrieving an ice core from the Antarctica, to provide the IPCC with some more bullets. Go for it Willis.’
CET is collected from 3 stations geographically some distance apart. Why would collecting an ice core representing a few square inches of the over-sensitive polar regions be a better metric? Indeed, why is so much faith put in ice cores as a global proxy?
tonyb
Well, you are right tony, but actually deep sea cores have shown up temperature changes over the millennia. What those jokers acting for the IPCC are hoping to find is warming trends. I doubt that proves anything, unless they are measuring the amount of pollen deposited at various times and not some intrepid Antarctic explorer emptying his tea pot outside his tent. LOL. But I was asking Willis to do one for us. I have it in one of my archaeology books, but that dates to 1986. Certainly they found a pattern between interglacial and full glacial periods and mini ice ages.