Guest Post by Willis Eschenbach
Looking at a recent article over at Tallbloke’s Talkshop, I realized I’d never done a periodogram looking for possible cycles in the entire Central England Temperature (CET) series. I’d looked at part of it, but not all of it. The CET is one of the longest continuous temperature series, with monthly data starting in 1659. At the Talkshop, people are discussing the ~24 year cycle in the CET data, and trying (unsuccessfully but entertainingly in my opinion) to relate various features of Figure 1 to the ~22-year Hale solar magnetic cycle, or a 5:8 ratio with two times the length of the year on Jupiter, or half the length of the Jupiter-Saturn synodic cycle, or the ~11 year sunspot cycle. They link various peaks to most every possible cycle imaginable, except perhaps my momma’s motor-cycle … here’s their graphic:
Figure 1. Graphic republished at Tallbloke’s Talkshop, originally from the Cycles Research Institute.
First off, I have to say that their technique of removing a peak and voila, “finding” another peak is mondo sketchy on my planet. But setting that aside, I decided to investigate their claims. Let’s start at the natural starting point—by looking at the CET data itself.
Figure 2 shows the monthly CET data as absolute temperatures. Note that in the early years of the record, temperatures were only recorded to the nearest whole degree. Provided that the rounding is symmetrical, this should not affect the results.
Figure 2. Central England Temperature (CET). Red line shows a trend in the form of a loess smooth of the data. Black horizontal line shows the long-term mean temperature.
Over the 350-year period covered by the data, the average temperature (red line) has gone up and down about a degree … and at present, central England is within a couple tenths of a degree of the long-term mean, which also happens to be the temperature when the record started … but I digress.
Figure 3 shows my periodogram of the CET data shown in Figure 2. My graphic is linear in period rather than linear in frequency as is their graphic shown in Figure 1.
Figure 3. Periodogram of the full CET record, for all periods from 10 months to 100 years. Color and size both show the p-value. Black dots show the cycles with p-values less than 0.05, which in this case is only the annual cycle (p=0.03). P-values are all adjusted for autocorrelation. The yellow line shows one-third the length of the ~350 year dataset. I consider this a practical limit for cycle detection. P-values for all but the one-year cycle are calculated after removal of the one-year cycle.
I show the periodogram in this manner to highlight once again the amazing stability of the climate system. One advantage of the slow Fourier transform I use is that the answers are in the same units as the input data (in this case °C). So we can see directly that the average annual peak-to-peak swing in the Central England temperature is about 13°C (23°F).
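For readers who want to see the idea in code, here is a rough R sketch of such a "slow Fourier" periodogram, fitting a sine and a cosine at each trial period by least squares. The function name slow_periodogram() and the details are my illustration of the approach, not the actual code in the zip file linked below.

```r
# Minimal sketch of a "slow Fourier" periodogram: at each trial period,
# fit sine and cosine terms by least squares, then report the amplitude
# in the same units as the input data (degrees C for the CET).
slow_periodogram <- function(y, periods_months) {
  t <- seq_along(y)                        # time index, in months
  sapply(periods_months, function(P) {
    si  <- sin(2 * pi * t / P)
    co  <- cos(2 * pi * t / P)
    fit <- lm(y ~ si + co)                 # least-squares fit at period P
    a   <- coef(fit)["si"]
    b   <- coef(fit)["co"]
    unname(sqrt(a^2 + b^2))                # amplitude; peak-to-peak swing = 2x this
  })
}

# Example use, assuming 'cet' holds the monthly CET series:
# periods <- 10:1200                       # 10 months to 100 years, in months
# amp <- slow_periodogram(cet, periods)
# plot(periods / 12, 2 * amp, type = "h",
#      xlab = "Period (years)", ylab = "Peak-to-peak swing (deg C)")
```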
And we can also see directly that other than the 13°C annual swing, there is no other cycle of any length that swings even a half a degree. Not one.
So that is the first thing to keep in mind regarding the dispute over the existence of purported regular cycles in temperature. No matter what cycle you might think is important in the temperature record, whether it is twenty years long or sixty years or whatever, the amplitude of the cycle is very small, tenths of a degree. No matter if you’re talking about purported effects from the sunspot cycle, the Hale solar magnetism cycle, the synodic cycle of Saturn-Jupiter, the barycentric cycle of the sun, or any other planetasmagorica, they all share one characteristic. If they’re doing anything at all to the temperature, they’re not doing much. Bear in mind that without a couple hundred years of records and sophisticated math we couldn’t even show and wouldn’t even know such tiny cycles exist.
Moving on: folks often don't like to be reminded of how tiny the temperature cycles actually are. So of course the one-year cycle is not shown in the usual periodogram; too depressing. Figure 4 is the usual view, which shows the same data, except starting at 2 years.
Figure 4. Closeup of the same data as in Figure 3. Unlike in Figure 3, the statistical significance calculations are done after removal of the 1-year cycle. Also unlike the previous figure, in this and succeeding figures the black dots show all cycles that are significant at a relaxed p-value threshold, in all cases 0.10 instead of 0.05. This is because even after removing the annual signal, not one of these cycles is significant at a p-value of 0.05.
Now, the first thing I noticed in Figure 4 is that we see the exact same largest cycles in the periodogram that Tallbloke’s source identified in their Figure 1. I calculate those cycle lengths as 23 years 8 months, and 15 years 2 months. They say 23 years 10 months and 15 years 2 months. So our figures agree to within expectations, always a first step in moving forwards.
So … since we agree about the cycle lengths, are they right to try to find larger significance in the obvious, clear, large, and well-defined 24-year cycle? Can we use that 24-year cycle for forecasting? Is that 24-year cycle reflective of some underlying cyclical physical process?
Well, the first thing I do to answer that question is to split the data in two, an early and a late half, and compare the analyses of the two halves. I call it the bozo test, it’s the simplest of all possible tests, doesn’t require any further data or any special equipment. Figures 5a-b below show the periodograms of the early and late halves of the CET data.
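In code, the bozo test is about as simple as it sounds. Here's a rough sketch, reusing the hypothetical slow_periodogram() helper and the 'cet' vector from the earlier sketch; again, illustrative rather than the exact code in the zip file.

```r
# "Bozo test" sketch: split the series in half and compare periodograms.
# slow_periodogram() and 'cet' are the illustrative names from the earlier sketch.
n       <- length(cet)
first   <- cet[1:(n %/% 2)]
second  <- cet[(n %/% 2 + 1):n]

periods <- 24:1200                          # 2 years to 100 years, in months
amp1 <- slow_periodogram(first,  periods)
amp2 <- slow_periodogram(second, periods)

# A cycle reflecting a persistent physical process should appear with
# roughly the same period and amplitude in both halves.
par(mfrow = c(2, 1))
plot(periods / 12, 2 * amp1, type = "h", main = "First half of CET",
     xlab = "Period (years)", ylab = "Swing (deg C)")
plot(periods / 12, 2 * amp2, type = "h", main = "Second half of CET",
     xlab = "Period (years)", ylab = "Swing (deg C)")
```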
Figure 5a-b. Upper graph shows the first half of the CET data and the lower graph shows the second half.
I'm sure you can see the problem. Each half of the data is a hundred and seventy-five years long. The ~24-year cycle exists quite strongly in the first half of the data. It has a swing of over six tenths of a degree on average over that time, the largest seen in these CET analyses.
But then, in the second half of the data, the 24-year cycle is gone. Pouf.
Well, to be precise, the 24-year peak still exists in the second half … but it’s much smaller than it was in the first half. In the first half, it was the largest peak. In the second half, it’s like the twelfth largest peak or something …
And on the other hand, the ~15-year cycle wasn't statistically significant at a p-value less than 0.10 in the first half of the data, where it was exactly 15 years long. But in the second half, it has lengthened by almost a full year to nearly 16 years, and it's the second largest cycle … and in the second half, the largest cycle is 37 months.
Thirty-seven months? Who knew? Although I’m sure there are folks who will jump up and say it’s obviously 2/23rds of the rate of rotation of the nodes on the lunar excrescences or the like …
To me, this problem over-rides any and all attempts to correlate temperatures to planetary, lunar, or tidal cycles.
My conclusion? Looking for putative cycles in the temperature record is a waste of time, because the cycles appear and disappear on all time scales. I mean, if you can't trust a 24-year cycle that lasts for one hundred seventy-five years, then just who can you trust?
w.
De Costumbre: If you object to something I wrote, please quote my words exactly. It avoids tons of misunderstandings.
Data and Code: I’ve actually cleaned up my R code and commented it and I think it’s turnkey. All of the code and data is in a 175k zip file called CET Periodograms.
Statistics: For the math inclined, I’ve used the method of Quenouille to account for autocorrelation in the calculation of the statistical significance of the amplitude of the various cycles. The method of Quenouille provides an “effective n” (n_eff), a reduced count of the number of datapoints to use in the various calculations of significance.
To use the effective n (n_eff) to determine if the amplitude of a given cycle is significant, I first need to calculate the t-statistic. This is the amplitude of the cycle divided by the error in the amplitude. However, that error in amplitude is proportional to

1 / sqrt(n)

where n is the number of data points. As a result, using our effective N, the error in the amplitude is proportional to

1 / sqrt(n_eff)

where n_eff is the "effective N".
From that, we can calculate the t-statistic, which is simply the amplitude of the cycle divided by the new error.
Finally, we use that t-statistic to calculate the p-value, which is
p-value = t-distribution(t-statistic , degrees_freedom1 = 1 , degrees_freedom2 = n_eff)
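For the code-minded, here is a rough R sketch of that recipe. The Quenouille adjustment is written in its common lag-1 form, n_eff = n(1 - r1)/(1 + r1), and the final step is written as an F distribution with 1 and n_eff degrees of freedom, which is how I read the formula above. The amplitude and its raw error are placeholders for whatever the periodogram fit returns, so treat this as an illustration rather than the exact code in the zip file.

```r
# Rough sketch of the significance calculation described above; illustrative only.

# Quenouille-style effective n from the lag-1 autocorrelation r1:
effective_n <- function(y) {
  r1 <- cor(y[-1], y[-length(y)])          # lag-1 autocorrelation
  length(y) * (1 - r1) / (1 + r1)
}

# p-value for a cycle, given its amplitude and the raw (unadjusted) error:
cycle_p_value <- function(y, amplitude, raw_error) {
  n         <- length(y)
  n_eff     <- effective_n(y)
  adj_error <- raw_error * sqrt(n / n_eff) # error grows as 1/sqrt(n_eff)
  t_stat    <- amplitude / adj_error
  pf(t_stat^2, df1 = 1, df2 = n_eff, lower.tail = FALSE)
}
```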
At least that’s how I do it … but then I was born yesterday, plus I’ve never taken a statistics course in my life. Any corrections gladly considered.
I appreciate your going over stuff like this, because I’m lazy. If I cared enough to go dig I would most likely by robot jesus deceive myself into thinking I’d found something. Looking at that chaos feels like drowning to me. But I’ve begun to grasp why you and AW find the cyclomaniacs uninteresting.
Thanks Willis.
My wife has a cycle.
& I have four (rideable, with a couple of frames besides).
They’re great for getting around a city.
My wife’s cycle is also good for predicting when I should keep my opinion to myself.
Stark!!!! Funny!!!!
Much appreciated post, Willis. Always good to have the thinking cells poked again. I'm not sure I trust the CET record so much as to draw too many conclusions myself. I have an image of a man holding up a wet finger and making an educated guess for the first half of it 😉
Perhaps I’m just being grumbly. I have a fasting blood test in the morning. I’m hungry and could murder a glass of Prosecco 😀
Whilst from previous brief conversations with you it’s clear you and I have some areas of interest in common I do think that your idea of cycling in England and mine are very different things 😉
Forgive the levity. It’s nearly 2am here and I need sugar 😉
Hence the use of the term “Digital Forecast” on the meteorology web-sites.
Call me a maniac if you want, but the fact that climatic cycles exist is as close to indisputable as you can get in paleo proxy data. These (pseudo-) cycles indubitably have fluctuated at least quasi-regularly during the Pleistocene, first on a roughly 40,000 year glacial/interglacial basis, then more recently at ~100,000 year intervals, with interglacials varying in length from a few thousand to tens of thousands of years’ duration, apparently under control of orbital mechanics.
These cycles were first proposed by Adhemar, Croll & others in the 19th century, then Milankovitch while a PoW during WWI, & finally convincingly confirmed by Hays, Imbrie & Shackleton in 1976, as better paleoclimatic data became available, & not shown false since.
Hays, J. D.; Imbrie, J.; Shackleton, N. J. (1976). “Variations in the Earth’s Orbit: Pacemaker of the Ice Ages”. Science 194 (4270): 1121–1132
A good case can also be made for much longer term icehouse/hothouse cycles in Earth’s climatic history.
Maintaining the existence of climatic cycles on shorter time scales (decennial, centennial or millennial) during the Holocene or previous interglacials may well however be the maniacal sun-stroked maunderings of Maunder Minimum temperature eddy advocates, but count me among them. Along with Eddy. And Lamb. And Herschel.
milodonharlani says:
May 8, 2014 at 5:59 pm
…
Beg pardon, Milodonharlani. There are some cycles. The sun comes up roughly every 24 hours. Seasons are awfully regular. Tides are basically predictable. And so on. The graphs of ice ages I've seen look impressively regular to me; while I don't want to take a stand and defend that position with you, I'm not about to quarrel with it right now.
I didn’t mean for my statement to be construed that broadly.
Amen, brother!!
Next up, Willis. Paleo cyclemania.
Wrt your method. Intuitive. Straightforward. And easy to interpret.
Willis, I have watched with interest as you grapple with this analysis and tools for exploring your ideas. A few suggestions…
Google the Lomb Periodogram – I think you will find this interesting.
Rather than splitting the dataset in two, randomize the order of the dataset (which should yield NO cycles by construction) and redo your analysis. Repeat several times and see how often you get a peak of that size at 24yrs. If it happens often, you can write off the observed peak as noise.
One issue you have is that Fourier analysis is best suited for identifying regular behaviors. Climate is not regularly periodic, so is there a better tool for identifying irregular regularity, if you will? I’d suggest that you look at a simple Wavelet Transform.
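On the shuffling idea above, a rough sketch of how such a test might look in R (slow_periodogram() and 'cet' are the illustrative names from the sketches in the post, standing in for whatever periodogram routine and data vector are actually used):

```r
# Rough sketch of the shuffling (randomization) test suggested above.
# In practice one would probably remove the annual cycle first.
p24     <- 23 * 12 + 8                     # the ~23 yr 8 mo period, in months
obs_amp <- slow_periodogram(cet, p24)      # observed amplitude at that period

null_amps <- replicate(200, {
  shuffled <- sample(cet)                  # shuffling destroys any real cycle
  slow_periodogram(shuffled, p24)
})

# Fraction of shuffles that match or beat the observed ~24-year amplitude:
mean(null_amps >= obs_amp)
```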
The periodicity analysis is always intriguing. I’d be interested to see you lower the split point (in terms of years) to see how far back you can get until the 24 year signal spikes up in the latter portion of the signal. That might give insight as to why it’s happening (such as it being in a time span that had particular precision limitations).
Willis, you wrote “First off, I have to say that their technique of removing a peak and voila, “finding” another peak is mondo sketchy on my planet”, so I want to explain a little more. When a method is used, as I do, that interpolates the spectrum between the FFT points so as to find the accurate frequency the resulting spectrum does not just have a sharp peak at a point. Rather, it has a peak with the height dropping off each side and going slightly negative and then oscillating positive and negative with diminishing amplitude as the frequency changes. Removal of this component will allow any other peaks to be seen which are hiding in the shadow of that peak. In this case the second peak can already be seen as a broadened shoulder on one side. If you have any doubt about this, I suggest generating a data set of the length of the one used here with real periods of 23.9 years and 22.2 years (of about 70% the amplitude I think). Then do a spectral analysis of this and remove the major component to see what remains.
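For anyone who wants to try that, here is a rough sketch of such a synthetic test in R, with slow_periodogram() (the illustrative helper from the post's earlier sketch) standing in for whatever spectral routine is actually used, and the ~70% relative amplitude taken from the comment above:

```r
# Rough sketch of the synthetic test described above: two pure cycles,
# 23.9 yr and 22.2 yr (the second at ~70% amplitude), over roughly the
# length of the CET record, at monthly steps.
t <- 1:(355 * 12)
y <- 1.0 * sin(2 * pi * t / (23.9 * 12)) +
     0.7 * sin(2 * pi * t / (22.2 * 12))

periods    <- 24:1200                      # 2 to 100 years, in months
amp_before <- slow_periodogram(y, periods)

# Remove the dominant ~23.9-year component and look at what remains:
fit <- lm(y ~ sin(2 * pi * t / (23.9 * 12)) + cos(2 * pi * t / (23.9 * 12)))
amp_after <- slow_periodogram(residuals(fit), periods)
```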
Willis, I don't know how you calculate your probabilities of cycles. I don't think that they should be linear in relation to degrees of amplitude. I use Bartels' test, which is the best test for cycle accuracy. When I use the rate of change of temperature, the 23.9-year cycle has p=0.002 and the 15.15-year cycle has p=0.02, so both are significant. If only the straight temperature is used, then the 23.9-year cycle has p=0.01. The remaining Hale period when I remove the 23.9-year cycle is not significant, which is why I say "maybe".
Figure 3: Beautiful noise !
Figure 4 through 5ab: The Phantoms Of The Opera !
🙂
Willis, if there are both 23.9 and ~22.2 year cycles in temperature, then they will form beats with a period of approximately 300+ years. Therefore they will support each other and almost cancel each other exactly as you found in your two halves.
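(As a rough check: two cycles of 23.9 and 22.2 years beat with a period of 1 / (1/22.2 - 1/23.9), which works out to roughly 310 years, so over two ~175-year halves they can indeed reinforce in one half and largely cancel in the other.)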
Purely for completeness' sake, how does the 1-year cycle fare under the "bozo" test?
Steven Mosher says:
May 8, 2014 at 6:58 pm
No kidding. I just looked into some of the basics and the problems and gave myself a headache. There seems to be some order there, but it's not simple, cut and dried. How do you handle the situation when the periodicity changes for no apparent reason?
Heck I sometimes think I find meaningful patterns in my breakfast cheerios, maybe I should stick to that.
Willis,
You’re not the smartest guy in the room.
You’re the smartest guy ’round these parts.
“Note that in the early years of the record, temperatures were only recorded to the nearest whole degree. ”
I would be surprised if it were that small. You have your series in degC, but the best to hope for in early records would be 2 degF. Also, are not the early records proxies, like diary entries, rather than real temperature measurements?
Oops, read "2 degF" in the above, in the belief that early thermometers were calibrated at 2-degree intervals.
Willis,
Now that you’ve done CET, have you thought about doing the GISP2 ice core, which has thousands of years of record?
The CET data show "recurring events" that probably wouldn't show up as precise cycles, but they do match similar recurring events in the GISP2, 10Be, 14C, and solar records.
Best regards,
Don
And another correction….
The old thermometers that I have seen with the 2degF scale are not as early as I had thought – they are early 19th century, like this one,
http://thumbs1.ebaystatic.com/d/l225/m/mJcU8WBi2E1dx3VZwBDa4og.jpg
(zoom in to see the scale)
and Fahrenheit did not even invent his scale till 1724, so how were early measurements taken and to what accuracy? This is relevant as the 23 year peak is only visible in Willis’ analysis in the early data.
CET is essentially one dimensional data. The cycles are 3d. It isn’t really surprising that an apparent cycle, even one of long duration, could be overwritten for a while. Since you play a stringed instrument, you have a real understanding of harmonics, and sympathetic harmonics.
Statistics are tools for the blind (and the deaf). So far your statistics have shown that volcanoes have no effect on climate, 10Be flux has nothing to do with cosmic rays, and there are no discernible cycles in temperature. Now I can find no fault with your technique, but at some point don’t you have to wonder if some fault lies in the statistical tools themselves?
When statistics argue with fish, personally, I’m going with the fish.
Humans are great at seeing patterns. So good, in fact, that we can see patterns where there are none. & as Feynman pointed out: the easiest person to fool is yourself. The beauty of statistics (properly applied) is that it shows us that there are not patterns where we tell ourselves there are patterns.
Like a great many other things, statistics can only show us what is not, it can never conclusively show us what is (no matter how hard some people would try to say otherwise).
From your graph it seems a small change in average temperatures can have a large effect, such as the Thames freezing over on a regular basis?