Guest Post by Willis Eschenbach
Looking at a recent article over at Tallbloke’s Talkshop, I realized I’d never done a periodogram looking for possible cycles in the entire Central England Temperature (CET) series. I’d looked at part of it, but not all of it. The CET is one of the longest continuous temperature series, with monthly data starting in 1659. At the Talkshop, people are discussing the ~24 year cycle in the CET data, and trying (unsuccessfully but entertainingly in my opinion) to relate various features of Figure 1 to the ~22-year Hale solar magnetic cycle, or a 5:8 ratio with two times the length of the year on Jupiter, or half the length of the Jupiter-Saturn synodic cycle, or the ~11 year sunspot cycle. They link various peaks to most every possible cycle imaginable, except perhaps my momma’s motor-cycle … here’s their graphic:
Figure 1. Graphic republished at Tallbloke’s Talkshop, originally from the Cycles Research Institute.
First off, I have to say that their technique of removing a peak and voila, “finding” another peak is mondo sketchy on my planet. But setting that aside, I decided to investigate their claims. Let’s start at the natural starting point—by looking at the CET data itself.
Figure 2 shows the monthly CET data as absolute temperatures. Note that in the early years of the record, temperatures were only recorded to the nearest whole degree. Provided that the rounding is symmetrical, this should not affect the results.
Figure 2. Central England Temperature (CET). Red line shows a trend in the form of a loess smooth of the data. Black horizontal line shows the long-term mean temperature.
Over the 350-year period covered by the data, the average temperature (red line) has gone up and down about a degree … and at present, central England is within a couple tenths of a degree of the long-term mean, which also happens to be the temperature when the record started … but I digress.
Figure 3 shows my periodogram of the CET data shown in Figure 2. My graphic is linear in period rather than linear in frequency as is their graphic shown in Figure 1.
Figure 3. Periodogram of the full CET record, for all periods from 10 months to 100 years. Color and size both show the p-value. Black dots show the cycles with p-values less than 0.05, which in this case is only the annual cycle (p=0.03). P-values are all adjusted for autocorrelation. The yellow line shows one-third the length of the ~350 year dataset. I consider this a practical limit for cycle detection. P-values for all but the one-year cycle are calculated after removal of the one-year cycle.
I show the periodogram in this manner to highlight once again the amazing stability of the climate system. One advantage of the slow Fourier transform I use is that the answers are in the same units as the input data (in this case °C). So we can see directly that the average annual peak-to-peak swing in the Central England temperature is about 13°C (23°F).
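Here’s a minimal sketch of that idea (Python rather than the R code in my zip file, and synthetic stand-in data rather than the actual CET; the 6.5°C amplitude and noise level are made up for illustration): fit a sine/cosine pair at any chosen period by least squares, and the fitted amplitude comes out directly in the input units.

```python
import numpy as np

def cycle_amplitude(y, t, period):
    """Least-squares fit of a sine/cosine pair at an arbitrary period.
    Returns the cycle amplitude in the same units as the input data;
    the peak-to-peak swing is twice this value."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return np.hypot(coef[1], coef[2])

# Synthetic stand-in for the CET: 350 years of monthly data with a
# 13 degC peak-to-peak annual cycle plus noise (illustration only)
rng = np.random.default_rng(0)
t = np.arange(350 * 12) / 12.0                  # time in years
y = 9.0 + 6.5 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)

annual_amp = cycle_amplitude(y, t, period=1.0)  # about 6.5 degC
```

Because the fit is done at an arbitrary period rather than only at harmonics of the record length, this is “slow”, but the payoff is an answer in °C rather than in abstract power units.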
And we can also see directly that other than the 13°C annual swing, there is no other cycle of any length that swings even half a degree. Not one.
So that is the first thing to keep in mind regarding the dispute over the existence of purported regular cycles in temperature. No matter what cycle you might think is important in the temperature record, whether it is twenty years long or sixty years or whatever, the amplitude of the cycle is very small, tenths of a degree. No matter if you’re talking about purported effects from the sunspot cycle, the Hale solar magnetism cycle, the synodic cycle of Saturn-Jupiter, the barycentric cycle of the sun, or any other planetasmagorica, they all share one characteristic. If they’re doing anything at all to the temperature, they’re not doing much. Bear in mind that without a couple hundred years of records and sophisticated math we couldn’t even show and wouldn’t even know such tiny cycles exist.
Moving on: folks often don’t like to be reminded of how tiny the temperature cycles actually are. So of course, the one-year cycle is usually not shown in a periodogram; too depressing. Figure 4 is the usual view, which shows the same data, but starting at periods of 2 years.
Figure 4. Closeup of the same data as in Figure 3. Statistical significance calculations are done after removal of the 1-year cycle. Unlike the previous figure, in this and succeeding figures the black dots show all cycles that are significant at a looser threshold, a p-value of 0.10 instead of 0.05. This is because even after removing the annual signal, not one of these cycles is significant at a p-value of 0.05.
Now, the first thing I noticed in Figure 4 is that we see the exact same largest cycles in the periodogram that Tallbloke’s source identified in their Figure 1. I calculate those cycle lengths as 23 years 8 months, and 15 years 2 months. They say 23 years 10 months and 15 years 2 months. So our figures agree to within expectations, always a first step in moving forwards.
So … since we agree about the cycle lengths, are they right to try to find larger significance in the obvious, clear, large, and well-defined 24-year cycle? Can we use that 24-year cycle for forecasting? Is that 24-year cycle reflective of some underlying cyclical physical process?
Well, the first thing I do to answer that question is to split the data in two, an early and a late half, and compare the analyses of the two halves. I call it the bozo test: it’s the simplest of all possible tests, requiring no further data and no special equipment. Figures 5a-b below show the periodograms of the early and late halves of the CET data.
Figure 5a-b. Upper graph shows the first half of the CET data and the lower graph shows the second half.
I’m sure you can see the problem. Each half of the data is a hundred and seventy-five years long. The ~24-year cycle exists quite strongly in the first half of the data. It has a swing of over six tenths of a degree on average over that time, the largest seen in these CET analyses.
But then, in the second half of the data, the 24-year cycle is gone. Pouf.
Well, to be precise, the 24-year peak still exists in the second half … but it’s much smaller than it was in the first half. In the first half, it was the largest peak. In the second half, it’s like the twelfth largest peak or something …
And on the other hand, the ~15-year cycle wasn’t statistically significant at a p-value less than 0.10 in the first half of the data, where it was exactly 15 years long. But in the second half, it has lengthened by almost a full year to nearly 16 years, and it’s the second largest cycle … and in the second half, the largest cycle is 37 months.
Thirty-seven months? Who knew? Although I’m sure there are folks who will jump up and say it’s obviously 2/23rds of the rate of rotation of the nodes on the lunar excrescences or the like …
To me, this problem overrides any and all attempts to correlate temperatures to planetary, lunar, or tidal cycles.
My conclusion? Looking for putative cycles in the temperature record is a waste of time, because the cycles appear and disappear on all time scales. I mean, if you can’t trust a 24-year cycle that lasts for one hundred seventy-five years, then just who can you trust?
w.
De Costumbre: If you object to something I wrote, please quote my words exactly. It avoids tons of misunderstandings.
Data and Code: I’ve actually cleaned up my R code and commented it and I think it’s turnkey. All of the code and data is in a 175k zip file called CET Periodograms.
Statistics: For the math inclined, I’ve used the method of Quenouille to account for autocorrelation in the calculation of the statistical significance of the amplitude of the various cycles. The method of Quenouille provides an “effective n” (n_eff), a reduced count of the number of datapoints to use in the various calculations of significance.
To use the effective n (n_eff) to determine whether the amplitude of a given cycle is significant, I first need to calculate the t-statistic. This is the amplitude of the cycle divided by the error in the amplitude. However, that error in amplitude is proportional to

error ∝ 1 / sqrt(n)

where n is the number of data points. As a result, using our effective N, the error in the amplitude becomes

error ∝ 1 / sqrt(n_eff)

where n_eff is the “effective N”. Since n_eff is smaller than n, the error is correspondingly larger.

From that, we can calculate the t-statistic, which is simply the amplitude of the cycle divided by the new error.

Finally, we use that t-statistic to calculate the p-value, which is

p-value = t-distribution(t-statistic , degrees_freedom1 = 1 , degrees_freedom2 = n_eff)
At least that’s how I do it … but then I was born yesterday, plus I’ve never taken a statistics course in my life. Any corrections gladly considered.
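As a hedged sketch of the recipe (Python rather than my R code; I’m assuming the common lag-1 form of Quenouille’s effective n, n_eff = n(1 − r1)/(1 + r1), and reading the t-distribution call above as an F(1, n_eff) tail probability, which is the same thing as a two-sided t test on n_eff degrees of freedom):

```python
import numpy as np
from scipy import stats

def effective_n(y):
    """Quenouille-style effective sample size from the lag-1
    autocorrelation: n_eff = n * (1 - r1) / (1 + r1)."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    r1 = np.dot(d[:-1], d[1:]) / np.dot(d, d)
    return y.size * (1.0 - r1) / (1.0 + r1)

def cycle_p_value(amplitude, residual_sd, n_eff):
    """t-statistic = amplitude / SE(amplitude); for a fitted sinusoid
    SE(amplitude) is roughly residual_sd * sqrt(2 / n_eff).  The
    p-value here uses an F(1, n_eff) tail, equivalent to a two-sided
    t test with n_eff degrees of freedom."""
    t_stat = amplitude / (residual_sd * np.sqrt(2.0 / n_eff))
    return stats.f.sf(t_stat**2, 1, n_eff)

# AR(1) "red noise" example: autocorrelation shrinks the effective n
rng = np.random.default_rng(1)
x = np.zeros(1000)
for i in range(1, x.size):
    x[i] = 0.8 * x[i - 1] + rng.normal()

n_eff = effective_n(x)                 # far fewer than 1000
p = cycle_p_value(0.3, 1.0, n_eff)
```

The point of the red-noise example is that with a lag-1 autocorrelation of 0.8, a thousand data points carry the statistical weight of only about a hundred independent ones, which is why the adjustment matters for significance testing.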


And guess what: generally, ancient people moved around. They didn’t stay anywhere long; they were hunter-gatherers, and they followed the animals. Fishing came much later. The seas were too low, I suspect, and too far away.
Sorry for my poor English.
I have not read all the comments.
I think you need to take into account the period of the moon.
England is an island with a high level of tide.
We can consider the England weather system as a non-linear system subject to sun variations and moon variations.
Consequently we can find the sun frequency plus or minus the moon frequency.
Sun period = 22, so frequency = 0.045
Moon period = 9, so frequency = 0.111
First case:
0.111 + 0.045 = 0.156, so period = 6.41
0.111 – 0.045 = 0.066, so period = 15.15
You find the 15.146 period.
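Carrying full precision through the sum-and-difference arithmetic above (a quick Python check) gives periods of about 6.39 and 15.23 years rather than the rounded 6.41 and 15.15:

```python
# Sum and difference of the two frequencies (cycles per year),
# using the exact reciprocals rather than the rounded decimals
sun_f = 1.0 / 22.0     # ~0.0455
moon_f = 1.0 / 9.0     # ~0.1111

period_sum = 1.0 / (moon_f + sun_f)    # 198/31, about 6.39 years
period_diff = 1.0 / (moon_f - sun_f)   # 198/13, about 15.23 years
```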
Congratulations on your site.
Yves LETOILE (France)
I had a look at CET max myself, and did a best-fit polynomial of the 6th order (correlation of 0.3 or so). Just looking at that data plotted from 1927, I do find two maximum bending points about 50 years apart, namely 1949 and 2003.
so from 2003 we are going down, even in CET, but we knew that…
http://wattsupwiththat.com/2014/05/08/cycling-in-central-england/#comment-1632316
WebHubTelescope says:
May 10, 2014 at 12:05 am
Oh, piss off, you nasty little man. Your jealousy is overwhelming your good sense. I came up with the idea myself, and I was proud of it. So sue me. Was I the first man to come up with the idea? Of course not … but I did come up with it independently myself. You are great at trying to tear down something someone else has built, but you never seem to build anything yourself … funny how that works.
w.
Hi Willis, you wrote: “It turns out that with those two frequencies, one 70% the amplitude of the other, the amplitude of the smaller half of the beat frequency is about 50% of the amplitude of the larger half of the beat frequency. This is a long ways from “almost cancel each other exactly” … and also a ways from what we see in the figures of the head post.”
Not so, (1.0-0.7) / (1.0+0.7) = 0.176 which is a lot less than 50%.
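For what it’s worth, both readings can be checked numerically (a Python sketch; the 23.9- and 22.2-year periods and the 1.0/0.7 amplitudes are taken from this exchange): the envelope extremes give Ray’s 0.176, while the average amplitude over the quiet half versus the loud half of the beat comes out near the quoted “about 50%”.

```python
import numpy as np

a1, a2 = 1.0, 0.7                 # the two cycle amplitudes
w1 = 2 * np.pi / 23.9             # angular frequencies (per year)
w2 = 2 * np.pi / 22.2

# Envelope of a1*sin(w1*t) + a2*sin(w2*t); the beat period is ~312 years,
# so 700 years covers more than two full beats
t = np.linspace(0.0, 700.0, 200001)
env = np.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * np.cos((w1 - w2) * t))

peak_ratio = env.min() / env.max()   # (a1 - a2) / (a1 + a2), ~0.176

# Average amplitude over the quiet half vs the loud half of the beat
phase = np.cos((w1 - w2) * t)
half_ratio = env[phase < 0].mean() / env[phase > 0].mean()   # ~0.57
```

So the two numbers describe different things: the ratio of envelope extremes versus the ratio of half-beat averages.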
Hi Willis, you wrote: “Glad to, you’ll see what I mean. In my view you just need a more accurate probe … let me recommend the slow Fourier transform. Here is my periodogram of 350+ years of pseudo-data composed of two sine waves with cycles of 23.9 and 22.2 years.”
The method that I use for cycles analysis is clearly different to yours. If I use this data it will show a peak with another peak on its shoulder. Perhaps you can separate the two cycles in the real data. With the method that I use, the procedure that I follow is quite sound.
You state that you perform a linear correlation with the signal and a (co) sine wave.
This strikes me as incorrect and it stems from my earlier comment on the difference between the true Fourier Transform and the Discrete Fourier Transform.
The definition of the Discrete Fourier Transform for a record, y(n) of N points is:
Yk=sum(n=0 to N-1)[y(n).exp(-2*pi*j*n*k/N)],
where k is in the set of integers 0 to N/2 (N/2+1 values for a real record)
This is simply a correlation between the signal and a (co)sine wave of frequency 2*pi*k/N. If we write w=2*pi/N, we get:
Yk = y(0)*cos(w*k*0) + y(1)*cos(w*k*1) + y(2)*cos(w*k*2) + …, which is analogous to a covariance.
The reason that the DFT works is that it evaluates the sum for all integer values of k:
Sum(n=0 to N-1)[cos(wkn).cos(wmn)].
If k=m, this sum is N/2, because one is summing cos(wkn)^2, while if k does not equal m, it is zero.
Thus the DFT selects only one frequency, PROVIDED that k is an integer.
If you do not confine your frequencies to multiples of the fundamental frequency, by performing a correlation, you are evaluating:
Integral(limits:0 to 2pi)[cos(ft)^2.dt] = pi+sin(4*pi*f)/4f.
The second half of this result, sin(4*pi*f)/(4f), is only zero when f is a multiple of 1/(signal period); otherwise it is non-zero for all the frequencies in the signal, which are summed together according to this result to give an “amplitude”. In other words, all the non-harmonic frequencies will contribute.
Therefore I think that your derivation does not give correct results.
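The leakage described above is easy to see numerically. Here is a sketch (Python, my own construction rather than code from either party): a record containing a single exact harmonic is probed at harmonic and non-harmonic frequencies by direct correlation.

```python
import numpy as np

N = 1200
n = np.arange(N)
y = np.cos(2 * np.pi * 10 * n / N)    # exactly 10 cycles in the record

def corr_amplitude(y, cycles):
    """Correlate the record with a cosine/sine pair at an arbitrary
    (possibly non-integer) number of cycles per record length."""
    c = np.cos(2 * np.pi * cycles * n / N)
    s = np.sin(2 * np.pi * cycles * n / N)
    return 2.0 / N * np.hypot(np.dot(y, c), np.dot(y, s))

on_bin = corr_amplitude(y, 10.0)    # a DFT harmonic: recovers ~1.0
off_bin = corr_amplitude(y, 10.5)   # non-harmonic probe: large leakage
far_bin = corr_amplitude(y, 30.0)   # a distant harmonic: ~0 (orthogonal)
```

At the exact harmonic the amplitude is recovered cleanly, at a distant harmonic it is essentially zero, and at a non-harmonic probe frequency a substantial spurious amplitude appears even though the record contains no such cycle. Whether least-squares fitting at arbitrary periods adequately handles this finite-record effect is the crux of the disagreement in this thread.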
I don’t like the approach of throwing up one’s hands and saying, “It’s chaotic.” There was a comment on a prior thread where a gentleman sought to make a distinction between chaotic and random. Not even sure if his distinction is generally accepted but it was basically that chaos is what we don’t understand and randomness is what has no cause.
Personally, I don’t even believe in randomness by this definition, but it is too close to what many seem to mean when they invoke chaos.
RC Saumarez says:
May 10, 2014 at 7:37 am
Well, RC, you always seem to think that about my derivations. However, as we saw in the last post, it gives results that are indistinguishable from an FFT with long zero padding … so as usual, you’re swimming upstream against the facts on this one.
Your complaints in general remind me of the old joke about the Soviet commissar who says to a man trying some new method, “Well, I see it works fine in practice, Comrade … but I assure you, it will never work in theory.”
w.
Willis Eschenbach says:
May 8, 2014 at 11:44 pm
Pseudo in this case refers to cycles that occur at approximately the same interval, not precisely, & which are subject to change, yet controlled by the same variable parameters, at least in part.
As I mentioned, glaciations are themselves periodic. Icehouse phases in earth history have occurred for much longer than just since the Cambrian. They are pronounced, lengthy & severe in the Pre-Cambrian. The glacial conditions of the past 2.6 million years, ie the Pleistocene, are just the recent low point in the Icehouse that began at the Eocene/Oligocene boundary. There was a tepid Icehouse in the Mesozoic & a super-duper deep cold one in the latter Carboniferous to early Permian. Milankovitch cycles operated during those Icehouses, too.
Cosmoclimatologists like Svensmark, Shaviv, Veizer, et al., have proposed an explanation for the apparent cyclicality of icehouses. You may not find it convincing, but from whatever cause, the Pleistocene & prior Cenozoic glaciation didn’t just pop up out of nowhere.
CACA focuses on the very shortest temporal units of climate, ie 30 years to 300 or at most 3000, without looking at the big picture of change on the scales of 30,000, 300,000, 3 million, 30 million, 300 million & 3 billion years. That’s how alarmists gin up “unprecedented” events, by ignoring the billions of years of prior climate proxy data.
gymnosperm says:
May 10, 2014 at 8:31 am
If you have a method for predicting the future evolution of a chaotic system, simple or complex, this would be the time to bring it out.
And if you don’t have such a method … aren’t you throwing up your hands? Isn’t that what we are forced to do when faced with a chaotic system? Because I don’t know of even one single chaotic system of even medium complexity whose future state can be reliably forecast.
No, his distinction is purely his own, I’ve never even heard that.
Some people think a chaotic system means a very complex system. Nothing could be further from the truth. Complex systems may or may not be chaotic, and chaotic systems may or may not be complex.
The thing that distinguishes chaotic systems is that infinitesimal differences in the starting conditions lead to widely separated trajectories. This is measured with something called the “Lyapunov Exponent”. If the Maximal Lyapunov Exponent (MLE) is positive, this indicates that the system is indeed chaotic. So your anonymous informant is incorrect. Chaos is a specifically defined type of system, which is mathematically distinguishable from randomness via inter alia the MLE, and we understand both to some degree.
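To make the Lyapunov idea concrete, here’s a toy sketch (in Python; using the logistic map x → r·x·(1−x), not climate data; the map’s MLE is the orbit average of log|r(1 − 2x)|, and at r = 4 the known value is ln 2 ≈ 0.693):

```python
import math

def logistic_mle(r, x0=0.2, n_iter=20000, discard=1000):
    """Maximal Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1-2x)|."""
    x = x0
    for _ in range(discard):           # let the transient die out
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n_iter

mle_chaotic = logistic_mle(4.0)   # positive (~ln 2): chaotic
mle_stable = logistic_mle(2.9)    # negative: orbits converge, no chaos
```

A positive MLE means nearby starting points diverge exponentially; a negative one means they converge, so the system is predictable.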
Finally, and most importantly, not only is weather chaotic, so is climate. If it were not, there would be a step change in the Lyapunov Exponent as we looked at longer and longer windows on the weather. But as no less an authority than Mandelbrot has shown, no such jump occurs—so climate is chaotic as well. Steve McIntyre discusses this here.
And Mosh, you challenged Anthony to show that weather is chaotic. Mandelbrot did just that, examining (per McIntyre) “12 varve series, 27 tree ring series from western U.S. (no bristlecones), 9 precipitation series, 1 earthquake frequency series, 11 river series and 3 Paleozoic sediment series”, in his study “Mandelbrot and Wallis, 1969. Global dependence in geophysical records, Water Resources Research 5, 321-340.” I tracked it down here. It’s a fascinating read, Mosh, and it does exactly what you asked. It shows mathematically that not only is weather chaotic, but climate is as well.
Regards,
w.
Ray Tomes says:
May 10, 2014 at 3:44 am
Thanks for quoting what I said, Ray. This is a great example of why quoting is so important, because it turns out we are talking about different things.
You are talking about the difference in peak (maximum) amplitudes.
But what I said was that if you take an entire cycle of the beat frequency of the two (about 300 years, as you point out) and divide it into a smaller half and a larger half, the average amplitude of the smaller half is about 50% of the average amplitude of the larger half. How do I know this? I measured it on the actual cycle.
So both of our statements are correct.
w.
@Willis Eschenbach.
You either do mathematics or you don’t.
Computing a result is generally not proof – mathematical results are obtained by formal analysis.
If you think I am wrong, kindly refute what I say through proper mathematics rather than hurling insults.
I suggest you go and read a proper book on signal processing and spectral analysis, written by a professional who has mastered the problems you are trying to address, e.g. Oppenheim & Schafer, Papoulis, or Hayes.
When you have come to grips with their contents, you might see your errors and gain a little intellectual humility.
While we’re on the subject of the CET, I see that it’s well on course for a new record this year. It’s currently more than 2 degrees C above the mean for the year to date. The warmest year was 2006, with an anomaly of 1.35 degrees, followed by 2011 with 1.23 degrees. We could end up with the warmest 3 years out of 350-odd all within the last 9 years. Mind you, that’s with only just over a third of the year gone. At this stage in proceedings, two other recent warm years (1990 and 2007) were warmer than 2014, which is in 3rd place.
My impression is that it hasn’t been especially warm. Rather, there’s been a complete absence of any cold weather.
Richard Baraclough
Mind you, that’s with only just over a third of the year gone.
Henry says
That’s why I would not take any bets yet. Almost all data sets show it is cooling, globally.
CET is also showing it is going downhill (from 2003). I hear they had a nice winter this year in the US. Europe might be next.
Willis
If you have a method for predicting the future evolution of a chaotic system, simple or complex, this would be the time to bring it out.
Henry says
well, I don’t have CET means on my computer here, but I think the difference between the lowest ever average yearly temp. and the highest ever average yearly temperature is only between 1 and 2 degrees C. Please correct me if I am wrong?
There is your chaos … in absolute terms it is less than the difference between two rooms in our house here. I wonder if it is even still worth talking about such a small change …
However, to understand why this difference is so small you have to try and understand a few things. First, it is “weather” itself that is acting in a way to neutralize big differences and to keep the difference on earth so small, as correctly pointed out (by Willis) here
http://wattsupwiththat.com/2009/06/14/the-thermostat-hypothesis/
I would call this earth’s internal protection system to prevent overheating.
There is a second protection system, which prevents too much energy from entering the earth from the outside. It is quite ingeniously fabricated, by the interaction of the sun’s most energetic particles with molecules present in the earth’s atmosphere. By the time you can predict what the next 40 years of this graph
http://ice-period.com/wp-content/uploads/2013/03/sun2013.png
will look like, you have probably figured it all out,
but, do let me know if you did.
Hello Henry,
How are things on the Highveld?
Yes – you’re right, with 8 months to go it could all change, and although it’s mid-May, and supposedly almost a summer month, it isn’t particularly warm, but it is still more than 1 degree above average for the month to date. The absence of colder-than-average weather continues. All that winter rainfall came on mild south-westerly winds. It led to record snowfall across the Scottish Highlands above about 900 metres, but barely a flake anywhere in lowland England.
I have an Excel spreadsheet with all the monthly values of the CET, together with a few analyses of decadal averages, etc. If you’d like a copy, send me an email address. You can have hours of amusement. The coldest year was 1740, at 6.84 degrees, and the warmest was 2006 at 10.82, so a 4-degree extreme range over 350 years. There is even a dataset of daily records dating from the late 19th century, but I haven’t downloaded that.
@richard
Hi, after your reply, I am like: do I know you? Do you know me? We have a good climate here; I think the weather in England is horrible …
luckily I can still watch the Eurovision song festival here whilst I am blogging a bit, just trying to educate people…..
I only downloaded CET max, and it shows a difference of 3 degrees C absolute (11 and 14). The difference in the means must be smaller than this value of 3. It seems to me that 1740 was a complete outlier? That is a bit suspicious, to me.
I also must have CET means somewhere, it might be on another computer. Anyway, I remember that CET means seems to run a bit off the global wave, probably due to the GH effect (i.e. more clouds during a cooling period).
So, don’t worry (about global warming) when it gets a bit warmer in England, whilst the rest of the world is cooling down….
Willis,
By heck, I’m going to learn something about climate and chaos this weekend! 🙂 Thank you for the reference / lead (Lyapunov exponent) and the link.
HenryP
I graphed cet in a number of ways here
http://judithcurry.com/2013/06/26/noticeable-climate-change/
Many scientists believe it is a good proxy for global, or at least Northern Hemisphere, temperatures
The 1695 to 1740 period was the most remarkable in the record, including as it does a decade, the 1730s, that came within a whisker of the hottest decade, the 1990s, and rose from the depths of the LIA in a remarkable hockey stick
1740 brought this warming to a crashing halt. Phil Jones was so struck by it that he wrote a paper on it, and this caused him to believe that natural variability was much greater than he had previously believed. If you want to see the paper I can dig it out for you
Tonyb
Richard Barraclough says:
May 10, 2014 at 12:00 pm
…………
Here are some more details on the CET daily max/min and average temperatures
http://www.vukcevic.talktalk.net/CET-dMm.htm
Tonyb says:
May 10, 2014 at 2:19 pm
Many scientists believe it is a good proxy for global, or at least Northern Hemisphere, temperatures
The 1695 to 1740 period was the most remarkable in the record, including as it does a decade, the 1730s, that came within a whisker of the hottest decade, the 1990s
Two important points, illustrated here:
http://www.vukcevic.talktalk.net/CET1690-1960.htm
Vuk
That first graph in particular is very interesting, graphing as it does those two periods together.
Interesting that you have identified volcanoes. I maintain they can have a short term effect and the data seems to indicate a rapid rebound.
Tonyb
Ray Tomes says:
May 10, 2014 at 3:50 am
Thanks, Ray. I’ve used the method you describe many times, and it works. The problem I have with it is how to decide the amount of the signal to remove. For example, if the signal is just a single cycle of say 11 years, in the periodogram you get a result that peaks at 11 years, but is also significantly non-zero at 10 years 10 months, or 11 years 6 months, and so on. So if we remove just the 11-year signal, those other signals will still remain …
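Here’s a numerical illustration of that point (a Python sketch of my own, with made-up data): a record containing nothing but a single pure 11-year sine still shows a substantial slow-Fourier response at neighboring periods such as 10 years 10 months, so those neighboring “signals” are not independent cycles.

```python
import numpy as np

# 350 years of monthly samples containing a single pure 11-year sine
t = np.arange(350 * 12) / 12.0        # time in years
y = np.sin(2 * np.pi * t / 11.0)

def slow_ft_amplitude(y, t, period):
    """Correlate the record with a sine/cosine pair at an arbitrary
    period (not restricted to harmonics of the record length)."""
    w = 2 * np.pi / period
    dot_c = np.dot(y, np.cos(w * t))
    dot_s = np.dot(y, np.sin(w * t))
    return 2.0 / t.size * np.hypot(dot_c, dot_s)

amp_true = slow_ft_amplitude(y, t, 11.0)               # the real cycle, ~1
amp_near = slow_ft_amplitude(y, t, 10.0 + 10.0 / 12)   # 10 y 10 m: not 0
```

The 10-year-10-month response is sizeable even though the record contains no such cycle, which is why deciding how much of the “signal” to remove at each step is the hard part.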
w.
milodonharlani says:
May 10, 2014 at 10:29 am
Thanks, milodon. My problem with the term is that people use it to cover a variety of sins. It is a hand-waving term, as when people say they are looking for a 60-year pseudo-cycle, then they go out and look for nothing more or less than a 55-year cycle … if they’re looking for 55-year cycles, then they should say so and leave the “pseudo” out of it.
First, there is nothing “pseudo” about the Milankovitch cycles. They are calculable into the far past and the distant future. So I’m not sure why you bring them in.
Second, as I’ve mentioned, no one knows why the Milankovitch cycles suddenly started causing ice ages a million or so years ago. And no one knows when or if we will slip back into another ice age, or if we do, how long it will last.
Which is exactly my point. These are not “pseudo-cycles”. They are very real cycles that appear and disappear.
Yes, and the current glaciation didn’t just “pop out of nowhere”. However, distinguishing causation in climate is never easy. I’m just saying we don’t know why the climate suddenly descended into a Milankovitch-driven bi-stable state.
I’m sure CACA is a cute acronym for something. I’m also sure I don’t know what it stands for. I do know that when people use acronyms without explanation, they lose both traction and politeness points with me. The problem is I now feel like an idiot for not knowing what CACA means, like I’m the only guy who didn’t get the memo … is that the way you want your readership to view you, as someone who makes them feel like a fool? Just askin’ …
As to the “big picture of change on the scales of 30,000, 300,000, 3 million, 30 million, 300 million & 3 billion years”, well, we have various proxies for these kinds of things over the years. However, there are a number of problems with proxies.
The first one is encapsulated in the saying “Trees make poor thermometers.” Bear in mind the ongoing disputes about temperatures actually measured with thermometers … now consider the kinds of issues that abound with proxy data.
The next problem is dating. Even something as apparently clocklike as the gradual deposition of sediment on the ocean floor is disturbed by compression, subsea slips and slides, and that perennially overlooked favorite, bioturbation. And there is no proxy without such dating issues.
The next problem is confounding variables. Changes in ocean currents affect sediment deposition rates. Precipitation affects tree ring widths. Firn closure time affects the dating of ice cores. Salinity affects the temperature dependence of the Mg/Ca ratios in seashells. The list is endless.
The next problem is expressed in the aphorism “Everything is connected to everything else, which in turn is connected to everything else … except when it isn’t.”
For example, as mentioned above, the ice age/air age difference depends on the “firn closure time”: how many years/decades/centuries it takes for enough snow to fall on top to close off and completely encapsulate the air bubbles below. Firn closure time depends on snowfall. Snowfall depends on temperature. Then we use the air contents to estimate the temperature at the time of closure … a time of closure that in turn depends on the very temperature we are trying to measure. I’m sure you can see the problem …
The next problem is the paucity of proxy data of a given type. The Berkeley Earth dataset contains tens of thousands of records. Compared to that, we have a handful of Mg/Ca proxy temperature datasets.
As a result of these and other difficulties, while I invariably find such proxies interesting, if they were as good as people seem to think, then we could toss our thermometers and just use the proxies …
So I view proxies quite differently than I do say the TAO buoy datasets … I don’t ignore them, I try to learn from them, but I don’t trust them one bit.
My best to you,
w.
RC Saumarez says:
May 10, 2014 at 11:03 am
Dang, RC, you sure go the long way around to say that you can’t find any errors in my work.
As to whether you are wrong or right, I fear that’s of no interest to me. Perhaps someone else cares.
w.