Guest Post by Willis Eschenbach
Looking at a recent article over at Tallbloke’s Talkshop, I realized I’d never done a periodogram looking for possible cycles in the entire Central England Temperature (CET) series. I’d looked at part of it, but not all of it. The CET is one of the longest continuous temperature series, with monthly data starting in 1659. At the Talkshop, people are discussing the ~24 year cycle in the CET data, and trying (unsuccessfully but entertainingly in my opinion) to relate various features of Figure 1 to the ~22-year Hale solar magnetic cycle, or a 5:8 ratio with two times the length of the year on Jupiter, or half the length of the Jupiter-Saturn synodic cycle, or the ~11 year sunspot cycle. They link various peaks to most every possible cycle imaginable, except perhaps my momma’s motor-cycle … here’s their graphic:
Figure 1. Graphic republished at Tallbloke’s Talkshop, originally from the Cycles Research Institute.
First off, I have to say that their technique of removing a peak and voila, “finding” another peak is mondo sketchy on my planet. But setting that aside, I decided to investigate their claims. Let’s start at the natural starting point—by looking at the CET data itself.
Figure 2 shows the monthly CET data as absolute temperatures. Note that in the early years of the record, temperatures were only recorded to the nearest whole degree. Provided that the rounding is symmetrical, this should not affect the results.
Figure 2. Central England Temperature (CET). Red line shows a trend in the form of a loess smooth of the data. Black horizontal line shows the long-term mean temperature.
Over the 350-year period covered by the data, the average temperature (red line) has gone up and down about a degree … and at present, central England is within a couple tenths of a degree of the long-term mean, which also happens to be the temperature when the record started … but I digress.
Figure 3 shows my periodogram of the CET data shown in Figure 2. My graphic is linear in period rather than linear in frequency as is their graphic shown in Figure 1.
Figure 3. Periodogram of the full CET record, for all periods from 10 months to 100 years. Color and size both show the p-value. Black dots show the cycles with p-values less than 0.05, which in this case is only the annual cycle (p=0.03). P-values are all adjusted for autocorrelation. The yellow line shows one-third the length of the ~350 year dataset. I consider this a practical limit for cycle detection. P-values for all but the one-year cycle are calculated after removal of the one-year cycle.
I show the periodogram in this manner to highlight once again the amazing stability of the climate system. One advantage of the slow Fourier transform I use is that the answers are in the same units as the input data (in this case °C). So we can see directly that the average annual peak-to-peak swing in the Central England temperature is about 13°C (23°F).
And we can also see directly that other than the 13°C annual swing, there is no other cycle of any length that swings even a half a degree. Not one.
So that is the first thing to keep in mind regarding the dispute over the existence of purported regular cycles in temperature. No matter what cycle you might think is important in the temperature record, whether it is twenty years long or sixty years or whatever, the amplitude of the cycle is very small, tenths of a degree. No matter if you’re talking about purported effects from the sunspot cycle, the Hale solar magnetism cycle, the synodic cycle of Saturn-Jupiter, the barycentric cycle of the sun, or any other planetasmagorica, they all share one characteristic. If they’re doing anything at all to the temperature, they’re not doing much. Bear in mind that without a couple hundred years of records and sophisticated math we couldn’t even show and wouldn’t even know such tiny cycles exist.
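For anyone who wants to poke at this themselves, here is a minimal R sketch of what a “slow Fourier transform” of this kind looks like. It is not the code from the zip file linked below, just the idea: for each trial period, regress the data on a sine and a cosine of that period, and report the fitted amplitude in the same units as the data. The name `cet_monthly` is a stand-in for the monthly CET temperature vector.

```r
# Sketch of a "slow Fourier transform": for each trial period, fit a
# sine/cosine pair by least squares and return the amplitude in data units (°C).
# cet_monthly is a stand-in name for the monthly CET temperature vector.
slow_ft <- function(x, periods) {
  t <- seq_along(x)
  sapply(periods, function(p) {
    fit <- lm(x ~ sin(2 * pi * t / p) + cos(2 * pi * t / p))
    sqrt(sum(coef(fit)[-1]^2))    # amplitude of the fitted sinusoid
  })
}

# Trial periods from 10 months to 100 years, in months:
# amp <- slow_ft(cet_monthly, periods = 10:1200)
# The peak-to-peak swing of a cycle is twice its amplitude, so an annual
# amplitude of ~6.5 °C corresponds to the ~13 °C swing discussed above.
```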
Moving on, folks often don’t like to be reminded of how tiny the temperature cycles actually are. So of course, the one-year cycle is usually left out of the periodogram, too depressing. Figure 4 is the usual view, which shows the same data, except starting at 2 years.
Figure 4. Closeup of the same data as in Figure 3. Unlike in Figure 3, the statistical significance calculations were done after removal of the 1-year cycle. Also unlike the previous figure, in this and succeeding figures the black dots show all cycles that are significant at a relaxed threshold of p &lt; 0.10 rather than p &lt; 0.05. This is because, even after removing the annual signal, not one of these cycles is significant at a p-value of 0.05.
Now, the first thing I noticed in Figure 4 is that we see the exact same largest cycles in the periodogram that Tallbloke’s source identified in their Figure 1. I calculate those cycle lengths as 23 years 8 months, and 15 years 2 months. They say 23 years 10 months and 15 years 2 months. So our figures agree to within expectations, always a first step in moving forwards.
So … since we agree about the cycle lengths, are they right to try to find larger significance in the obvious, clear, large, and well-defined 24-year cycle? Can we use that 24-year cycle for forecasting? Is that 24-year cycle reflective of some underlying cyclical physical process?
Well, the first thing I do to answer that question is to split the data in two, an early and a late half, and compare the analyses of the two halves. I call it the bozo test, it’s the simplest of all possible tests, doesn’t require any further data or any special equipment. Figures 5a-b below show the periodograms of the early and late halves of the CET data.
Figure 5a-b. Upper graph shows the first half of the CET data and the lower graph shows the second half.
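In R terms the bozo test is only a few lines. Again, this is a sketch rather than the posted code, reusing the hypothetical `slow_ft()` and `cet_monthly` from the sketch above:

```r
# Split the monthly series into early and late halves and compare periodograms
n     <- length(cet_monthly)
early <- cet_monthly[1:(n %/% 2)]
late  <- cet_monthly[(n %/% 2 + 1):n]

periods   <- 24:1200                  # 2 years to 100 years, in months
amp_early <- slow_ft(early, periods)
amp_late  <- slow_ft(late, periods)

# A cycle driven by a persistent physical process should show up in both halves
plot(periods / 12, amp_early, type = "l",
     xlab = "Period (years)", ylab = "Amplitude (°C)")
lines(periods / 12, amp_late, col = "red")
```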
I’m sure you can see the problem. Each half of the data is a hundred and seventy-five years long. The ~24-year cycle exists quite strongly in the first half of the data. It has a swing of over six-tenths of a degree on average over that time, the largest seen in these CET analyses.
But then, in the second half of the data, the 24-year cycle is gone. Pouf.
Well, to be precise, the 24-year peak still exists in the second half … but it’s much smaller than it was in the first half. In the first half, it was the largest peak. In the second half, it’s like the twelfth largest peak or something …
And on the other hand, the ~15-year cycle wasn’t statistically significant at a p-value less than 0.10 in the first half of the data, and it was exactly 15 years long. But in the second half, it has lengthened almost a full year to nearly 16 years, and it’s the second largest cycle … and in the second half, the largest cycle is 37 months.
Thirty-seven months? Who knew? Although I’m sure there are folks who will jump up and say it’s obviously 2/23rds of the rate of rotation of the nodes on the lunar excrescences or the like …
To me, this problem over-rides any and all attempts to correlate temperatures to planetary, lunar, or tidal cycles.
My conclusion? Looking for putative cycles in the temperature record is a waste of time, because the cycles appear and disappear on all time scales. I mean, if you can’t trust a 24-year cycle that lasts for one hundred seventy-five years, then just who can you trust?
w.
De Costumbre: If you object to something I wrote, please quote my words exactly. It avoids tons of misunderstandings.
Data and Code: I’ve actually cleaned up my R code and commented it and I think it’s turnkey. All of the code and data is in a 175k zip file called CET Periodograms.
Statistics: For the math inclined, I’ve used the method of Quenouille to account for autocorrelation in the calculation of the statistical significance of the amplitude of the various cycles. The method of Quenouille provides an “effective n” (n_eff), a reduced count of the number of datapoints to use in the various calculations of significance.
To use the effective n (n_eff) to determine if the amplitude of a given cycle is significant, I first need to calculate the t-statistic. This is the amplitude of the cycle divided by the error in the amplitude. However, that error in amplitude is proportional to

1 / sqrt(n)

where n is the number of data points. As a result, using our effective n, the error in the amplitude is proportional to

1 / sqrt(n_eff)

where n_eff is the “effective n”.
From that, we can calculate the t-statistic, which is simply the amplitude of the cycle divided by the new error.
Finally, we use that new error to calculate the p-value, which is
p-value = t-distribution(t-statistic, degrees_freedom1 = 1, degrees_freedom2 = n_eff)
At least that’s how I do it … but then I was born yesterday, plus I’ve never taken a statistics course in my life. Any corrections gladly considered.
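To make that recipe concrete, here is a hedged R sketch of the calculation described above. It is not the code in the zip file; in particular, the lag-1 form of the Quenouille adjustment and the sqrt(2/n_eff) amplitude-error formula are one common way to do it, and the actual code may differ in detail:

```r
# Hedged sketch of the significance calculation described above; the actual
# code in the zip file may differ in detail.

# Quenouille-style effective n from the lag-1 autocorrelation (one common form)
effective_n <- function(x) {
  r1 <- acf(x, lag.max = 1, plot = FALSE)$acf[2]
  length(x) * (1 - r1) / (1 + r1)
}

cycle_p_value <- function(x, period) {
  t_idx  <- seq_along(x)
  fit    <- lm(x ~ sin(2 * pi * t_idx / period) + cos(2 * pi * t_idx / period))
  amp    <- sqrt(sum(coef(fit)[-1]^2))           # cycle amplitude, in °C
  n_eff  <- effective_n(x)                       # could also be taken from the residuals
  se     <- sd(residuals(fit)) * sqrt(2 / n_eff) # amplitude error, using n_eff rather than n
  t_stat <- amp / se
  # "t-distribution(t, df1 = 1, df2 = n_eff)" reads, in R terms, like an
  # F distribution with 1 and n_eff degrees of freedom:
  pf(t_stat^2, df1 = 1, df2 = n_eff, lower.tail = FALSE)
}
```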


“Well, the first thing I do to answer that question is to split the data in two, an early and a late half, and compare the analyses of the two halves.”
_______________________________________________
I think you could save yourself some work and come to more complete and clear results if you just ran a wavelet analysis on the record. Instead of comparing just one half with another, you can get a clear picture of which periods are present at which times and how they appear, disappear, and shift as time passes. Each of your graphs can be obtained by averaging the wavelet analysis output over time, but why look at averages if you can get the whole picture?
Wavelet analysis tools for R are available, too.
http://cran.r-project.org/web/packages/wavelets/index.html
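A hedged sketch of that suggestion using the linked `wavelets` package. The MODWT is a discrete approximation to the continuous time-frequency picture the comment describes, and `cet_monthly` is the stand-in name from the earlier sketches:

```r
# Maximal-overlap discrete wavelet transform of the monthly CET series
library(wavelets)

mra <- modwt(cet_monthly, filter = "la8", n.levels = 8)

# Level j roughly covers periods of 2^j to 2^(j+1) months, so level 6 is
# roughly the 5- to 10-year band. Plotting its coefficients against time
# shows when energy at that scale appears, disappears, or shifts.
w6 <- as.numeric(mra@W[[6]])
plot(w6, type = "l", xlab = "Month index", ylab = "Level-6 coefficient (°C)")
```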
tony b: “For example warm prevailing westerly winds are frequently replaced by colder easterlies for identifiable periods, which then switch back again”
vukcevic: “CET is highly influenced by a conflict between the ‘warming’ from the N. Atlantic SST, and ‘cooling’ from the Icelandic low atmospheric pressure, two totally different beasts.”
Unintentional consensus ?
The change in temperature of less than 1 degree is indeed much less than the annual variation, but the point of interest then is the date at which reliable germination of various seeds will take place. This is something which may indeed be of relevance, as it could see changes of 30 – 45 days (possibly variation of 60 days between coldest year and mildest year) in terms of germination between years, which may have very significant ramifications for agriculture.
Obviously also, the key issue is not an average temperature, but the dates at which crop killing cold can occur in any one year. That can vary up to 120 days in my own lifetime (1975 saw snowfall in May, this past winter we barely had a frost since January 1st).
It is also the distribution of heat: a very warm and sunny March (by March standards) may warm up soil to temperatures which promote germination, such that a cooler summer thereafter may not affect many things too adversely.
2013 saw a very cold spring followed by a very warm, dry July and August. The growing season was highly compressed and this year’s season to date is 6 – 7 weeks advanced in many instances (fruit set most notably).
Excellent as always. I’ve done analysis of some long North American temperature series. Cycles come and go. Switch phase etc. Regardless, the effects are too small for any useful forecast purposes.
Willis
Interesting. Personally, I’m a bit skeptical about the first 60 years or so of the CET. If you check the Wikipedia entry on thermometers, you’ll see that the sealed tube thermometer with a scale, immune to atmospheric pressure, wasn’t invented until the 1650s, and finely divided thermometer scales didn’t appear until Daniel Fahrenheit started making and selling mercury thermometers in the 1720s. Exactly what were they using to measure temperatures in the early years of the CET? And how reliable are their numbers?
One might argue that poorly resolved measurements should still average out, and one might well be right. Still, though, it might be well to keep in mind that the data in the early part of the CET might not be measuring the same thing, in the same way, as the later parts.
And no, I don’t have a clue why dubious thermometry might introduce a 23 year cycle. My bet would be that it can’t/hasn’t. That’s probably coming from something else … or from pure chance.
Oh yeah, And weren’t sunspots pretty much missing (Maunder minimum) from 1600-1750?
Don K says:
May 9, 2014 at 5:39 am
Oh yeah, And weren’t sunspots pretty much missing (Maunder minimum) from 1600-1750?
……………
22 years is the solar magnetic cycle
1650-1700: few sunspots, but the magnetic cycle was present; see Svalgaard, lots of papers on that one.
1750 -1750 strong solar cycles http://sidc.oma.be/silso/yearlyssnplot
correction:
1700 -1750 strong solar cycles http://sidc.oma.be/silso/yearlyssnplot
It’s like looking for patterns in clouds. If you try hard enough you are bound to find a horse, a boat, or anything else you want to see.
@Daniel Heyer:
IMO the point isn’t that the peaks are noise, it is that because the same peaks are not present in all of the dataset they are not predictors for the time beyond the dataset.
My free-of-charge view (worth every penny) is that this is one of the signatures of a chaotic rather than Newtonian system, periodicity comes and goes in an arbitrary fashion quite unlike planetary orbits and the like.
HenryP says:
May 9, 2014 at 5:14 am
………….
Hi Henry
Tony Brown (tony b) is possibly the person who knows more about CET than anyone I have encountered on this or any other blog. Having said that, see my comment at ‘May 9, 2014 at 3:04 am’.
The CET annual spectrum (Tav or Tmax) is unremarkable. An idea of external forcing can only be glimpsed around the solstices: in June-July insolation is strongest and the Icelandic Low is weakest, while in December-January it is the other way around, when the IL atmospheric pressure is in charge. During the rest of the year there is meandering from one direction to the other.
Spectrum-wise, the summer and winter seasons have very little in common (very similar Tav and Tmax spectra within either season, but very different between the two seasons), with the only major common component (see link) around a 70-year period.
This is a graphic for a different project that I find myself staring at.
http://wp.me/a1uHC3-iH
It shows macro cycles in Ocean cores for the last 5 million years as we transitioned to a full blown ice age. It certainly clarifies that at least on this scale a warmer planet has far less extreme “weather”.
I’m wondering if there may be some way to devise a spectral inflection point analysis that would derive the period of the signal required to override the reigning regime and produce the new one.
vukcevic says
http://sidc.oma.be/silso/yearlyssnplot
henry says
IMHO that graph does not tell much, especially not about the magnetic solar cycle of 22 years.
This is the graph that everyone should try to understand
http://ice-period.com/wp-content/uploads/2013/03/sun2013.png
Note that for the last two Hale cycles (from 1972) you can draw a parabolic and hyperbolic binomial which would show that the lowest point of the solar field strength will be reached around 2016.
It appears (to me, clearly) that as the solar polar fields are weakening, more energetic particles are able to escape from the sun to form more ozone, peroxides and nitrogenous oxides at the TOA.
In turn, more radiation is deflected to space. This is what causes the cooling for the next 3 or 4 decades. Most likely there is some gravitational and/or electromagnetic force that gets switched every 44 years, affecting the sun’s output (of highly energetic particles).
Something strange will happen in 2016 on the sun (the poles switch over again?) and (my expectation is that) from 2016 we will slowly cycle back to full polar strengths again 40 years from now. Like a mirror. Amazing, how God created things.
Mark my words.
Willis will enjoy
http://tylervigen.com/
vukcevic says
http://www.vukcevic.talktalk.net/CET-June-spec.htm
henry says
true, the 22-year Hale cycle is very clear there
I have a comment awaiting moderation
it would be interesting hearing your comment on that,
please
“I am not familiar with your analysis, so I cannot do it myself, but I will take a bet with you that if you were to look at maximum temperatures, in CET, you will find the elusive cycle that we are all talking about.”
You see, people, here is the problem in a NUTSHELL.
Absent any theory of why and how there should be cycles “in the data”, one is left with hunting EVERYWHERE for ANYTHING.
Don’t find it in global Tave? Look in land only. Don’t find it there, look in SST. Don’t find it there, look in CET. Don’t find it there, look in Tmax of CET. Don’t find it there, look in tree rings, look in sea level, look in Swedish sea levels.
In all this looking, folks forget what we know: random shocks can combine to create the appearance of cycles in data.
Now let’s take a look at AGW as a different example.
AGW theory (the physics) tells us that if we increase CO2 then we can expect the surface to warm and the stratosphere to cool. We know exactly where to look, we know what to look for, and we know how to look. And of course when we look this is what we find: a warming surface and a cooling stratosphere. And we also know from theory that if the warming were caused by the sun, we would NOT see a cooling stratosphere.
Is that the end of the story? Of course not. The warming of the surface hasn’t been as clear as one would expect; the hiatus needs to be explained. But what I would call your attention to is the structure of inquiry. Theory directs inquiry. There is no systematic way to just pick up a pile of data and start “looking” for things. Well, one can. One can pick up the pile of climate data and just start mindlessly mining it for cycles and correlations. Guess what: you will find them, you must find them, and almost all will be meaningless. What you find will be utterly disconnected from the rest of physics, and as such will be useless even if it is true or meaningful.
So return to the fundamental question. Why should you see any cycles in climate data? What cycles should you see? Where EXACTLY do you expect to see them? What physical theory tells you to expect this?
vukcevic says:
May 9, 2014 at 12:10 am
Thanks, vuk. I just tried that: I took the periodograms for the June-only data, full, early, and late. It shows little difference from the periodograms in the head post: a strong ~24-year cycle in the first half, and no 24-year cycle at all in the second half. Go figure.
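In sketch form, with the hypothetical `slow_ft()` and `cet_monthly` names from the earlier sketches, and assuming the record starts in January:

```r
# June-only subset of the monthly series (one value per year)
june_only <- cet_monthly[seq(6, length(cet_monthly), by = 12)]
n <- length(june_only)

# Periods are now in years, since there is one data point per year
amp_full  <- slow_ft(june_only, periods = 2:100)
amp_early <- slow_ft(june_only[1:(n %/% 2)], periods = 2:100)
amp_late  <- slow_ft(june_only[(n %/% 2 + 1):n], periods = 2:100)
```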
w.
Just to leave a suggestion. Try to build a waterfall plot. Window your data with some smallish window (75 years or so), then take your SFT, assign the heights to a line of colors, then move your window over a year. Rinse, repeat. Plot all of these color lines next to each other across your whole data set. It is sort of like your bozo test but much more fluid. You might find that your 24-year peak doesn’t just go away but moves from 24 to 40 with higher damping. For a 1DOF harmonic, damping is the width of a modal peak at the half-power point divided by the peak frequency ((w2-w1)/wn = 2z). It will also show you if a peak goes away quickly, so you know something changed at about that time. Butterflies in Africa, the Dutch with their windmills, or something, take your pick, but at least you know what time it happened (within half the cycle rate).
What programming language are you using? (I’m not much of a programmer but hack my way with reasonable results in Matlab)
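Willis works in R (see the Data and Code note above). A rough sketch of the waterfall idea there, reusing the hypothetical `slow_ft()` and `cet_monthly` from the earlier sketches, with the window length and step as arbitrary choices, might look like this:

```r
# Sliding-window ("waterfall") periodogram: window the data, take the SFT,
# slide the window forward one year, and stack the results as an image.
window_yrs <- 75
win_len    <- window_yrs * 12
step       <- 12                                  # slide one year at a time
periods    <- 24:600                              # 2 to 50 years, in months
starts     <- seq(1, length(cet_monthly) - win_len + 1, by = step)

waterfall <- sapply(starts, function(s)
  slow_ft(cet_monthly[s:(s + win_len - 1)], periods))

# Columns are windows, rows are periods; image() gives the time-vs-period map
image(x = starts / 12, y = periods / 12, z = t(waterfall),
      xlab = "Window start (years into record)", ylab = "Period (years)",
      main = "Amplitude (°C) by period and time")
```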
It appears in Figure 1 that up until about 1720 – eyeball estimate – temperatures are measured at 1/2-degree intervals. Could that rounding have any effect on the apparent periodicity in the first half of the record? How about rounding the entire record to that resolution?
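That one is easy to test directly; in sketch form, with the hypothetical names from the earlier sketches:

```r
# Round the entire record to the nearest half degree and redo the periodogram
cet_half_deg <- round(cet_monthly * 2) / 2
amp_rounded  <- slow_ft(cet_half_deg, periods = 24:1200)   # 2 to 100 years, in months
```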
Steven Mosher says
http://wattsupwiththat.com/2014/05/08/cycling-in-central-england/#comment-1632259
Henry says
it seems you missed my comment
http://wattsupwiththat.com/2014/05/08/cycling-in-central-england/#comment-1632210
please do explain, if you can, why the solar polar field strengths are weakening?
@Steven Mosher
just in case you do not believe that we are (already) globally cooling, look here,
http://www.woodfortrees.org/plot/hadcrut4gl/from:1987/to:2015/plot/hadcrut4gl/from:2002/to:2015/trend/plot/hadcrut3gl/from:1987/to:2015/plot/hadcrut3gl/from:2002/to:2015/trend/plot/rss/from:1987/to:2015/plot/rss/from:2002/to:2015/trend/plot/hadsst2gl/from:1987/to:2015/plot/hadsst2gl/from:2002/to:2015/trend/plot/hadcrut4gl/from:1987/to:2002/trend/plot/hadcrut3gl/from:1987/to:2002/trend/plot/hadsst2gl/from:1987/to:2002/trend/plot/rss/from:1987/to:2002/trend
My final report says we will be cooling globally for another 30-40 years.
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/
HenryP says:
May 9, 2014 at 8:35 am
………….
Hi again,
My view of the future polar field course? Time will tell.
The ‘issue’ I see here is just that you are looking for a constant cycle. IFF the cycle at about 22 years were solar induced, we already know that the amplitude of solar output / sunspots / activity has a very long term rise / fall such that each ’22 year cycle’ changes amplitude in a long climb up then a long drift down to max then to min.
This is also confounded by the way that the cycle length changes with amplitude. Longer at small amplitudes, shorter at large amplitudes.
Now the ‘kicker’ is that while sunspots (activity / output / …) has an about 11 year average, it is actually bimodal with a peak each side of 11, but not so much AT 11. I would be very surprised if the double-that 22 year cycle were not similarly bi-modal.
So seeing peaks at 23-24 ish and 15-16 ish is not much of a surprise.
Seeing them stronger in some section of that data than in another is also not a surprise.
What would ‘clinch it’ though would be to see if the early data had a solar cycle that did average to 23-ish while the later data did not. To see if there really is any correlation between the ACTUAL solar cycles and the ACTUAL temperature data (and not the hypothetical average-cycle that does not really exist).
So, in general, I find your analysis interesting for what it shows. But it does not dismiss a solar ‘cycle’ connection to temperatures since the averages hide more than they display… and the solar ‘cycle’ is not regular.
Stark Dickflüssig says:
May 8, 2014 at 5:48 pm
” I have an image of a man holding up a wet finger and making an educated guess for the first half of it 😉
Hence the use of the term “Digital Forecast” on the meteorology web-sites.”
I was convinced by the recent post about the changes in the flowering dates of the cherry trees in Japan over many centuries. These, and dates of ice formation and thickness at break-up on the Thames and the like, are good proxies for temperatures. Has anyone tried the method of counting the chirps of crickets?
“To convert cricket chirps to degrees Fahrenheit, count number of chirps in 14 seconds then add 40 to get temperature. For example: 30 chirps + 40 = 70° F”
http://lifehacker.com/5817534/how-to-tell-the-temperature-from-a-crickets-chirp
At least these are hard to “fiddle”
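For what it’s worth, the quoted chirp rule is a one-liner:

```r
# Chirp rule as quoted above: chirps counted in 14 seconds, plus 40, gives °F
chirps_to_fahrenheit <- function(chirps_in_14_s) chirps_in_14_s + 40
chirps_to_fahrenheit(30)   # 70, matching the example
```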
Steven Mosher says: “Why should you see any cycles in climate data?”
Weather data is inherently cyclic at high frequencies (diurnal and annual). It is a small jump to ask the question: are there any eigenvectors at longer wavelengths? This can inform your theory and give clues as to where to look. One should always ask “What is the data doing?” before you try to ask “Why?” If you can’t agree on What, then nobody is going to get anywhere with Why, let alone “How do we alter it?” or “Should we?”
Politicians tend to jump directly to the How with the same answer: “Give us more power”
Based on your analysis, I started thinking about ocean waves. Small ripples build into ocean waves and with the right circumstances can even wind up as rogue waves. As I remember surfers will look for “sets” of waves. I’m surmising that some similar effect may be going on here. Some set of random and periodic processes are creating fluctuations in the temperature that produce these quasi periodic fluctuations. But I have no idea how you would prove that that’s going on. If that’s true, then without knowing what the drivers are I don’t know how to build a testable hypothesis to see if that’s true. Generate a high frequency pseudo series using the same frequencies you found in the actual data and see if the longer waves show up?
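One way to test that last suggestion is to go the other way: generate a series with no built-in cycles at all, such as simple red noise, and see whether the periodogram still grows quasi-periodic peaks. A sketch, reusing the hypothetical `slow_ft()` from the earlier sketches:

```r
# Red noise (AR(1)) has no deterministic cycles, yet its periodogram still
# shows peaks, and they move around from one realization to the next.
set.seed(42)
fake <- as.numeric(arima.sim(model = list(ar = 0.6), n = 4200))  # ~350 "years" of monthly noise

periods  <- 24:1200
amp_fake <- slow_ft(fake, periods)
plot(periods / 12, amp_fake, type = "l",
     xlab = "Period (years)", ylab = "Amplitude (arbitrary units)")
```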